• SlopppyEngineer@lemmy.world · 6 days ago

    It always reminds me how seriously people were trying to build steam-powered aircraft. I imagine they had a bunch of “if we can just get some lighter material” kind of discussions right up until some bicycle guys used an internal combustion engine to make history.

  • GHiLA@sh.itjust.works · 4 days ago

    Consumer: buys next phone (and car) with the least amount of AI possible

    Hey society!

    This is broken!

    This isn’t how capitalism works! We’re supposed to choose with our wallets, not have them choose for us with their investors’ money.

    Fuck them, right?

  • Sam_Bass@lemmy.world · 5 days ago

    AI is shit. Poor programming results in heavy errors and intrusive break-ins during benign operations. Worst of all, the corpos that adopt it shove it into your systems in a way that makes it unremovable.

    • KairuByte@lemmy.dbzer0.com · 5 days ago

      Poor programming?

      I’m sorry, LLMs are shit for various reasons, but “poor programming” isn’t one of them. And I bring this up because branding it as such suggests there is a “good programming” LLM that doesn’t have the inherent problems that any such system would have. Which just isn’t a thing with the way LLMs work.

  • bunchberry@lemmy.world · 4 days ago

    I don’t get it. Who is claiming that if we build one more LLM we will solve AGI? Maybe I just live under a rock. Top comment here is saying people believe LLMs will “solve climate change.” Who believes that? I do not know what any of this is on about, I have never seen these people.

    • tempest@lemmy.ca · 4 days ago

      People who don’t call the tech an LLM and just refer to it as AI, that’s who.

  • IsThisAnAI@lemmy.world · 6 days ago

    It’s always amazing to see how folks latch on to the extreme vs the reality.

    ML and AI tools are quite helpful. Yes, they make mistakes, but at the end of the day they reduce human effort. It’s really not hard to see the usefulness.

    • hark@lemmy.world · 5 days ago

      Reduces human effort in what? Certainly for producing garbage, but it increases my human effort in having to wade through that garbage.

      • Lumidaub@feddit.org · 5 days ago

        The soul-crushing effort of socialising and producing art, an effort that is eating all that mental and physical energy which would be better utilised in the mines to make more profits for billionaires. /s

        • AngryMob@lemmy.one · 5 days ago

          What about people who have artistic thoughts but have trouble getting them out of their head? I would argue that’s most people, because most of us aren’t artists. We also aren’t going to pay for a commission for every idea we have. A simple image generator can be a valid outlet for that.

          Also, you’re ignoring those who refine their prompt-generated images (which are usually what people see as AI slop) into something better using all the new tools and techniques available now (inpainting, ControlNets, regional guidance, etc.). I don’t think that’s any less of an artistic process or artistic outlet than doing it with Photoshop or with physical media.

          • Lumidaub@feddit.org · 5 days ago

            I am VERY convinced you have heard all the counterarguments to these, several times, and you do not need me to reiterate them.

      • Donkter@lemmy.world · 5 days ago

        It reduces effort in summarizing reports or paper abstracts that you aren’t sure you need to read. It reduces effort in outlining formulaic types of writing such as cover letters, work emails, etc.

        It reduces effort when brainstorming mundane solutions to things, often by knocking out the most obvious choices, which is an important step in brainstorming if you’ve ever done it.

        Hell, I’ve never had ChatGPT give me wrong instructions when I ask it for a basic cooking recipe, and it also cuts out all of the preamble.

        If you haven’t found uses for them, you either aren’t trying very hard or you’re simply not in an industry/job that can use them for what they’re good at. Both of which are okay, but it’s silly to think that your experience of not using them means no one can use them for anything useful.

        • hark@lemmy.world · 4 days ago

          Creating a lot of filler “content” is another use for them, which is what I was getting at. While I have seen some uses for AI, it overwhelmingly seems to be used to create more work rather than reduce it. Endless spam was bad enough, but now that there’s an easy way to generate mass amounts of convincingly unique text, there’s a lot more to wade through. Google search, for example, used to be a lot more useful, and results that were a waste of time were easier to spot. The fact that summaries can include inaccuracies or outright “hallucinations” makes them mostly worthless to me, since I’d have to at least skim the original material to verify anyway.

          I’ve seen AI in action in my industry (software development). I’ve seen it do the equivalent of slapping together code pieced from Stack Overflow. It’s impressive that it can do that, but what’s less impressive are the clueless developers trusting the code as-is with minimal verification/tweaks (just because it runs doesn’t mean it’s correct or anywhere close to optimal), or the even more clueless executives who think this means they can replace developers with AI, or that tasks are a simple matter of “ask the AI to do it”.

        • Squirrelanna@lemmynsfw.com · 5 days ago

          Just because you haven’t personally gotten an egregiously wrong answer doesn’t mean it won’t give one, which means you have to check anyway. Google’s AI famously recommended adding glue to your pizza to make the cheese more stringy. Just a couple of weeks ago I got blatantly wrong information about quitting SSRIs, with the source links directly contradicting its confidently stated conclusion. I had to spend EXTRA time researching just to make sure I wasn’t being gaslit.

          • Donkter@lemmy.world · 4 days ago

            Google’s AI is famously shitty. ChatGPT, especially the most recent version, is very good.

            Also don’t use LLMs for sensitive stuff like quitting SSRIs yet.

            • Squirrelanna@lemmynsfw.com · 2 days ago

              That’s the thing: I didn’t want to use it. The AI’s input was entirely unsolicited, and luckily I obviously knew better than to trust it. I doubt the average user is going to care enough to get a second opinion.

        • AngryMob@lemmy.one · 5 days ago

          To add on to your comment: even beyond job/industry, it’s like your cooking example. I spin up an LLM locally at home for random tasks. An LLM can be your personal fitness coach, help you with budgeting, improve your emails, summarize news articles, help with creative writing, give Christmas shopping list ideas, brainstorm plants for your new garden, etc. They can fit into so many simple roles that you sporadically need filled.

          It’s just so easy to fall into the trap of hating them because of the bullshit surrounding them.
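
          As an illustration of how little plumbing that kind of local setup needs, here’s a minimal sketch. It assumes Ollama as the local runner and “llama3” as the model; the commenter doesn’t say which tools they actually use, so treat the endpoint and names as placeholders.

          ```python
          import requests

          # Minimal helper for one-off questions against a locally hosted model.
          # Assumes Ollama's default endpoint on localhost:11434; the model name is a placeholder.
          def ask_local_llm(prompt: str, model: str = "llama3") -> str:
              response = requests.post(
                  "http://localhost:11434/api/generate",
                  json={"model": model, "prompt": prompt, "stream": False},
                  timeout=120,
              )
              response.raise_for_status()
              return response.json()["response"]

          # Example: one of the sporadic household tasks mentioned above.
          print(ask_local_llm("Suggest five shade-tolerant plants for a small garden bed."))
          ```

          Swap in whatever runner and model you actually have; the point is that each of those one-off roles is a single local call.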

          • Hexarei@programming.dev · 5 days ago

            Yeah, as long as you double-check their work and don’t assume their facts are accurate, they’re pretty useful in a lot of ways.

      • Hexarei@programming.dev · 5 days ago

        I’ve found it to be pretty good at transforming and/or extracting data from human input. For example, I’ve got an app that handles incoming jobs, and among the sources of those jobs is “customer sent an email”. Pretty neat to give an LLM a JSON schema and tell it to fill in the details it can figure out from the email. Of course, we disclose to the user that the details were filled in by AI and should be double-checked for accuracy - but it saves our customers a lot of time having the details sussed out from emails that don’t follow a specific format.
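
        For anyone curious, a rough sketch of what that schema-guided extraction step can look like is below. It assumes an OpenAI-compatible chat client, a placeholder model name, and made-up field names; the commenter’s actual schema, model, and library aren’t stated.

        ```python
        import json

        from openai import OpenAI  # assumes an OpenAI-compatible client; the real stack isn't stated

        # Hypothetical schema for the job details pulled out of a customer email.
        JOB_SCHEMA = {
            "type": "object",
            "properties": {
                "customer_name": {"type": ["string", "null"]},
                "site_address": {"type": ["string", "null"]},
                "requested_date": {"type": ["string", "null"]},
                "job_description": {"type": ["string", "null"]},
            },
        }

        client = OpenAI()

        def extract_job_details(email_body: str) -> dict:
            """Ask the model to fill in whatever fields it can infer; unknown fields stay null."""
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                response_format={"type": "json_object"},  # force a JSON reply
                messages=[
                    {
                        "role": "system",
                        "content": "Extract job details from the customer email. "
                        "Return JSON matching this schema, using null for anything not stated: "
                        + json.dumps(JOB_SCHEMA),
                    },
                    {"role": "user", "content": email_body},
                ],
            )
            return json.loads(response.choices[0].message.content)
        ```

        The useful pattern is forcing a JSON reply, telling the model to leave unknown fields null, and then surfacing the pre-filled values for the human double-check described above.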

    • iAvicenna@lemmy.world · 5 days ago

      The extreme is tech bros hyping ML and AI as something it isn’t in order to get shareholders to pour millions into projects that will likely never achieve their end goals. Anyone in the genuine ML and AI domain should be pissed, because it’s going to reduce interest and trust in these fields when the bubble bursts, and then real researchers will be left to pick up the pieces while the tech bros move on to the next thing.

      The things ChatGPT, gen AI, etc. can do now? They’re already crazy wild to me. But somehow, to create more hype, they’re advertised as being one step away from AGI or one step away from flawlessly pipelining creative processes. They’re neither of those yet, and from what it seems, throwing more data at them likely won’t get there either. But of course, if you come up with a plan like “we need to double our compute bro and then we will have AGI bro”, you can get investors to pay double or quadruple what they paid before. So in summary, they’re basically con men.

    • Kichae@lemmy.ca · 6 days ago

      Now include the environmental costs of some of these tools, whether they’re a) running at a loss in order to gain market share, and b) even the tools people are actually using.

      Do we still come out ahead? Are the minutes saved - if there are truly any - actually saved, or just shoveled onto someone else’s plate as environmental damage?

      What’s the big picture here? Because society honestly should not give a flying fuck if your job becomes slightly easier at the cost of everybody else.

      • KeenFlame@feddit.nu · 6 days ago

        Yeah, it’s absolutely not the giant inhuman entities that spew actual sewage and poison straight out into nature just for profit that are the problem anymore; it’s what the energy is used for.

    • JollyG@lemmy.world · 5 days ago

      The day-to-day reality, for me at least, is that the new hyped-up LLMs are largely useless for work and in some cases actual detriments. Some people at work use them a lot, but the heavy users tend to be people who were bad at their jobs, or at least bad at the communication aspect of their jobs. They were bad at communicating before, and now, with the help of ChatGPT, they’re still bad at communicating, except they’ve gotten weirdly obstinate about their crappy work output.

      Other folks I know have tried to use them to learn new things but gave up on them when they kept getting corrected by subject matter experts.

      I played around with them for code generation but did not find it any faster than just writing and debugging my own code.

    • ddplf@szmer.info · 6 days ago

      Yes, they can be useful at times. This does not mean you can just ditch all the human effort and algorithmic solutions and fill every nook and cranny with AI. Which is exactly where we’re at currently. And it’s turning out dreadful.

  • Narri N.@lemmy.world · 6 days ago

    just one more SSRI bro, i promise bro the next SSRI will work bro please i need one more SSRI bro

    Edit: Okay, well, maybe instead of insinuating that none of the SSRIs work, I should have claimed that all of them have so many potentially crippling side effects that prescribing these “medications” should be treated far more as an absolute last-resort solution - together with inpatient care - than handed out like candy as they are today. But I also understand that it is the cheapest option available, as the best option is therapy, of which there are multiple kinds, all of them requiring quite a lot of time and work, all of which cost significantly more than anyone is ready to pay. But that shit is long, so take it as you will.

    • TimewornTraveler@lemm.ee · 5 days ago

      no… that’s not how that works.

      > But I also understand that it is the cheapest option available, as the best option is therapy,

      Outcomes of both treatments combined are superior to outcomes from either alone. SSRIs are just a tool to help you retrain your brain more easily during the course of behavioral modifications, which a therapist typically helps you identify and implement.

      They’re powerful, which makes them difficult to use, I get it. Finding the right medication can be exhausting, because you need to build up the drug in your body for it to have an effect, and you need to titrate off to safely stop it. So it’s a long game of trial and error.

      But I can assure you that psych meds are ridiculously important for managing certain conditions.

      I really wish the fuckers who tout pharmaceutical population-control conspiracies would just spend a weekend in the Before Times, when people with mental disorders that are mundane by today’s standards were locked up and abused. Yeah, totally, Randy, it’s Lexapro’s fault your life is a mess; you’d be way better off having a manic episode in 1700.

      • Narri N.@lemmy.world · 4 days ago

        Okay, yeah, I think this might be the most sensible answer here. I myself tend to froth at the mouth “a bit” when it comes to these things, because of personal experiences. So sorry everyone, I got carried away.