• renzev@lemmy.world · 2 days ago

    Much like Uber and Netflix, all of these AI chatbots that are available for free right now will become expensive, slow, and dumb once the investor money runs out and these companies have to figure out a business model. We’re in the golden age of LLMs right now. All we can do is enjoy the free service while it lasts and try not to make it too much of a part of our workflow, because inevitably it will be cut off. Unless you’re one of those people with a self-hosted LLM, I guess.

    • stoly@lemmy.world · 2 days ago

      Not an LLM, but Google Assistant has gotten much more stupid over the past several years. They realized it was too expensive and had to lobotomize it.

    • spireghost@lemmy.zip · 2 days ago

      This. AI hype beasts keep saying “this is the worst AI will ever be” and “it’ll just get better”, but really it’s just going to get worse as they actually try to turn the bubble into a profit.

    • domdanial@reddthat.com · 2 days ago

      I was about to say: a self-hosted LLM means I’m not competing with every market-analysis tool, customer-service replacement, and 10 y/o kid bombarding the service with junk. It doesn’t need to be ultra fast if I’m the only one using the hardware.
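
      For anyone curious, “the only one using the hardware” can be as simple as one local HTTP call. A rough sketch, assuming Ollama (or something similar) is serving a model on its default local port; the model name and prompt are just placeholders:

      ```python
      # Minimal sketch: query a locally running model through Ollama's HTTP API.
      # Assumes Ollama is installed and a model has already been pulled locally.
      import requests

      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={
              "model": "llama3",  # placeholder: whatever model you've pulled locally
              "prompt": "Summarize the trade-offs of self-hosting an LLM.",
              "stream": False,    # return one JSON object instead of a token stream
          },
          timeout=300,            # local generation can be slow on modest hardware
      )
      print(resp.json()["response"])
      ```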

      • froztbyte@awful.systems · 2 days ago

        and who’ll supply the model and the training and the updates and the data curation, dom? does it arrive as manna from heaven? do you merely step upon the path and receive the divine wisdom of fresh LLM updates?

        fucking hell

        • domdanial@reddthat.com · 1 day ago

          Honestly, the data used to create these models was ripped from the public, and I think the models are owed back to the public. OpenAI started as a non-profit, and I think it should stay that way.

          The FOSS model works well enough for other projects, and I think corporate AI will be exactly the same as the industrial revolution: progress at the cost of humanity. This isn’t a problem to solve; it’s a solution looking for problems.

        • Knock_Knock_Lemmy_In@lemmy.world · 1 day ago

          Base open-source model.
          Topic-expert models.
          Community LoRAs.
          Program extensions.

          Look what ComfyUI + Stable Diffusion can achieve (rough sketch of the same idea for LLMs below).
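
          To make that concrete, here’s a rough sketch of the base-model-plus-community-LoRA pattern on the LLM side, using Hugging Face transformers and peft. The model and adapter names are placeholders, not real repos:

          ```python
          # Sketch: load an open-weights base model, then attach a community LoRA adapter.
          from transformers import AutoModelForCausalLM, AutoTokenizer
          from peft import PeftModel

          base_id = "some-org/open-base-model"     # hypothetical open-weights base model
          adapter_id = "some-user/topic-lora"      # hypothetical community LoRA adapter

          tokenizer = AutoTokenizer.from_pretrained(base_id)
          base = AutoModelForCausalLM.from_pretrained(base_id)

          # LoRA adds small trainable matrices on top of the frozen base weights,
          # so "topic expert" adapters stay tiny and easy to share.
          model = PeftModel.from_pretrained(base, adapter_id)

          inputs = tokenizer("Explain LoRA adapters in one sentence.", return_tensors="pt")
          outputs = model.generate(**inputs, max_new_tokens=50)
          print(tokenizer.decode(outputs[0], skip_special_tokens=True))
          ```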

          • Architeuthis@awful.systems · 1 day ago

            “Base open source model” just means some company commanding a great deal of capital and compute made the weights public to fuck with LLMaaS providers it can’t directly compete with yet. It’s not some guy in a garage training and RLHF-ing them for months on end just to hand the result over to you to fine-tune for writing Ciaphas Cain fanfiction.

      • rumba@lemmy.zip · 1 day ago

        And with the pruned Llama models, it runs really quickly on a 2070.
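
        For reference, a minimal sketch of that kind of setup with llama-cpp-python and a quantized GGUF build (the model path is a placeholder; n_gpu_layers controls how much of the model is offloaded to the 2070’s VRAM):

        ```python
        # Sketch: run a quantized Llama-family model on a single consumer GPU.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/placeholder-llama.Q4_K_M.gguf",  # hypothetical quantized model file
            n_gpu_layers=-1,  # offload as many layers as fit in VRAM
            n_ctx=4096,       # context window
        )

        out = llm("Q: Why run an LLM locally?\nA:", max_tokens=100)
        print(out["choices"][0]["text"])
        ```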

    • Robust Mirror@aussie.zone · 2 days ago

      Once they’re cut off, focus on self-hosting will explode, and we’ll see huge improvements in ability and ease of use.