• @CeeBee@lemmy.world · 38 points · 6 months ago

      It’s getting there. In the next few years, as hardware gets better and models get more efficient, we’ll be able to run these systems entirely locally.

      I’m already doing it, but I have some higher-end hardware.

      • Xanaus · 4 points · 6 months ago

        Could you please share your process for us mortals?

        • @CeeBee@lemmy.world · 6 points · 6 months ago

          Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.

          Ollama with ollama-webui for an LLM. I like the solar:10.7b model. It’s lightweight, fast, and gives really good results.

          I have some beefy hardware that I run it on, but beefy hardware isn’t strictly necessary.
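
          For reference, a minimal sketch of querying that Ollama setup from a script, assuming the server is running on its default port (11434) and the model has already been pulled; the prompt and timeout are just illustrative:

          ```python
          # Minimal sketch: ask a locally running Ollama server for a completion.
          # Assumes `ollama pull solar:10.7b` has already been run and the server
          # is listening on its default port.
          import requests

          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "solar:10.7b",
                  "prompt": "In one sentence, what is a locally hosted LLM?",
                  "stream": False,  # return one JSON object instead of a token stream
              },
              timeout=120,
          )
          resp.raise_for_status()
          print(resp.json()["response"])
          ```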

        • @Ookami38@sh.itjust.works · 2 points · 6 months ago

          Depends on what AI you’re looking for. I don’t know of an LLM (a language model, think ChatGPT) that works decently on personal hardware, but I also haven’t really looked. For art generation, though, look up Automatic1111 installation instructions for Stable Diffusion. If you have a decent GPU (I was running it on a 1060, slowly, until I upgraded), it’s a simple enough process to get started, there’s tons of info online about it, and it all runs on local hardware.
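
          For anyone who wants to drive that Automatic1111 setup from code rather than the web UI, a minimal sketch, assuming stable-diffusion-webui was launched with the --api flag on its default port (7860); the prompt and output filename here are just illustrative:

          ```python
          # Minimal sketch: generate one image through Automatic1111's txt2img API.
          # Assumes stable-diffusion-webui was started with the --api flag.
          import base64
          import requests

          resp = requests.post(
              "http://localhost:7860/sdapi/v1/txt2img",
              json={
                  "prompt": "a watercolor painting of a lighthouse at dusk",
                  "steps": 20,
                  "width": 512,
                  "height": 512,
              },
              timeout=300,
          )
          resp.raise_for_status()

          # Images come back as base64-encoded strings.
          with open("lighthouse.png", "wb") as f:
              f.write(base64.b64decode(resp.json()["images"][0]))
          ```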

          • @CeeBee@lemmy.world · 2 points · 6 months ago

            “I don’t know of an LLM that works decently on personal hardware”

            Ollama with ollama-webui. Models like solar-10.7b and mistral-7b work nicely on local hardware. Solar 10.7b should work well on a card with 8GB of VRAM.
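
            As a rough sanity check on that 8GB figure, assuming a 4-bit quantized build of the 10.7B-parameter model (the exact overhead varies with quantization scheme and context length):

            ```python
            # Back-of-the-envelope VRAM estimate for a 4-bit quantized 10.7B-parameter model.
            params = 10.7e9          # parameter count
            bits_per_param = 4       # typical 4-bit quantization (e.g. a Q4 GGUF build)
            weights_gb = params * bits_per_param / 8 / 1e9

            overhead_gb = 1.5        # rough allowance for KV cache and runtime buffers (assumption)
            total_gb = weights_gb + overhead_gb

            print(f"weights ~{weights_gb:.1f} GB, total ~{total_gb:.1f} GB")
            # -> weights ~5.4 GB, total ~6.9 GB, which fits in 8 GB of VRAM
            ```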

            • @ParetoOptimalDev · 1 point · 6 months ago

              If you have really low specs, use the recently open-sourced Microsoft Phi model.
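
              A minimal sketch of running it locally, assuming the microsoft/phi-2 checkpoint on Hugging Face and the transformers library; the prompt and generation settings are just illustrative:

              ```python
              # Minimal sketch: run Microsoft's small Phi model with Hugging Face transformers.
              import torch
              from transformers import AutoModelForCausalLM, AutoTokenizer

              model_id = "microsoft/phi-2"  # ~2.7B parameters, suited to modest hardware
              tokenizer = AutoTokenizer.from_pretrained(model_id)
              model = AutoModelForCausalLM.from_pretrained(
                  model_id,
                  torch_dtype=torch.float16,  # half precision to cut memory use
                  device_map="auto",          # GPU if available, otherwise CPU
              )

              prompt = "Explain in one sentence what it means to run an LLM locally."
              inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
              outputs = model.generate(**inputs, max_new_tokens=60)
              print(tokenizer.decode(outputs[0], skip_special_tokens=True))
              ```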

    • arthurpizza · 8 points · 6 months ago

      This technology will be running on your phone within the next few years.

        • arthurpizza · 3 points · 6 months ago

          I mean, that’s already where we are. The future is going to be localized.

      • @pkill@programming.dev · 1 point · 6 months ago

        Yeah, if you’re willing to carry a brick, or at least a power bank (also a brick), if you don’t want it to constantly overheat or deal with 2-3 hours of battery life. There’s only so much copper can take, and there are limits to miniaturization.

        • arthurpizza · 7 points · 6 months ago

          It’s not like that, though. Newer phones are going to have dedicated hardware for running neural networks, LLMs, and other generative tools, and that dedicated hardware will let these processes barely sip battery.

          • @MenacingPerson@lemm.ee · 1 point · 6 months ago

            Wrong.

            If that existed, all those AI server farms wouldn’t be so necessary, would they?

            Dedicated hardware for this already exists, and it definitely isn’t going to fit a sizeable model on a phone any time soon. The models themselves require multiple tens of gigabytes of storage, so you won’t fit more than a handful on even 512GB of internal storage. Phones can’t reach the RAM these models need at all, and the dedicated hardware still draws far more power than a tiny phone battery can supply.

    • aubertlone · 3 points · 6 months ago

      Hey me too.

      And I do have a couple of different LLMs installed on my rig. But having that resource running locally is years and years away from being remotely performant.

      On the bright side, there are many open-source LLMs, and it seems like there are more every day.

    • dream_weasel · -43 points · edited · 6 months ago

      Ha. Lame.

      Edit: lol. Sign out of Google, nerds. Bring me your hypocrite neckbeard downvotes.