• CeeBee@lemmy.world · 1 year ago

    It’s getting there. In the next few years, as hardware gets better and models get more efficient, we’ll be able to run these systems entirely locally.

    I’m already doing it, but I have some higher-end hardware.

      • CeeBee@lemmy.world · 1 year ago
        Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.
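
        As a rough illustration, here’s a minimal sketch of scripting that setup from Python, assuming Automatic1111 was launched with its --api flag; the prompt and generation parameters are just placeholders:

        ```python
        import base64
        import requests

        # Assumes Automatic1111 is running with --api and the SDXL Turbo
        # checkpoint is the active model in the web UI.
        payload = {
            "prompt": "a lighthouse at sunset, photorealistic",  # hypothetical prompt
            "steps": 1,        # SDXL Turbo is designed for very few steps
            "cfg_scale": 1.0,  # Turbo models are trained to run without CFG guidance
            "width": 512,
            "height": 512,
        }

        resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
        resp.raise_for_status()

        # The API returns generated images as base64-encoded PNGs.
        with open("out.png", "wb") as f:
            f.write(base64.b64decode(resp.json()["images"][0]))
        ```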

        Ollama with Ollama-webui for an LLM. I like the solar:10.7b model. It’s lightweight, fast, and gives really good results.
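
        Along the same lines, a hedged sketch of querying Ollama’s local REST API from Python; the solar:10.7b tag assumes you’ve already pulled that model:

        ```python
        import requests

        # Ollama serves a local REST API on port 11434 by default.
        # Pull the model first with: ollama pull solar:10.7b
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "solar:10.7b",
                "prompt": "Explain what a context window is, in one paragraph.",
                "stream": False,  # return a single JSON object instead of a token stream
            },
        )
        resp.raise_for_status()
        print(resp.json()["response"])
        ```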

        I run it on some beefy hardware, but that isn’t necessary.

      • Ookami38@sh.itjust.works · 1 year ago
        Depends on what AI you’re looking for. I don’t know of an LLM (a language model, think ChatGPT) that works decently on personal hardware, but I also haven’t really looked. For art generation, though, look up the Automatic1111 installation instructions for Stable Diffusion. If you have a decent GPU (I was running it slowly on a 1060 until I upgraded), it’s a simple enough process to get started; there’s tons of info online about it, and it all runs on local hardware.
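
        For a feel of what Automatic1111 is driving under the hood, here’s a minimal sketch using the Hugging Face diffusers library directly rather than the UI; the model id is the standard SD 1.5 checkpoint, and attention slicing is a stock diffusers option that helps on low-VRAM cards like a 1060:

        ```python
        import torch
        from diffusers import StableDiffusionPipeline

        # Load the standard Stable Diffusion 1.5 checkpoint in half precision.
        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            torch_dtype=torch.float16,
        )
        pipe.to("cuda")
        pipe.enable_attention_slicing()  # trades speed for lower VRAM usage

        image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
        image.save("astronaut.png")
        ```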

        • CeeBee@lemmy.world · 1 year ago
          > I don’t know of an LLM that works decently on personal hardware

          Ollama with ollama-webui. Models like solar:10.7b and mistral:7b work nicely on local hardware. solar:10.7b should work well on a card with 8 GB of VRAM.
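
          A back-of-envelope estimate of why a 10.7B-parameter model can fit in 8 GB, assuming the roughly 4-bit quantization Ollama ships by default (numbers are approximate, not measured):

          ```python
          # Rough VRAM estimate for a quantized model: weights plus some
          # overhead for the KV cache, activations, and runtime (assumed).
          params = 10.7e9        # SOLAR 10.7B parameter count
          bytes_per_param = 0.5  # ~4 bits per weight when quantized
          weights_gb = params * bytes_per_param / 1e9
          overhead_gb = 1.5      # assumed KV cache + runtime overhead

          print(f"weights: ~{weights_gb:.1f} GB, total: ~{weights_gb + overhead_gb:.1f} GB")
          # weights: ~5.3 GB, total: ~6.8 GB -> fits in 8 GB of VRAM
          ```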

          • ParetoOptimalDev · 1 year ago
            If you have really low specs, use the recently open-sourced Microsoft Phi model.
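
            A minimal transformers sketch, assuming the Phi-2 checkpoint published as microsoft/phi-2 (at ~2.7B parameters it’s small enough for modest GPUs, or CPU if you’re patient):

            ```python
            import torch
            from transformers import AutoModelForCausalLM, AutoTokenizer

            # microsoft/phi-2 is the ~2.7B-parameter Phi checkpoint on the Hub.
            tok = AutoTokenizer.from_pretrained("microsoft/phi-2")
            model = AutoModelForCausalLM.from_pretrained(
                "microsoft/phi-2",
                torch_dtype=torch.float16,
                device_map="auto",  # place layers on GPU/CPU automatically
            )

            inputs = tok("Write a haiku about small language models.",
                         return_tensors="pt").to(model.device)
            out = model.generate(**inputs, max_new_tokens=60)
            print(tok.decode(out[0], skip_special_tokens=True))
            ```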