Is there any computer program with AI capabilities (the generative kind seen in ChatGPT, online text-to-image generators, etc.) that is actually standalone, i.e. able to run in a fully offline environment?

As far as I understand, the most popular AI technology right now consists of a bunch of matrix algebra, convolutions, and parallel processing of many low-precision floating-point numbers, and it works because of statistics and absurdly huge datasets. So if any such program existed, how would it even have a reasonable storage size if it needs the dataset?

  • Ziggurat@sh.itjust.works · 55 points · 4 months ago

    There are tons of “standalone” programs that you can run on your own PC:

    • For text generation, the easiest way is to get the GPT4All package, which lets you run text-generation models on the CPU of your own PC (see the sketch at the end of this comment)

    • For image generation, you can try the Easy Diffusion package, which is an easy-to-use Stable Diffusion frontend; then, if you like it, it's time to try ComfyUI

    You can check !localllama@sh.itjust.works and !imageai@sh.itjust.works for some more information
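
    For a sense of how small the code side is, here's a minimal sketch using the GPT4All Python bindings that ship alongside the desktop app. The model filename is only an example (available names change over time); the file is downloaded once and then everything runs offline, CPU-only by default.

    ```python
    # Minimal local text generation with the gpt4all Python package.
    # The model file (a few GB) is downloaded on first run and cached;
    # after that, no network connection is needed.
    from gpt4all import GPT4All

    # Example model name; pick any model GPT4All offers for download.
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

    with model.chat_session():
        print(model.generate("Explain what 'running an LLM locally' means.",
                             max_tokens=200))
    ```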

    • deranger@sh.itjust.works · 2 points · 4 months ago

      I've wanted to try these out for shits and giggles. What should I expect with a 3090? Is it going to take a long time to make some shitposts?

      • chicken@lemmy.dbzer0.com · 8 points · 4 months ago

        3090s are ideal because the most important factor is VRAM, and they sit at the top of the consumer VRAM plateau until you get into absurdly expensive server hardware. Expect around 3 seconds to generate a 512x512 image, or about 4 words per second of generated text at roughly GPT-3.5 quality.

      • tyler@programming.dev · 3 points · 4 months ago

        I did a bunch of image generation on my 3080 and it felt extremely fast. Fast enough that I was able to set it up as a shared node in one of those distributed image-generation networks, and it outperformed most of the other nodes.

      • Ziggurat@sh.itjust.works · 4 points · 4 months ago

        With SD 1.5 my old GTX 970 was doing fine (30 seconds per image). I upgraded to a Radeon 7060, and with SDXL I get about 4 images in those 30 seconds (though it sometimes crashes my PC when loading a model).

  • KoboldCoterie@pawb.social · 20 points · 4 months ago

    Stable Diffusion (AI image generation) runs fully locally. The models (which are trained on the datasets you're referring to; you don't need the dataset itself at run time) are generally around 3 GB in size. It's more about the processing power needed to run it (it's very GPU-intensive) than the storage size on disk.
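
    To make that concrete, here's a rough sketch of running Stable Diffusion locally with the Hugging Face diffusers library (one of several ways to do it; the UIs mentioned elsewhere in this thread wrap the same idea). The model id is just an example, and a CUDA GPU with a few GB of free VRAM is assumed; the weights are downloaded once and cached, after which it runs fully offline.

    ```python
    # Local text-to-image with Hugging Face diffusers.
    import torch
    from diffusers import StableDiffusionPipeline

    # Example checkpoint; any Stable Diffusion 1.5-class model works here.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        "a watercolor painting of a lighthouse at dusk",
        num_inference_steps=25,
    ).images[0]
    image.save("lighthouse.png")
    ```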

  • Wahots@pawb.social · 13 points · 4 months ago

    https://lmstudio.ai/

    You can load up your own models; it has some of its own, too. Most of these are pretty good, but run on synthetic data. Storing and processing something the size of ChatGPT would bankrupt most people.

    This program can use significant amounts of computer resources if you let her eat. I recommend closing other programs and games.
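
    LM Studio can also expose the loaded model through a local OpenAI-compatible server (started from within the app). A minimal sketch, assuming the server is running on its usual port 1234:

    ```python
    # Chatting with LM Studio's local server via the OpenAI Python client.
    # Nothing leaves your machine; the API key is ignored by LM Studio but
    # required by the client, so any placeholder string works.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

    reply = client.chat.completions.create(
        model="local-model",  # LM Studio answers with whichever model is loaded
        messages=[{"role": "user", "content": "Explain VRAM in one sentence."}],
    )
    print(reply.choices[0].message.content)
    ```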

  • astrsk@kbin.run · 10 points · 4 months ago

    GPT4All for chat and Automatic1111 for image generation with downloaded models work great. The former does not require a GPU, but the latter generally does.

  • ichbinjasokreativ@lemmy.world · 8 points · 4 months ago

    Stable Diffusion and Ollama for image and text generation locally. Both are super easy to set up on Linux and support GPU acceleration out of the box.
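
    For reference, a minimal sketch of talking to a local Ollama server from Python (assuming you've already done something like `ollama pull llama3` and the server is running):

    ```python
    # Uses the official ollama Python client against the local server.
    import ollama

    response = ollama.chat(
        model="llama3",  # any model you've pulled locally
        messages=[{"role": "user", "content": "Summarize what VRAM is used for."}],
    )
    print(response["message"]["content"])
    ```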

  • CaptDust@sh.itjust.works · 8 points · 4 months ago · edited

    Local LLMs can be compressed to fit on consumer hardware. Model formats like GGUF and EXL2 can be loaded with an offline, locally hosted API such as KoboldCpp or Oobabooga. These formats lose precision compared to the full floating-point model and become “dumber”, but they're good enough for many uses.

    Also note that these local models are around 7, 11, or 20 billion parameters, while hosted models like ChatGPT are rumoured to run closer to 8x220 billion.
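
    As a rough illustration (not what KoboldCpp or Oobabooga look like, but they build on the same llama.cpp backend), here is how a quantized GGUF file can be loaded with the llama-cpp-python bindings; the model path is a placeholder for whatever you've downloaded:

    ```python
    # Loading a 4-bit quantized GGUF model with llama-cpp-python.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
        n_ctx=4096,       # context window in tokens
        n_gpu_layers=-1,  # offload all layers to the GPU; 0 = CPU-only
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What does quantization trade away?"}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])
    ```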

    • FaceDeer@fedia.io · 4 points · 4 months ago

      Though bear in mind that parameter count alone is not the only measure of a model's quality. There's been a lot of work done over the past year or two on getting better results from the same or smaller parameter counts, and lots of discoveries have been made about how to train and run inference more efficiently. The old GPT-3 from back at the dawn of all this was really big and was trained on a huge number of tokens, but nowadays the small downloadable models fine-tuned by hobbyists compete with it handily.

        • FaceDeer@fedia.io · 4 points · 4 months ago

          Makes it all the more amusing how OpenAI staff were fretting about how GPT-2 was “too dangerous to release” back in the day. Nowadays that class of LLM is a mere toy.

  • Rhaedas@fedia.io · 8 points · 4 months ago

    The language, image, and audio models that can run on a typical PC have all been broken down from originally larger models. How this is done affects what the models can do and their quality, but the open-source community has come a long way in making impressive stuff. The first question is about hardware: do you have an Nvidia GPU that can support these kinds of generation? They can all be done on the CPU alone, but it's painfully slower.

    If so, then I would highly recommend looking into Ollama for running language models (using WSL if you're on Windows) and ComfyUI for image generation. Don't let ComfyUI's complicated workflows scare you; start from the basics, and with plenty of YouTube help out there it will make sense. As for TTS, there's a constant stream of new stuff out there, but for actual local processing in near real time (it still takes a bit) I have yet to find anything to replace my copy of Coqui TTS with Jenny as the model voice. It may take some digging and work to get that set up, since it's older and no longer supported.
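
    For anyone curious, Coqui TTS is driven from a few lines of Python; a minimal sketch (the Jenny model name here is from memory and may differ, and `TTS().list_models()` shows what's actually available):

    ```python
    # Local text-to-speech with Coqui TTS. The model downloads once,
    # then synthesis runs entirely offline.
    from TTS.api import TTS

    tts = TTS(model_name="tts_models/en/jenny/jenny")  # name may vary
    tts.tts_to_file(
        text="All of this runs on my own machine.",
        file_path="jenny_demo.wav",
    )
    ```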

    • hendrik@palaver.p3x.de · 2 points · 4 months ago · edited

      I don't think they break them down. For most models, the math requires you to start at the beginning and train each model individually from the ground up.

      But sure, a smaller model generally isn't as capable as a bigger one. And you can't train them indefinitely. So for a model series you'll maybe use the same dataset, but feed more of it into the super big variant and not as much into the tiny one.

      And there is a technique (distillation) where you use a big model to generate questions and answers and use them to train a different, smaller model. That model will then learn to respond like the big one.

      • Rhaedas@fedia.io · 4 points · 4 months ago

        The breaking down I mentioned is the quantization that forms a smaller model from the larger one. I didn’t want to get technical because I don’t understand the math details myself past how to use them. :)

        • hendrik@palaver.p3x.de · 1 point · 4 months ago · edited

          Ah, sure. I think a good way to phrase it is to say they lower the precision. That's basically what they do: convert the high-precision numbers to lower-precision formats. That makes the computations easier/faster and the files smaller.

          And it doesn't really apply to the audio and image models in the same way. As far as I know, quantization is mainly used with LLMs. It's also possible with image and audio models, but people generally don't do that; as far as I remember, it leads to degradation and distortions pretty quickly. There are other methods, like pruning, that are used with generative image models, and those bring their size down substantially.
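
          A toy illustration of that precision trade-off (real schemes like GGUF's block-wise quantization are more sophisticated, but the idea is the same): round the float weights to a small integer grid plus a scale factor, and the file shrinks roughly in proportion to the bit width.

          ```python
          # Toy symmetric 4-bit quantization of a weight matrix with NumPy.
          import numpy as np

          weights = np.random.randn(4096, 4096).astype(np.float32)  # ~64 MiB in fp32

          scale = np.abs(weights).max() / 7                  # int4 values span -8..7
          q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
          dequant = q.astype(np.float32) * scale             # what inference would use

          print("fp32 size:", weights.nbytes / 2**20, "MiB")
          print("4-bit size:", q.size * 0.5 / 2**20, "MiB (two values packed per byte)")
          print("mean abs rounding error:", float(np.abs(weights - dequant).mean()))
          ```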

  • Björn Tantau@swg-empire.de · 5 points · 4 months ago

    Krita has an AI plugin that's pretty painless to set up if you've got an Nvidia card. AMD has to be set up manually, or you can fall back to slow CPU generation. It uses ComfyUI in the background.

    • iturnedintoanewt@lemm.ee · 2 points · 4 months ago

      Just wanted to thank you, as I hadn't had any luck running any other SD software on my AMD setup with Nobara. But after a couple of fixes to get ROCm running, this one runs, and runs pretty fast. Thanks!

  • Grimy@lemmy.world · 3 points · 4 months ago

    You need a GPU for any kind of performance.

    For text I suggest:

    • Ollama (backend) - command-line interface, and it's very easy to download models with a single command. It supports most models, and you can talk with the model right in the terminal, so it works standalone.

    • OpenWebUI - easy install with Docker, and it's built to work with Ollama. It comes with web search and PDF upload features, and a bunch of different community tools and modules are available.

    For images I suggest either:

    • Automatic1111 - traditional UI built on Gradio. Lots of extras you can download through the UI to do different things.

    • ComfyUI - node-based UI; a bit more complicated, but more powerful than Automatic1111.

    For models, you can go on Civitai and just download whatever you need, then drop the files into the respective model folders for both Automatic1111 and ComfyUI.

    For text, there's also LM Studio, which is very user-friendly, though it's closed source and much slower than Ollama in my experience. I have a 4060 in my laptop (8 GB VRAM) and I'm getting an image about every 2 seconds with Stable Diffusion 1.5 models, and text speed is on par with ChatGPT using the smaller 8B-9B models. For text I suggest Gemma 2, which is probably the best small model out right now.
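
    One nice thing about Automatic1111 is that it also exposes a small HTTP API if you start it with the `--api` flag, so you can script it. A rough sketch (endpoint and fields from memory, so double-check against your version):

    ```python
    # Generating an image through a locally running Automatic1111 instance.
    import base64
    import requests

    payload = {
        "prompt": "a cozy cabin in a snowy forest, oil painting",
        "steps": 25,
        "width": 512,
        "height": 512,
    }
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
                      json=payload, timeout=300)
    r.raise_for_status()

    # Images come back as base64-encoded PNGs.
    with open("cabin.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))
    ```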

  • Sabata@ani.social · 2 points · 4 months ago

    If you have a good GPU, you should be able to run a model without issue. The big ones are technically usable with tweaking, but slow enough to be useless on normal hardware. A small model may be 4-8 GB, while a larger one could be 100+ GB. You don't need the training data (if it's even public) to run them; it's only needed if you're building or retraining the model. There's a crap ton of different software to run AI with.

    To get started, assuming you've got a beefy PC, you need a model and software to interact with it. I started with Mistral 7B and text-generation-webui and have been trying out different software and models since. Text-generation-webui has the basics to load and chat with a model and is a good starting point.

    Model: https://mistral.ai/technology/#models
    Software: https://github.com/oobabooga/text-generation-webui

    For images, you can choose models based on what the sample images look like; they tend to be specialized for certain styles or content. You can add LoRAs to further change how the output looks (think specific characters or poses); see the sketch at the end of this comment. It's very much trial and error getting good images.

    Models: https://civitai.com/ (potentially NSFW)
    Software: https://github.com/vladmandic/automatic

    There are more models and software out there than I can keep track of, so if something is crap you should be able to find an alternative. YouTube guides are your friend.
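
    To make the LoRA idea above concrete, here's a rough sketch using the diffusers library; the checkpoint and LoRA filenames are placeholders for files downloaded from Civitai, and the UIs mentioned above do the equivalent behind the scenes:

    ```python
    # Applying a LoRA on top of a base Stable Diffusion checkpoint.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file(
        "./models/some_base_checkpoint.safetensors",  # placeholder filename
        torch_dtype=torch.float16,
    ).to("cuda")

    pipe.load_lora_weights("./loras/some_style_lora.safetensors")  # placeholder

    image = pipe(
        "portrait in the style the LoRA was trained on",
        num_inference_steps=25,
    ).images[0]
    image.save("lora_test.png")
    ```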

  • Starbuck@lemmy.world · 2 points · 4 months ago

    If you are into development, the setup I use is Ollama running codegemma:7b along with the Continue.dev plugin for VS Code.

  • JackGreenEarth@lemm.ee · 2 points · 4 months ago

    I use Krita with the AI Diffusion plugin for image generation, which is working great, and Jan for text generation, using the Llama 3 8B Q4 model. I have an NVIDIA GTX 1660 Ti with 6 GB of VRAM and both are reasonably fast.

  • NaN@lemmy.world · 1 point · 4 months ago · edited

    For LLMs, I've had really good results running Llama 3 in the Open WebUI Docker container on an Nvidia Titan X (12 GB VRAM).

    For image generation though, I agree more VRAM is better, but the models still struggle with large image dimensions, so you wind up needing to start small and iteratively upscale, which AFAIK works OK on weaker GPUs but can have problems. (I've been using the Automatic1111 mode of the Stable Diffusion Web UI Docker project.)

    I'm typing on my phone so I don't have the links to the Git repos at the moment, but you basically clone them and run the Docker Compose files. The READMEs are pretty good!
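
    If it helps, the start-small-then-upscale loop looks roughly like this with the diffusers library (a sketch, not the exact Docker project above; the model id and strength value are just examples):

    ```python
    # Generate at 512x512, then refine an upscaled copy with img2img.
    import torch
    from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

    base = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a detailed map of a fantasy city"
    small = base(prompt, num_inference_steps=25).images[0]

    # Reuse the same weights for the img2img pass, so nothing extra is loaded.
    img2img = StableDiffusionImg2ImgPipeline(**base.components).to("cuda")
    big = img2img(
        prompt=prompt,
        image=small.resize((1024, 1024)),
        strength=0.35,  # low strength keeps the composition, adds detail
        num_inference_steps=25,
    ).images[0]
    big.save("city_1024.png")
    ```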

  • sunzu@kbin.run · 3 points · 4 months ago

    Do you have a 24 GB GPU?

    If so… then you can get decent results from running local models.

    • FaceDeer@fedia.io · 5 points · 4 months ago

      You can get decent results with much less than that these days, actually. I don't have personal experience (I do have a 24 GB GPU), but the open-source community has put a lot of work into getting models to run on lower-spec machines. Aim for smaller models (8B parameters is common) and aggressive quantization (the parameter values get squished into a smaller number of bits). It's slower and the results can be of noticeably lower quality, but I've seen people talk about usable LLMs running CPU-only.
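
      The back-of-the-envelope math behind that advice (ignoring the extra memory needed for context and activations, so treat these as lower bounds):

      ```python
      # Approximate model size: parameters x bits per parameter / 8.
      def model_gb(params_billions: float, bits_per_param: int) -> float:
          return params_billions * 1e9 * bits_per_param / 8 / 1e9

      for bits in (16, 8, 4):
          print(f"8B parameters at {bits}-bit ~= {model_gb(8, bits):.0f} GB")
      # 16-bit needs ~16 GB (a 24 GB card), 4-bit ~4 GB (fits an 8 GB card, or RAM)
      ```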