• tal
    3 months ago

    and run local LLMs.

    Honestly, I think that for many people using a laptop or phone, doing LLM stuff remotely makes way more sense. It’s just too power-intensive to do a lot of that on battery. That doesn’t mean giving up control of the hardware: I keep a machine with a beefy GPU connected to the network and can use it remotely. And something like Stable Diffusion normally requires only pretty limited bandwidth to use remotely.

    If people really need to do a bunch of local LLM work, say they have a hefty source of power but lack connectivity, or they’re running some kind of software that needs to move a lot of data back and forth to the LLM hardware, I might consider lugging around a small headless LLM box with a beefy GPU alongside a laptop, plugging the LLM box into the laptop via Ethernet or whatnot, and doing the LLM stuff on the headless box. Laptops are just not a fantastic form factor for heavy crunching: they’ve got limited ability to dissipate heat and tight space constraints to work with.
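    The headless-box setup above can be sketched roughly as follows. This is a hypothetical example, not from the thread: it assumes the box runs an OpenAI-compatible LLM server (e.g. llama.cpp’s llama-server or Ollama), and the address, port, and model name are made up. Only a small JSON payload crosses the Ethernet link per request, which is why the bandwidth needs stay modest.

```python
import json

# Assumed address of the headless box on the direct Ethernet link
# (hypothetical; pick whatever your link-local setup gives you).
BOX_URL = "http://192.168.2.10:8080/v1/chat/completions"

def build_request(prompt: str, model: str = "local-model") -> bytes:
    """Build the JSON body for an OpenAI-compatible chat completion call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return json.dumps(payload).encode("utf-8")

# On the laptop you would POST this body to BOX_URL (e.g. with
# urllib.request); the heavy GPU crunching happens on the box, and
# only this small payload plus the generated text cross the wire.
body = build_request("Summarize this log file.")
print(len(body) < 1024)  # the request itself is tiny
```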

    • areyouevenreal@lemm.ee
      3 months ago

      Yeah, it is easier to do it on a desktop or over a network; that’s what I was trying to imply. Having an NPU can help, though. Regardless, I’d rather be using my own server than something like ChatGPT.