Wondering about services to test on either a 16 GB RAM "AI capable" arm64 board or on a laptop with a modern RTX GPU. Only looking for open-source options, but curious to hear what people say. Cheers!

  • pezhore@infosec.pub

    I put my Plex media server to work running Ollama; it has a GPU for transcoding that's not awful for simple LLMs.
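
    For anyone wanting to try the same, here's a minimal sketch of running Ollama in a container with GPU access (assumes Docker Compose plus the NVIDIA Container Toolkit on the host; the volume name is just an example):

    ```yaml
    # docker-compose.yml - Ollama with NVIDIA GPU passthrough
    services:
      ollama:
        image: ollama/ollama
        ports:
          - "11434:11434"                # Ollama's default API port
        volumes:
          - ollama_models:/root/.ollama  # persist downloaded models
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: 1               # hand one GPU to the container
                  capabilities: [gpu]
    volumes:
      ollama_models:
    ```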

    • y0shi@lemm.ee

      That sounds like a great way of leveraging existing infrastructure! I host Plex together with other services on a server with an Intel CPU capable of hardware transcoding. I'm quite sure I'd get much better performance on the GPU machine, so I might end up following this path!
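
      If anyone wants to benchmark before migrating, a rough sketch for comparing the two boxes (the model name is just an example; any small model works):

      ```sh
      # Pull a small model and time a response; --verbose prints token/s stats
      ollama pull llama3.2
      ollama run llama3.2 --verbose "Write a haiku about transcoding."

      # Confirm where the model actually loaded (PROCESSOR column shows CPU vs GPU)
      ollama ps
      ```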