• tormeh@discuss.tchncs.de · 10 hours ago

    Ollama is apparently going for lock-in and incompatibility. They’re forking llama.cpp for some reason, too. I’d use GPT4All or llama.cpp directly. Both support Vulkan, so your GPU will just work.
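
    If you go the llama.cpp route, here’s a rough sketch of a Vulkan build and run (flag names as of recent llama.cpp; the model path is a placeholder, check the repo’s build docs for your setup):

    ```
    # Build llama.cpp with the Vulkan backend (needs Vulkan SDK/drivers installed)
    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release

    # Run a GGUF model, offloading all layers to the GPU (-ngl 99)
    ./build/bin/llama-cli -m /path/to/model.gguf -p "Hello" -ngl 99
    ```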