I’ve been looking into self-hosting LLMs or Stable Diffusion models using something like LocalAI and/or Ollama, with LibreChat as a frontend.
Some questions to get a nice discussion going:
- Any of you have experience with this?
- What are your motivations?
- What are you using in terms of hardware?
- Any considerations regarding energy efficiency and the associated costs?
- What about renting a GPU instead? Are there privacy implications?
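For anyone who hasn’t tried it yet, here’s a minimal sketch of what talking to a self-hosted model looks like once Ollama is running locally. This assumes the default Ollama port (11434) and a model named "llama3"; swap in whatever you’ve actually pulled on your machine.

```python
# Minimal sketch: query a locally running Ollama server via its HTTP API.
# Assumes `ollama serve` is running on the default port and that the model
# named below has already been pulled (e.g. with `ollama pull llama3`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "llama3",  # assumption: replace with any model you've pulled
    "prompt": "Why self-host an LLM?",
    "stream": False,    # get a single JSON response instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["response"])  # the generated completion text
```

Frontends like LibreChat basically sit on top of an API like this, so once the above works you’re most of the way there.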
Thanks for the post; I super appreciate seeing posts from other communities shared here. I think this is a great way to grow Lemmy and create discoverability for niche communities, and I’ll keep that in mind myself for future opportunities.