Just figured out there are 10 places called Lisbon dotted around the US, according to the search.
Me with four open CLI terminals right now:
https://i.kym-cdn.com/photos/images/original/001/617/650/91a.jpg
Got one more for you: https://gossip.ink/
I use it via a docker/podman container I’ve made for it: https://hub.docker.com/repository/docker/vluz/node-umi-gossip-run/general
I got cancelled too and chose Hetzner instead. Will not do business with a company that can’t get their filters working decently.
Not close enough for V.A.T.S.
Lovely! I’ll go read the code as soon as I have some coffee.
I do SDXL generation in 4GB at extreme expense of speed, by using a number of memory optimizations.
I’ve done this kind of stuff since SD 1.4, for the fun of it. I like to see how low I can push vram use.
SDXL takes around 3 to 4 minutes per generation including refiner but it works within constraints.
Graphics cards used are hilariously bad for the task, a 1050ti with 4GB and a 1060 with 3GB vram.
Have an implementation running on the 3GB card, inside a podman container, with no ram offloading, 1 vcpu and 4GB ram.
Graphical UI (Streamlit) runs on a laptop outside the server to save resources.
Working on an example implementation of SDXL as we speak, and also on SDXL generation on mobile.
That is the reason I’ve looked into this news, SSD-1B might be a good candidate for my dumb experiments.
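For context, the kind of optimizations I mean are the standard diffusers knobs, roughly like the minimal sketch below. This is not my exact setup (the model ID and prompt are placeholders, and fitting into 3 to 4GB takes more aggressive tricks on top of these), just an illustration of the speed-for-VRAM trade:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder model ID, illustrative only.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.to("cuda")

# Trade speed for VRAM: slice attention and VAE work into smaller chunks.
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

image = pipe("a lighthouse on a cliff at sunset", num_inference_steps=30).images[0]
image.save("out.png")
```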
Oh my Gwyn, this comment section is just amazing.
Goddammit! Don’t tell that one, I use it to impress random people at parties.
HateLLM will be a smash. /s
That’s wonderful to know! Thank you again.
I’ll follow your instructions, this implementation is exactly what I was looking for.
Absolutely stellar write up. Thank you!
I have a couple of questions.
Imagine I have a powerful consumer GPU to throw at this solution, a 4090 Ti for the sake of example.
- How many containers can share one physical card, assuming total VRAM is not exceeded?
- What does one virtual GPU look like inside the container? Can I run standard stuff like PyTorch, TensorFlow, and CUDA code in general?
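To make the second question concrete, this is roughly the sanity check I would run inside one of the containers (plain PyTorch, nothing specific to this project) to see whether the shared GPU shows up as an ordinary CUDA device:

```python
import torch

# Does the virtual/shared GPU look like a normal CUDA device to PyTorch?
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("Device:", props.name)
    print("Total VRAM (GB):", round(props.total_memory / 1024**3, 2))
```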
Just pip install mscandy -U
If at all true this would be world-changing news.
I use this: https://cloudhiker.net/explore
DS1 to DS3, I lost count of the hours.
Messing around with the system Python/pip and newly installed versions until everything was broken, and only then looking at the documentation.
This was way back in the '00s and I'm still ashamed of how fast and how completely I messed it up.