Me with four open CLI terminals right now:
https://i.kym-cdn.com/photos/images/original/001/617/650/91a.jpg
Got one more for you: https://gossip.ink/
I use it via a docker/podman container I’ve made for it: https://hub.docker.com/repository/docker/vluz/node-umi-gossip-run/general
Lovely! I’ll go read the code as soon as I have some coffee.
I do SDXL generation in 4GB of VRAM at the extreme expense of speed, using a number of memory optimizations.
I've done this kind of stuff since SD 1.4, for the fun of it. I like to see how low I can push VRAM use.
SDXL takes around 3 to 4 minutes per generation, including the refiner, but it works within those constraints.
The graphics cards used are hilariously bad for the task: a 1050 Ti with 4GB and a 1060 with 3GB of VRAM.
I have an implementation running on the 3GB card, inside a podman container, with no RAM offloading, 1 vCPU, and 4GB of RAM.
The graphical UI (Streamlit) runs on a laptop outside the server to save resources.
I'm working on an example implementation of SDXL as we speak, and also on SDXL generation on mobile.
That's the reason I looked into this news; SSD-1B might be a good candidate for my dumb experiments.
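For anyone curious, the optimizations are roughly of this kind in Hugging Face diffusers (a minimal sketch under assumed model ID, prompt, and flags, not the exact setup above):

```python
# Minimal sketch of a low-VRAM SDXL setup with Hugging Face diffusers.
# The checkpoint, prompt, and flag choices are illustrative assumptions,
# not the exact implementation described in the comment above.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed checkpoint
    torch_dtype=torch.float16,  # fp16 halves weight memory vs fp32
    variant="fp16",
    use_safetensors=True,
)
pipe.enable_attention_slicing()  # compute attention in slices to cap peak VRAM
pipe.enable_vae_slicing()        # decode one image of the batch at a time
pipe.enable_vae_tiling()         # decode large latents tile by tile
pipe.to("cuda")

# On very small cards, weights can instead be streamed in from system RAM
# (don't combine this with .to("cuda")); note the 3GB setup above claims
# to run *without* RAM offloading:
# pipe.enable_sequential_cpu_offload()

image = pipe("a lighthouse at dusk", num_inference_steps=30).images[0]
image.save("out.png")
```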
Oh my Gwyn, this comment section is just amazing.
Goddammit! Don’t give that one away; I use it to impress random people at parties.
HateLLM will be a smash. /s
If at all true this would be world-changing news.
Messing around with the system python/pip and newly installed versions until everything was broken, and only then looking at the documentation.
This was way back in the ’00s, and I’m still ashamed of how fast and how completely I messed it up.