It’s been a while since I’ve updated my Stable Diffusion kit, and the technology moves so fast that I should probably catch up on what’s out there now.
Is most everyone still using AUTOMATIC1111’s interface? Any cool plugins people are playing with? Good models?
What’s the latest in video generation? I’ve seen a lot of animated images that maintain frame-to-frame consistency very well. Kling 1.6 is out there, but it doesn’t appear to be free or local.
There are some really good sd1.5-based models even by current standards, though. Nothing wrong with that.
It’s all about getting a good workflow set up. That’s why I wish I could make sense of ComfyUI, but alas it still eludes me.
The basic design is to create a small image of what you want, upscale that image, then run it through the model again to fill in the details (all in one workflow). You can also just copy someone else’s workflow and change the prompt lol. You can usually just drag and drop an image into ComfyUI if it was created with it; the image retains the whole workflow. I can’t stress enough how important the upscaling step is, it’s pretty amazing how much detail it creates.
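If it helps, here’s roughly that same small-image → upscale → img2img idea sketched outside ComfyUI with the diffusers library in Python. The model ID, resolutions, and strength value are just placeholders I picked for the sketch, not a tuned setup:

```python
# Rough sketch of the txt2img -> upscale -> img2img ("hires fix" style) flow.
# Model ID, sizes, and strength are placeholders, not a recommended config.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

prompt = "a cozy cabin in a snowy forest, highly detailed"

# 1. Generate a small base image to lock in the composition.
txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
base = txt2img(prompt, width=512, height=512).images[0]

# 2. Upscale it (plain resize here; a dedicated upscaler like ESRGAN does better).
upscaled = base.resize((1024, 1024))

# 3. Run the upscaled image back through the model, reusing the loaded weights,
#    so it repaints and fills in fine detail without changing the composition.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)
final = img2img(prompt, image=upscaled, strength=0.4).images[0]
final.save("hires_fix.png")
```

The strength value is the main knob in that last pass: too high and it redraws the whole image, too low and you don’t gain much detail.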
InvokeAI lets you use an A1111-style interface or a nodes-based workflow. Unfortunately, it isn’t compatible with ComfyUI workflows. I haven’t really done much with nodes, but I want to experiment more and figure it out.