Dreambooth Stable Diffusion training in just 12.5 GB VRAM, using the 8bit adam optimizer from bitsandbytes along with xformers while being 2 times faster.
- By - 0x00groot
Imagic Stable Diffusion training in 11 GB VRAM with diffusers and colab link.
- By - 0x00groot
Now possible to use Dreambooth Colab Models in AUTOMATIC1111's Web UI!
- By - Pfaeff
Made a Hugging Face Dreambooth model to .ckpt conversion script that needs testing
- By - ratwithashotgun
[D] DreamBooth Stable Diffusion training in 10 GB VRAM, using xformers, 8bit adam, gradient checkpointing and caching latents.
- By - 0x00groot
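A launch command matching the flags discussed in the title might look like the following. This is a sketch based on the diffusers DreamBooth example script; the model id, paths, and prompt are placeholders, and exact flag names can differ between versions.

```shell
# Sketch of a DreamBooth launch with the memory-saving options from the title:
# 8-bit Adam (bitsandbytes), gradient checkpointing, fp16 mixed precision.
# Paths, prompt, and step count are placeholders, not the author's settings.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --instance_data_dir="./instance_images" \
  --output_dir="./dreambooth_out" \
  --instance_prompt="a photo of sks person" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --use_8bit_adam \
  --mixed_precision="fp16" \
  --max_train_steps=800
```

Gradient checkpointing trades compute for memory by recomputing activations during the backward pass, which is a large part of how the VRAM figure drops below what a naive run needs.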
DreamBooth Stable Diffusion working on Google Colab Free Tier, Tested on Tesla T4 16GB GPU.
- By - 0x00groot
Is that crystal clear water?
It was pretty clear.
I'm planning to buy a Helios 300, what is your long-term review of the Helios? Is it great? No issues?
Great performance, nice display. Bit bulky. Battery life not that good.
Alright thanks bro, I think I've made up my mind, I'll go with this. How about the thermals, are they good?
Yeah, thermals are pretty good. No throttling.
Replace the battery ?
Did you zoom in at all?
No. Just direct
I went to use this collab today but it was not working.
Checking
Thanks bro, it works, but I can't find the model file to download.
I have updated the colab, now you can convert to ckpt.
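The conversion step is presumably a call to the diffusers-to-original-SD script that ships in the diffusers repo's scripts directory; a hedged sketch, with placeholder paths:

```shell
# Convert a diffusers-format DreamBooth output folder to a single .ckpt file.
# Script name and flags follow the diffusers repo; paths are placeholders.
python convert_diffusers_to_original_stable_diffusion.py \
  --model_path ./dreambooth_out \
  --checkpoint_path ./model.ckpt \
  --half   # optional: save weights in fp16 to roughly halve the file size
```

The resulting .ckpt is what other SD front-ends expect, which is why this step unblocks use in AUTOMATIC1111's Web UI.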
Does it train each subject separately and then merge?
No, they are trained together
It seems there is some new issue. When launching the training, this error happens:
You need to accept the license for SD 1.5.
Hi
This was fixed 15 hours ago; it happened due to an accelerate package update. Can you try again? Still facing the issue?
Does this use model v1.5 or is it still running on v1.4?
You can specify what to use with MODEL_NAME variable.
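In the colab this is presumably just a variable holding a Hugging Face model id; the ids below are the public v1.4 and v1.5 repos, and the exact variable usage is an assumption:

```shell
# Pick the base model for training; either works once you've
# accepted the model's license on Hugging Face.
MODEL_NAME="CompVis/stable-diffusion-v1-4"
# MODEL_NAME="runwayml/stable-diffusion-v1-5"
```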
I decided to copy paste the model into automatic1111 anyway. I made one based on a photo of Atul from spiritfarer with a loose description of him as "uncle frog spirit person" and it's actually the single best cartoon generator I've ever worked with. I've spent dozens of hours trying to make these things and this paper beat all of them on accident. What a time to be alive!
Oh wow. That's really interesting. I'll have to look into it.
Does it result in a ckpt file?
Yes but the embeddings aren't directly usable in automatic.
does this work with automatic?
No, currently won't work directly with Automatic.
I see that the library has been updated to now cache latents automatically unless disabled. Nevertheless, with a Tesla T4, I'm seeing 15GB RAM with 512x512, caching latents, fp16, train_batch_size=1, gradient_accumulation_steps=2, gradient_checkpointing=TRUE, and use_8bit_adam=TRUE. I would have expected 11.56 based on your chart, curious where the extra 3.5G of usage is coming from.
Strange, is any inference pipeline loaded into memory?
Does this colab actually use the class folder? I don't see it being created in the code but it's used as a parameter.
It does, with prior preservation loss
Would there be any difference between the ckpt file and what was used by diffusers results wise or is that offering the same kind of quality?
Same
Hi, now it works!
Try adding words like young woman, girl, female to prompt. It will push it towards your desired output.
THANK YOU SIR, I will try that.
That would be too much. Will likely hurt the training
OSError: Error no file named diffusion_pytorch_model.bin found in directory ./scampuss\unet.
I have updated mine with a version that came out just before it and it's working.
Can we convert the model made from a previous session with this colab ?
Yes. Just add that path to OUTPUT_DIR variable
Does "8 bit adam optimizer" produce a checkpoint that is compatible to other SD repos?
Not yet. Need a converter from diffusers to original SD
Is there a script that does that?
Not yet
[deleted]
OUTPUT_DIR
I got these errors:
Try setting the huggingface token and execute it again
Maybe, but it looks like this repo is using precompiled versions of xformers for each GPU type on colab. This might just be to save time though as the colab from
I have also added precompiled wheels for colab later.
lol what
https://www.reddit.com/r/MachineLearning/comments/xphdks/d_dreambooth_stable_diffusion_training_in_just/?utm_medium=android_app&utm_source=share
Metaphors aside, do you think we'll get there this year?
https://www.reddit.com/r/StableDiffusion/comments/xphaiw/dreambooth_stable_diffusion_training_in_just_125/
From Joe Penna's comment, seems like it breaks things
It's likely the difference between diffuser version from XavierXiao's repo. Will work on getting that close next.
How long will my 3090 ti take to train a model?
Should be done in 20 minutes.
damn, nice! Now i just need a notebook for that :P
https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb
Well now all I need is a 24GB GPU
Even less now, just 12.5 GB VRAM
Basically what DreamBooth is doing right now, but with similar hardware requirements to those of the basic version. I'm sure that we'll get that at some point, but being able to have consistent elements throughout different images would be a huge improvement. I wouldn't even care if training took 8 hours as long as I don't have to rent a 32 GB VRAM GPU.
It's possible in 18GB VRAM now, and it's even faster.
Yes, it was. I’ve already submitted my findings and results. Just curious to learn more about other possible approaches (I tried SRGAN and NOISE2NOISE).
Oh cool. I also created the same assignment with similar examples to make a PoC 2-3 days back. Maybe it's the same haha, if so I'm looking forward to seeing your findings.
Y'all sound like you work for Carvana :D
Carvana is one of our clients though :p
Witcher 3
Lots of witcher 3 fans I see in this sub
It in general just has a lot of fans.
We use triton inference server with dynamic batching.
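Dynamic batching in Triton is enabled per-model in its config.pbtxt; a minimal sketch, where the preferred batch sizes and queue delay are illustrative values, not the poster's actual settings:

```
# config.pbtxt (fragment) — enable server-side dynamic batching
dynamic_batching {
  preferred_batch_size: [ 4, 8 ]
  max_queue_delay_microseconds: 100
}
```

With this, Triton groups individual inference requests into larger batches on the server side, trading a small queuing delay for much better GPU throughput.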
Negative 45k.
but are you able to use that laptop for gaming?
Yup, I made them buy it for me and others and I have full control. Also bought a 3090 for office for our AI training, so can play on that on weekends or when free.
Have you looked at Triton by OpenAI? May be helpful.