  1. I'm planning to buy a Helios 300. What is your long-term review of it? Is it great? No issues?

  2. Great performance, nice display. Bit bulky. Battery life not that good.

  3. Alright, thanks bro, I think I've made up my mind, I'll go with this. How about the thermals, are they good?

  4. Yeah, thermals are pretty good. No throttling.

  5. I went to use this Colab today but it was not working.

  6. I have updated the Colab; now you can convert to ckpt.

  7. It seems there is some new issue. When launching the training, this error happens:

  8. You need to accept the license for SD 1.5.

  9. This was fixed 15 hours ago; it happened due to an accelerate package update. Can you try again? Are you still facing the issue?

  10. You can specify which model to use with the MODEL_NAME variable.
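(For context, in the notebook this is just a string variable pointing at a Hugging Face model ID or a local diffusers directory; a minimal sketch, where the model ID shown is only an example value:)

```python
# Base model to fine-tune: a Hugging Face model ID or a local
# diffusers directory. The ID below is an example value.
MODEL_NAME = "runwayml/stable-diffusion-v1-5"
```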

  11. I decided to copy-paste the model into Automatic1111 anyway. I made one based on a photo of Atul from Spiritfarer with a loose description of him as "uncle frog spirit person", and it's actually the single best cartoon generator I've ever worked with. I've spent dozens of hours trying to make these things, and this paper beat all of them by accident. What a time to be alive!

  12. Oh wow. That's really interesting. I'll have to look into it.

  13. Does it result in a ckpt file?

  14. Yes, but the embeddings aren't directly usable in Automatic.

  15. No, currently it won't work directly with Automatic.

  16. I see that the library has been updated to now cache latents automatically unless disabled. Nevertheless, with a Tesla T4, I'm seeing 15 GB of GPU memory used with 512x512, caching latents, fp16, train_batch_size=1, gradient_accumulation_steps=2, gradient_checkpointing=True, and use_8bit_adam=True. I would have expected 11.56 GB based on your chart; curious where the extra 3.5 GB of usage is coming from.
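(For reference, a hedged sketch of how the memory-saving knobs mentioned above look in code; the model ID is an example, and mixed precision itself is handled by accelerate in the actual script:)

```python
import bitsandbytes as bnb
from diffusers import UNet2DConditionModel

# Load the UNet to be fine-tuned (example model ID).
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
# gradient_checkpointing=True trades compute for activation memory.
unet.enable_gradient_checkpointing()

# use_8bit_adam=True maps to bitsandbytes' 8-bit Adam optimizer,
# which stores optimizer state in 8 bits instead of 32.
optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=5e-6)
```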

  17. Strange, is any inference pipeline loaded into memory?
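(One way to check this in the notebook; `pipe` is a hypothetical name for any StableDiffusionPipeline created in an earlier cell:)

```python
import gc
import torch

# If a pipeline was created earlier in the session, drop it and
# reclaim the GPU memory before measuring what training alone uses.
del pipe  # hypothetical variable from an earlier inference cell
gc.collect()
torch.cuda.empty_cache()
print(f"{torch.cuda.memory_allocated() / 1024**3:.2f} GiB still allocated")
```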

  18. Does this Colab actually use the class folder? I don't see it being created in the code, but it's used as a parameter.
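(For what it's worth, in the diffusers DreamBooth example the class folder only matters when prior preservation is on; roughly what the script does internally, with placeholder paths and prompt:)

```python
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

# If class_data_dir holds fewer than num_class_images files, the
# training script generates the missing ones from class_prompt.
class_dir = Path("class_images")  # placeholder path
class_dir.mkdir(exist_ok=True)
num_class_images, class_prompt = 200, "photo of a person"  # examples

missing = num_class_images - len(list(class_dir.iterdir()))
if missing > 0:
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    for i in range(missing):
        pipe(class_prompt).images[0].save(class_dir / f"{i}.jpg")
```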

  19. Would there be any difference, results-wise, between the ckpt file and what was used by diffusers, or does it offer the same kind of quality?

  20. Try adding words like "young woman", "girl", or "female" to the prompt. It will push it towards your desired output.

  21. That would be too much. It will likely hurt the training.

  22. OSError: Error no file named diffusion_pytorch_model.bin found in directory ./scampuss\unet.
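(The mixed slashes in `./scampuss\unet` hint at a hand-built Windows path; a small hypothetical check, with the directory name taken from the error above:)

```python
import os

# Build the path with os.path.join so the separators stay consistent,
# then confirm the diffusers layout actually contains the UNet weights.
unet_dir = os.path.join(".", "scampuss", "unet")
if os.path.isdir(unet_dir):
    print(os.listdir(unet_dir))  # should include diffusion_pytorch_model.bin
else:
    print("unet directory not found at", unet_dir)
```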

  23. I have updated mine with a version that came out just before it and it's working.

  24. Can we convert a model made in a previous session with this Colab?

  25. Yes. Just add that path to the OUTPUT_DIR variable.
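(I.e., point the conversion cell at the weights saved by the earlier session; the variable name is the notebook's, the path below is just a placeholder:)

```python
# Diffusers weights saved by a previous session, e.g. on Google Drive.
OUTPUT_DIR = "/content/drive/MyDrive/stable_diffusion_weights/my_model"  # placeholder
```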

  26. Does "8 bit adam optimizer" produce a checkpoint that is compatible to other SD repos?

  27. Not yet. We need a converter from diffusers to the original SD format.
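(Such a converter was later added, per message 6 above; a hedged usage sketch, with the script name as in the diffusers repo's scripts folder and placeholder paths:)

```python
import subprocess

# Convert a trained diffusers model directory into a single .ckpt file.
subprocess.run([
    "python", "scripts/convert_diffusers_to_original_stable_diffusion.py",
    "--model_path", "/content/my_dreambooth_model",        # placeholder
    "--checkpoint_path", "/content/my_dreambooth_model/model.ckpt",
    "--half",  # save fp16 weights to keep the ckpt small
], check=True)
```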

  28. Try setting the Hugging Face token and executing it again.
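(In a notebook this is typically done with the standard huggingface_hub call, not anything specific to this Colab:)

```python
from huggingface_hub import notebook_login

# Prompts for a Hugging Face access token; needed to download gated
# models such as SD 1.5 after accepting the license on the model page.
notebook_login()
```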

  29. Maybe, but it looks like this repo is using precompiled versions of xformers for each GPU type on Colab. This might just be to save time, though, as the colab from

  30. I have since added precompiled wheels for Colab as well.
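(A hedged sketch of how such per-GPU wheel selection can look in a Colab cell; the GPU names and wheel URLs are purely illustrative, not the notebook's actual list:)

```python
import subprocess

# Detect the Colab GPU, then install a wheel prebuilt for it instead
# of compiling xformers from source, which can take a long time.
gpu = subprocess.run(
    ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
    capture_output=True, text=True,
).stdout.strip()

wheel_for_gpu = {  # illustrative mapping, not the notebook's real URLs
    "Tesla T4": "https://example.com/wheels/xformers-t4.whl",
    "Tesla P100-PCIE-16GB": "https://example.com/wheels/xformers-p100.whl",
}
if gpu in wheel_for_gpu:
    subprocess.run(["pip", "install", wheel_for_gpu[gpu]], check=True)
else:
    print(f"No prebuilt wheel for {gpu}; falling back to a source build.")
```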

  31. https://www.reddit.com/r/MachineLearning/comments/xphdks/d_dreambooth_stable_diffusion_training_in_just/

  32. Metaphors aside, do you think we'll get there this year?

  33. https://www.reddit.com/r/StableDiffusion/comments/xphaiw/dreambooth_stable_diffusion_training_in_just_125/

  34. It's likely the difference between the diffusers version and XavierXiao's repo. Will work on getting them close next.

  35. How long will my 3090 ti take to train a model?

  36. Damn, nice! Now I just need a notebook for that :P

  37. https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb

  38. Basically what DreamBooth is doing right now, but with hardware requirements similar to those of the basic version. I'm sure that we'll get that at some point, but being able to have consistent elements throughout different images would be a huge improvement. I wouldn't even care if training took 8 hours, as long as I don't have to rent a 32 GB VRAM GPU.

  39. It's possible in 18 GB of VRAM now, and it's even faster.

  40. Yes, it was. I’ve already submitted my findings and results. Just curious to learn more about other possible approaches (I tried SRGAN and Noise2Noise).

  41. Oh cool. I also created the same assignment with similar examples to make a PoC 2-3 days back. Maybe it's the same, haha; if so, I'm looking forward to seeing your findings.

  42. Y'all sound like you work for Carvana :D

  43. Carvana is one of our clients though :p

  44. Lots of Witcher 3 fans I see in this sub.

  45. It just has a lot of fans in general.

  46. We use Triton Inference Server with dynamic batching.
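(Dynamic batching in Triton Inference Server is enabled per model in its `config.pbtxt`; a minimal hedged sketch, where the model name, platform, and sizes are examples rather than anyone's actual deployment:)

```
name: "my_model"              # example model name
platform: "onnxruntime_onnx" # example backend
max_batch_size: 32
dynamic_batching {
  preferred_batch_size: [ 8, 16 ]
  max_queue_delay_microseconds: 500  # wait briefly to form larger batches
}
```

The server then groups concurrent requests into batches up to the preferred sizes, trading a small queueing delay for much higher GPU throughput.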

  47. But are you able to use that laptop for gaming?

  48. Yup, I made them buy it for me and others, and I have full control. We also bought a 3090 for the office for our AI training, so I can play on that on weekends or when free.

  49. Have you looked at Triton by OpenAI? May be helpful.
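(For context, OpenAI's Triton is a different thing from Triton Inference Server: a Python DSL for writing custom GPU kernels. The canonical vector-add example from its tutorials looks roughly like this:)

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```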
