1. That's the aspect ratio. The 768 model doesn't seem to be able to handle stretching the content without stretching faces at the moment - they all look horsey. If you reduce down to a 1:1 with the same seed it will get rid of the distortion.

  2. Well now make someone specific like Daniel Radcliffe with it. You can't :)

  3. I don't know if Daniel Radcliffe's in there yet but there are some faces that are pretty easy to generate.

  4. With a finetuned checkpoint or default? I haven't managed to get close to this quality with 1.5 for this style.

  5. Yep - that's pretty horrible. I don't have any DreamStudio credits but might do some tests on Discord to see how it compares.

  6. I kinda wanna thank them for exploring such places .... And documenting it so that we plebs can also appreciate such places

  7. I'm claustrophobic so have no idea why I browse through these videos. This is one of the worst I've seen:

  8. Thanks - it looks like it only posted half the vid. It looks better on Twitter, with a bit more detail on how it was made.

  9. For real, this could've fooled me as a DALL-E 2 image.

  10. Yep. I'd be very interested to see any of the intermediate images as this isn't like any SD generations I've seen. Lovely image but it looks very DALL-E-like to me.

  11. This sounds like a disaster. There was a pretty comprehensive artist study for Stable Diffusion, but it kicked off all the anti-AI-art sentiment and ended up with death threats. A whole bunch of artists understood it to mean that SD was specifically trained to emulate and replace them.

  12. I've been testing some cinematic composition using Mads Mikkelsen. Stable Diffusion didn't understand "cowboy shot".

  13. 1.5 is a tiny update, doubling the resolution will come in 2.0 (or whatever they will call it).

  14. I think the Devs said that their training doubled the steps from 1.4 to 1.5

  15. I think 2 is a different model and will struggle to run on consumer hardware.

  16. This is so cool! How do you get Stable Diffusion to generate all the in-between looks of each cat? I'm so confused about how you're doing that. Is there a way to queue the seed? Say I wanted 100 cats using a certain prompt (downloaded version) - is there a way to queue them to render overnight?
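(Not the commenter's actual setup, but the usual answer for a local install is a plain loop that fixes the prompt and steps through sequential seeds. `generate` below is a placeholder for whatever txt2img call your version exposes - a diffusers pipeline, a script invocation, etc. - so this is only a sketch of the queueing logic:)

```python
# Sketch of an overnight batch queue: same prompt, sequential seeds.
# `generate` is a stand-in for a real txt2img call; a real version
# would run the model and save the image to disk.

def generate(prompt: str, seed: int) -> str:
    # Placeholder: pretend we rendered an image and return its filename.
    return f"cat_{seed:06d}.png"

def run_batch(prompt: str, base_seed: int, count: int) -> list[str]:
    outputs = []
    for i in range(count):
        # Each job reuses the prompt with seed = base_seed + index,
        # so every image is reproducible from its filename alone.
        outputs.append(generate(prompt, base_seed + i))
    return outputs

files = run_batch("a photo of a cat", base_seed=42, count=100)
print(len(files), files[0], files[-1])
```

Because the seed is deterministic per image, any single cat can be re-rendered later at higher resolution or step count without redoing the whole batch.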

  17. do you have any ability to control what part of the frequency spectrum the db meter is linked to, or how to tweak the transfer function?

  18. Cool. Thanks. I'm guessing it's using rotation_3d_y to rotate around the y-axis. Any chance you could share your movement settings?

  19. I'm starting to save the settings to different files, but I'm afraid I've overwritten the settings for this video. However, I can put here what I learned, i.e.:

  20. Perfect - thanks for this. Those are nice camera movements - I'll try something along these lines.

  21. How do you make the animations? Do you have something that just takes that last image and creates off that?

  22. The Deforum Diffusion notebook does exactly that but with a few extra tweaks. It works in a similar way to Disco Diffusion.

  23. What do you mean when you say the keyframes were guided by music? I was trying to find any correlation in the video between the music and what I was seeing, and I couldn't really tell anything.

  24. Yeah, the zoom slows down and speeds up depending on the volume of the music. Maybe not so noticeable but I liked the end animation. I'll tweak the keyframes next time or base it on just one channel.
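(The commenter's exact settings aren't shown, but the idea - volume-driven zoom keyframes - can be sketched as a small mapping from per-keyframe volume levels to zoom values, emitted in the `frame:(value)` schedule format that Deforum-style notebooks use. The zoom range and keyframe spacing below are illustrative assumptions:)

```python
def zoom_schedule(volumes, min_zoom=1.0, max_zoom=1.05, every=15):
    # Map normalized volume levels (0..1) to zoom values and emit a
    # Deforum-style keyframe schedule string: "frame:(value), ...".
    # Louder music -> faster zoom; silence -> the minimum zoom.
    parts = []
    for i, v in enumerate(volumes):
        zoom = min_zoom + (max_zoom - min_zoom) * v
        parts.append(f"{i * every}:({zoom:.3f})")
    return ", ".join(parts)

print(zoom_schedule([0.0, 0.5, 1.0]))
```

Basing the volumes on a single channel (or a single frequency band) of the track, as suggested above, just changes how the `volumes` list is computed; the keyframe mapping stays the same.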

  25. I didn’t know about using negative parameters, sweet!

  26. It looks like he edited the code to do this: "I tweaked the code for CFG to use the negative prompt instead of an empty string". I'm curious to know how.
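(The exact change isn't shown, but the likely spot is the classifier-free guidance step: normally the "unconditional" noise prediction comes from the empty-string embedding, and the tweak swaps in the negative prompt's embedding instead, so generations get pushed away from it. The combination itself is just a weighted extrapolation - a minimal sketch using plain lists in place of tensors:)

```python
def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    # Classifier-free guidance: extrapolate from the "unconditional"
    # prediction toward the conditioned one. If eps_uncond is computed
    # from a negative prompt rather than "", the result is steered
    # away from whatever that prompt describes.
    return [u + guidance_scale * (c - u) for u, c in zip(eps_uncond, eps_cond)]

print(cfg_combine([0.0, 1.0], [1.0, 1.0], 7.5))
```

In a real sampler these would be the model's noise predictions per denoising step; the only code change needed is which text embedding feeds the unconditional branch.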

  27. Stable Diffusion by default is direct text input at the moment. There's no CLIP or pre-processing step like DALL-E 2 and Midjourney have. It doesn't understand phrases like that, so it probably wouldn't work.

  28. Does this work with img2img? It would be interesting to render at lower res for the composition, then upscale and put it through img2img using this effect.

  29. This works really well. Do you mind if I share on Twitter as an example of what can be done with Deforum Diffusion?

  30. Interesting to see. Apparently double the number of steps was used to train the model between these checkpoints. My tests haven't been as dramatic but have consistently shown improvements.

  31. It looks like they have an original sketch from MS Paint that they're putting into SD as an init image for img2img. They then use the result of this process as the init image next time around, and so on. Over multiple iterations, SD tidies up the original idea and the result is a great-looking image.
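(The loop being described is simple enough to sketch. `img2img` here is a placeholder for a real pipeline call - a real version would denoise the init image toward the prompt at some strength - so this only shows the feed-the-result-back-in structure:)

```python
def img2img(image: str, prompt: str, strength: float) -> str:
    # Placeholder: a real call would denoise `image` toward `prompt`,
    # keeping roughly (1 - strength) of the original composition.
    return image + "+"

def iterate_img2img(init_image: str, prompt: str, rounds: int) -> str:
    # Feed each img2img result back in as the next round's init image,
    # so the composition survives while the details get refined.
    image = init_image
    for _ in range(rounds):
        image = img2img(image, prompt, strength=0.5)
    return image

print(iterate_img2img("sketch", "epic fantasy landscape", 3))
```

Lower strength per round keeps more of the previous image, which is why the original MS Paint composition survives all the way to the final render.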

  32. No one was given 1000, where did you read that?

  33. Only very few early testers who were individually invited got more credits when they were smoothing out bugs and feedback was requested. The site has changed quite a bit since then. All Beta testers get 200.

  34. 1,000 generations if you generate at 512x512 with 50 steps; higher settings use more credits, all the way up to 28.2 credits to generate one image at 1024x1024 with 150 steps.

  35. I think there are diminishing returns after 50 steps and it stops people from switching on 150 steps by default when they're generating 9 images. It only takes a few seconds to run your prompt and seed again to generate a full image. I've had pretty decent results at 512x512 and upscaling.

  36. The bots will stop generating images in all channels on Discord shortly. You will still be able to view previously generated images.

  37. Stable Diffusion is certainly not "beyond DALL-E 2" in terms of quality.

  38. For the beta, Stable Diffusion is direct text input, so you're not comparing like for like.

  39. Hah, thanks. That's me. Feature should be rolled out to main servers later today.

  40. Thank you :). Do you know if training for this particular Stable Diffusion model is finished?

  41. Don't think it's finished training. New checkpoint this morning and model's getting better with each one.

  42. I was invited directly a while ago and was testing at earlier checkpoints - an image of mine has been used on the sign up page.

  43. For a few more images and details check out this post on Twitter:
