1. The way it keeps sounding like it's taking a horrifyingly suggestive turn and it just continues to be a story about grey paint 🤣

  2. These are incredible 😄 This one isn't as code-y but I still liked it.

  3. No change, your custom models will still be on the 1.x version. The reduced NSFW outputs only apply if you're running an SD2 model.

  4. Yep, looks like the one OP found is probably the third one there (cmdr2)? In which case it would be fine, quite widely used I think. I use the first one (NMKD).

  5. Very honored to be added to the OP 😄 Here's Mouse 2: Mouses

  6. Started with: "destroyed steampunk metal android robot head skull, intricate gears brass, art by greg rutkowski"

  7. Made a 'remastered version' with darker robot remains to increase emphasis on the mouse :)

  8. Tried dreamboothing some amateur fantasy sketches I made with an Egyptian/African/Roman theme. Results weren't great, the model isn't worth sharing (it's not very consistent and mostly gives rather flat/goofy drawings, much like what went in haha), but it was interesting to see how it influences the result when combined with existing artists who have much more realistic styles. Just goes to show the adaptability of this tech and how easy it is to make its output more unique. Here's the ruler of a great desert empire!

  9. Hey, I thought your design was really cool so I made an AI art rendition:

  10. I used the 1.5-pruned 7.7GB file to do my training. I don't know anything about VAE files, could you explain?

  11. I've heard some suggestions that the 4GB emaonly file might give better results than the larger one, maybe that has something to do with it?

  12. Prompt: "portrait of a dark wizard, lightning mage, electricity, blue sparks, tower, library, gothic, by greg rutkowski and andree wallin"

  13. Wow loving the grimdark WD generations :)

  14. SD GUI 1.6.0 Changelog:

  15. Thanks so much for your work on this GUI! Might be worth adding to the changelog that you can now increase emphasis on words in the prompt with ( ) and decrease it with { } because that's a really handy feature :)

  16. This one's a trained concept rather than a full model. Go to Files, download the learned_embeds.bin and you can load it with an offline GUI that supports these, using the regular SD 1.4 model. (Just tried it with NMKD, works fine, you can use it to reference the style in your prompt)
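
A minimal sketch of how a learned_embeds.bin concept like this could be loaded with the Hugging Face diffusers library instead of a GUI (assuming a reasonably recent diffusers version; the file name and the <trigger-studio> token are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

# Regular SD 1.4 base model, as mentioned in the comment above.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Load the downloaded concept embedding and bind it to a prompt token.
pipe.load_textual_inversion("learned_embeds.bin", token="<trigger-studio>")

# Reference the trained style in the prompt via that token.
image = pipe("a lighthouse at dusk in the style of <trigger-studio>").images[0]
image.save("lighthouse.png")
```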

  17. Happy cake day and thank you for the help! I am using AUTOMATIC1111's webui. Do you know if I would put the .bin file in the embeddings folder? I created an embedding myself, but I think because it was a hypernetwork, it was a .pt file. So I'm not sure if it's the same folder for the .bin file.

  18. I think both .bin and .pt go in there, I don't use AUTOMATIC1111 though so can't test. Try renaming the file to trigger_studio.bin and then using trigger_studio in the prompt.

  19. Thank you for your response and help! I plan on becoming a Patreon member; this is great work you are doing. Is yours Stable Diffusion 1.4?

  20. (I'm not NMKD but) yep it downloads base SD 1.4 when you install. Others like Waifu you can download and put in the Data/models folder.

  21. Helmet on #6 is amazing 😮

  22. Love this, very creative idea!

  23. That final image is so cool!

  24. Would you want one of these as a pet?

  25. Original prompt was "portrait of goddess athena, clouds, acropolis, by greg rutkowski and artgerm" (apologies to Greg and Stanley for crimes against originality 😞 but I really loved this result). Some inpainting and outpainting to fix face & arms, extend image and add the owl.

  26. I think the first part is "I hear a voice and, am I losing my mind?"

  27. "voice and" and "Cosmic tide" were right! I messaged him in Instagram.

  28. Really pleased with how this one turned out!

  29. Thanks! Do you find the NMKD program is quite good? I have no idea which one to try on my PC.

  30. I've found it pretty good, quite user-friendly as it installs the required stuff automatically. As with any local installation, how fast it runs, and how big a resolution it will allow, will depend on what graphics card you have.

  31. I have it downloaded; which command do you click to get it to run? I've been using a different one, "Stable Diffusion UI", but I wanted to try the NMKD version to see if I get better results. Can't get it to launch.

  32. Did you complete the install process? If it's all set up correctly, it should run just by pressing 'Generate'. If you're having technical trouble I'd post in

  33. I like this artwork but I also would like to see the full cat, is it possible to generate the same picture but with more stuff? I got the prompt and the seed.

  34. For this you would ideally use a function called outpainting/uncropping (part of the 'inpainting' feature, where the AI fills in parts of an existing image, but here filling in areas outside the image instead). DALL-E 2 can do this, for example. This should be possible in Stable Diffusion as well pretty soon, hopefully.
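
A rough sketch of outpainting along these lines, assuming the Hugging Face diffusers inpainting pipeline (file names, canvas size, and prompt are placeholders): paste the original picture onto a larger canvas and mask the empty area so the model fills it in.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Place the existing 512x512 picture at the top of a taller blank canvas.
original = Image.open("cat.png").convert("RGB")
canvas = Image.new("RGB", (512, 768), "black")
canvas.paste(original, (0, 0))

# Mask: white = areas for the model to fill in, black = keep the original pixels.
mask = Image.new("L", (512, 768), 255)
mask.paste(Image.new("L", original.size, 0), (0, 0))

result = pipe(
    prompt="full body portrait of a cat, detailed digital painting",
    image=canvas,
    mask_image=mask,
    height=768,
    width=512,
).images[0]
result.save("cat_outpainted.png")
```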

  35. I'm confused. Stable Diffusion is now being integrated into Midjourney and Nightcafe (and possibly others?). I'm on the wait-list for Stable Diffusion; do I still need to wait, or can I just access their technology using secondary platforms?

  36. Stable Diffusion went open-source this week so you have plenty of ways to access it already: third-party sites like Nightcafe, their own website DreamStudio (currently in beta, a bit buggy and basic; NC is probably better for now, but it should hopefully get some advanced functionality in the future), on your own PC if you have a good enough graphics card (and don't mind following somewhat techy steps to install it), etc. :)
