Greg Rutkowski.

  1. I haven't been able to replicate anything close. Looks like this was made by

  2. I'm just here to have fun, it doesn't have to go any deeper than that. Let the people have their free entertainment!

  3. OP isn't talking about people like you though. Continue having fun! OP is talking about the people who type a sentence into a program, hit render, then act like they have the same skill set as Greg Rutkowski, who took years to hone a talent that most AI art users could never achieve even if you gave them 100 years. AI art has brought out a lot of talentless people who have gotten delusional and don't understand that they didn't create the piece; the AI and the art it learned from did.

  4. Ahh, I can see how that's tiresome. This tech is powerful and seemingly limitless, which could have a magnifying effect on people with big egos. There are so many components and developers making this possible, so it's something to be grateful for.

  5. sorry, I'm not sure what you mean? All of those prompts were img2img

  6. Thanks for the lengthy reply. Yeah, that doesn't look as straightforward (automagical) as it seems in the OP, but it won't be that long till we can get from a pencil drawing to a fully rendered piece in one go... 2 years maybe.

  7. Yup. Sometimes pieces look really good straight up, especially in Midjourney, but that feels like a gamble, and relying on a gamble for clean results isn't reliable enough. Compared to rendering the old-fashioned way, my Stable Diffusion workflows are still quicker & mentally less demanding, so I'll be looking forward to improvements too!

  8. Dang, you're smooth. I'm sorry, I've made your tool an essential part of my artistic nonsense and it's not your responsibility hahaha. Thank you so much; what a person and what great work. You are a great person!

  9. sweet, does that mean you were able to solve the issue and it works now?!

  10. Sorry for the incomplete response before - I hadn't had a chance to check if it worked yet.

  11. I forked the Stable Diffusion code onto my own GitHub account and fixed the line of code, so everything is fixed and working now! No need to do the workaround.

  12. Honestly, as an artist, the composition, the character design and the background are way better in the sketch. The only pros I see in the SD version are the shading and the coloring. Plus, the castle in the background and the steampunk elements on the girl make no sense at all; it looks good, but it makes no sense. A fully rendered and colored version of the sketch would be ten times better, imho.

  13. The rendering is the most amazing part of this; it's like J.C. Leyendecker mixed with some others. Hopefully OP can share the style prompt.

  14. That looks fantastic, Stable Diffusion really brings our ideas to life, like the details of the city and the shape of her hoodie!

  15. What will be the point of all those imitations then? Greg helped visualize IPs for several beloved franchises, so his work will always have more value than generic fantasy art. I hope in the future people will look beyond the superficial aspects of art and use the tech to create more meaningful things. As a piece of novelty tech there will be a lot of rubbish created, but this can also make art more accessible to people who would otherwise not have the time to express themselves. It seems to me this is a grey area that will be resolved with time

  16. Yes, I agree the most likely scenario is corporations screwing people over with this tech. When corporations have the money & influence to create legislation for their own benefit, it gives them an advantage over the general public & artists.

  17. Unless people are ripping off the IPs and recreating the identical stories of other artists & studios, there is nothing more to generative art than emulating the superficial style of derivative work.

  18. Funnily enough, I was throwing some real Unreal Engine screenshots into img2img to enhance them. Your result from Dall-E 2 looks even closer to what I imagine Unreal Engine to be than the actual renders! Hyped to try this one out.

  19. How does this differ from LDSR? I was very pleased by LDSR so far.

  20. I'm not sure how LDSR works. Can you share a link?

  21. Sorry I confused LDSR with GoLatent. I meant GoLatent used in

  22. Ah yeah, if you want results consistent with the input image, GoBig/GoLatent is a better way to do it. I prefer to add new/different details to the final upscaled result, which is why I set my script up for modifying in Photoshop. Upscaling is an incredible thing on its own, I just prefer to have some control over the final result. I'm never 100% happy with the standard results of any of them.

  23. History is not a great guide to the future. This isn't mere technology that makes the boring or dangerous or exhausting parts easier; this makes the education and training part obsolete. Do you know any studio musicians? 20 years ago, that was a job: playing instruments for recordings. Now software does that, and the studio musicians looked for a different line of work.

  24. How do these tools make training and education obsolete? I think there's still room to teach workflows.

  25. Yes, that's what I'm bemoaning: it will be a death sentence for illustrators, who became illustrators, not Python programmers.

  26. Yeah, it's sad for those who love the art form. I was using black-and-white thinking when I wrote that comment, though. Looking at stop-motion films vs 3D: while stop motion is uncommon, such films are still produced these days. As long as a group of people believe in & value the artwork and have a means of financing it, they can create whatever they want. So it's only a death sentence in the sense of the mainstream production artist.

  27. Thank you, that is really great! I finally had the time to try it on one image, it works very well!

  28. Ah sorry, some typos in my numbers: the initial image was 2048x3072 and the output was 2816x4224, but yes, it must come from the size of the tiles. I started a first image without going all the way to the end, and on my second run I tried a bigger tile size, hoping there would be fewer tiles to prompt (when I was doing this manually I was using 512x768px tiles, so I didn't have as many as with your script, and I didn't see where to change the height of the tiles, but I didn't really search). If I understand correctly, what happens is that you feed SD 512x512px tiles and the setting here is the size of the img2img output, so yes, it makes the image bigger. Makes perfect sense.
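
(A minimal sketch of the tiling arithmetic described in the comment above, assuming PIL; `run_img2img` is a hypothetical stand-in for whatever the actual script calls. A 704px output per 512px input tile gives the 1.375x scale that matches the 2048x3072 → 2816x4224 numbers.)

```python
# Hedged sketch: split the source image into 512x512 tiles, run each tile
# through img2img at a larger output size, and paste the results into a
# proportionally larger canvas.
from PIL import Image

TILE = 512       # tile size fed to SD (per the comment above)
OUT_TILE = 704   # per-tile output size; 704/512 = 1.375, so 2048x3072 -> 2816x4224

def upscale_by_tiles(path_in, path_out, run_img2img):
    src = Image.open(path_in).convert("RGB")
    scale = OUT_TILE / TILE
    dst = Image.new("RGB", (int(src.width * scale), int(src.height * scale)))
    for y in range(0, src.height, TILE):
        for x in range(0, src.width, TILE):
            tile = src.crop((x, y, x + TILE, y + TILE))
            big = run_img2img(tile, width=OUT_TILE, height=OUT_TILE)  # hypothetical img2img call
            dst.paste(big, (int(x * scale), int(y * scale)))
    dst.save(path_out)
```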

  29. I added a little script at the end of the tool to copy those tiles to Gdrive.
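
(A small illustrative snippet of that kind of Drive-copy step, assuming the tool runs in Google Colab; the folder paths are placeholders.)

```python
# Copy the generated tiles from the Colab runtime to a Google Drive folder.
import shutil, glob, os
from google.colab import drive

drive.mount('/content/drive')                        # standard Colab Drive mount
dest = '/content/drive/MyDrive/sd_tiles'             # illustrative destination folder
os.makedirs(dest, exist_ok=True)
for f in glob.glob('/content/outputs/tiles/*.png'):  # illustrative local output folder
    shutil.copy(f, dest)
```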

  30. I'm a huge fan of img2img. To get coherent results from a simple image, you need to run it through on an extreme init_strength at first to get something, then feed that image back as a new init_image. Keep feeding the image back while lowering the intensity until you get something coherent.
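
(A minimal sketch of that feedback loop, using the Hugging Face diffusers img2img pipeline as a stand-in for whichever notebook is actually being used; the model, prompt and strength schedule are illustrative, and diffusers' `strength` plays the role of the `init_strength` mentioned above.)

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("sketch.png").convert("RGB").resize((512, 512))
prompt = "fantasy character concept art, detailed digital painting"  # illustrative prompt

# Start with a high strength so SD can invent detail, then lower it each pass
# so the image converges on something coherent (values are illustrative).
for strength in (0.75, 0.6, 0.45, 0.3):
    image = pipe(prompt=prompt, image=image, strength=strength,
                 guidance_scale=7.5).images[0]

image.save("refined.png")
```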

  31. I wanted to add shading at the least. Go all the way to digital painting if possible (like LOL character art).

  32. That's interesting. It may be because the one that turned out good is an upper-body shot, so SD has more resolution to work with as well as less to solve. You could try cropping the other character to the upper body and see how it goes.
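
(A tiny illustrative crop along those lines, assuming PIL; the crop box is a placeholder, and the point is just to hand SD a tighter upper-body framing before img2img.)

```python
from PIL import Image

full = Image.open("character_full_body.png")
w, h = full.size
# Keep roughly the top half of the frame and resize to SD's working resolution.
upper_body = full.crop((0, 0, w, h // 2)).resize((512, 512))
upper_body.save("character_upper_body.png")
```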

  33. Could you go into more detail on how you fix the hands in Dall-E? Do you mask the hands and use a prompt like "perfect hands" on there?

  34. Hey, thanks so much for this! It's very helpful! I've also been using your Colab for the past two days and it rocks. :)

  35. No worries. Glad you figured out how to use it! I feel it's bloated now because I'm trying to add in all the features to 1 cell.

  36. Definitely very interested in whether this will be able to be run on my home PC instead of a Google Colab.

  37. Yeah, that would be a longer-term goal of mine. I'm using Google Colab until I install my RTX. The Colab is the easiest way to share for now as well.

  38. There are a few people working on things coming out where you can do editing within an image, which I think will really benefit this style of art: it can do the body and then the face separately but in the same image. Heck, that could be applied to any part of the body with the right training. We could swap out clothing and armor or weapons or change the pose of the hands… The possibilities are pretty much limitless!

  39. Yeah we live in crazy times! I'm sure you also saw that demo of Stable Diffusion working in Photoshop to composite a landscape. There's a lot to take in, like I spent a couple weeks on text to image then a week exploring img2img. With time I'll get to those crazier workflows, but it's already a lot to take in!

  40. love the tests you did on Twitter. img2img is so much fun for the process of bringing sketches to life quickly! The Chuck Norris one turned out amazing

  41. Appreciate it. I'm surprised everyone seems to be sleeping on Img2Img right now. It has amazing potential.

  42. I'm surprised too. Once more artists see its potential, we'll probably see an explosion in img2img being used. I'm looking forward to seeing what other ideas people have. I was thinking it would be cool to run it on old animation frames and see how well it holds up for a few seconds.

  43. It's incredible, the difference in mentality between Dall-E 2 and Stable Diffusion. If my first attempt doesn't produce anything good on Dall-E 2, I just give up right there. A good result could be one click away, but they don't incentivize experimentation.

  44. I'm having trouble with hands as well, but I found that cropping a close-up of just the hands, inpainting the hands with Dall-E 2, then taking that result back into Stable Diffusion for a style pass works the best. It's probably quicker to just paint over the hands, though.
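
(A hedged sketch of that crop → inpaint → style-pass workflow. The commenter uses Dall-E 2 for the inpainting step; here the Stable Diffusion inpainting pipeline from diffusers stands in for it, and the file names, crop box and prompts are placeholders.)

```python
import torch
from diffusers import StableDiffusionInpaintPipeline, StableDiffusionImg2ImgPipeline
from PIL import Image

box = (700, 900, 1212, 1412)                      # placeholder 512x512 crop around the hands
src = Image.open("render.png").convert("RGB")
hands = src.crop(box).resize((512, 512))
mask = Image.open("hands_mask.png").convert("RGB").resize((512, 512))  # white = area to repaint

# Inpaint the hands region (stand-in for the Dall-E 2 step above).
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16).to("cuda")
fixed = inpaint(prompt="detailed hands", image=hands, mask_image=mask).images[0]

# Low-strength img2img pass so the repainted hands match the rest of the image.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
styled = img2img(prompt="digital painting, clean rendering",  # illustrative style prompt
                 image=fixed, strength=0.35, guidance_scale=7.5).images[0]

# Paste the corrected crop back into the original render.
src.paste(styled.resize((box[2] - box[0], box[3] - box[1])), box[:2])
src.save("render_fixed_hands.png")
```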

  45. Honestly, I'm fine with Reddit's actions, but the clickbait media articles that center only on porn are so damn infuriating. This tech is changing entire industries, and what's going to get the most clicks? An article about porn? Terrific "journalism", pandering to the lowest common denominator for that 0.001-cent ad banner impression revenue.

  46. my friend asked me what it would look like if we used a clip art stock image as a base with img2img and

  47. Don't take the comments too seriously.

  48. Yeah it's all good. The tool is so insane... I just have to release the thoughts from my head. It's going to be a fun time riding this wave and seeing what the other Stable Diffusion users invent!

  49. I didn't say it was the most creative thing in the world, just that it is creative. Not artistically but by using a combination of tools to create something from nothing.

  50. You have the right idea about this tech, also thanks for spreading my post! Better to inform people of the power of these new tools.

  51. With img2img there doesn't seem to be a seed; it uses the input image as a starting point. There are parameters for strength, samples, iterations and CFGScale.

  52. I'm pretty sure it uses the seed for the noise it adds to the initial image. If I run img2img on the same image with different seeds I get different results. If I use the same seed I get the exact same result. If I repeatedly feed the output back into img2img with the same seed it quickly starts to develop strange black-and-white patterns, as if it's picking up patterns in the random noise.
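
(A small sketch of that seed behaviour, assuming the diffusers img2img pipeline; model, prompt and strength are illustrative. A fixed generator seed reproduces the same output; a different seed gives a new variation.)

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
init = Image.open("input.png").convert("RGB").resize((512, 512))

gen = torch.Generator("cuda").manual_seed(1234)        # fixed seed -> repeatable noise
a = pipe(prompt="portrait, oil painting", image=init, strength=0.5,
         guidance_scale=7.5, generator=gen).images[0]

gen = torch.Generator("cuda").manual_seed(1234)        # same seed again -> identical result
b = pipe(prompt="portrait, oil painting", image=init, strength=0.5,
         guidance_scale=7.5, generator=gen).images[0]
```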

  53. Okay, you're probably right; the Colab file I'm using doesn't seem to have a seed option. There's still so much to learn about these tools!
