Pregnant AI Artwork
Hyper Preg trained model
CosmicKiwi
(October 26, 2022, 12:30 pm)thojm
(October 26, 2022, 12:18 pm)CosmicKiwi I’ll be honest, I tried to get an AI program to work and I got terribly confused! The directions to get it operating were less than informative

by chance, would you be able to help me? That would be amazing if you could! I wouldn’t mind doing a morph or two for you, if you’re into those. My works are in my only thread if you want examples!

Thank you for taking the time to read this and I’ll respect your decision either way! Keep up the work, Thojm!
There are a lot of great YouTube guides on getting Automatic1111's webui installed and running. I usually suggest this one: https://www.youtube.com/watch?v=6MeJKnbv1ts
If you have specific questions feel free to ask or DM me
Thank you so much! When I get more time I'll check that video. I'll keep you updated too on whether my tiny brain manages to figure it all out. Lol

I appreciate it Thojm!
Liked by Fyra yu Chan 1999 (Nov 22, 2022)
JaxsonStark
Thanks for sharing man, definitely gonna see what kinda results I can get with this.
Liked by Fyra yu Chan 1999 (Nov 22, 2022)
DeadmanSenji
This looks really promising, thank you for sharing!
Liked by Fyra yu Chan 1999 (Nov 22, 2022)
Fatbellylover217
Wow these all look great, can’t believe an AI can create art like this. Keep it up!
Liked by Fyra yu Chan 1999 (Nov 22, 2022)
subthresh15
had a first crack at it. never used a diffusion model before, didn't realise how long you actually have to work an image to get anything specific. this is mostly just inpainting after i got lucky with an img2img output. felt more like a kitbash than ai. i found tweaking the position of limbs, belly, etc. in photoshop with liquify and then passing the liquified bits back through another inpaint pretty useful for getting specifics. fucking around with the denoise amount and the mask type was helpful too.

has anyone found any good prompts for getting the image sharper? i've tried stuff like sharp focus, 4k, photorealistic, high detail, etc. to varying degrees of success. does the sampler type make much of an impact? weighting blurry very heavily in the neg prompt worked pretty well.

anyhow, these models are great OP. i found switching between them to actually be super helpful. the bellies in v2 are much more coherent but it's more difficult to get them looking photoreal as opposed to cartoony. has anyone tried merging these checkpoints with other ones out there yet?
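
For anyone who would rather script that liquify-then-inpaint pass instead of clicking through the UI, here is a minimal sketch, assuming the webui has been started with the --api flag. The file names and prompt text are placeholders, and the field names follow the /sdapi/v1/img2img schema, which can vary between webui versions.

# Minimal sketch: send a liquified image plus mask back through an inpaint pass
# via Automatic1111's webui API (start the webui with the --api flag).
# File names and prompts are placeholders; field names may differ by version.
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [b64("liquified.png")],   # the photoshopped/liquified source
    "mask": b64("belly_mask.png"),           # white = area to repaint
    "prompt": "hyperpreg, huge belly, sharp focus, high detail",
    "negative_prompt": "(blurry:1.6), deformed, bad anatomy",  # heavy weight on blurry
    "denoising_strength": 0.45,              # lower = stays closer to the source
    "inpainting_fill": 1,                    # 1 = keep original content under the mask
    "inpaint_full_res": True,                # work at full resolution on the masked region
    "steps": 40,
    "cfg_scale": 12,
    "sampler_name": "Euler a",
    "width": 512,
    "height": 512,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
r.raise_for_status()
with open("inpainted.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))

Lower denoising_strength stays closer to the liquified source; higher values let the model repaint more of the texture but drift further from the shape you set up.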
   
Liked by Njameson (Apr 23, 2023), secreta87 (Jan 10, 2023), ukkpkkmkkk (Dec 31, 2022), caddyova273 (Dec 25, 2022), Tcorker (Dec 8, 2022), Fyra yu Chan 1999 (Nov 22, 2022), PreggyHyper (Nov 9, 2022), Bellylord123 (Nov 3, 2022)
thojm
(Edited)
(October 27, 2022, 5:37 pm)subthresh15 had a first crack at it. never used a diffusion model before, didn't realise how long you actually have to work an image to get anything specific. this is mostly just inpainting after i got lucky with an img2img output. felt more like a kitbash than ai. i found tweaking the position of limbs, belly, etc. in photoshop with liquify and then passing the liquified bits back through another inpaint pretty useful for getting specifics. fucking around with the denoise amount and the mask type was helpful too.

has anyone found any good prompts for getting the image sharper? i've tried stuff like sharp focus, 4k, photorealistic, high detail, etc. to varying degrees of success. does the sampler type make much of an impact? weighting blurry very heavily in the neg prompt worked pretty well.

anyhow, these models are great OP. i found switching between them to actually be super helpful. the bellies in v2 are much more coherent but it's more difficult to get them looking photoreal as opposed to cartoony. has anyone tried merging these checkpoints with other ones out there yet?
The negative prompt you use has just as much effect as the normal prompt.  I mentioned earlier that all the photos I've posted have the prompt in the exif data that you can extract with Automatic1111's webui or Notepad++.  Try starting with some of those.
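
If anyone would rather pull those embedded parameters out with a script instead of the PNG Info tab or Notepad++, here is a minimal sketch assuming Pillow is installed. The file name is a placeholder; for PNGs the webui writes the parameters into a text chunk, while JPEG exports usually end up in the EXIF UserComment.

# Minimal sketch: read the embedded generation parameters from a posted image.
from PIL import Image

img = Image.open("example_output.png")  # placeholder file name

# PNGs: the webui stores the whole parameter string in a "parameters" text chunk
params = img.info.get("parameters")

# JPEG fallback: tag 0x9286 is the EXIF UserComment
if params is None:
    raw = img.getexif().get(0x9286)
    if isinstance(raw, bytes):
        params = raw.decode("utf-8", errors="ignore")
    elif raw is not None:
        params = str(raw)

print(params)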

The V2 model was trained on Danbooru tags, so it only works well when your prompt is in the tag format like "tag1, tag2, tag3" instead of a normal sentence.  ",hyper realism" does a decent job.

I should really retrain the V1 model on SD 1.5 since it was my first real attempt, and I've learned more since then...


An interesting prompt for V1 I use to test new models is:

Prompt:    A thicc ((hyperpreg)) pixar girl with a huge booty. Artstation HQ, high level texture render, cgsociety, photorealistic, pixar style
Negative prompt:    deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, blurry, floating limbs, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, (text) deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra_limb, ugly, poorly drawn hands ((ditigal ilustration)), ((art))
Steps: 40, Sampler: Euler, CFG scale: 15, Seed: 207030188, Size: 512x512, Model hash: 9453d9a1, Batch size: 6, Batch pos: 0
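
As a rough sketch, that test render can also be reproduced through the webui API using the parameters above, assuming the webui is running with --api. The endpoint and field names follow /sdapi/v1/txt2img and may vary by version; the negative prompt is truncated here for space.

# Minimal sketch: reproduce the V1 test render above through the webui API.
import base64
import requests

payload = {
    "prompt": ("A thicc ((hyperpreg)) pixar girl with a huge booty. Artstation HQ, "
               "high level texture render, cgsociety, photorealistic, pixar style"),
    # paste the full negative prompt from the post here; truncated for space
    "negative_prompt": "deformed, blurry, bad anatomy, disfigured, poorly drawn face, ...",
    "steps": 40,
    "sampler_name": "Euler",
    "cfg_scale": 15,
    "seed": 207030188,
    "width": 512,
    "height": 512,
    "batch_size": 6,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
for i, img_b64 in enumerate(r.json()["images"]):
    with open(f"v1_test_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))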
       
Liked by Spongechu123 (Dec 23, 2023), Tcorker (Dec 8, 2022), Fyra yu Chan 1999 (Nov 22, 2022), Bellylord123 (Nov 3, 2022)
JBowman_uk
Did I miss out on this? I saw the post today, and no-one seems to be seeding v2
Liked by Fyra yu Chan 1999 (Nov 22, 2022)
bo27
I can't get it to stop creating animated-style drawings. How do you get it to create photorealistic images?
subthresh15
(October 27, 2022, 9:00 pm)thojm
The negative prompt you use has just as much effect as the normal prompt.  I mentioned earlier that all the photos I've posted have the prompt in the exif data that you can extract with Automatic1111's webui or Notepad++.  Try starting with some of those.

The V2 model was trained on Danbooru tags, so it only works well when your prompt is in the tag format like "tag1, tag2, tag3" instead of a normal sentence.  ",hyper realism" does a decent job.

I should really retrain the V1 model on SD 1.5 since it was my first real attempt, and I've learned more since then...


An interesting prompt for V1 I use to test new models is:

Prompt:    A thicc ((hyperpreg)) pixar girl with a huge booty. Artstation HQ, high level texture render, cgsociety, photorealistic, pixar style
Negative prompt:    deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, blurry, floating limbs, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, (text) deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra_limb, ugly, poorly drawn hands ((ditigal ilustration)), ((art))
Steps: 40, Sampler: Euler, CFG scale: 15, Seed: 207030188, Size: 512x512, Model hash: 9453d9a1, Batch size: 6, Batch pos: 0
yeah i had a look at them in the exif viewer, they're useful, ty. i tried a bit more with V2 to get photorealism, and it's sort of like, it doesn't look like a drawing per se, but it's definitely still obviously digital art/stylised rather than a "photo" of a hyperpreg woman in the real world. obviously that's tricky, because there aren't any "photos" of real hyperpreg women anywhere on the internet, let alone in a danbooru dataset. morphs are the closest we have, and a lot of them are pretty dodgy looking. approximating photographic textures from a danbooru dataset is always gonna make things look pretty stylized, even with strong negative weightings of art and digital art, etc.

V1 is interesting because it can do properly "photographic" looking images, and also regularly get a decent-sized belly. but it's far less coherent/consistent with belly size/not deforming stuff than V2. i'd be super interested in a revamped V1. what was it trained on originally?

i think for now i might be stuck with inpainting if i want properly large bellies in V1 with a more photographic style. there seems to be an inherent tradeoff between belly size and photorealism, which makes sense considering there are no women of hyperpreg size irl, and hence nothing totally relevant in the training set.
Liked by Fyra yu Chan 1999 (Nov 22, 2022)
subthresh15
i'm also mucking around a bit with checkpoint merges of v1 and v2. mixing v1 with a little (~0.15 weighted sum) v2 does seem to help with coherence, but you can definitely notice even at that weight a little bit of the danbooru sheen.
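
For anyone curious what that weighted-sum merge looks like outside the webui's Checkpoint Merger tab, here is a minimal torch sketch. The checkpoint file names are placeholders, and it assumes both files are plain .ckpt state dicts.

# Minimal sketch: weighted-sum merge of two SD checkpoints, keeping ~0.15 of V2.
import torch

alpha = 0.15  # fraction of V2 mixed into V1 (the "multiplier" in a weighted sum)

a = torch.load("hyperpreg_v1.ckpt", map_location="cpu")
b = torch.load("hyperpreg_v2.ckpt", map_location="cpu")

# .ckpt files usually wrap the weights under a "state_dict" key
sd_a = a.get("state_dict", a)
sd_b = b.get("state_dict", b)

merged = {}
for key, wa in sd_a.items():
    wb = sd_b.get(key)
    if torch.is_tensor(wa) and torch.is_tensor(wb) and wa.shape == wb.shape:
        merged[key] = (1.0 - alpha) * wa + alpha * wb  # per-tensor weighted sum
    else:
        merged[key] = wa  # keep V1's value where the checkpoints don't line up

torch.save({"state_dict": merged}, "hyperpreg_v1v2_mix.ckpt")

At alpha = 0.15 most of V1's look should survive; pushing alpha higher trades that photographic feel for more of V2's coherence, which matches what's described above.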
Liked by Fyra yu Chan 1999 (Nov 22, 2022)
