
[Traditional Art] [SD 1.5][Checkpoint] Babes 1.1 no vae


Posted 2023-6-7 21:42:49

Basic information

Sample images
Base model: SD 1.5
Model type: Checkpoint
Download link: you must be a registered member of this site to download.

To improve your results, see my recommendations below.


This model was inspired by SamDoesSexy Blend and influenced by SDHero-Bimbo-Bondage, Pit Bimbo, Analog Diffusion, Dreamlike Diffusion, and Redshift Diffusion. Core influences: MidJourney v4, Studio Ghibli, CopeSeetheMald v2, F222, SXD 0.8.
Notice: if you see skin artifacts and noise, add "freckles" to the negative prompt. If you want freckles, write "(freckles:0.7)" in your positive prompt; values under 0.8 produced normal freckles in my tests.
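In AUTOMATIC1111's webui, "(word)" multiplies a token's attention weight by 1.1 and "(word:0.7)" sets it to 0.7 directly. Here is a minimal sketch of parsing a single tag of that syntax; it is a simplified illustration, not webui's actual parser (which also handles nesting, "[word]" de-emphasis, and escapes).

```python
import re

def parse_weight(tag: str) -> tuple[str, float]:
    """Parse one webui-style emphasis tag.

    "(word:w)" -> (word, w); "(word)" -> (word, 1.1); bare text -> (text, 1.0).
    Simplified sketch only; the real A1111 grammar supports nesting and escapes.
    """
    m = re.fullmatch(r"\((.+?):([0-9.]+)\)", tag)
    if m:
        return m.group(1), float(m.group(2))
    m = re.fullmatch(r"\((.+?)\)", tag)
    if m:
        return m.group(1), 1.1
    return tag, 1.0

print(parse_weight("(freckles:0.7)"))  # → ('freckles', 0.7)
```

So "(freckles:0.7)" tells the text encoder to weight "freckles" at 70% strength, which is why values under 0.8 tone the effect down.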

Are your results not 100% identical to any specific picture?

  1. Hires-fix has gone through extensive changes. The current system is not 100% compatible with the old results, and nobody knows if it ever will be. For images made with the old hires-fix, set the upscaler to SwinIR_4x with "Upscale latent space image when doing hires. fix" enabled; that is what I used.

  2. Use the VAE vae-ft-mse-840000-ema-pruned for better colors. Download it into the "stable-diffusion-webui/models/VAE" folder and select it in the settings.

  3. I use xformers, a small performance improvement that can change the results. It isn't a must-have and can be hard to install.

  4. WebUI is updated constantly, and some changes influence image generation; backward compatibility is often sacrificed for technological progress.

  5. Hardware differences also matter. I've heard of a group of people who tested the same prompt with the same settings, and the results weren't identical.

  6. On my own system, I've seen that generating an image as part of a batch can change the results slightly.

  7. I suspect there are hidden variables inside modules we can't change that produce slightly different results due to internal state changes.

  8. Any change in image dimension, steps, sampler, prompt, and many other things, can cause small or huge differences in results.
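Point 8 is easy to see at the level of the initial latent noise: even with the same seed, a different image size rearranges the noise pattern, so the generation diverges from the very first step. A minimal numpy sketch of the idea (real pipelines draw noise with torch generators at 1/8-scale latents; the function here is illustrative):

```python
import numpy as np

def initial_noise(seed: int, width: int, height: int) -> np.ndarray:
    """Illustrative initial latent noise: 4 channels at 1/8 of the image size."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((4, height // 8, width // 8))

a = initial_noise(1234, 512, 512)   # latent 64x64
b = initial_noise(1234, 576, 512)   # latent 64x72
# Same seed, but rows now have different lengths, so the same stream of
# random values lands in different spatial positions: the noise pattern
# the sampler starts from is no longer the same image.
print(a.shape, b.shape)
```

The same logic applies to steps, sampler, and prompt changes: anything that alters what the sampler sees can compound into a visibly different image.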

Do you really want to get the exact result from the image? There are a few things that you can do, and possibly get even better results.

  1. Make single-word changes to the prompt/negative prompt, test, and push it slowly in your desired direction.

  2. If the image has too much or too little of something, try emphasis. For example, too glossy? Use "(glossy:0.8)" or less, or remove it from the prompt, or add it to the negative. Want more? Use values 1.1-1.4, then add more descriptors in the same direction.

  3. Use variations - use the same seed, and to the right of the seed check "Extra". Set "Variation strength" to a low value of 0.05, generate a few images, and watch how big the changes are. Increase if you want more changes, and reduce if you want fewer changes. That way you can generate a huge amount of images that are very similar to the original, but some of them will be even better.
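Under the hood, "Variation strength" mixes the noise of the main seed with noise from a variation seed; to my understanding webui does this with spherical interpolation (slerp). A minimal numpy sketch of that idea (the function name and tensor shapes are illustrative):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Spherical interpolation between two noise tensors, t in [0, 1]."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n.ravel(), b_n.ravel()), -1.0, 1.0))
    if omega < 1e-6:                       # nearly parallel: plain lerp is fine
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 64, 64))    # noise of the main seed
vari = rng.standard_normal((4, 64, 64))    # noise of the variation seed
mixed = slerp(0.05, base, vari)            # variation strength 0.05
```

At strength 0.05 the mixed noise stays very close to the base seed's noise, which is why the generated images stay very similar to the original.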

Recommendations to improve your results:

  1. Use a VAE for better colors and details. You can use the VAE that comes with the model, or download "vae-ft-mse-840000-ema-pruned" (ckpt or safetensors) from https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main into the "stable-diffusion-webui/models/VAE" folder. In the settings find "SD VAE", refresh the list, select "vae-ft-mse-840000-ema-pruned" (or the version included with the model), and click the "Apply settings" button at the top. The VAE that comes with the model is the same "vae-ft-mse-840000-ema-pruned", so you don't need both; the one you downloaded will also work very well with most other models.

  2. Use hires-fix, first pass around 512x512, second above 960x960, and keep the ratio between the two passes the same if possible.

  3. Use negatives, but not too much. Add them when you see something you don't like.

  4. Use CFG 7.5 or lower; with heavy prompts that are long and use many emphases, you can go as low as 3.5. Generally, try to minimize the use of emphasis: just put the more important things at the beginning of the prompt. If everything is important, don't use emphasis at all.

  5. Make changes cautiously: changes at the beginning of the prompt have more influence, so a concept placed there can shift your results drastically.

  6. Read and use the manual (https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features).

  7. Learn from others, copy prompts from images that look good, and play with them.

  8. DPM++ 2M Karras is the sampler of choice for many people, including me. 40 steps are plenty, and some people use much less.

  9. Join the Discord server for help, sharing, show-offs, experiments, and challenges.
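The "Karras" in DPM++ 2M Karras refers to the noise schedule from Karras et al. (2022), which concentrates sampling steps at low noise levels where detail forms. A minimal sketch of that schedule; the sigma_min/sigma_max defaults here are illustrative placeholders, not the model's actual values:

```python
import numpy as np

def karras_sigmas(n: int, sigma_min: float = 0.1,
                  sigma_max: float = 10.0, rho: float = 7.0) -> np.ndarray:
    """Karras et al. (2022) noise schedule: interpolate in sigma^(1/rho) space,
    which spaces the n steps densely near sigma_min (fine detail) and
    sparsely near sigma_max (coarse structure)."""
    ramp = np.linspace(0, 1, n)
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    return (max_inv + ramp * (min_inv - max_inv)) ** rho

sigmas = karras_sigmas(40)   # the 40 steps recommended above
```

Because the schedule front-loads the coarse noise levels, 40 steps (or fewer) go further with a Karras sampler than with uniformly spaced sigmas.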


© 2023 金房子 | AI Enthusiast Community
