[SD 1.5][TextualInversion] Deep Negative V1.x V1 75T
<p>This embedding teaches the model what is <strong>REALLY DISGUSTING</strong>.</p><p>So please put it in the <strong><u>negative prompt</u></strong>.</p><p></p>
<h3>What does it do?</h3><p>This embedding has learned what disgusting compositions and color patterns look like, including faulty human anatomy, offensive color schemes, upside-down spatial structures, and more. Placing it in the negative prompt goes a long way toward avoiding these things.</p><p>-</p><img src="https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/00f10479-531c-4dc8-8021-f2af1c697700/width=525" /><p></p>
<h3>What is 2T, 4T, 16T, 32T?</h3><p>The number of vectors per token used by the embedding.</p><p></p>
<h3>What is 64T, 75T?</h3><p><strong>64T</strong>: trained for over <u>30,000</u> steps on mixed datasets.</p><p><strong>75T</strong>: the maximum embedding size; trained for 10,000 steps on a <u>special dataset</u> (generated by many different SD models plus special reverse processing).</p><p></p>
<h3>Which one should you choose?</h3><ul><li><p><strong>75T</strong>: The most "easy to use" embedding. It is trained on an accurate dataset built in a special way and has almost <strong>no side effects</strong>, and it contains enough information to cover a wide range of usage scenarios. On some <u>well-trained models</u>, however, it can be hard for it to take effect, and the change may be subtle rather than drastic.</p></li><li><p><strong>64T</strong>: Works with all models, but has side effects, so some tuning is required to find the best weight. <u>Recommended</u>: [(NG_DeepNegative_V1_64T:0.9):0.1] (Automatic1111 prompt-editing syntax: weight 0.9, introduced after the first 10% of sampling steps).</p></li><li><p><strong>32T</strong>: Useful, but a bit too much.</p></li><li><p><strong>16T</strong>: Reduces the chance of bad anatomy, but may produce ugly faces. Well suited to raising the quality of <strong>architecture</strong>.</p></li><li><p><strong>4T</strong>: Reduces the chance of bad anatomy, but also has a slight effect on light and shadow.</p></li><li><p><strong>2T</strong>: "Easy to use" like 75T, but with only a small effect.</p></li></ul><p></p>
<h3>Suggestion</h3><p>Because this embedding has learned how to create <strong>disgusting concepts</strong>, it cannot directly improve picture quality on its own, so it is best used together with negative prompts such as <u>(worst quality, low quality, logo, text, watermark, username)</u>.</p><p>Of course, it is completely fine to combine it with other similar negative embeddings.</p>
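<p>For example, with the diffusers library the suggestion above could look roughly like the sketch below. This is only an illustration, not an official recipe: the checkpoint name, the embedding file name, and the trigger token ng_deepnegative_v1_75t are assumptions and should match whatever file you actually downloaded. (In the Automatic1111 WebUI you simply drop the file into the embeddings folder and type its name in the negative prompt box.)</p>
<pre><code># Hypothetical sketch with the diffusers library (not an official recipe):
# load the downloaded embedding file and put its trigger token into the
# negative prompt. File name, token, and checkpoint are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Register the textual-inversion embedding under a trigger token.
pipe.load_textual_inversion(
    "./ng_deepnegative_v1_75t.pt",      # path to the downloaded embedding (assumption)
    token="ng_deepnegative_v1_75t",     # trigger word used below (assumption)
)

image = pipe(
    prompt="portrait of a woman, detailed face, soft lighting",
    negative_prompt=(
        "ng_deepnegative_v1_75t, worst quality, low quality, "
        "logo, text, watermark, username"
    ),
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("output.png")
</code></pre>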
<h3>More examples and tests</h3><ul><li><p>Drawing buildings: <a target="_blank" rel="ugc" href="https://imgur.com/5aX9yrP">https://imgur.com/5aX9yrP</a></p></li><li><p>Hand fix: <a target="_blank" rel="ugc" href="https://imgur.com/rDlsrgS">https://imgur.com/rDlsrgS</a></p></li><li><p>Portrait (with <a target="_blank" rel="ugc" href="https://civitai.com/models/4514/pure-eros-face">PureErosFace</a>): <a target="_blank" rel="ugc" href="https://imgur.com/1Lqq595">https://imgur.com/1Lqq595</a> <a target="_blank" rel="ugc" href="https://imgur.com/V5kXBXz">https://imgur.com/V5kXBXz</a></p></li><li><p>Fusion body fix:</p><img src="https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ac167975-eadc-4c28-e87e-0d8ed2bec000/width=525" /><img src="https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/61bd2d45-b21e-47dd-a9c4-cbad326dc200/width=525" /><p></p></li></ul><p></p>
<h3>How does it work?</h3><p>I tried to make SD learn what is really disgusting using the DeepDream algorithm; the dataset is ImageNet-mini (1,000 images randomly re-sampled from it). A minimal illustrative sketch of the idea is included in the appendix at the end of this page.</p><p>DeepDream output is <strong>REALLY</strong> disgusting, and the process of training this embedding genuinely caused me physical discomfort.</p><p></p>
<h3>What next?</h3><p><strong>SD 2.x</strong> embedding training~</p><p></p><p>Looking forward to your reviews and suggestions!</p><p><strong>-</strong></p><p><strong>My Discord server, find me here:</strong></p><p><a target="_blank" rel="ugc" href="https://discord.gg/v5HFg47J6U"><strong>https://discord.gg/v5HFg47J6U</strong></a></p><p><strong>-</strong></p>
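<h3>Appendix: a minimal DeepDream-style sketch</h3>
<p>To make the DeepDream idea from the "How does it work?" section more concrete, here is a minimal gradient-ascent sketch in PyTorch. It is only an illustration of the general technique, not the actual script used to build the training dataset: the choice of VGG16, the layer index, the step count, the learning rate, and the file names are all assumptions.</p>
<pre><code># Minimal DeepDream-style gradient ascent (illustrative sketch only).
# Maximizing a feature layer's activations pushes the image toward the
# hallucinated textures DeepDream is known for.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained ImageNet classifier whose features we "dream" on (assumption: VGG16).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.to(device).eval()

# ImageNet mean/std normalization is omitted here for brevity.
img = T.Compose([T.Resize(512), T.ToTensor()])(
    Image.open("input.jpg").convert("RGB")       # placeholder input image
).unsqueeze(0).to(device)
img.requires_grad_(True)

layer_index = 20      # which VGG feature layer to maximize (assumption)
steps, lr = 50, 0.05  # gradient-ascent schedule (assumption)

for _ in range(steps):
    x = img
    for i, layer in enumerate(model):
        x = layer(x)
        if i == layer_index:
            break
    loss = x.norm()    # ascend on the activation magnitude of that layer
    loss.backward()
    with torch.no_grad():
        img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)

T.ToPILImage()(img.detach().squeeze(0).cpu()).save("deepdream_output.png")
</code></pre>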