Posted by 0xSoul on 2023-6-7 19:24:02

[SD 1.5][Checkpoint]SPYBG's Toolkit for Digital Artists V 4.5

SPYBG's ToolKit for Digital Artists

Official YouTube Channel: https://www.youtube.com/@spybgtoolkit
Patreon: https://www.patreon.com/SPYBGToolkit
Latest Video: https://www.youtube.com/embed/MAYPGPikR_o

Hello everyone, my name is Valentin from the Bulgarian AI Art Community, but people know me as SPYBG. I'm a 3D Character Artist by profession and have been doing this work for many years. If you're curious about what I do professionally, you can find my ArtStation here: https://www.artstation.com/spybg

Like many of you, I started experimenting with AI when it first came out. I wanted to create something that would help with my creativity on my personal projects, and eventually I saw the potential for artists to use what I was making in a professional environment. So for the last two months I've been creating custom datasets for characters, and after a request from a close friend of mine who is a Technical Lead at a studio that makes environments, I decided to build an environment dataset for my custom model as well.

Since I know a lot of artists who got upset about "people using their art", I went in a different direction. All of my datasets (the training images I created for this) were made by me, and they took a lot of time to make. I used AI tools to create what I needed, so all of my datasets (for characters and environments) are AI-generated; no other artist's input went into the making of this model, only my own.

I trained my model at 100 steps per image with 1,926 images; in total the model was trained for 194,000 steps. (Yes, I know that's a lot, but the results speak for themselves.)

Character dataset: 766 images, custom made by me.
Environment dataset: 1,160 images, custom made by me.

Special thanks to Suspirior! He helped me with tips, tricks and ideas, and he was the first to beta-test my model, so big thanks, buddy! I'll include some of his tests here as well.

Tips for using my model:

I recommend the following settings; they give the best results, at least for me, but feel free to experiment.

Sampler: DPM++ 2M Karras
Steps: 150 (lower step counts also work, but for this training data 150 works best based on my testing)
Recommended resolution: 768x768. The base model I used for training is a custom modified Protogen 3.4 merged with an older version of my toolkit (v2.0), and I trained it on 768x768 datasets, so I recommend 768x768, 768x1280, or higher resolutions.
Note: from version 4.0 onward I used the base SD 1.5-pruned model and finetuned it properly.
CFG Scale: 5 to 7 works best
Trigger words: tk-char (for characters) and tk-env (for environments). Why "tk"? It stands for Toolkit.
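For anyone driving the model from Python instead of a UI, here is a minimal sketch of how these settings could map onto the diffusers library (assuming a recent diffusers version that supports from_single_file; the checkpoint filename is a placeholder, so point it at wherever you saved the model):

```python
# Minimal txt2img sketch with the recommended settings, using diffusers.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "SPYBGs_ToolKit_v4.5.safetensors",   # placeholder path to the downloaded checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# "DPM++ 2M Karras" corresponds to the multistep DPM-Solver with Karras sigmas
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    prompt="photograph of (((male))) tk-char warrior, highly detailed, award winning image, 16k",
    num_inference_steps=150,   # recommended step count
    guidance_scale=6,          # CFG in the 5-7 range
    width=768,
    height=768,                # recommended resolution
).images[0]
image.save("tk-char_warrior.png")
```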
IMPORTANT: If you want the best results when creating characters, use my model in img2img with the images I provided in the templates directory (https://drive.google.com/drive/folders/1SgXASpyzjXUwNFGoOMqsppgnBrI95XK0?usp=share_link), in order to get much cleaner and more professional-looking images. While txt2img is great for environments, for characters it can be heavily unpredictable, and when making character concept art we want consistency. So I personally recommend you use my template images (https://drive.google.com/drive/folders/1BNNKypBRk6olWmBaoAZEVVYxh57gdnsT?usp=sharing) or any of your own; that's why I've provided different character sheets made by me, to get more consistent results.

Example prompts:

CHARACTER examples:

"photograph of (((male))) tk-char warrior, highly detailed, award winning image, 16k"
or
"photograph of (((male))) tk-char style warrior, highly detailed, award winning image, 16k"

"photograph of (((female))) tk-char warrior, highly detailed, award winning image, 16k"
or
"photograph of (((female))) tk-char style warrior, highly detailed, award winning image, 16k"

While you can use tk-char by itself as a trigger, you can also use "tk-char style". Try them both and see what results you get.

Note: Include (((male))) or (((female))) in front of tk-char to specify which kind of character you want. After that, use whatever you like to define the prompt further. Also, keep your prompts short; longer prompts can be fun, but check some of the prompts from my example images and you'll see how little it takes to get decent results.
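If you're scripting this img2img workflow, a sketch along these lines should work in diffusers. The template filename and the denoising strength are placeholders (the post doesn't pin down a strength value), so treat them as starting points to experiment from:

```python
# Sketch of the img2img character workflow: start from one of the template sheets.
import torch
from diffusers import StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "SPYBGs_ToolKit_v4.5.safetensors",   # same checkpoint as above (placeholder path)
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

# One of the character-sheet templates from the Drive folder (placeholder filename),
# resized to the recommended 768x768 working resolution.
template = load_image("templates/full_body_template.png").resize((768, 768))

image = pipe(
    prompt="photograph of (((female))) tk-char warrior, highly detailed, award winning image, 16k",
    image=template,
    strength=0.7,              # placeholder starting value, not from the post; tune to taste
    num_inference_steps=150,
    guidance_scale=6,
).images[0]
image.save("tk-char_from_template.png")
```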
Also, here is a link to some of my "demo" images; use those as templates in img2img, or use any of your own images, but mine will give you good results if you're making character concept art. (There are two versions available: a basic full-body sheet with different proportions and silhouettes at 1:1 aspect ratio, and a close-up sheet with head variations at 2:1 aspect ratio.)

Link to template images: https://drive.google.com/drive/folders/1SgXASpyzjXUwNFGoOMqsppgnBrI95XK0?usp=share_link

Environment examples:

"photograph of tk-env ancient environment style, Persian city, with people walking in it, in ancient Persia, with palm trees in the city, and flowers everywhere, award winning image, highly detailed"

Just include tk-env in your prompt to activate the trained data.

I recommend adding negative prompts for best results; any will work, but here is the one I use.

NEGATIVE PROMPT: (((signature))), (((text))), (((watermarks))), deformed eyes, close up, ((disfigured)), ((bad art)), ((deformed)), ((extra limbs)), (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, poorly drawn eyes, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), cloned face, body out of frame, out of frame, bad anatomy, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck))), tiling, poorly drawn, mutated, cross-eye, canvas frame, frame, cartoon, 3d, weird colors, blurry

Note: With my latest release (v4.5) you don't need any negative prompts (yes, you heard me correctly), but if you still want to use one, the list above is a good starting point.

____________________________________________________________________________

VAE: I recommend using the base SD 1.5 VAE from Stable Diffusion for better results.

____________________________________________________________________________

SD Upscale & Ultimate SD Upscale: If you want to upscale a generated image, I recommend the AUTOMATIC1111 SD Upscale script with a denoising strength of 0.35, a scale of 2, and R-ESRGAN General 4xV3 as the upscaler. For me this gives the best results.
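Outside of A1111, that upscale pass can be roughly approximated with diffusers. This is only a sketch under strong assumptions: A1111's SD Upscale script processes the image in tiles and uses R-ESRGAN for the initial resize, while the snippet below does a plain Lanczos 2x resize followed by a single, non-tiled img2img pass at 0.35 strength, which is more memory-hungry and won't match A1111's output exactly:

```python
# Rough, non-tiled stand-in for the SD Upscale workflow:
# 2x resize, then an img2img pass at 0.35 denoising strength.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "SPYBGs_ToolKit_v4.5.safetensors",   # placeholder path
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

img = Image.open("tk-char_warrior.png").convert("RGB")
# A1111 would use R-ESRGAN General 4xV3 for this step; Lanczos is only a simple stand-in.
img = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)

refined = pipe(
    prompt="photograph of (((male))) tk-char warrior, highly detailed, award winning image, 16k",
    image=img,
    strength=0.35,             # the recommended 0.35 denoising strength
    num_inference_steps=150,
    guidance_scale=6,
).images[0]
refined.save("tk-char_warrior_2x.png")
```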
____________________________________________________________________________

Since my model is based on SD 1.5, all embeddings made for the 1.5 model will work fine with my custom model. I'll include some of the great ones with links below and update the list as I go.

EMBEDDINGS:

SPYBG's Toolkit Character Enhancer (https://civitai.com/models/6496/spybgs-toolkit-character-enhancer) - My own tool, designed to improve your character creations even further when used in combination with my model.
Note: Lower the strength of the LoRA embedding so it fixes some things in your images without overtaking the design you're going after.

CharTurner Character Turnaround Helper (https://civitai.com/models/3036/charturner-character-turnaround-helper) - Great for generating character concepts with front, side and back views. Use it in combination with the (Front_Side_Back) templates from my template images (https://drive.google.com/drive/folders/1SgXASpyzjXUwNFGoOMqsppgnBrI95XK0?usp=share_link) for even better results!
Note: My model now supports multiple views of the same character when generating in txt2img, but still check out this addon, it's great!

____________________________________________________________________________

Feel free to use, merge and experiment with my model for anything you want.

If you want to credit me for using it, feel free, but it's all right either way. All I want is for people and artists to have something they can use in a production pipeline, or just experiment with for fun. This is the closest I've got to making that a possibility.

And yes, you can train this model further with your own images of yourself or anything else you want, but I would recommend making TI embeddings of your own images for optimal results.

P.S. Share your results, I would love to see what you guys make!

Cheers!
Your friendly neighborhood 3D Character Artist
Valentin