DFL 2.0 Models and Custom Pretraining/Training Datasets Sharing Thread.
[size=large]THIS IS A SHARING THREAD, POST MODELS ONLY.[/size]
[size=large]ASKING QUESTIONS ABOUT DFL, TRAINING METHODS, TERMINOLOGY, PRETRAINING OR RTM WILL RESULT IN A WARNING![/size]
1. If you have an issue with a specific model, first ask the user who made it via private message. If they don't reply and you believe it's an unusual issue that absolutely requires a new thread, make one in the QUESTIONS section, NOT IN THIS THREAD. If it's a common issue that is already explained in the guide or FAQ you'll get a warning, so don't spam new threads about common issues such as not knowing what a pretrained model is, out-of-memory errors, RTM questions, or really anything else HERE.
2. If you notice a dead link, message the user who made the model first; only if they don't reply within a few days should you post here asking for a reupload.
3. To share a model, upload all model files, including the summary file, loosely in a folder (zips/rars are not allowed, especially password-protected ones) to a cloud storage service of your choice.
NOTE: If you notice a model uploaded as a password-protected zip/rar, be aware it might contain a virus; password protection can prevent antivirus/antimalware scanners from detecting a threat. Report such links so those users can be banned.
If your SRC is one celebrity and your DST is random faces, that's an RTM workflow; the recommended approach is an LIAE-UD or LIAE-UDT model. You can read more about training RTM models in the guide: //deep.whitecatchel.ru/literotica/forums/thread-guide-deepfacelab-2-0-guide
Consider posting RTM models in this sharing thread instead: //deep.whitecatchel.ru/literotica/forums/thread-sharing-dfl-2-0-readytotrain-model
Add the following information about your model:
- model resolution
- network dimensions (dims)
- AdaBelief (used or not)
- iteration count
- the batch size used during most of the training (note whether random warp (RW) was enabled or disabled at the time)
- whether you used your own custom datasets (SFW or NSFW faces included, a specific race/gender only, other specific faces)
- whether you applied XSeg to your dataset prior to pretraining/training
- training method (pretrained, trained, or pretrained and trained)
Optionally mention which GPU and which release (date) of DFL you trained your model on, or whether you used a fork of DFL instead of iperov's version.
GENERAL PRETRAINING METHODS EXPLAINED
1. Pretrain: use of the pretrain option only, with the default or a custom dataset (if all faces are SFW the set is SFW; if some are of adult performers the set is NSFW). If you used the default pretrain dataset, say so; if it's custom, describe it.
2. Training: random training where SRC and DST feature the same random collection of faces, or where SRC and DST are two different random datasets of faces. If all faces are SFW the dataset is SFW; if some faces (for example in DST) are NSFW, the dataset is NSFW.
3. Pretrained and Trained (combination of both).
TEMPLATE:
Architecture - res: xxx, AB or non-AB, dims: xxx/xx/xx/xx, x.xxx.xxx iterations, Batch size: xx, Face Type (SFW/NSFW - Dataset type and description - Training method - XSeg applied or no XSeg)
For DF-based architectures you can use bold green letters (DF, DF-D, DF-U, DF-UD) and for LIAE architectures bold yellow letters (LIAE, LIAE-U, LIAE-D, LIAE-UD). This color coding is optional; it just makes it easier to spot specific models quickly while browsing the thread instead of relying on the search option, which only works if everyone applies the template precisely.
The dataset description part can also be color coded using the following colors:
- green: default dataset, SFW
- yellow: slightly modified default set (cleaned up, extra faces added), SFW
- orange: custom pretraining/training set or specific celebrity dataset, SFW
- red: NSFW DST (even if SRC is SFW), an entirely NSFW dataset, or a specific celebrity SRC with an NSFW DST
Examples:
DF-D - res: 448, dims: 512/96/64/24, 805.100 iterations, Batch size 12, Whole Face (NSFW - Jennifer Lawrence SRC, NSFW DST, FFHQ Pretrained and Trained - XSeg Masked)
LIAE-UD - res: 288, dims: 320/88/96/21, 1.234.567 iterations, Batch size 6, Whole Face (NSFW - Female Only SRC, Adult Performer DST - Custom Dataset Pretrained and Random Trained - XSeg Masked)
DF-UD - res: 320, dims: 256/64/64/18, 560.000 iterations, Batch size 9, Full Face (SFW - Custom Female Only (SRC same as DST) - Random Trained - No XSeg)
DF-U - res: 224, dims: 288/72/64/22, 300.000 iterations, Batch size 8, Head (SFW - FFHQ with extra celebrity photos - Pretrain - XSeg Masked)
LIAE-U - res: 512, dims: 384/128/128/32, 650.241 iterations, Batch size 4, Whole Face (SFW - FFHQ - Pretrain - No XSeg)
Dataset descriptions don't need to be written exactly like the examples, but they should give a person downloading the model enough information to know what the SRC and DST were, how the model was trained, and whether the faces were masked.