>>405
>built-in artist mix
If only it could transfer between models that are far apart in weight space; as it is, having the dataset already prepared just makes it easier to move to new models
>By the way, since you added more data on this lora, maybe my question is silly and/or stupid but, does this affect the other styles on a good or bad way?
It affects them more in a good way, because anything with enough data generalizes across the styles, just like how a full finetune on the whole booru fills the holes of drawing porn in the style of a non-h artist. It also retains some gradient info from previous steps (the optimizer momentum, presumably), which somehow helps achieve better style fidelity the longer the run goes. A single lora will always just carry all its biases with it; simple example: it's rare to find a diverse dataset that includes at least varied lighting, which can lead to a situation where even backlighting won't work without i2i hackery
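A minimal sketch of the kind of dataset audit that catches the lighting problem, assuming kohya-style comma-separated .txt captions next to the images; the tag list and the dataset/<style>/ layout are made-up examples, adjust to whatever your tagger emits:

# tag_audit.py - count how often each lighting tag shows up per style folder,
# assuming one comma-separated booru-tag .txt caption per image (kohya-style).
from collections import Counter
from pathlib import Path

LIGHTING_TAGS = {"backlighting", "rim lighting", "sidelighting", "dim lighting", "sunlight"}

def audit(dataset_root: str) -> None:
    for style_dir in sorted(Path(dataset_root).iterdir()):
        if not style_dir.is_dir():
            continue
        counts = Counter()
        n_caps = 0
        for cap in style_dir.glob("*.txt"):
            tags = {t.strip() for t in cap.read_text(encoding="utf-8").split(",")}
            counts.update(tags & LIGHTING_TAGS)  # only count the lighting tags
            n_caps += 1
        never = LIGHTING_TAGS - counts.keys()
        print(f"{style_dir.name}: {n_caps} captions, seen={dict(counts)}, never={sorted(never)}")

audit("dataset/")  # hypothetical layout: dataset/<style>/<image>.png + <image>.txt

If a style never sees backlighting at all, you know in advance that prompt will do nothing there without i2i.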
>Why a multi style lora instead of a single lora for each one of them?
Well, I explained already: it just should be more versatile. Some fuckups happen anyway, mostly from unrevised tagging. For example, pillardotinc is pretty scuffed in this version because the crops were not autotagged, so it ended up БЕЗНОГNМ (legless) kek, and even prompting legs won't always fix it. Most of the other fuckups come from uncensored mistagging; on some of the other styles it also occurs on censored pics, which turns into pure hell where the censor/uncensor tags just produce random cen/uncen pics. Overall it's just: the more data, params and gputime you have, the better the result should be, if everything else is done right. But even XL is too big to train reasonably as a full checkpoint with adam at high batch within 24GB, which is the only reason I'm going for a lora
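And for the censor mess, a quick sanity pass in the same spirit, again assuming comma-separated .txt captions; the tag names are the usual danbooru ones and the "nude" heuristic is just an example, not a real classifier:

# censor_check.py - flag captions whose censoring tags contradict each other
# or are missing entirely on explicit pics (crude heuristic, adjust tags).
from pathlib import Path

CEN = {"censored", "mosaic censoring", "bar censor"}
UNCEN = {"uncensored"}

def check(dataset_root: str) -> None:
    for cap in Path(dataset_root).rglob("*.txt"):
        tags = {t.strip() for t in cap.read_text(encoding="utf-8").split(",")}
        has_cen, has_uncen = bool(tags & CEN), bool(tags & UNCEN)
        if has_cen and has_uncen:
            print(f"CONFLICT  {cap}")  # tagged both ways, model learns noise
        elif not has_cen and not has_uncen and "nude" in tags:
            print(f"UNTAGGED  {cap}")  # explicit pic with no censoring info

check("dataset/")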
There is also some whole-dataset style bias, which will also differ between versions, but I'm pretty happy with how the latest one looks
https://files.catbox.moe/yd8u55.png
https://files.catbox.moe/ld2asp.png
https://files.catbox.moe/ore76c.png