>>17456
You've got a point. I intended to have a tutorial for my training style out already, to go alongside the release of the UI, but that got delayed because I ended up just working on the UI some more. I really should get around to formatting and publishing it so there are examples of how to train LoCons out there, because they really are useful.
>>17465
Thank you! I'm testing out a bunch of models to see if I can create a unique style; those were all just gens I made during that testing.
>>17466
I could have sworn I made the UI really intuitive and easy to use, so I'm also surprised to hear that people are struggling, especially since I haven't gotten any real bug reports on the GitHub.
>>17467
I've definitely found that LoCons shine in compatibility. I think it's because they train on more layers, so the weights are spread more evenly instead of being bunched up on a few layers. That means you can run one at strength 2 without any frying, and you can stack two or more at strength 1 without frying either. I also find they produce much better results at smaller file sizes, though that might be down to the way I train. I don't usually go higher than dim 32 / conv dim 32 for the LoCons I make, and I often get down to dim 16 / alpha 8 with conv dim 8 / alpha 1. That usually results in a very close-fitting style LoCon around 28 MB in size. Sometimes I do need to go higher, but even then it usually ends up smaller than the equivalent LoRA would be, at least in my testing. Like I said above, though, I really need to get around to posting the updated guide, which covers things like style LoRA training guidelines among other things. A rough config sketch is below.
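For anyone curious, here's roughly what those dims look like as the network section of a kohya-ss sd-scripts config TOML. This is a sketch, not my exact file: it assumes you have LyCORIS installed, and "algo=locon" is the LyCORIS network_args switch for plain LoCon.

# network section of an sd-scripts --config_file TOML (sketch)
network_module = "lycoris.kohya"    # LyCORIS module instead of the built-in networks.lora
network_dim = 16                    # linear dim
network_alpha = 8                   # linear alpha
network_args = [ "algo=locon", "conv_dim=8", "conv_alpha=1" ]    # conv layers get their own dim/alpha

The same settings work as plain --network_dim / --network_alpha / --network_args flags on the command line if you don't use a config file.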
>>17468
A friend of mine did some testing on these and found that a lot of the commonly recommended args aren't actually that good, but he did find that these settings seem to encourage the LoRA to train across as many layers as possible, as evenly as possible:
network_dropout = 0.5
min_snr_gamma = 10
scale_weight_norms = 5.0
The other args he tested seemed less impressive.
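If anyone wants to drop those into the same kind of config, here's how they'd sit alongside the network block above, with my understanding of what each one does. These are real sd-scripts options, but the values are just what his testing landed on, so treat them as a starting point:

network_dropout = 0.5        # drops neurons in the network modules each step (0 = none, 1 = all)
min_snr_gamma = 10           # Min-SNR loss weighting; the paper recommends 5, he found 10 better here
scale_weight_norms = 5.0     # max-norm regularization: scales down any module whose weight norm exceeds 5.0

Dropout at 0.5 is aggressive, but that seems to be the point: if half the neurons vanish every step, no single layer can hoard all the learned detail, which fits with the even-weights behavior I described above.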