>>35288
How are the other LoRA types? I've just been using the normal, original vanilla LoRA. I haven't really bothered to look into the other variants, since I think I've still got plenty to master on the regular one.
There are definitely some genuinely interesting kinds out there with really cool properties.
Oh, and I recommend trying other base models (ones that were actually trained rather than merged). I have solved many problems I've run into just by switching models. LoRAs also work across all base models; they only struggle with the merge extravaganzas, which barely perform at a general level to begin with.
However, if it doesn't train on the SDXL base model, it ain't going to train like that at all, and you need to rethink your approach.
>>35291
Adafactor with a constant schedule is not that good, really; it's hard to adjust. Adafactor seems to only work properly with the Adafactor scheduler. I personally use cosine for everything and it works well.
I use DAdaptAdam with cosine and it always works. It might not be the best, but it's the one I know will work every time. The most difficult part is setting the repeats just right. The sweet spot seems to be around 7-10; 6 works if the subject is obvious, but below that it just learns nothing, and above 10 it becomes inflexible.
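For reference, assuming you're on kohya-ss sd-scripts (the poster doesn't name their trainer, so this is just a sketch), the DAdaptAdam + cosine combo looks roughly like this; all paths and the epoch count are placeholders, and note that D-Adaptation optimizers want the learning rate pinned at 1.0 so they can adapt it themselves:

```shell
# Sketch of an SDXL LoRA run with DAdaptAdam + cosine via kohya-ss sd-scripts.
# Paths, resolution, and epochs are placeholders - tune to your setup.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path="sd_xl_base_1.0.safetensors" \
  --train_data_dir="./dataset" \
  --output_dir="./output" \
  --network_module=networks.lora \
  --optimizer_type="DAdaptAdam" \
  --learning_rate=1.0 \
  --lr_scheduler="cosine" \
  --resolution="1024,1024" \
  --max_train_epochs=10
```

The repeats discussed above aren't a flag here; in this trainer they're encoded in the dataset folder name prefix (e.g. a folder named `8_subject` gets 8 repeats per epoch).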
But if you struggle with bias toward a style (like it only making photographic, only anime, only painted looks, etc.), throw in a few random images of different styles. They don't need to relate to the subject itself. For example, if I want to train something with only photographs, I put in a random cartoon character and it adds a lot of flexibility. And so forth. You can also increase image quality (if your model is learning "low quality" attributes) by putting in a few really good images of unrelated subject matter.
Same thing with fixations. You can put in something like a partially exposed diaper, and you get extra flexibility. But treat this like spice: just a small dash is enough.
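Putting the repeats and the "spice" advice together, a dataset might be laid out something like this (folder and file names are made up for illustration; the number prefix is the kohya-style repeat count):

```
dataset/
├── 8_mysubject/             # 7-10 repeats: the actual training photos
│   ├── photo_001.jpg
│   └── photo_002.jpg
└── 2_spice/                 # low repeats: a small dash of off-style images
    ├── random_cartoon.png   # breaks the "photos only" style bias
    └── hires_painting.png   # pulls quality up if the main set is mediocre
```

Keeping the spice folder at low repeats is what makes it a dash rather than a second subject.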