/hdg/ - Stable Diffusion

Anime girls generated with AI


/hdg/ #10 Anonymous 03/16/2023 (Thu) 06:38:08 No. 11276
https://gitgud.io/GautierElla/8chan_hdg_catbox_userscript Not the anon who made it but still putting the catbox script in the OP for visibility
>>11276 now make sex stickers
I'm really not liking locons for characters. Character locons seem to forget the character they're meant to be quite often if you try to draw the character doing something they wouldn't normally do, if you attempt to modify their looks, or if you try to prompt something weird, while a lora will normally remember the character it's meant to draw and you can prompt the character into all sorts of weird shit or draw them wearing different things. I haven't tried the LoHA some anon made yet, but this is my experience with locons so far.
>>11279 as the resident schizo rationalizer, I guess it's a little too easy to learn when you have the whole network available to you
>>11280 Style or Concept locons have been great so far. Characters, not so much since you would normally use a character lora/locon/loha to draw that specific character and if it forgets how to draw the character or stops looking like the character then it kind of defeats the purpose.
Posting it again in this thread in hopes someone can help out with a fix. The LoHA extension does not work on colab. Locons work perfectly but LoHA does not work at all. The error given is below:

Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/extra_networks.py", line 75, in activate
    extra_network.activate(p, extra_network_args)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/extra_networks_lora.py", line 23, in activate
    lora.load_loras(names, multipliers)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 151, in load_loras
    lora = load_lora(name, lora_on_disk.filename)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/main.py", line 161, in load_lora
    assert False, f'Bad Lora layer name: {key_diffusers} - must end in lora_up.weight, lora_down.weight or alpha'
AssertionError: Bad Lora layer name: lora_te_text_model_encoder_layers_0_mlp_fc1.hada_w1_a - must end in lora_up.weight, lora_down.weight or alpha

Link to the colab being used: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb
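For reference, the assertion in that traceback fires because the LoCon extension's loader only accepts keys ending in lora_up.weight, lora_down.weight, or alpha, while LoHA files store hada_* factor keys instead. A minimal, hypothetical sketch of that key check (not the extension's actual code):

```python
# Suffixes the locon extension's loader accepts (per the assertion message above)
LORA_SUFFIXES = ("lora_up.weight", "lora_down.weight", "alpha")
# Factor keys LoHA (Hadamard-product) files use instead
HADA_MARKERS = ("hada_w1_a", "hada_w1_b", "hada_w2_a", "hada_w2_b")

def classify_key(key: str) -> str:
    """Return 'lora' for keys the loader accepts, 'loha' for the
    hada_* keys it rejects, and 'unknown' otherwise."""
    if key.endswith(LORA_SUFFIXES):
        return "lora"
    if any(key.endswith(m) for m in HADA_MARKERS):
        return "loha"
    return "unknown"

# The exact key from the traceback is a LoHA key, hence the AssertionError:
print(classify_key("lora_te_text_model_encoder_layers_0_mlp_fc1.hada_w1_a"))  # loha
```

So the colab is loading a LoHA file through a code path that only understands LoRA/LoCon key layouts.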
UOOOOOOH FLAT CHEST AND CUNNY EROTIC UOOOOH ToT
Didn't realize we started a new thread... oops
so are some of us just reverting back to lora? I tried out 1-megapixel training with LoHA for the past few days and only an intense optimizer like Lion has been able to learn my characters (1-megapixel training with batch size 1, btw)
3D Open pose editor doesn't work in my gradio for some weird reason, wonder if I have to git pull my webui to make it work but that's a lot of shit that's going to break ugh
what is it with all these LoHAs and Locons being Uma Musume based on Civitai
>>11295 Catbox?
(2.35 MB 1024x1536 catbox_460zsk.png)

(2.26 MB 1024x1536 catbox_jqhyls.png)

>>11296 Cygames goes after artists making uma porn in the name of respecting the real world horses, it's a way for uma coomers to finally get what they want since human artists aren't going to draw it
>>11286 This.
ryusei constantly draws broads with huge veiny tits but basically never goes full lewd so this is very cathartic
>>11306 based veinchad
Shoebill anon here, back from my short 5-day vacation. What did I miss? 50 fucking captchas, holy shit, i fucking hate these
>>11309 I only get one captcha like once a week....
(1.80 MB 1536x872 catbox_w9zcyu.png)

Has anyone else run into this issue? This is with all extensions disabled
>>11305 >>11306 Nice >>11309 >What did I miss? A bunch of dildo riding gens, so very important stuff.
>>11310 I meant I had to retry 50 times before I got one right
>>11313 Yeah the captcha is pretty godawful but at least it serves a purpose since I can't see an AI being able to solve those lmao
>>11311 what, that your first image looks deep fried as fuck? Never.
(2.59 MB 1536x1536 catbox_5pa67o.png)

(2.71 MB 1536x1536 catbox_slzwui.png)

(2.66 MB 1536x1536 catbox_74eyw5.png)

Testing time for the rebaked Yinpa lora like I promised. Also the catbox script now works after updating, so thanks, anon!
>>11316 and somehow I broke it again, as one of them does not want to be uploaded via the script
(1.22 MB 840x1256 catbox_51rozj.png)

(1.27 MB 840x1256 catbox_d8bz43.png)

(1.39 MB 840x1256 catbox_vcn07b.png)

(1.24 MB 840x1256 catbox_of4pat.png)

Post succubus mesugakis
>>11318 I haven't had any issues with using tampermonkey so far, while on firefox
>>11318 You could try with tampermonkey and see if that makes it better?
(1.18 MB 1152x960 catbox_kzro50.png)

(1.48 MB 960x1152 catbox_p0ei07.png)

(1.44 MB 960x1152 catbox_8wqnsf.png)

testing out the c@box script
>>11322 seems like it worked!
all my attempts at making a good hutao lora with around 100 carefully selected and tagged images have failed. It somehow always overfits, especially on her outfit and hat. Even with a special tag for her hat and her outfit, and with half of the training images not having her in either, the resulting lora still always forces her into the hat or outfit. Does anyone have any ideas? I might forget about manual tagging and just download and autotag all 7k images of her on danbooru and train it off of that
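Before autotagging everything, it may be worth counting how often each tag actually appears across the caption files; if the hat/outfit tags ride along in most images, the lora tends to absorb them into the character anyway. A small sketch, assuming kohya-style one-.txt-per-image, comma-separated captions (the folder name in the comment is made up):

```python
from collections import Counter
from pathlib import Path

def tag_frequencies(caption_dir: str) -> Counter:
    """Count how often each comma-separated tag appears across all
    .txt caption files in a dataset folder."""
    counts = Counter()
    for txt in Path(caption_dir).glob("*.txt"):
        tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",")]
        counts.update(t for t in tags if t)
    return counts

# Tags near the top of most_common() that aren't general quality/pose tags
# are candidates for the "baked in anyway" problem, e.g.:
# for tag, n in tag_frequencies("train/10_hutao").most_common(20):
#     print(n, tag)
```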
(1.11 MB 944x1416 01917-2603849826.png)

(1.33 MB 1024x1408 01956-3916177106.png)

(1.66 MB 1024x1408 01939-3212984346.png)

>>11325 I need a catbox for the second picture.
(2.20 MB 1024x1536 catbox_69yae9.png)

(2.20 MB 1024x1536 catbox_1sf4xg.png)

(2.08 MB 1024x1536 catbox_7an565.png)

>>11319 Sure, why not.
(1.38 MB 960x1248 catbox_d3cg0t.png)

>>11327 I love how nahida has become the standard litmus test for how well a model does cunny
What the fuck is Loli Diffusion, anons?
>>11331 sky freedom?
>>11331 >>11332 undercooked nora higuma
>>11330 it's basically a mix that somebody created to make genning lolis easier. I don't know the exact mix, but they have "official" versions on huggingface, which is mixed with models not known for hentai, and unofficial which is mixed with models that are completely meant for hentai, like grapefruit, or AOMv2_hard. Have I used the models? no, and I don't know if I ever will, but, seems like it's basically doing what they set out to do.
>>11334 why make a mix rather than a finetune? Would just be easier to finetune a model on loli pics if you want an actual easier way to gen lolis
(1.86 MB 1024x1536 catbox_9j9vz9.png)

(1.55 MB 1024x1536 catbox_55qqk0.png)

(2.16 MB 1024x1536 catbox_cghcax.png)

>>11335 seems like they are trying to make a dataset for a finetune eventually, but for now a mix is what they make, though... why not just make a LoRA for the loli body type? wouldn't that just be better than making a whole model for it?
(1.42 MB 1024x1280 00009-1435270985.png)

flat
(838.79 KB 760x1088 catbox_glipzk.png)

>>11336 true then you can just merge it into a model if it's a Locon
(1.77 MB 1024x1536 catbox_fikrkx.png)

(1.51 MB 1024x1536 catbox_yvw3id.png)

(1.86 MB 1024x1536 catbox_uh3yog.png)

(1.64 MB 1024x1536 catbox_13kvms.png)

>>11340 yeah, and it's probably more than enough, we definitely don't need a full model to make genning loli easy
(1.06 MB 768x1152 00001-460181438.png)

(1.05 MB 768x1152 00003-460181438.png)

(1.04 MB 768x1152 00005-460181438.png)

(1.20 MB 896x1280 00057-3273018088.png)

>>11132 No prob! Speaking of squatting, is the anon that made one of the original squatting ladies here? It's pretty impressive how that prompt holds up these days. >>11163 >>11168 >>11180 >>11186 >>11212 >>11225 >>11251 Thank you for documenting your process, Nene anon! Hopefully this run turns out to be a success! >>11202 Thank you for getting back to me on this and for the cute flattie Honks! Here's a flattie Ely for you in return!
>>11343 catbox?
(1.56 MB 1024x1536 catbox_frbw46.png)

(1.58 MB 1024x1536 catbox_s9bovo.png)

(1.91 MB 1024x1536 catbox_3yd29m.png)

(1.83 MB 1024x1536 catbox_51g8qb.png)

alphonse lora. was wondering for a while why a lot of my gens looked burned in places across all my different configs, turns out that's just his style in a lot of pictures. putting "chromatic aberration" in the negatives helps a bit alpha128: https://files.catbox.moe/488k90.safetensors alpha64: https://files.catbox.moe/gaicls.safetensors
(1.99 MB 1024x1536 catbox_q8nqme.png)

(2.12 MB 1024x1536 catbox_qtdog6.png)

(1.98 MB 1024x1536 catbox_o86hd8.png)

(1.61 MB 1024x1536 catbox_yjtf8v.png)

>>11346 NTA, love the style, fucking hate the chromatic aberration. It genuinely hurts my eyes
>>11349 >chromatic aberration yeah I hate this meme post-effect too
>>11349 >>11350 totally agreed, blame the original artist for that. it sucks that so much of his talent has to go to waste for a stupid PS filter. though maybe this could be a good application of ML postprocessing in the future.
(1.48 MB 1024x1408 38482-2815144844.png)

>>11329 She just werks

(800.44 KB 704x1000 00044-2472342632.png)

If i don't end up a faggot from uncensoring this many penises color me surprised.
>>11351 Chromatic Aberration (more like abomination) isn't a PS filter, it's a phenomenon that occurs when (shit) lenses fail to focus all colors on the same point. In my case it genuinely fucking hurts my eyes, and the only game with CA that I can play for more than 30 mins is GTA V (it's subtle but it's there). I think I first encountered it in Payday 2 and I couldn't tell why my eyes were burning after 10 mins. But yeah, it got turned into a PS filter and post-processing technique for some reason. You can make a case for something like lens flare from time to time, but CA needs to fucking die.
(1.62 MB 1024x1536 catbox_illyhe.png)

(1.64 MB 1024x1536 catbox_om76it.png)

(1.51 MB 1024x1536 catbox_q6nv43.png)

(1.57 MB 1024x1536 catbox_o6rpsx.png)

>>11353 we will all revel in your success once you are done!
>>11355 i haven't trained a concept lora in so long tho, idk what the default parameters are as of now. also i kinda divided the dataset in 2 parts, well more like 4, but dunno if i should train them one by one or all at the same time: one is for pics with disembodied penises, which i honestly would probably not consider if the dataset ends up being big enough, another one is for coop naizuri, which has almost no images so it might be counterproductive, and then naizuri from the side and pov naizuri, which idk if i should train together or separately
(1.54 MB 1024x1536 catbox_kzx93t.png)

(1.52 MB 1024x1536 catbox_b5g41j.png)

(1.61 MB 1024x1536 catbox_1papd6.png)

>>11356 hmm, if you train them together, do make sure to properly separate out the tags; an "activation tag" will likely be necessary. You can usually get multi-concept training to work if you properly set up the repeats on the folders so that they all contribute roughly equally. You might actually have an easier time if you use a locon or loha, so give those a shot as well if you want to try
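For the "set up the repeats so they are all roughly equal" part: kohya-style trainers read the repeat count from the folder-name prefix (e.g. 10_conceptname). Here's a rough sketch of one balancing heuristic (my own, not anything from sd-scripts; the concept names are placeholders):

```python
def balanced_repeats(image_counts, target=None):
    """Pick per-folder repeat counts so every concept folder contributes
    roughly the same number of samples per epoch. `image_counts` maps
    folder name -> number of images in it."""
    if target is None:
        target = max(image_counts.values())  # match the largest folder
    return {name: max(1, round(target / n)) for name, n in image_counts.items()}

counts = {"concept_a": 120, "concept_b": 60, "concept_c": 15}
print(balanced_repeats(counts))
# {'concept_a': 1, 'concept_b': 2, 'concept_c': 8}
```

You would then rename the folders to e.g. 1_concept_a, 2_concept_b, 8_concept_c so each concept sees about 120 samples per epoch.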
>>11353 the possible sacrifice of your sexuality will not be in vain anon!
(1.77 MB 1024x1536 00007-1107800142.png)

(1.21 MB 1024x1024 00003-1107800142.png)

(1.40 MB 1280x1024 00001-460469483.png)

(1.63 MB 896x1280 00003-801957636.png)

>>11148 >>11209 >>11223 Unicorn cute! Thank you for sharing! >>11165 Biiiiig >>11191 I dig this aesthetic. Space wheat to make space bread... >>11211 Thank you for the update! >>11289 Thank you for sharing! I'm getting great results even at 0.4 weight. >>11353 >>11356 Ganbatte, Nene anon...
(1.63 MB 1024x1536 catbox_1xmpox.png)

(1.66 MB 1024x1536 catbox_wc4lcl.png)

(1.62 MB 1024x1536 catbox_wjo6w5.png)

>>11359 i'm glad you are enjoying the unicorn lora, the hair is a bit inconsistent, so I will probably retag and bake it again with those not pruned to make it easier to separate... might actually just go full on schizo and make a dataset for as many of her outfits as I can. the bleed might be immense though, if too many are close enough also, I didn't know that loli white hair bunny toga would be so perfect
(722.55 KB 3063x4096 FiGfbl5UUAMGxTf.jpg)

Does she have a LoRA?
(128.79 KB 293x333 EpuHunZXYAY3vpo.png)

The catbox script is great and all but I wish there was one that reduced the size of thumbnails in this site by like 25% because they're too big on the default layout. Some scriptchad please hear my plea. I feel like all this site needs to be perfect is a better autoupdater and smaller thumbnails.
>>11361 pretty sure i saw one the other day
(1.58 MB 1024x1536 catbox_5oq43f.png)

(1.64 MB 1024x1536 catbox_ceejd9.png)

(1.64 MB 1024x1536 catbox_h318h0.png)

(1.68 MB 1024x1536 catbox_vfamot.png)

>>11362 isn't that something we can change the site css for? we are allowed to do that right?
(1.36 MB 840x1256 catbox_ryudiv.png)

(1.18 MB 840x1256 catbox_5h8kzc.png)

>>11328 >>11343 Very nice I finally finished an akio watanabe style lora... by throwing out the dataset and making a Popotan one instead. I was going to extract the CG from the original disc but ended up just downloading them from exhentai and running them through an ESRGAN jpeg remover. It's not all that strong without composable lora or at a higher weight, but I'm calling this one done. https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/rMNSlLxQ
>>11362 Try putting this in your User CSS (in the settings, it's the gear icon):

.uploadCell img:not(.imgExpanded) {
    max-width: 100px;
    max-height: 100px;
    object-fit: contain;
}

Although, I don't understand how to persist the changes, is that something one would have to create an account for?
(1.51 MB 896x1280 00015-1983116638.png)

(1.58 MB 896x1280 00021-3144381159.png)

(1.38 MB 896x1280 00033-329839024.png)

(1.38 MB 896x1280 00045-3832804132.png)

>>11353 I'm so sorry, Nene anon, but your situations continue to make for quality prompts. https://files.catbox.moe/xxx9yp.png
>>11364 You can change the site CSS for yourself and I have also offered to add changes people would want to the default board CSS if anyone makes a cool CSS script that most people would want to see as default but we'd need to vote on that to make sure it is what the majority wants.
>>11366 what ESRGAN jpeg remover did you use?
Git pulled to try the UniPC sampler and it seems broken. Keeps giving me really shit results as if I had like 10 steps (tried 10 to 25 and 25 looks way worse than 2M Karras)
>>11367 Nice, this works like a charm. Now all I can wish for is a better updater. Dollchan had a better thread updater that actually works on 8chan but the script comes with shittons of bloat and conflicts pretty badly with the default 8ch built-in extension that can't be disabled so I never liked using it.
(1.64 MB 1024x1536 catbox_21yvwl.png)

(1.64 MB 1024x1536 catbox_abrl92.png)

(1.77 MB 1024x1536 catbox_tpku5z.png)

(1.66 MB 1024x1536 catbox_784lcq.png)

>>11369 fair and valid! I personally don't have an issue with the thumbnail size, but I'm trash at css so I will probably not make a theme or anything out of it... unless I have a bunch of time to spend and nothing to spend it on, because I'd need to learn a lot.
>>11298 >can't right click Uhhh, how do I use this extension now?
>>11374 I don't think the right click to see metadata is in the extension yet, you are gonna have to click the green icon to download it from catbox and then see it from there
(70.07 KB 512x704 catbox_5njzgw.jpg)

(68.75 KB 512x704 catbox_y0dpo0.jpg)

(69.74 KB 512x704 catbox_xj65bc.jpg)

(64.72 KB 512x704 catbox_vljiub.jpg)

Nahida and dentist Also JPEG metadata test
>>11370 I used the JPG (80-100%) model from this page https://upscale.wiki/wiki/Model_Database#JPEG_Artifacts I used to use this guy's script https://github.com/JoeyBallentine/ESRGAN Then I found out they recently made a GUI which is surprisingly polished and worked perfectly https://github.com/chaiNNer-org/chaiNNer
Is it possible to fix VAEs so that I don't have to use --no-half-vae?
>>11378 I'm pretty sure only NAI vae has the issue which requires it, though I could be wrong
>>11379 NAI derivatives also have it
>>11378 Stop using NAI vae and switch to ema-560000, mse-840000, or kl-f8-anime2. >>11380 There are no derivatives, they're just the same file renamed
>>11380 aren't NAI derivatives just NAI? I'm pretty sure, at least
>>11117 >>11109 Someone with a public github account mind opening an issue about this on the sd-scripts repo? I hope I'm wrong because, thinking about it, the fix might require substantially reworking how kohya injects the data
>>11381 >>11382 >There are no derivatives Refslave and perfectcolors are derivatives, crosskemono MIGHT be a derivative. AOM VAE *IS* the NAI VAE and anythingv3 VAE might also be the NAI one, don't remember. Pastelmix VAE is WD
(1.64 MB 1024x1536 catbox_abrl92.png)

(1.77 MB 1024x1536 catbox_tpku5z.png)

(1.66 MB 1024x1536 catbox_784lcq.png)

(1.64 MB 1024x1536 catbox_uwcg6z.png)

>>11384 I know that anythingv3 is just NAI vae, it was the exact same file size. can I get a source on refslave and perfectcolors though? I thought they were completely separate, not that I use either.
>>11346 god i love his bellies, shame mixing it with aki99 is getting a little dangerous
>>11385 Source: NaN errors, especially when upscaling. My bad, refslave is also the NAI one. But perfectcolors definitely isn't. Crosskemono is also the same as pastelmix (which is WD and explains why it's not giving me NaNs) Point still stands though, VAEs that are based on NAI's are just as NaN-prone if not more.
(182.15 KB 1074x647 Untitled-1.jpg)

Updated the /hdg/ catbox script, now can display metadata by right clicking the download icon https://gitgud.io/GautierElla/8chan_hdg_catbox_userscript/-/raw/master/out/catbox.user.js
>>11387 fair enough! thanks for looking into it a bit more though. hmm, I wonder if there are other NAI derivatives out there, most are just NAI renamed
>>11389 Perfectcolors seems to be impossible to find now, I got it from some catbox link on 4chins months ago, doesn't match any md5 I have I can upload it if you want
>>11390 I think I already have it, just to check is the md5 this? I don't actually use it much because I like kl-f8-anime2 more A41E5CEA8F0836253B63E5F80CF6BB5F >>11388 great job on the scripts anon! glad we got somebody working on it.
>>11391 >A41E5CEA8F0836253B63E5F80CF6BB5F Yep. Produces NANs just like NAI's
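Since renamed copies keep coming up in this discussion: two VAE files are the same rename if and only if their checksums match. A quick sketch for computing the MD5s being compared above (the filenames in the comment are illustrative):

```python
import hashlib

def file_md5(path: str) -> str:
    """Uppercase hex MD5 of a file, read in 1 MiB chunks so large
    VAE files don't need to fit in memory at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest().upper()

# Identical digests mean the files are byte-for-byte renames, e.g.:
# file_md5("animevae.pt") == file_md5("anything-v3.vae.pt")
```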
(1.33 MB 896x1280 00007-3417241705.png)

I just spent a bunch of time troubleshooting why LoRAs stopped working in my webui all of a sudden, but at least it got me to update my extensions. >>11360 I'll be looking forward to the Unicorn rebake whenever you get around to it! Also, your Toga gens remind me of Nazuna from Call of the Night, haha
(1.86 MB 1072x1600 catbox_0wpcr0.png)

(1.84 MB 1072x1600 catbox_21f5hr.png)

love that team >>11346 works insanely well with refslave
>>11392 ah I see >>11393 to be honest, I just wanted to try something out, as it turns out, loli white haired bunny girl toga hits hard, I'm actually using a locon of her I baked a bit ago, might release it sometime tomorrow, but I didn't make the dataset, so I should double check to make sure they aren't gonna release it.
>>11388 works great for me on Firefox+tampermonkey, cheers! now we just need the metadata embed on URLs in comments too
>train style lora >previews look good >pop it in the webui to generate some samples >it spits out pic related >realize I trained it on my old 512x704 dataset instead of my new 768x1024 dataset and I have to do it all over again. I need to bweeeh
>Have lora that looks burnt >Use the cosine down trick some anon mentioned >Looks good now Wizardry.
>>11400 what would that trick be, using less restarts?
>>11401 He's referring to a cosine dynamic CFG threshold(?), not an LR scheduler.
Has anyone made an updated Klee lora? Autism has made people make a lora that generates perfect Nahidas but where is the Klee one?
(1.57 MB 896x1280 00017-1817323577.png)

>>11395 Good on ya for checking in with them before releasing. Glad you've been able to get some genning time in! >>11353 >>11368 Nene sits alone at her desk, sullen from the circumstances she found herself in. She could have become the lead UI designer for Playrix (acclaimed creator of hit mobile game Gardenscapes) by downloading the shady wallpaper files DM'd to her on Pixiv, but now she's stuck downloading pictures of dicks. https://files.catbox.moe/43eu6n.png
>>11361 there's one on gayshit
(1.73 MB 1024x1536 catbox_iwss5g.png)

(1.68 MB 1024x1536 catbox_fdn75u.png)

(1.79 MB 1024x1536 catbox_pyfa18.png)

(1.72 MB 1024x1536 catbox_8ewvf7.png)

>>11404 from what I gathered, he plans on releasing it as a locon version of his lora version, which is good, I'll share when it does get released for sure! side note, I was testing out an AOMv2_hard locon extraction for these images, still didn't get rid of the armpits, sad, but hey, this prompt seems to be really good for deep penetration
>>11380 Honestly I think my original Klee lora is in a pretty good place. I'm going to try a low dim version with my old dataset eventually but here's the dataset if anyone else wants to take a shot at it. It probably needs some revision and retagging https://files.catbox.moe/kydmtx.7z
>>11408 Whoops meant to quote >>11403
>>11408 Wouldn't characters benefit from high dim? Is there any need to go for lower dim on them?
>>11410 At some point it uses the extra dimensions to learn extra bullshit instead of getting more accurate with the dataset sizes we're at. Been a while since somebody tested how high you can go though, as opposed to how low
>>11412 Maybe someone with a lot of time should check what's the sweet spot then but my experience with the low dim locons people posted is that they forget the character really easily
>>11413 Don't think there will ever be a general sweet spot really, it's going to at least vary with character complexity and the nature of the training data. I'm just mildly curious about upper bounds again (not enough to bother)
>>11407 It's impressive that you use multiple lora yet it isn't fried. I guess lora model is easier to mix with compared to styles and character
(1.28 MB 1024x893 catbox_y7lwx1.png)

(1.27 MB 1024x893 catbox_b6z8nz.png)

>>11410 It does seem in this case that low dim isn't the way to go for accurate details. It's good but not better though the original is a little over-trained. I'll have to try locon on this one tomorrow
>>11417 Oh yeah, every single one of those lora are actually locon, locon has a much higher capacity for not frying, I've found. Well trained locon can go up to 1.5 strength and still be good I found. It's very likely that I'm only able to achieve this because of the fact that they are locon >>11418 I've found that dim16 is a pretty good spot for character lora, can't say if that is also true for locon though, as I'm able to get decent results with the toga locon I baked, which was 8/1 linear, 4/1 conv. Though it's a tad bit weak when you don't have tags that pertain to her normal look, such as hair color, the hair style remains consistent, even with the other 3 locon active, without prompting it.
>>11418 Just don't do low dim locons. The locons people have posted here when everyone was going crazy over low dim forget the character too easily if you prompt a lot of stuff, add weird stuff, or try to make them wear something different which kinda sucks.
>>11419 I'm curious on how locon works if it gets paired with lora. Is it compatible?
>>11421 Should be, I just happen to use locon as my default setup. I often use seggs or riding dildo, both of which are loras. Never had an issue with the other style locons that I use. Tab_head was one I trained; pastel extract was a dim16 conv dim16 extraction of pastel. I can put pastel up to 1 strength, but I actually prefer it at 0.6 when using based65, because it becomes too pastel at that point.
Bros, is sd-scripts broken? I can't make good LoRAs anymore even though I have the right amount of training data… maybe the LoHA and LoCon training is broken? I don't know, I'm not even making complicated characters
(1.60 MB 1072x1600 catbox_z9tarq.png)

kobayashi is cute!
(875.11 KB 760x1088 catbox_b12dxm.png)

We move
>>11427 it's aliiiiiive
>>11428 profile view doesn't work for shit tho, since it's a locon and i had to put everything together; probably the naizuri token got overly trained into the pov side
>>11429 I wonder if you could generate synthetic data from inpainting profile paizuri
>>11430 i mean i might as well just end the dataset since i got tired and couldn't finish today, but i wanted to try and make a quick one. ill probably just train one for each view and pray.
(578.88 KB 576x832 00603-1362851749.png)

(871.17 KB 760x1096 00546-701269787.png)

>>11431 epoch 8 goes hard tho
Anyone know a good place to get hires anime magazine scans? I've noticed that the boorus have begun slacking on newtype/megami magazine scans lately and/or they aren't digital. Trying to do an experiment.
Naizuri locon v1, i might redo it in the following days if i get some time to finish the dataset, but i'm gonna dream of mosaic-censored cocks for a long while. It does work well at a strength of 1, but you might need to up it if you want to do profile view, or lower it to mix it with other loras. also it's legit impossible to prompt dark-skinned females (sorry maidanon) from what i tested, i even tried with a bleached lora and TI and it was gacha, so good luck with that. Dunno if i should upload this to civitai cause i like to wget my files from there and i will probably get naenaed. Mega with all my stuff: https://mega.nz/folder/s8UXSJoZ#2Beh1O4aroLaRbjx2YuAPg Direct download: https://files.catbox.moe/ldfggq.safetensors It's a locon so you need the extension
(529.58 KB 576x832 catbox_h4p6mj.png)

Has anyone tried using LoHA? Is there a different version of the LoCon extension? I can use LoCon but I can't use LoHA on colab.
>move extensions >no errors when gitpulling
(2.32 MB 1024x1536 catbox_wg9aks.png)

(2.25 MB 1024x1536 catbox_0yvcsh.png)

(2.21 MB 1024x1536 catbox_5p68ed.png)

(2.24 MB 1024x1536 catbox_8fp7bv.png)

>>11388 Thanks! >>11398 Another river/lake enjoyer, based. >>11399 Very cute >>11433 >>11435 Nice, I'll give it a try later.
(1.84 MB 1024x1536 00019-1722840336.png)

>>11440 Dangerous choice of footwear for comfy hiking activities.
(1.54 MB 1024x1536 00007-3911332712.png)

>>11441 she only put it on for the photoshoot
is there a hogwarts legacy outfit Lora? I'm working on a sweep tosho Locon and it would be cute if I can get her to wear a harry potter outfit
>>11444 oh do the base models already have enough data to prompt it themselves?
Even switching from violentmonkey to tampermonkey just gives me the same error:

Source-Map-Error: NetworkError when attempting to fetch resource.
Resource address: moz-extension://d9d9e589-b975-480d-b8a2-a3cf50f0de13/userscripts/8chan%20/hdg/%20catbox.moe%20userscript.user.js?id=14c5806e-dbe0-4ba1-bdfa-3a3daa803f83
Source-map address: /sm/4f4f84956292156efc04b79e1131dc7c3a51794b7b3d62b43ed5eaca1b36d71e.map//

Do I have to switch from firefox or is there still hope for me?
(204.64 KB 1074x649 Untitled-1.jpg)

>>11396 Updated, right click functionality added to bare catbox links Now I can go back to baking as my script reached feature parity https://gitgud.io/GautierElla/8chan_hdg_catbox_userscript/-/raw/master/out/catbox.user.js
>>11449 >>11448 unironically works on my machine
Sometimes I wonder how we got so many Honkai and Genshin players with none of the schizophrenia that happens in their generals. I like all of you, you're all cool people and I like spending time with you.
>>11447 Is there anything significant in your console (the panel that shows up after pressing F12) after a failed upload? My guess is you have reached some kind of limit of catbox
>>11331 >>11333 Well that image looks good, is this released?
I want to know the strengths and weaknesses of each sampler. Mashing through samplers blindly hoping to get something right makes my head hurt.
How do you delete a style?
>>11447 BTW catbox.moe is down right now so you may have to try later
>>11452 Just Error: NetworkError when attempting to fetch resource. which seems to be a long known bug
>>11439 Of course, Shoebills live near rivers (though I used "forest, pond" and I'm surprised it didn't return the usual muddy waters from one specific pic in my dataset)
>>11457 https://fileditch.com/ Also keeps metadata and doesn't go down daily
(2.51 MB 2894x4093 97328238_p4.jpg)

>>11459 I had this, a nude alt and an accidental shittier duplicate of the alt in a 320 images dataset and they were enough to turn the water in most lakes/ponds/rivers/wading pics brown (with all the ripples too)
(147.71 KB 1146x412 Untitled-1.jpg)

>>11458 That doesn't sound right, because there's no source map for my script. Can you try disabling source maps in devTools? Press F12 -> F1, then uncheck this box
>>11464 Forgot to mention. This is trained with locon, so you'll need to install the extension.
>>11464 Wow I read that without the r and I was confused for a second
>>11466 ಠ_ಠ I was wondering where the missing R could possibly be in my sentence, then I realised.
>>11463 >500 Internal Server Error This means catbox is down for now, try upload on their main page https://catbox.moe/ and you should see the same error message
>>11468 yup it fails too gotta wait then Thanks for the help anon
>>11454 >euler a is fast and kind of blurry, which is great for ironing over lora conflicts and shitty details >DDIM is fast and produces different things from euler a >dpm++2m and 2s are slow and detailed >karras samplers run each step twice or something >LMS is shit i stick with dpm++2m karras 90% of the time
>>11455 That retard hasn't implemented a button for it yet, you need to delete it in the config file or just overwrite it with the same name
(1.31 MB 1024x1024 catbox_5lpgud.png)

(1.63 MB 1024x1536 catbox_pzdtgj.png)

It seems to work now but I only got one uploaded, the others seem to be stuck at uploading...
>>11473 How fast is your download/upload speed?
>>11473 doesn't stacking negative embeds like that roast the fuck out of your outputs
>>11474 speed shouldn't be an issue, but now it loops even if I reload the site >>11475 not really, here are some other results
Why not use imgbb? It never failed me and has an API for uploads iirc
>>11477 imgbb doesn't keep metadata Fileditch is pretty much the only site that does I believe
>>11478 >imgbb doesn't keep metadata It does.
(1.51 MB 1024x1536 catbox_68chi1.png)

(1.48 MB 1024x1536 catbox_dz4xz7.png)

(1.93 MB 1024x1536 catbox_t6xhlj.png)

(1.64 MB 1024x1536 catbox_w2n5ml.png)

>>11476 refreshing the site over and over again seems to have fixed the uploading loop
>>11479 Could use imgbb then
(1.21 MB 960x1152 catbox_5oqciw.png)

(1.13 MB 960x1152 catbox_lvrx95.png)

(1.26 MB 960x1152 catbox_hv4dqe.png)

(1.84 MB 1152x1152 catbox_odajyy.png)

I recall reading someone mention there's scripts now for pruning the least significant weights in a LoRA, does anyone have a link to that?
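Don't have the link either, but the usual idea behind such scripts is SVD truncation: factor each weight delta and keep only the largest singular values, discarding the least significant directions. A toy numpy sketch of the idea (not the script being asked about; kohya's resize utilities do something along these lines per layer):

```python
import numpy as np

def svd_truncate(delta, rank):
    """Factor a weight delta into low-rank up/down matrices, keeping
    only the `rank` largest singular values (the significant part)."""
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    root = np.sqrt(s[:rank])
    up = u[:, :rank] * root            # shape (out_dim, rank)
    down = root[:, None] * vt[:rank]   # shape (rank, in_dim)
    return up, down

# A matrix that is exactly rank 4 is reconstructed perfectly at rank 4:
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 32))
up, down = svd_truncate(w, 4)
print(np.allclose(up @ down, w))  # True
```

On a real LoRA the delta isn't exactly low-rank, so truncation trades a small reconstruction error for a smaller file.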
>>11482 what the fuck why do the veins turn me on
>>11482 veins are so fucking gross
>>11483 >>11484 the duality of man
(1.46 MB 1152x1152 catbox_z9e8fs.png)

>>11483 They highlight how huge and heavy the breasts actually are, like with real breasts they're a clear indication the skin is soft and stretched thin. >>11484 I can understand why people find them gross, though. Obviously, being able to see blood vessels through the skin is kinda visceral.
>>11486 What's the LoRA? I want huge soft tits with veins too
>>11486 >They highlight how huge and heavy the breasts actually are Tits, or skin in general, normally shouldn't have visible veins. It's usually a sign something is wrong with that part of your body, and varicose tits really are not attractive in the slightest.
>>11488 ryusei hashida, it's from the one anon with like 200 style loras in gayshit but I'm trying to bake a better one
/g/ is useless as always. Are there any 1600W ATX 3.0 PSUs out there or are 1300W gonna be enough for a 5090/Ti in a couple years?
>>11491 >5090ti Just get 1500w plus more
>>11491 Why not buy a decent psu now and then upgrade in a few years time when the mythical 5090 TI comes out?
>>11491 There are, but I found it incredibly difficult to get my hands on one. And the one I was able to get (thermaltake gf3 1650w) ended up being the reason my 4090 was crashing when doing anything ML related. I ended up just giving up and using a good ATX 2.0 PSU instead and learning to live with the octopus.
>>11482 >pruning the least significant weights in a LoRA that would be handy anyone know what this anon is on about?
(1.63 MB 1024x1536 catbox_zwxmf9.png)

(1.50 MB 1024x1536 catbox_qerct4.png)

(1.47 MB 1024x1536 catbox_62jcwt.png)

(1.69 MB 1024x1536 catbox_c2bodd.png)

I like nuns and I cannot lie
>>11493 Because I have a really bad back and I'd rather install shit once and be done with it. >>11494 I was looking at that one actually. Did you get a bad one or are they all like that?
>>11497 >Did you get a bad one or are they all like that? I RMA'd it and the new one had the same issue, so I'd say stay away from that one.
>>11435 >>11465 Thanks. (Additional Networks also supports LoCon) >>11434 I assume you tried nyaa >>11451 persistent post-nut clarity?
>>11498 That really sucks. The readily available 3.0 ones seem to top out at 1300W max.. I guess I'll go with one of those and pray it will be enough for the 5090. Judging by how jewidia does things now the previous generation's Ti TBP is the new generation's base TBP so the 5090 is probably gonna need that 12VHPWR connector.
>>11501 >>11498 Then again if the 5090 will be the "biggest leap in Nvidia history" I'm expecting the biggest price increase too, 4090s are ~2500€ here so.. 5000€ cards coming soon?
(1.69 MB 1024x1536 catbox_jgqcmv.png)

>>11476 tried to replicate it for comparison but got a completely different picture
(1.35 MB 1024x1536 catbox_obo5v1.png)

>>11502 "Sir, you cannot simply 'buy' a 5090 without giving us half your organs and I would hurry up if I were you. The older you get the more worthless they become!"
>>11491 any psu is sufficient if you undervolt far enough
(1.65 MB 1024x1536 catbox_ifdzdk.png)

>>11503 https://anonfiles.com/K9c9g6fdz5/yinpa_wanone500511_safetensors Sorry anon I forgot to upload the remade version Slightly less burned
>>11504 My motto is "if it costs more than a used 6th gen Civic hatchback then it's not worth buying" >>11505 PC Part Picker says my current estimated wattage is 753W (853 with a regular 4090) but I don't think it accounts for my specific 3090's max power consumption or the 7950X's ACTUAL TDP (which is 230W instead of 170W) I'm gonna power limit the CPU to 125W, maybe 105W (so closer to that 170W TDP) and I'll probably end up power limiting even this 3090. Assuming the 4090 Ti/base 5090 will use 600W max that's 150W on top so just a tiny bit over 1000W I guess 1300W will be enough?
>>11495 I'm assuming he's referring to lora resizing; sv_ratio is exactly that, it prunes any singular value that falls below a certain fraction of the highest one. It's one of the three modes that were recently introduced in the lora resize script. The Easy Training Scripts already have support for it if you have them.
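For anons wondering what sv_ratio actually does under the hood, here's a rough numpy sketch of the idea (an illustration, not the actual sd-scripts resize code): keep only the singular values within some ratio of the largest one and rebuild smaller up/down factors.

```python
import numpy as np

def sv_ratio_prune(W, ratio=5.0):
    # Decompose the (merged) LoRA weight matrix and keep only the
    # singular values within `ratio` of the largest one -- roughly
    # what the sv_ratio mode of the lora resize script does.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    keep = int((S >= S[0] / ratio).sum())
    keep = max(keep, 1)  # never drop to rank 0
    up = U[:, :keep] * S[:keep]   # (out_features, keep)
    down = Vt[:keep]              # (keep, in_features)
    return up, down

# a matrix dominated by one big singular value resizes down to rank 1
W = 10.0 * np.outer(np.ones(8), np.ones(8)) + 0.001 * np.eye(8)
up, down = sv_ratio_prune(W, ratio=5.0)
print(up.shape[1])  # prints 1 -- the new, much smaller rank
```

The reconstruction `up @ down` stays very close to the original `W`, which is why a 26mb resized lora can look nearly identical to the 144mb original.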
(1.85 MB 1024x1536 catbox_uxg4ev.png)

>>11506 nice, still very different though
(1.76 MB 1024x1536 catbox_p45xoz.png)

>>11511 I am also using the negative embeds: badpromptv2, bad-artist, bad-image together with EasyNegative Maybe it is just snake oil but I think it looks better that way, especially the hands
>>11512 cute
>>11512 Hmm, the embeds do something but I have no idea if they actually help for something like hands. I know easynegative is pretty good to have on its own, because that's what I do, but I don't know if using multiple actually helps that much.
(2.01 MB 1024x1536 catbox_uq2ouw.png)

(1.96 MB 1024x1536 catbox_8v79jl.png)

(2.46 MB 1024x1536 catbox_ix185q.png)

>>11512 i know, i was testing if fewer negative embeds would improve things but the pictures were just too dissimilar to compare here's >EasyNegative >EasyNegative, bad_prompt2, bad-artist, bad-hands-5, bad-image-v2-39000 >EasyNegative, bad_prompt2, bad-artist, bad-hands-5, bad-image-v2-39000 plus dynamic thresholding CFG correction
(1.22 MB 1024x1536 catbox_z85otb.png)

>>11515 well like I said, it is mostly snake oil. But you should not forget that every token, yes even negative ones, makes for a different picture; a simple " " spacebar press could change the img from what someone else has uploaded
>>11494 Do you happen to know if the 1350W has the same problems?
What happened to the anon that said he was going to revisit the Kisaki lora? Since it was originally made when she had 100 pictures and now she has 4200 pictures
>>11518 no clue, never tried it if you buy it on amazon there's like a 100 day return policy if you want to give it a try
>>11520 Last time it happened I got an AX1500i (refurbished instead of new of all things) that caused my 780 Ti to crash before shutting my entire system down. Took a month to get my money back and I'd rather not do that again tbh I'm looking at this https://cultists.network/140/psu-tier-list/ and I think I'll go for the MSI one, it's one of the few with a 1300W version that's actually available here
(74.41 KB 1077x1077 269663.png)

Lmao This based chinaman got fed up with trying to meet civitai's image policy. I mean, how would you even comply on a swimsuit model? https://civitai.com/models/20703/abigail-williams-fate-swimsuit
(1.59 MB 1024x1536 00016-533417726.png)

(1.63 MB 1024x1536 00023-3794501029.png)

(1.76 MB 1024x1536 00001-2165819332.png)

(1.86 MB 1024x1536 00009-3323141658.png)

>>11525 Nice, thanks
(2.25 MB 1024x1536 catbox_x5uaae.png)

>>11525 I gave it the 0.3 Zankuro treatment like I always do. It's quite nice, I'm gonna play around with it some more.
How to make any model kill itself: Add the tag "Upside down" to your prompt
The new samplers feel pretty awful PLMS is about as bad as DPM fast and UniPC is almost good but looks burned/fucked most of the time
Imminent rape
>>11534 looks like he's cleaning her ears
I remember seeing a smartphone hypnosis lora but I forgot to download it and now I can't find it
I hate civitai because nearly everything I download from there is either burned or fucked because of civitai rules and regulations
>>11538 >>11539 the ligne claire lora?
(2.26 MB 1152x1536 00603-727543932.png)

wow haven't been here in a while, almost forgot how much fun it is to prompt cute and degenerate cunny art
(533.40 KB 512x768 00211-3783570693.png)

(2.63 MB 1280x1920 00208-3783570673.png)

(2.42 MB 1280x1920 00210-3783570740.png)

(2.52 MB 1280x1920 00209-3783570686.png)

>>11542 Nice mecha prompts, got a catbox?
>>11543 do you intend to release it? it's a lovely style but the ones on civit aren't great.
>>11546 wonderful, thank you.
Random lora about a porn character I found on civitai. It's pretty burned, so I had to go down to the NAI VAE and use it at 0.5-0.6 for it to be usable. Hilariously bad at porn/sex despite the character being an OC from a porn doujin; the only way I could get it to do sex was using the deepmissionary embed.
>>11542 UOH PUFFY MECHA VULVA EROTIC UOOOOOH
i don't think i've used any other models other than based64/based65 because they handle eyes the best, work amazingly with all loras and they work with all kinds of prompts
>>11552 65 has been pretty bad for nsfw for me. I like the model and it certainly generates quality stuff but it's easier to do nsfw/sex on base AOM3
I remember someone asking for a Rozaliya/Liliya Lora in here. Apparently one was already made ages ago in civitai and it works really well. https://civitai.com/models/6700/lilya-and-rozaliya-328-character-lora
>>11553 combine based65 with other weights, it is a good foundation for loras/subnets and even mixing
>>11555 Other weights? What do you mean? And what loras are you using for those pictures?
Has anyone compared the vram usage and it/s of the new sdp attention vs xformers' memory efficient attention? I'm considering upgrading to pytorch 2.0 and pulling the new webui version.
>>11554 Only issue I've noticed with this lora is that it's a mixed Liliya and Rozaliya so all my roza prompts have her trying to turn into Delta with the blue horn and Liliya colors sometimes but it's really good otherwise. Kinda wish the author provided separate loras for them because this is the problem with duo loras or multi character loras.
>final price is 4500€ if I include the 3090 import fees I want to die :) Maybe I WILL offer that LoRA training service after all
(2.14 MB 1280x1920 catbox_bz5pgx.png)

(2.23 MB 1280x1920 catbox_x93rvq.png)

(2.10 MB 1920x1280 catbox_nvvze1.png)

(2.22 MB 1920x1280 catbox_egqs8v.png)

I like bunnies
>>11365 Has anyone tried turning this into a locon? Is it even possible? I feel like something like this would be better as a low dim locon so it doesn't mess with the artstyle too much and works nicer with other loras.
So any of the tech wizards here want to explain to me what exactly is CFG? I thought higher CFG meant how much the AI is going to read my prompt or abide by my prompt while lower CFG meant it'll care less about the prompt and be more creative but higher CFG also tends to end up in burned stuff so I don't understand.
>>11562 well... we could merge it into a model, then extract a locon back out of that model, but I have no clue if that would work well, the better option is obviously just retraining it, but if you want to give me a link to it, I'll try and merge it to then convert it into a locon, just to see
>>11564 It's not mine, it's just a suspended on penis lora that seemed to work well as a generic sex lora. https://civitai.com/models/18419/murkys-suspended-on-penis-lora
>>11565 ah yeah, murky's lora, yeah let me try and resize it down first before I do any combining into models
>>11563 https://sander.ai/2022/05/26/guidance.html read this and if you still have any questions, post them here and I'll try to answer them
>>11567 Yes, I like all bunnies
>>11568 Can you explain it in a layperson's terms? I don't understand that at all.
(4.51 MB 1024x1536 image_cleanup (1).png)

(1.53 MB 864x1304 00581-4056250745.png)

(1.26 MB 792x1184 00578-2410520641.png)

(1.37 MB 864x1304 00587-2849671217.png)

>>11572 I can't tell if bunny or loli fenrir
>>11557 for those of you that are curious, I just tested out torch 2.0 with sdp attention vs torch 2.0 with xformers (I had to compile and install it myself), and sdp is slower by anywhere from 0.2 to 0.5 it/s when generating a 1024x768 image on a 3080. Do not believe leddit and /h/ when they tell you sdp is faster. However, they do have similar memory usage. >>11570 basically, when we want to perform inference (trying to generate pictures), each denoising step does some math on your noisy image. This math is represented by the function circled in black. The red circled function is how the denoiser would try to denoise your image if it's unlabelled (or with your negative prompt if you have one), and the function circled in blue tries to denoise your image according to your positive prompt. you see how the black function is red + gamma*(blue - red)? This means that how SD tries to denoise your image at each step is computed from the difference between denoising without a label vs denoising using your prompt. This difference captures the features you want in your prompt. The gamma in the equation scales this difference; the higher it is, the more prominently the features in your prompt show up in each denoising step. This gamma value is the CFG scale
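That red/blue/black equation can be written out as a tiny numpy sketch (a toy illustration of classifier-free guidance, not the webui's actual code):

```python
import numpy as np

def cfg_denoise(eps_uncond, eps_cond, gamma):
    # black = red + gamma * (blue - red):
    # start from the unconditional (negative prompt) prediction and
    # push toward the conditional one, scaled by the CFG value gamma.
    return eps_uncond + gamma * (eps_cond - eps_uncond)

eps_u = np.array([0.0, 0.0])  # "red": denoise with no/negative prompt
eps_c = np.array([1.0, 1.0])  # "blue": denoise with your prompt
# gamma = 1 just gives the conditional prediction back
print(cfg_denoise(eps_u, eps_c, 1.0))   # [1. 1.]
# high gamma overshoots way past it, which is where the frying comes from
print(cfg_denoise(eps_u, eps_c, 12.0))  # [12. 12.]
```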
>Add tags for cum on face and cum on breasts >All the cum gets focused in the girls pussy and she's overflowing like her insides are filled with cum >Zero cum on face, breasts or anywhere else
>>11575 i don't know how to compile my own xformers because i'm a brainlet but i can at least attest to the fact that pytorch 2.0/2.1+cu118 with sdp is faster than pytorch 1.13+cu117 that i had before. this is on a 3070 fwiw
>>11575 thx, but how is CFG scale different from ((token))?
>>11578 think of it like this: if your token is x, and computing the denoising step from your token x is f(x), then 2*f(x) does not necessarily equal f(2x). An example of this is when f(x) = x^2; we can clearly see that 2(x^2) != (2x)^2
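That anon's toy example, spelled out in a few lines (f(x) = x^2 standing in for the denoiser, which is nonlinear in the same way):

```python
def f(x):
    # toy nonlinear stand-in for "denoise from token x"
    return x ** 2

x = 3.0
print(2 * f(x))  # 18.0 -- scaling the *output*, like the CFG scale does
print(f(2 * x))  # 36.0 -- scaling the *input*, like ((token)) emphasis does
assert 2 * f(x) != f(2 * x)
```

So CFG scale and ((token)) weighting pull on different ends of a nonlinear function, which is why they don't behave the same.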
(1.72 MB 1008x1344 00608-1728849503.png)

>>11566 ok, resizing it did nothing to help it. guess I can do merging and extraction and give it a try
does anyone know the best upscaler for 3d anime models?
>>11582 nope, i was able to extract a locon from it, but it just didn't un-learn the style, until it was so low that it was not really usable. Gonna have to call this a failure.
(1.61 MB 1008x1344 00616-655973442.png)

>>11586 >that pussy peek I wish it wasn't such a gacha
(1.65 MB 1008x1344 00618-3119542526.png)

>>11587 embrace the gacha, praise the rng gods
>>11555 Catbox for these?
>>11449 Cheers, thanks for doing this. Smart decision to do it in TS too, I just winged it and wrote everything directly in Violentmonkey for mine since I was too lazy to set up a dev environment, lol.
(1.97 MB 1024x1536 00094-550973326.png)

(761.19 KB 632x832 catbox_r6s0cy.png)

(1.76 MB 1024x1536 catbox_ljjcft.png)

(2.02 MB 1664x1264 catbox_sgdvwk.png)

(2.02 MB 1664x1264 catbox_y83ale.png)

more bunnies
>>11592 loving the le malin one
>>11531 feels bad man Us upside down degenerates are going to need the equivalent of a /vt/ fine tune aren't we >>11536 There was a phone PoC but smartphone hpynosis? >>11558 Probably just needs a better fit, major details like that shouldn't mix so easily even in a combo LoRA. >>11578 Here is probably the least technical explanation you can get https://www.youtube.com/watch?v=1CIpzeNxIhU&t=854s In less mathy terms, think of diffusion as actually processing the image twice in each step. Once with all your conditioning, including ((tokens)), once with the "unconditioned" prompt (nowadays hijacked by the "negative" prompt). CFG draws an arrow from the unconditioned version, which you don't want, toward the conditioned one and takes diffusion in that direction, multiplied by that scale. So high values will send the overall image off into the stratosphere. CFG is another hack funnily enough, to replace Classifier Guidance.
>>11594 >There was a phone PoC but smartphone hpynosis? Where's that one?
>>11595 https://mega.nz/folder/83gQXTLT#mrjDP3w_OkxR0ujuVliesw Dojirou. >Trained on a small set of images from a dojirou mange that look like the previews.
>>11596 Oh no, it was definitely phone hypnosis which is different from that. Wonder where I saw it.
>>11597 I am actually very interested as well, hence the reply. We could build a saimin cult in this general
Does there happen to be a Shun (cunny) lora from blue archive?
>>11599 I could have sworn I saw one but I can't find it for the life of me, now I feel like I need to make one, because I need shunny in my life
>>11599 >>11601 There is one but it's kinda outdated because it's from mid jan. An updated one would definitely be great if you make it.
(2.71 MB 1280x1920 catbox_4a9hvy.png)

(2.46 MB 1280x1920 catbox_kszggi.png)

(2.58 MB 1280x1920 catbox_4yvqf3.png)

(2.40 MB 1280x1920 catbox_xfoxyj.png)

>>11602 I'll get on it, might be a day or two because I got a busy weekend ahead of me, I'll make sure to try and get one cooked up asap though side note, the second image here makes me want to gen her all up in mage clothes like she's a black mage from final fantasy, the staff was entirely accidental, and it's a masterpiece
Has /vt/ made a lora of the vampire girl? I see her like every single day in my shorts algorithm
how good are the raiden shogun loras on civitai? I'm thinking of baking one
(96.16 KB 533x799 52632522128_a543261cc5_c.jpg)

>>11605 There are two, the other is realistic
>>11605 does direct dl linking this work idk, I cant comment on the quality because I don't make LoRAs or play Genshin https://civitai.com/api/download/models/5544
(1.75 MB 1088x1456 00624-1491710765.png)

(1.95 MB 1088x1456 00630-963279625.png)

(2.33 MB 1280x1920 catbox_a38kno.png)

(2.38 MB 1280x1920 catbox_myrm6j.png)

(2.40 MB 1280x1920 catbox_go816y.png)

(2.24 MB 1280x1920 catbox_fumez0.png)

found a condom belt lora that works decently well, had to put Hifumi in it. seems like it does cause some hair color shift for some reason though
>>11605 You try the ol embed yet?
Interesting guides for hands with ControlNet from /h/dg, can't vouch though. Was the trick seriously weight>1? My double peace attempts were frying at weight 1... https://files.catbox.moe/8r4y83.webm https://files.catbox.moe/j46t3b.webm
>>11610 Might just be overbaked, i could go schizo mode on it as well and try to tag the number of condoms and their location and see if locon magic makes something better but im still tired from the naizuri one lol
>>11614 It's only slightly so, usually I still get the correct colors, so I'm not concerned about it. Though, I do need to gen some naizuri as well, not tonight though, it's already far too late for me
Tried making a locon of the mimonel lora but idk which one seems best, they all look more or less the same. Last is the og 128 dim one, and then there is mimonel1 and mimonel2; 1 has the text encoder trained and the other one was set at 0, but like i said i legit can't see a difference.
>>11575 >>11579 Very good explanation, thanks Anon!
The dynamic thresholding seems kino in the art of frying. Never expected it to melt things this way.
How can I force myself to take the low dim pill Anons? I need to start cutting back on disk space use, but training everything at 128/128 just werks...
>>11620 Use dynamic resizing of lora, seems like sv_fro with max dim 32 and value of 0.9-1 gives good results at a fraction of the size
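For anyone wondering what that sv_fro value of 0.9-1 means, here's a numpy sketch of the Frobenius-ratio idea (an illustration, not the actual resize script): keep the smallest rank whose singular values capture that fraction of the squared Frobenius norm.

```python
import numpy as np

def sv_fro_rank(S, fro=0.9):
    # S: singular values in descending order (as numpy's svd returns them).
    # Return the smallest rank whose kept singular values capture `fro`
    # of the squared Frobenius norm -- the idea behind the sv_fro mode.
    energy = np.cumsum(S ** 2) / np.sum(S ** 2)
    return int(np.searchsorted(energy, fro)) + 1

S = np.array([10.0, 3.0, 0.5, 0.1])
print(sv_fro_rank(S, fro=0.90))  # 1: the top value alone covers ~91.5%
print(sv_fro_rank(S, fro=0.99))  # 2
```

With most loras the energy is concentrated in a handful of singular values, which is why 0.9-1 still lands well under the max dim.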
(411.19 KB 512x768 00317-810679365.png)

(1.45 MB 1024x1536 00355-3249398536.png)

(1.45 MB 1024x1536 00367-3249398536.png)

>>11619 that looks awful
(235.76 KB 1031x484 before.jpg)

(237.07 KB 1029x482 after.jpg)

>>11590 Thanks too, at least half of my script is code copied from yours. And since you are here, can you please fix a minor bug in your script? https://gist.github.com/catboxanon/ca46eb79ce55e3216aecab49d5c7a3fb#file-catbox-user-js-L470 The function here has the wrong parameter name; as a result the entire function is skipped and the displayed metadata contains replacement characters (white squares on Windows). It should be function extractChunks (data) {
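For anyone who wants to sanity-check that metadata outside the browser, here's a rough python equivalent of what a function like extractChunks does (a sketch of walking PNG chunks, not a port of the userscript; it skips CRC verification):

```python
import struct

PNG_SIG = b'\x89PNG\r\n\x1a\n'

def extract_chunks(data: bytes):
    # Walk the PNG chunk list: after the 8-byte signature, the file is
    # a series of [4-byte length][4-byte type][data][4-byte crc] records.
    assert data[:8] == PNG_SIG, 'not a PNG'
    pos, chunks = 8, []
    while pos + 8 <= len(data):
        length, ctype = struct.unpack('>I4s', data[pos:pos + 8])
        chunks.append((ctype.decode('ascii'), data[pos + 8:pos + 8 + length]))
        pos += 12 + length  # skip length + type + data + crc
        if ctype == b'IEND':
            break
    return chunks
```

The webui writes the prompt into a tEXt chunk keyed "parameters", so scanning the returned list for that key is enough to recover it.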
>>11622 noice
(1.57 MB 1024x1536 catbox_bfxpqv.png)

(1.93 MB 1024x1536 catbox_7kur63.png)

(2.07 MB 1024x1536 catbox_91rvrc.png)

>>11591 Cute.
(1.17 MB 944x1232 00210-1270650797.png)

そんなこと…あるけどっ!
>>11623 I know, it just amuses me that it melts everything instead of simply the character.
(1.91 MB 1024x1536 catbox_u2snwq.png)

(2.00 MB 1024x1536 catbox_rp3fnb.png)

(1.99 MB 1024x1536 catbox_zp5drr.png)

(2.00 MB 1024x1536 catbox_frf0cf.png)

>>11622 Jesus Christ, I'm in danger. Thanks, mommy prompting just got better.
>>11622 Damn, that's great. Thanks!
anyone test out the 4chin post on hdg about the speed improvements by doing >line 240: pip install --upgrade --force-reinstall torch==2.1.0+cu118 torchvision --extra-index-url https://download.pytorch.org/whl/cu118 on the launch.py file?
lmao seems like some dude uploaded based65 to huggingface, that's how the 3DPD tripfag was able to mix my shit with that garbage model they're using
>>11532 thanks for the lora. I'm still impressed by how consistent the suspended on penis lora is for getting actual sex. not perfect with the anatomy of course but pretty hot after so much solo 1girl.
(1.74 MB 1024x1536 00054-1452925160.png)

(1.71 MB 1024x1536 00008-3392044819.png)

(1.83 MB 1024x1536 00010-2429509085.png)

(2.05 MB 1024x1536 00007-23849513.png)

>>11633 I'm a dumbass and didn't do a benchmark before changing the line but it does seem like a noticeable increase in it/s.
>>11638 yeah it does I got an error with the initial line but this fixed it >torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==2.1.0.dev20230317+cu118 torchvision==0.16.0.dev20230317+cu118 --extra-index-url https://download.pytorch.org/whl/nightly/cu118")
>>11639 also apparently no xformers is needed after this upgrade, all you need is >--opt-sdp-no-mem-attention added to the webui-user.bat command line args once you're done with the install
>>11640 it isn't NEEDED, or are we actively recommended to remove the command line argument for it?
>>11634 It's impressive how he managed to make Based65 look like fucking garbage.
(1.79 MB 1024x1536 00073-2050197541.png)

>>11639 >>11640 no pls read the github pages instead of parroting random shit you read, especially on 4ch /hdg/ at this point. Replace --xformers with --opt-sdp-attention; if you want deterministic results at a slightly lower speed, use --opt-sdp-no-mem-attention
>>11643 ah ok, honestly I thought the info was pretty useful but I can trust this place more to have correct info, thanks anon but the torch upgrade really did boost speeds. It's nice to see desu
>>11643 I wonder if this command improves colab ones. The fact that colab has really low RAM already gimps it enough to need the --lowram command.
trying to make a new merge with one of my art style Locon LORAs, I just can merge it normally with the base model mixes I am using correct?
>>11647 are you trying to actually merge it into the model itself? if so, you have to use a script specifically for that, which the Easy Training Scripts has, locon_merge.bat
>>11648 yes I'm trying to merge it into the model itself, I'll use that script then, thanks
>>11643 sdp is worse than xformers
>>11650 elaborate
>>11602 >Smol Shun >https://mega.nz/folder/ZvA21I7L#ZZzU42rdAyWFOWQ_O94JaQ Yup, I made one. It's in my mega but it's pretty baked. I made it with high repeats+high epochs+high dim when loras were still new. If I were to revisit it, I would prune some images from the dataset and change some settings. Another anon is already on it now, so I'll leave it to him.
why the fuck do people bake VAEs into their models, fucking niggers ruined my mix bleh
>>11654 personally I've found it just looks better, the loras in the center two columns had the NAI VAE baked in while the rightmost one had no VAE and that seems to be the case even when WD's VAE is used, I think it just makes everything pop way too much
>>11655 rightmost is literally the only one that doesn't look fried with any vae applied
>>11656 this is comparing training with --vae=animefull-latest.vae.pt with sd-scripts as opposed to nothing, AND what happens when those models are used with a VAE in webui
(2.28 MB 1536x1536 catbox_5quy4q.png)

(2.77 MB 1536x1536 catbox_vbk242.png)

(2.39 MB 1536x1536 catbox_0lkypw.png)

(2.55 MB 1536x1536 catbox_9957eb.png)

>>11634 I mean the mix gets linked from pixeldrain or whatever it was anyway.
(2.14 MB 1536x1536 catbox_8si024.png)

(2.40 MB 1536x1536 catbox_uoxq9f.png)

(2.34 MB 1536x1536 catbox_tlotfb.png)

(2.02 MB 1536x1536 catbox_38ladk.png)

(2.70 MB 1024x1536 catbox_641oms.png)

Bark Souls
(2.89 MB 1536x1536 catbox_9ef5bb.png)

(2.79 MB 1536x1536 catbox_wmmlpp.png)

(2.90 MB 1536x1536 catbox_jodt23.png)

(2.53 MB 1536x1536 catbox_z9zjj7.png)

(2.10 MB 1536x1536 catbox_nm2i60.png)

>>11662 would you be so kind as to tell me how you got that awesome lighting? The catbox img has no metadata so...
(2.32 MB 1024x1536 catbox_lqyqv0.png)

>>11664 There you go same prompt
(8.75 MB 2448x3120 00008.png)

(1.04 MB 960x1152 catbox_c1mwkd.png)

(1.12 MB 960x1152 catbox_lp54w5.png)

(1.22 MB 1152x1152 catbox_0ja9z0.png)

(822.89 KB 1152x1152 catbox_5pn657.png)

>>11662 I've also been playing with funny colors (with a noise offset lora: https://huggingface.co/Aotsuyu/Kukicha/resolve/main/darkKukiv1.safetensors)
I want to prompt sex with nahida but I have no good ideas for it so i'm just going to post about it SEX SEEEEEEEEEXXXXXXXX
>>11669 why not just go with something simple, like missionary or cowgirl? or do you want to do something much more complex?
>>11671 Because i'm not feeling the spark of inspiration. The light of creativity.
>>11672 fair enough, I've been really wanting a straddling sex LoRA, because that's really hard to prompt without getting pure body horror, might train one eventually
(411.10 KB 1920x2304 catbox_ozz163.jpg)

(416.12 KB 1920x2304 catbox_snow1f.jpg)

>>11482 this anon here after listening to this anon: >>11510 left: 144mb 128dim lora, right: 26mb dynamic dim lora after sv_ratio 5 resizing. in addition to the large amount of space saved, some details actually look sharper after resizing
>>11654 best when they bake it with VAEs that cause errors
(4.24 MB 2048x3072 catbox_a03hdq.png)

(4.88 MB 2048x3072 catbox_sl84g3.png)

(3.65 MB 2048x3072 catbox_ikt5ku.png)

(3.36 MB 2048x3072 catbox_lon45n.png)

>>11663 having trouble finding that borntodie lora, do you know where it's being hosted?
(42.28 KB 1190x167 garbage.png)

the absolute state of /hdg/. we are approaching civitai tier
>>11674 glad it worked for you! back when the resizing script updates were first proposed, I actually wanted to add the option to the scripts then, because I was excited about being able to resize things down while still keeping most of the model! It was a good thing I waited though, because two other modes, sv_fro and sv_cumulative, were added. seems like sv_fro is pretty decent for reducing styles, either way, glad it worked out for you!
>>11677 why not keep it over there where it belongs
because the state of /hdg/ is a recurring discussion topic in this thread and you are free to hide the post and ignore it. not like we have schizos spamming shit over here anyways
>>11680 It isn't. It's been said time after time to keep the 4ch drama in 4ch. Why would it be a recurring discussion topic here anyway? It's not like this is a happening thread to discuss the drama in AI generals all over.
>>11680 I only have one useful update from there that I brought earlier regarding the webui, apparently you can update the torch that sd scripts uses to 2.1.0 as well I'm gonna keep my eyes on the thread to see how the anon pulled this off
>>11680 anon, we don't WANT /hdg/ to be a recurring discussion topic in this thread. YOU made it that way by bringing it up and wasting our limited attention spans on it, yet again. why not take your own advice and kill off the contagion before it spreads? hide the post and IGNORE IT. if as you say there really haven't been any schizos insisting that drama is important here, then you're the first. goddamn human nature
>>11674 Does easyscripts come with this sv ratio you anons are using?
>>11684 Yes, make sure it's updated.
>>11684 yep! it's an option when you run the lora_resize.bat! >>11685 what he said
>>11679 >>11681 >>11683 you've taken up more space complaining about it instead of just hiding it and moving on >>11682 >gonna keep my eyes on the thread to see how the anon pulled this off anon that's as simple as running the pip commands to install the correct versions of the relevant packages and recompiling xformers. you can find help on compiling xformers on the github for the webui
>>11686 Are you easyscripts anon? I'm keeping my eyes on a torch 2.1.0 update anons did today with the webui that boosted generation speeds a lot, and apparently there was an anon that updated the torch version that sd scripts was using to 2.1.0 as well and got faster speeds from that too. I'm pretty sure they're using the main repo so I'll repost the info they share regarding how they updated torch so you can try to implement it with your scripts.
>>11689 yes, I am easyscripts anon, I probably can, but I'll have to test it a bunch before I opt to include it, just in case it is a pain in the ass to set up, though unlikely
>>11689 I got the info I did use >pip install torch==2.1.0.dev20230317+cu118 torchvision==0.16.0.dev20230317+cu118 --extra-index-url https://download.pytorch.org/whl/nightly/cu118 >pip install --upgrade -r requirements.txt >pip install -U -I --no-deps xformers-0.0.17+c36468d.d20230317-cp310-cp310-win_amd64.whl The xformers was self-built using this guide https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2103 but instead of installing cu116, I used cu118 I execute my training directly via python or powershell, so I'm not sure how someone could do this with easyscripts or similar. My guess is you just have to switch what the install batch files do regarding that specific section? Or I can easily just remove the sd scripts folder and reinstall it with the method the anon shared
>>11690 Yeah I didn't notice a quality drop with the webui, just faster generation speeds. Maybe in some placebo way upgrading torch can change LoRA results from before, but every LoRA you bake, even the same one, won't always be the same
>>11666 that's very pretty, well done
(1.98 MB 1024x1536 catbox_m485b5.png)

I love this LoRA.
(1.84 MB 1024x1536 catbox_i2nt2g.png)

(1.66 MB 1024x1536 catbox_tq9d4s.png)

(1.88 MB 1024x1536 catbox_fojf3i.png)

(1.76 MB 1024x1536 catbox_qqytdv.png)

>>11691 I'm assuming if I build the wheel it can be used on any system using the same requirements right? If so, I'll get to building a wheel to host on the Easy Scripts for that to be easier. If not, then I'll have to think about how I can guide the user into building it, if I even opt to install it in general. I might just make it optional just like the cudnn patch, actually relating to that, would the cudnn patch still apply here? if you have torch 2.1.0 does the cudnn patch actually do anything, or will it just cause issues? just want to cover my bases. >>11692 I will definitely look into it, any speed up without quality loss is a good one.
>>11695 I'm pretty sure the cudnn 8.6 boost will still be viable but I'm not positive if the anon that shared that setup was using it or not; assuming they wanted more of a speed boost anyways I think they do have it, that will just have to be something you test out. The only thing I'm wondering is if Lycoris will still work the same way if torch is updated, I'm assuming it will?
>>11695 >>11697 afaik torch 2.1.0 should come with the appropriate dlls but either way it's a fast test to find out
(1.90 MB 1024x1536 catbox_3su8ot.png)

(1.78 MB 1024x1536 catbox_ma0x5y.png)

(1.87 MB 1024x1536 catbox_tehms3.png)

(1.90 MB 1024x1536 catbox_kd6o5m.png)

>>11698 got it, my main goal here is to make it as easy to install as the current iteration of the install scripts is. I was already planning on making a new installer so this is a good opportunity to build xformers and test it on my friends to see if it works!
>>11613 weight 1 copies the stencil. you can color in doujins with it. trying to do something other than 1, depending on the complexity of the image, you will either get a variation or always noise. gets me thinking controlnet is best for either portrait poses or inpainting. not what i assumed, where you could take a scene and regenerate all the elements with different characters and details
>>11639 >>11640 >>11652 read the thread >>11575
(1.08 MB 840x1256 catbox_eenkh6.png)

(1.17 MB 840x1256 catbox_88egjx.png)

(1.30 MB 840x1256 catbox_vm9beb.png)

(1.05 MB 840x1256 catbox_foptxg.png)

Cute chichis
>>11704 that tsukushi chichi is some prime ToT
is based65 uploaded anywhere other than pixeldrain? I'm getting a ridiculously slow download speed from there (70kb/s-160kb/s)
>>11703 good times now time to sort through these bajillion loras that have been created in the past 2 months
>>11707 I for one eagerly await the printer girl lora
(3.22 MB 1280x1920 catbox_q7yo0h.png)

(3.15 MB 1280x1920 catbox_lk3d2s.png)

(3.01 MB 1280x1920 catbox_vwsn5t.png)

(3.25 MB 1280x1920 catbox_hpb9a4.png)

>>11703 Just to refresh your memory >>11702 I would very much like a catbox if possible
she looks so proud of her work >>11708 now in 20 delicious new flavors >>11709 I should get the catbox script huh these don't use any concept loras, just raw model: https://files.catbox.moe/bk75i7.png https://files.catbox.moe/943puz.png https://files.catbox.moe/1uqwdo.png I had to weight harpy / winged arms super highly or the mix just forgets its mm6 roots maybe it's time to cave in and use those loras huh
https://github.com/kohya-ss/sd-scripts/issues/274 lol relatedly I'm going to shill these posts another time or two because I'm too lazy to make a github throwaway (or go to /g/) >>11383 >>11117 >>11109
Has anyone seen a Norasuko art style LoRA anywhere?
>>11625 Ah shit you're right, thanks. Updated now.
(593.85 KB 512x768 catbox_jwllw6.png)

(544.36 KB 512x768 catbox_1tf7u4.png)

(604.72 KB 512x768 catbox_7qdwxi.png)

(623.41 KB 800x600 catbox_bg38xl.png)

>>11710 thanks for the prompt, I modified it a bit but I still get the chicken wings sadly
Not sure if this is useful or can be integrated into SD, but I found this while searching for some training data. Pretty decent-looking upscaler. https://github.com/nagadomi/waifu2x (original, has demo links to try it) https://github.com/lltcggie/waifu2x-caffe (fork) https://github.com/lltcggie/waifu2x-caffe/blob/master/README-EN.md (English readme).
>>11716 Waifu2x is ancient now, was never a fan of how obscenely it smoothed everything out compared to the GAN upscalers we have today.
>>11717 some of those upscalers are bad for 3d models, some 3d gacha stuff I wanted to use as data but was low res turned out pretty bad and I'm not sure what is the best upscaler for that stuff
>got lots of money so uh should I get a 4090 now or just wait for the 4090ti, I don't know if that version of it is just going to get a meme 10% boost like the 3090ti
is there an argument in resize_lora.py i can use to set a specific alpha? new_alpha is not a valid argument, and i'd prefer not to mess with the actual script if possible
>>11718 No idea either. Waifu2x works for 3d and 3d-style images? I'm surprised.
>>11721 does waifu2x have a pth file I can add to the webui?
>>11719 4090ti is confirmed to still have 24GB VRAM The 48GB titan version got shelved.
>>11717 in some cases i like waifu2x more than modern upscalers; sometimes the results of modern upscalers are too sharp and look unnatural
i've been using an old commit of kohya's repo and have been resizing loras with the same rank. the interesting result of this was that it tended to fix slightly overfit loras. with a newer commit, pytorch 2.1 and new xformers compiled, it doesn't do that anymore; resizing to the same rank changes nothing. what was causing this behavior previously and how can i replicate it?
>>11717 the only reason I brought up waifu2x is because someone recently used it to upscale FateGO CEs on e(x)hentai from their original 512 x 875 resolution to 1280 x 2187 (not sure why that resolution but it's roughly 2.49x) and at first glance the quality looked fine.
>>11716 >>11726 Huh I didn't think anyone was still using waifu2x. When I was doing upscales a few years back Manga109 was the big thing but I found that deviance and fatalitycomix gave much better results though I haven't kept up to date so there may be better options since then. Though I was never a huge fan of how esrgan models tended to change the color of the image slightly.
>>11723 shit, some dude is selling a 3090 for less than 1k, I'm going to look into it and haggle with the dude to find out if it's shit or not, otherwise I'll wait for 4090s to drop in price but knowing NVIDIA... lmao
>>11729 can you use deviance and fatality comix as upscalers in the webui?
>>11724 >>11726 Maybe you gotta be in scanlation to really hate it
>>11728 The price for 4090s will drop when 5090ti X Tournament Edition Alpha Remix hit the market
>>11729 Yeah you can just throw them in the ESRGAN folder and use them in the extras tab. I don't know how well they would work in hiresfix but it might be worth trying
>>11731 so in like 1 and a half year knowing Nvidia kek >>11732 do you have the links so I can download the upscaler files?
Ah yes, trying shit out for science. Won’t see this on Neo-/hdg/.
>>11733 I originally got them all from here https://upscale.wiki/wiki/Model_Database but I don't see fatality comix or the same version of deviance that I have so I just uploaded the one I have to catbox. Someone else has uploaded fatality comix and a few other models here https://huggingface.co/uwg/upscaler/tree/main/ESRGAN Direct link for Deviance https://files.catbox.moe/ctkp6c.7z Direct link for Fatality Comix https://huggingface.co/uwg/upscaler/resolve/main/ESRGAN/4x_Fatality_Comix_260000_G.pth Direct link for Manga109 http://www.mediafire.com/file/w3jujtm752hvdj1/Manga109Attempt.pth.zip/file It also looks like you can upscale with 2 models so I would suggest using fatality and deviance (or another model if you find any better ones), one at 0.5 strength to minimize upscaling artifacts.
>>11736 thanks anon, you're a big help!
>>11737 No problem, here's a quick comparison. In the final pic you can see there's still a bit of upscale noise. I'll probably take a look at some other models to see if there's anything better, but for now it's honestly better than I expected
>>11736 >upscale with 2 model How??
Like this
>>11741 Holy shit I didn’t know that was in that tab
>>11741 I do upscales with sd ultimate upscaler and it's been great for me. Might try this again and see.
>>11741 is there a point of using double upscalers?
>>11745 Just for cleaner results or to get the best of 2 upscalers. The ones mentioned earlier (deviance and fatality) are ok on their own but give much better results when mixed. Though after trying the more recent upscalers it might be best to move on. animesharp and ultrasharp are probably among the best and most popular options. I would also recommend 4xBSRGAN from this page https://upscale.wiki/wiki/Official_Research_Models BSRGAN is good on its own, but paired with ultrasharp or animesharp it can help with some finer details. If you want speed then just stick to one.
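What the extras tab is doing when you stack a second upscaler at 0.5 visibility is just a per-pixel blend of the two outputs. A minimal sketch of that mixing step, assuming both upscalers were run at the same scale factor (this is an illustration of the blend math, not the webui's actual code; the function name is made up):

```python
import numpy as np

def blend_upscales(img_a, img_b, weight_b=0.5):
    """Pixel-wise mix of two upscaler outputs (uint8 RGB arrays, equal shape).

    weight_b=0.5 averages the two; lower it to keep more of img_a's
    character, same idea as the visibility slider in the extras tab.
    """
    assert img_a.shape == img_b.shape, "run both upscalers at the same scale factor"
    mixed = (1.0 - weight_b) * img_a.astype(np.float32) + weight_b * img_b.astype(np.float32)
    return np.clip(np.round(mixed), 0, 255).astype(np.uint8)
```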
>>11747 So do we generate an image without hires and then shove it into Extra with the two upscalers on?
I'd kill for a lora based on this artist
>>11749 The ultimate twitter shitposter? I'm surprised there isn't
>>11749 >>11750 as someone who does not follow twitter who is that artist?
>>11676 >>11687 seems like once again there was already a style lora which I did not find, and I baked one for myself first. first row is my try with dim8, second one is from pixel with dim128. heads up, the dim128 one seems to have noise offset baked in, as my test lora starts to burn if I use both at the same time
>>11752 There's another one on ATF, I can link it if you wanna compare that one as well
>>11752 >>11754 would be nice if you can, I'd like to compare to see where I can improve. here if someone wants to test my try https://anonfiles.com/Q82028f1z0/Born_To_Die_safetensors
Pardon my tardation >>11753
>>11756 his training info is here https://pastebin.com/rx8FhBKf I just ripped it from the lora. his girls seem to have slightly better eyes compared to the original artist, so there is that
>>11751 Kanikama, knew him from the kancolle days but feels like he mostly just tweets now https://twitter.com/kanihamiso
so did any of the autists ever come up with a good average text LR
>>11720 No, there is not a specific argument for it, kohya did not provide one.
>>11720 why would you want to do that, the alpha only matters during the training process
>>11749 >>11750 >>11751 There is one, though. Pretty sure I've downloaded it from that mega with the huge list of style loras
(1.40 MB 1024x1536 catbox_95fsoy.png)

(1.73 MB 1024x1536 catbox_qok8xq.png)

(2.35 MB 1024x1536 catbox_8d6clf.png)

(2.38 MB 1024x1536 catbox_orbmix.png)

Daily grind
>>11715 I hope someone makes a proper harpy lora some day, the raw models just cant quite figure it out
>>11765 So how's UniPC?
(1.92 MB 1024x1536 catbox_nw5j3x.png)

(1.95 MB 1024x1536 catbox_t9c4b8.png)

(2.06 MB 1024x1536 catbox_oqna45.png)

black light and neon light are so awesome to use together
from what I understand decreasing alpha reduces the strength of the final weights, is there any other benefit to reducing alpha but keeping dim at 128? and is training at say 128/64 similar to just training at 128/128 but with lower LRs?
>>11766 I have the data still for mm6. Once I have time I'll try training some loras
>>11768 >blacklight Uhhh, what's this for? What's the difference between just plain dark?
(1.67 MB 1024x1536 catbox_95ke72.png)

>>11771 you do know what blacklight is, right anon?
(2.14 MB 1024x1536 catbox_z7w6e4.png)

(2.14 MB 1024x1536 catbox_57pawf.png)

(2.25 MB 1024x1536 catbox_nbmabt.png)

(2.22 MB 1024x1536 catbox_molmg5.png)

>>11725 bumping this. anyone encounter something similar?
>>11774 alright i just looked at an older commit of resze_lora.py and used that to resize to replicate the same effect. i guess resize wasn't working as intended originally and has since been properly fixed
is getting a 3090 under 1k with warranty worth it bros?
>>11777 If you're upgrading from a card with like 8GB of VRAM I think it'd be worth it since the only card with more VRAM coming out anytime soon is the rumored Titan Ada and it's probably going to cost 3-4x as much. Other consideration is the 40% speedup you'd get by purchasing a 4090 instead, but consider the price per percentage point difference you'd be paying and decide if you *really* want to pay that much, which is a "no" from my standpoint. I got my 3090 at MSRP right around when crypto crashed, kinda regretting it now since I could have had it for cheaper if I'd waited less than a month.
>>11777 I wish I could find one that cheap.
(1.81 MB 1024x1536 catbox_4uozd2.png)

(1.73 MB 1024x1536 catbox_dh9z3d.png)

(1.69 MB 1024x1536 catbox_d8j60q.png)

>>11773 I love bunnies >>11777 If you can pay for it without taking a loan or rethinking if a gpu or food is more important... I would say yes
>>11780 >>11779 >>11778 ok because if I plan on getting a 4090 I want to build a whole new full tower PC with a motherboard that can fit the 3090 as well. not sure what to do with my current 3080ti, maybe just keep it as backup since it's still a great gpu, it just lacks the VRAM I need for more comfy training at higher res. Also I heard the Ada got shelved by a couple of sources, but given the whole idea that it's basically going to be 4-5k, my hope is that price is already too absurd for any retarded scalpers, in that case just wait for the 3k 5090 when it comes out in late 2024 kek
>>11781 also I hate my retarded ass for getting mid everything, my pc case, PSU, and motherboard. if only I knew I was going to get into this AI shit I would have future-proofed my setup to fit two 3000 series GPUs
(2.15 MB 1536x1024 catbox_74uwm4.png)

>>11780 Bunny cunny
(1.03 MB 760x1088 catbox_dvigni.png)

(1.15 MB 760x1088 catbox_46d7i3.png)

(1.09 MB 760x1088 catbox_16t2fr.png)

(1.11 MB 760x1088 catbox_vei9rv.png)

Made a locon of misawa hiroshi you need to use tags like "painting \(medium\), traditional media, watercolor \(medium\)" to really make the style pop off, i tried it in blood orange mix and 7th3C so dunno how it will work on more realistic models. Direct download: https://files.catbox.moe/ilmoyk.safetensors All my stuff: https://mega.nz/folder/s8UXSJoZ#2Beh1O4aroLaRbjx2YuAPg
(1.55 MB 1024x1536 catbox_k0c7ze.png)

(1.50 MB 1024x1536 catbox_9v5us5.png)

(1.52 MB 1024x1536 catbox_x60dpv.png)

(1.19 MB 1024x1536 catbox_5w7dig.png)

>>11783 here's some bunny to match your bunny. been experimenting with trying to make a more unique style, I think this is a good attempt, but I feel like it's still too easy to tell that tab_head is in it, but I want the fluffy ears and hair, which is trademark of tab_head, I'd love if people could give me their opinions on this combination of lora
>>11784 that's an interesting config file, what script are you using to train your stuff?
>>11785 Nice, you could try some different style prompts to tweak the style a bit.
>>11787 A colab someone made on discord, it spits out those configs when the training ends, this was epoch 5. i use this colab cause it's simple and because im a gpulet when im not home. Colab in question: https://colab.research.google.com/drive/1fs7oHytA4Va0IzDK-F8q_q9RIJRIh5Dv?usp=sharing
(1.62 MB 1024x1536 catbox_giqiv8.png)

(1.58 MB 1024x1536 catbox_201jrb.png)

(1.81 MB 1024x1536 catbox_dbryca.png)

(1.57 MB 1024x1536 catbox_wie76y.png)

>>11787 yeah, I need to think about that, because I just can't get away from tab_head's hair style. I just love it too much, even this is pretty diminished. on a side note, I don't even know if blade is actually doing anything, blade is seemingly really hard to use, because it just doesn't like to train I guess
(1.72 MB 1024x1536 catbox_mtb5wo.png)

(1.54 MB 1024x1536 catbox_7egjkx.png)

(1.61 MB 1024x1536 catbox_71b8n8.png)

>>11783 >>11785 bunny cunny
(1.49 MB 1024x1536 catbox_ybln5f.png)

(1.53 MB 1024x1536 catbox_q9x1lx.png)

(1.72 MB 1024x1536 catbox_mldpcv.png)

(1.49 MB 1024x1536 catbox_bqc28d.png)

>>11790 here's some more bunny cunny! totally, 100%, blade is doing something, setting it to 1 makes the entire output blade for sure, gonna dial it back again
Btw has anyone trained a lora on born-to-die?
(1.54 MB 1024x1536 catbox_p2q70r.png)

(1.59 MB 1024x1536 catbox_tpfiog.png)

(1.65 MB 1024x1536 catbox_i1k8lr.png)

>>11791 reducing blade to 0.55 definitely keeps the look I had above, while being a bit less tab_head-y, but now it's a bit too blade-y
just want to rant on how retarded linus is, the nigger seriously doesn't think people can just undervolt gpus for little performance cost just to use VSR, "muh more wattage usage = more bills" retard that's why you always undervolt shit
>>11794 but honestly I just use firefox so I have no use for VSR, besides it would be cool if NVIDIA had more user input from that shit (and the NVIDIA upscaling model seems to smear shit like Real-ESRGAN which I hated when upscaling some anime frames)
(1.47 MB 1024x1536 catbox_e63pgx.png)

(1.67 MB 1024x1536 catbox_l68w7b.png)

(1.41 MB 1024x1536 catbox_lfwup9.png)

(1.68 MB 1024x1536 catbox_f61lrt.png)

>>11793 keep up the good work looking for good mixes of styles is always appreciated
(1.50 MB 1024x1536 catbox_e8ry8p.png)

(1.60 MB 1024x1536 catbox_ythis2.png)

(1.80 MB 1024x1536 catbox_ha500i.png)

(1.83 MB 1024x1536 catbox_6t3697.png)

>>11796 this is honestly my first time playing with so many loras all at once, I don't usually mix loras too much, but I think putting blade to 0.5 is enough to balance out the tab_head without being overtly blade as well. I think it looks pretty good! I also really like these gens btw!
(1.76 MB 1024x1536 catbox_8hq00v.png)

(1.77 MB 1024x1536 catbox_fbze7q.png)

(1.54 MB 1024x1536 catbox_xxws9x.png)

(1.86 MB 1024x1536 catbox_70fe2k.png)

>>11797 Mixing styles is a fascinating journey in itself as you sometimes get abhorrent images and sometimes godlike ones
(1.68 MB 1536x1024 catbox_qbzmxj.png)

(1.72 MB 1536x1024 catbox_348f86.png)

(1.86 MB 1536x1024 catbox_ke71h1.png)

(2.01 MB 1024x1536 catbox_ujhgix.png)

I almost missed bunny time, here, have some more
(1.89 MB 1024x1536 catbox_cysp4d.png)

(2.13 MB 1024x1536 catbox_1yi5yy.png)

(2.19 MB 1024x1536 catbox_z0n209.png)

(2.22 MB 1024x1536 catbox_4gm33j.png)

(1.70 MB 1024x1536 catbox_ete63h.png)

(1.76 MB 1024x1536 catbox_0q8oi9.png)

(2.22 MB 1024x1536 catbox_5nel3q.png)

(2.06 MB 1024x1536 catbox_wjkt5r.png)

>>11798 some other styles mixed gotta love some bunnies as an example
(1.60 MB 1024x1536 catbox_j0g26a.png)

(1.68 MB 1024x1536 catbox_urip7y.png)

(1.58 MB 1024x1536 catbox_omleie.png)

(1.78 MB 1024x1536 catbox_d8139n.png)

>>11798 as i've learned, apparently >>11799 bunny posting at its finest! >>11800 bunny cunny party I guess! >>11801 all of these look really good! I'm actually trying to get a mix that doesn't feel like too much of one style specifically because I kind of want to start trying to post on pixiv! I feel like my gens are good enough for it, though I need to see if Hifumi works well in this setup
(1.99 MB 1024x1536 catbox_8aq5zt.png)

(2.19 MB 1024x1536 catbox_9i6x2r.png)

(1.69 MB 1024x1536 catbox_v1zwbj.png)

(1.56 MB 1024x1536 catbox_syvcza.png)

>>11802 Bear break
(1.61 MB 1024x1536 catbox_jksl19.png)

(1.60 MB 1024x1536 catbox_oa4ql5.png)

(1.55 MB 1024x1536 catbox_gwv6vb.png)

(1.68 MB 1024x1536 catbox_x0fub1.png)

>>11803 only because more cum had to be added to the bunnies
(1.70 MB 1024x1536 catbox_wj979m.png)

(1.87 MB 1024x1536 catbox_avyiw0.png)

(1.53 MB 1024x1536 catbox_x720zm.png)

(1.84 MB 1024x1536 catbox_sp3flq.png)

>>11802 If you try to go for pixiv you should work on the hands/fingers a bit, there is nothing more embarrassing than seeing a good pic only for the hands to be fucked up. other than that I would say you are good to go, as most ai images I have seen were really shit/novelai subscriber
>>11803 Bear man you are a wizard
(1.94 MB 1024x1536 catbox_f21mke.png)

(1.81 MB 1024x1536 catbox_3ftk4k.png)

(1.88 MB 1024x1536 catbox_m0ri5y.png)

(1.63 MB 1024x1536 catbox_6fbtml.png)

>>11803 bear... bear in japanese... kuma... coomer?
(1.63 MB 1024x1536 catbox_qfrnuh.png)

(1.63 MB 1024x1536 catbox_nup6ib.png)

(1.69 MB 1024x1536 catbox_slwz6v.png)

(1.60 MB 1024x1536 catbox_3ct2uy.png)

>>11805 for sure, If I post on pixiv, I'll definitely clean up my posts a bit. thanks for liking the style though! >>11807 oh NO! the bears are attacking quick we need to deal with them!
Do you use control net for the hands maidanon or just prompt gacha?
>>11490 This wasn't posted yet was it? I second >>11488 and am in need of generating some veiny boobs too.
(2.10 MB 1024x1536 catbox_43takg.png)

(2.17 MB 1024x1536 catbox_u73bvh.png)

(1.87 MB 1024x1536 catbox_ez04zr.png)

>>11808 and everything changed when the bear nation attacked
>>11488 >1488 uhhh based?
(1.78 MB 1024x1536 catbox_ks55k0.png)

>>11813 while not bad, I think your training imgs had too many watermarks/artist names in the corners, as nearly every image I generated had one
(1.57 MB 1024x1536 catbox_lf5z4d.png)

(1.75 MB 1024x1536 catbox_c6xctr.png)

(1.68 MB 1024x1536 catbox_bvcg03.png)

>>11811 We must fight back! Beat them down! Destroy the cunny!
>>11813 Bless you, anon! Unrelated question, is there something similar to 4chanX for this chan? I have to ctrl+f when I return to the thread after being absent for a bit which is mildly annoying.
>>11816 There's dollchan but i've found it conflicts a lot with the built in extension of this site so it's a miserable experience honestly. Probably needs someone who knows or can read Javascript to fix it up but most of the anons here seem to be python experts instead of JS experts. https://github.com/SthephanShinkufag/Dollchan-Extension-Tools
(1.62 MB 1024x1536 catbox_zjon45.png)

>>11815 FOOL! Your creampies just made sure that they will outnumber us soon!
>>11817 I do have the ability to look at, and understand, most JS, I just haven't really thought to look into making a 4chanx system for 8chan. I will probably have to extensively look at 4chanx's code to get anything done though
>>11819 Can't you just like extract the thread updater and the features that work fine from that particular extension? Just probably have to remove the parts that conflict or are taken care of by the inbuilt extension and then you have a working app. I'm not a code person though, just suggesting the lazy approach of frankensteining something up.
(2.01 MB 1024x1536 catbox_b7wju3.png)

(2.00 MB 1024x1536 catbox_bwbwuk.png)

(2.19 MB 1024x1536 catbox_nw529p.png)

(2.00 MB 1024x1536 catbox_u1o82o.png)

>>11815 gAwRRR you stand no chance
(1.71 MB 1024x1536 catbox_ls7gbk.png)

(1.66 MB 1024x1536 catbox_se3oqx.png)

(1.60 MB 1024x1536 catbox_e2oids.png)

(1.64 MB 1024x1536 catbox_dolj0n.png)

>>11818 >>11821 its fine, we just need to do the same with the bunnies, we can make an army! >>11820 hmm, fair, I guess in general if people are fine with a frankenstein approach, I can take a look at it in a day or two, It won't be top priority though.
(1.93 MB 1024x1536 catbox_4ausix.png)

(2.06 MB 1024x1536 catbox_qkxwhg.png)

(1.86 MB 1024x1536 catbox_arwbf1.png)

(1.92 MB 1024x1536 catbox_tg987s.png)

>>11823 Sure bud
(1.52 MB 1024x1536 catbox_aseyp5.png)

(1.67 MB 1024x1536 catbox_61ansq.png)

(1.52 MB 1024x1536 catbox_10hurd.png)

(1.79 MB 1024x1536 catbox_9stoq9.png)

>>11823 alright anon, drill hair bunnies for you!
>>11826 yes, tan bunnies are also good. wish I could gen more, but I need to work on some stuff, so no more bunny cunny posting for me. please though, take up the fight against the bears! we can't let them win!
>>11822 >hmm, fair, I guess in general if people are fine with a frankenstein approach, I can take a look at it in a day or two, It won't be top priority though. I mean it's certainly better than nothing so it'd be great to have.
(2.18 MB 1024x1536 catbox_cykulw.png)

>>11830 Remember no bunny
>>11829 I wish you great exploits! >>11831 yeah, I guess I'll try and frankenstein something at some point, I'll try and look into it in a few days
hey wait they're not supposed to get along like this is there a way in voldy's ui to change the prompt based on location in the image like there is with comfy ui? that would make this kind of stuff a lot easier
Is there a Jack the Ripper lora?
>>11834 isn't that just two-shot? well, it's the best we have as far as I know https://github.com/opparco/stable-diffusion-webui-two-shot
>>11836 thanks, looks like what I want I'll give it a shot
>>11836 >>11837 ok as this is implemented, it's kinda useless you can only set really harsh boundaries for prompts, so if the character spills over the border it now starts using the other prompt for instance in picrel, left side is "bunny girl, flat chest, etc." and the right side is "bear girl, large breasts, etc." the bear onesan's ear turns into a bunny ear since it overlaps with the bunny's region i think to be useful, the boundaries would need to be soft and preferably drawn using a mask? maybe I'm just describing something you can do with controlnet already though
>>11838 I took a look around, looks like there are two possible solutions, both seemingly are experimental https://github.com/ashen-sensored/sd_webui_gligen and https://github.com/opparco/stable-diffusion-webui-two-shot/pull/23
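For anyone curious what these region extensions are doing under the hood: each regional prompt gets its own conditioning pass, and the per-region noise predictions are combined per pixel according to the mask weights. Hard 0/1 masks are exactly why content bleeds weirdly at the borders; feathered masks would give the soft boundaries anon is asking for. A rough sketch of the combination step (my own simplification, not any extension's actual code; the function name is made up):

```python
import numpy as np

def combine_noise_preds(preds, masks):
    """Weighted per-pixel combination of per-region noise predictions.

    preds: one (H, W) prediction per regional prompt (channels omitted
           for brevity).
    masks: matching (H, W) weight maps; binary masks give hard borders,
           blurred/feathered masks give soft boundaries.
    """
    weights = np.stack(masks).astype(np.float32)
    # normalize so the weights at every pixel sum to 1
    weights /= np.clip(weights.sum(axis=0), 1e-6, None)
    return (np.stack(preds) * weights).sum(axis=0)
```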
(2.25 MB 1024x1536 catbox_3y2v1l.png)

(2.27 MB 1024x1536 catbox_mswmj2.png)

(2.23 MB 1024x1536 catbox_yj59sy.png)

(2.20 MB 1024x1536 catbox_w7g5k3.png)

>>11832 Remember no panties and this prompt isn't that bad you could easily fix most gens quickly with inpainting or img2img. I just love oversized hoodies.
Hey /hdg/ Can anyone here download shit from baidu? Trying to get LoRAs from china https://ngabbs.com/read.php?tid=35652922 https://ngabbs.com/read.php?tid=35635719 https://ngabbs.com/read.php?tid=35521895 Also, does anyone know sites where the Chinese/Korean/Japanese or any other countries share LoRAs?
>>11841 Korea's board link is in the gitgud links I believe
>>11841 There's a huggingface org called "osanpo" where a lot of the 5ch users dump their LoRAs. However you have to follow their initiation rites on shitcord first. https://huggingface.co/osanpo Found some pretty neat ones on there like a karaoke room LoRA, though it was pretty unorganized and they didn't separate out checkpoints from other models. I just flashed them my old figma LoRA and they treated me like some kind of celeb, kek Would advise against sending them style LoRAs trained on artists though, they'd probably be averse to that kind of thing
>>11843 Aren't all those just listed here? ctrl+f カラオケ https://seesaawiki.jp/nai_ch/d/Lora%b3%d8%bd%ac%c0%ae%b2%cc
(457.69 KB 512x768 makoto1.png)

>>11844 Yeah but there's a lot more data there, like a mirror of Danbooru2021 and some other models uploaded hourly (like this one from Tokyo 7th Sisters uploaded 6 hours ago)
>>11843 Not sure where the "laboratory's osanpo thread" is to request joining.
>>11846 This one, I'd suggest not acting like an obvious foreigner https://discord.gg/7PuyheYS
>>11839 thanks for taking a look tried out the second one, and it does do the trick, though the process is ass. You can't really know where to draw the boundaries of the mask until you already have a generated image, so you have to start with a shit boundary and then find a good seed, then edit that mask appropriate to that seed. That's not the plugin's fault though
>>11848 great that you were able to get it to work! Now we can have the bears and bunnies living in peace!
apparently ai art normans are having drama with model makers, includes the guy that made pastel mix, reason being? Because of the company they are tied to rumao
>>11772 Sorry, I don't. I presume it's some artist, isn't it?
>>11851 it's an ultraviolet light bulb, gives off dark purple light that makes some things glow
(1.68 MB 1024x1536 00119-2727871652.png)

(1.72 MB 1024x1536 00060-939626063.png)

(1.74 MB 1024x1536 00008-1434136889.png)

(1.72 MB 1024x1536 00014-1461357422.png)

(1.43 MB 1024x1536 00015-1343217161.png)

(1.70 MB 1024x1536 00052-348962663.png)

(1.53 MB 1024x1536 00063-4257672409.png)

(1.74 MB 1024x1536 00177-1606142047.png)

>>11852 I see. speaking of which anon, how many images did you use to make the Ze amin Lora? I kinda want to make mommy labyrinth but this is probably my first time making a lora and I doubt I have the ability to properly make a dataset and cook it
>tfw you open an issue but nobody replies to you
>>11854 Catbox pls?
>>11857 anon please. they're literally all in the folder
>found artist i like >check bookmarks >consistently less than mine mfw my dumbass ai generated art gets twice the bookmarks as an actual artist with similar subject area what timeline is this
(1.35 MB 2688x4608 68037.jpg)

(17.72 KB 960x640 FrbRr8IaEAE2b1b.png)

(94.57 KB 960x640 FrbRqhyacAANMuY.jpg)

>>11764 Hence my surprise there "isn't", thanks for finding it. >>11769 Unless someone better at math comes along and says otherwise, the plain language of the LoRA paper suggests so. But careful there, lowering alpha increases the final weights. Lower alpha reduces the fraction a/r below 1.0, meaning the actual weights in the matrix have to be higher to get the same result. The impetus behind kohya making it tunable was underflowing LoRA. >>11848 I think most folks are waiting for the mask PR to land in latent couple. I mean how else could you implement it though, pause generation a few steps in?
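To make the alpha discussion concrete: at load time the learned low-rank update is applied as delta_W = (alpha / rank) * up @ down, so with dim=128 / alpha=64 the learned matrices are applied at half strength, and during training the optimizer compensates by learning larger values (which is the underflow fix kohya was after). A small sketch of that scaling, with a made-up function name:

```python
import numpy as np

def lora_delta(up, down, alpha):
    """Effective weight update a LoRA contributes: (alpha / rank) * up @ down.

    up:   (out_dim, rank)  -- lora_up / B
    down: (rank, in_dim)   -- lora_down / A
    Halving alpha halves the applied update, so the trained matrices must
    roughly double to produce the same result. That is why lowering alpha
    acts like a precision/LR knob during training rather than a free
    strength slider at inference.
    """
    rank = down.shape[0]
    return (alpha / rank) * (up @ down)
```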
>>11861 >I mean how else could you implement it though, pause generation a few steps in? honestly, yeah the biggest issue is that when you change the mask too much, the content changes drastically so the same mask no longer works maybe this is better suited for img2img or sketch2img though, i'll have to try and see if that makes the process easier
>>11865 wheres the vikala
>>11866 Have a consolation hors.
>some guy on civitai uploading /hdg/ LoRA >comment You stole this LoRA from 4chan >my fault, thanks for letting me know That is not the reaction I expected. >>11863 If you're prompting with masks from txt2img the idea is to guide generation from the beginning, not to fix an image where AI got the boundaries wrong. I'm sure it would help with img2img but it's a more advanced inpaint at that point.
whys it so hard for the AI to draw men
>>11869 >the idea is to guide generation from the beginning currently though (unless I'm using it wrong), the model generates whatever the fuck it wants, mask boundaries be damned. So if it decides to generate a character across the boundary you set by the mask, you're fucked and I think that's the best you can do with the simple location-based prompt stuff. To force the model to generate within your mask would require some extra stuff like that gligen thing
>>11871 I'm still comfily waiting for the PR to land, so I don't have practical experience. But are you using it like the original latent couple where you put "2girl" over the whole image with 0.2 weight, "2girl" in all the subprompts so the model doesn't forget there's a total of 2 girls etc. Because based on the examples, I think you need to be very specific which region gets what and forget about global coherency, this green mask = "1girl, blonde hair", this red mask = "1girl, brunette", and such. On a related note, yet another variation on latent couple https://github.com/hako-mikan/sd-webui-regional-prompter https://note.com/gcem156/n/nb3d516e376d7
>>11872 im not even sure this version of the plugin is even doing what I expect it to like even disregarding any global information, it's not even filling out the local information correctly
>>11873 interdasting You try turning up the alpha blend yet? I have no idea what it does but hey
>>11874 actually alpha blend of 0 is the most aggressive you can be, higher alpha blend means less of the mask and more of the general prompt but also, I'm getting the same type of weirdo behavior with the original rectangular stuff too (though I'm on that guy's branch of this plugin so who knows, he might have broken everything). picrel has river and 1girl on the correct sides of the image, but supposedly those concepts are only supposed to appear in the left 1/4 and right 1/4 of the image. Clearly, that's not the case--the girl is more toward the right half than the right fourth and same for the river.
>>11875 ok yeah something's definitely broken with the plugin enabled, no matter where I set the regions I get the same output with the same seed (if I disable the plugin I get a different image for that seed, though)
>>11876 >>11875 Have fun debugging (1:4 seems too tight to generate the girl by the way but anyhow)
>>11877 just reverted back to the original plugin, but I get the exact same image. does the generated image change when you change the regions with the same seed?
>>11878 Yeah here's the same crappy image with the girl supposed to get 1/2 of the image
>>11879 hmm, well it doesn't seem to be doing what it's supposed to, but at least it's changing the image for you ill investigate what the plugin is actually doing tomorrow I guess
Seriously why is so awful to draw men? >Get lora of char you really like >It's really good >Draws the character really well >Prompt sex >AI either draws 2 girls or has no clue about male anatomy
>>11881 >>11881 Does this happen on all models are just the one you are working with?
>>11880 It aint perfect science yeah. But part of it is the prompt. She won't actually take up the whole region unless you prompt "close-up, upper body" etc. or else the model will sensibly default to putting her at some distance ("standing" plays a role too?). That's how most nature shots have people after all.
>>11884 yeah I think something's bugged with mine since I get the exact same image no matter what regions I select (even if I change all the regions to be in the top left corner, with nothing anywhere else).
>>11881 >lora of char Datasets for waifu loras are rarely trained with men in them, so you need to mix in a sex concept lora to give the model an understanding of male existence.
>>11883 I'm currently only working with AOM3. Have B65 downloaded but haven't tried it much because it's generally bad with sex. >>11882 Hopefully people figure out the NSFW/Sex aspect. Need the porn industry to flex their money and prowess in here.
>>11884 >>11885 ok, I got it to work. turns out the plugin _was_ bugged; I needed to restart the server as well as the ui. I also needed to put the actual params in "extra generation params" instead of the text boxes in the ui
>>11887 >bad at sex >figure out the sex aspect a better trained model and better tech are required. If someone can figure out NAI’s black magic and implement it on better/specialized training data, then you will have better AI coom.
>>11890 I still don't know how the fuck to work this.
>>11888 >>11890 I'm still not sure what the Extra generation params box is for; I leave it empty and it works fine, so ??? Anyway, now you need to do it with the coop fellatio lora, it's practically a rite of passage
>>11847 Thanks, I think they have accepted my good faith effort though I couldn't hide my shitty Japanese. Last time I actually studied Japanese was in college in 2013... I've also studied some Chinese since then and with the crossover in symbols I can still generally get the gist of what people are writing to me, but I'm hopeless with responding without MTL. If I didn't have to spend so much time on work, I'd love to get back to learning languages. In the meantime, MTL is only getting better - though it still sucks at proper nouns.
>If you anons haven't seen, Torch 2.0 is now generally available, more details here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8691 On my 3090 it bumped up my it/s by a little over 20%, well worth the upgrade. More people are looking into how to use torch.compile() to potentially speed it up by a lot more so hopefully that comes to fruition. I did the 2.1 torch update and it works too, I think the guide on how to do it was posted in this thread earlier but yeah seems like 3090 bros even get the speed update too
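If you want to verify the torch upgrade actually took before benchmarking, a quick check from inside the venv helps (a generic sketch, nothing webui-specific; run it with the venv's python):

```python
import importlib

def report_backend():
    """Report installed torch version and CUDA status; Nones if torch is absent."""
    info = {"torch": None, "cuda_build": None, "cuda_available": None}
    try:
        torch = importlib.import_module("torch")
    except ImportError:
        return info  # torch isn't installed in this environment
    info["torch"] = torch.__version__        # e.g. '2.0.0+cu118'
    info["cuda_build"] = torch.version.cuda  # e.g. '11.8'; None on CPU-only builds
    info["cuda_available"] = torch.cuda.is_available()
    return info

print(report_backend())
```

If the version string doesn't carry the +cu118 suffix you expected, the pip install pulled the wrong build.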
I'm getting a 3090 this week, do we have any anons here that made a guide for finetuning? Because that's the first thing I want to do
>>11839 I would keep an eye on GLIGEN because it's a lot more controllable and the ControlNet extension is soon going to support that, in addition to paint-with-words. https://github.com/Mikubill/sd-webui-controlnet/discussions/519 Latent couple I personally find useless with drawing region masks because it's a lot more loose in how it works. I find setting the strict divisions myself much simpler and works better.
I'm reading the thread and saw the mask drawing method for two shot, was this implemented in an update already?
>>11897 Still on a pull request, dunno if the author of two-shot plans on ever pulling it.
>>11898 ah ok I thought it was finally implemented rip
so apparently to update pytorch to 2.0/2.1 for sd-scripts you need to build your own xformers, but to build with cu118 you need the matching toolkit installed, and I had 12.0, so it didn't work at all. I was trying to look for a cu120 but I didn't find anything, so I suppose that means there isn't a "cu" for 12.0 or 12.1 yet
>>11899 I honestly don't see the issue with using the fork with the paint. It's more active than the original, and actively fixes bugs and shit. At this point I'm willing to call the original dead and move onto that fork.
>>11900 Yeah, I'm working on an update for the easy training scripts, I've already built the wheel so you can install it from there if you like
>>11902 It should be out later today, complete with a new python based installer, and a small batch script to convert from torch 1.12.1 to 2.1.0
>>11901 and which fork is that? https://github.com/ashen-sensored/sd_webui_gligen this? >>11902 thanks anon, I was going to install the toolkit for 11.8 anyways, just in case I need to build my own stuff in the future
>>11904 https://github.com/ashen-sensored/stable-diffusion-webui-two-shot for the paint setup if two shot Also, fair enough. It took like 20 minutes to build or something, so I opted to just make a wheel anybody can use.
>>11905 I only really wanted to do pytorch 2.1.0 for the gui since it has the tab for finetuning models desu
apparently the latest windows update is making defender activate when running some VAEs or taggers like the deepdanbooru one. yeah, I think I'm gonna run linux as my main OS once I build my 2nd pc
>>11865 Robin hell yeah
>>11889 >black magic lol
>>11905 also, that wheel/xformers you built: pretty sure I'll be able to use it for the gui version of sd-scripts too, if I pip install -U -I --no-deps xformers-0.0.17+c36468d.d20230317-cp310-cp310-win_amd64.whl, assuming you built the same version of xformers the anon on 4chan built.
>>11894 would Torch 2.0 help with training speeds?
>>11906 >finetuning >GUI shit I still use the powershell script
>>11911 yes, that's what easyscripts anon has been getting ready in an update for easy scripts. unlike the webui, sd-scripts requires xformers, so you need to build a cu118-compatible xformers for it to work smoothly and get the speed benefits. I'm pretty sure you can also pip install a cu118-built xformers into an sd-webui that has been updated to torch 2.0 or 2.1.0; 2.0 is just the more stable version
>>11913 I'm probably just talking out of my ass though, since building xformers was the first time I had to set up my cuda paths properly to get it working, and I don't even know if a cu120 or cu121 exists yet
>>11910 Yeah, the wheel should work, at least it worked on webui when I tested it. It's not exactly the same version, but it's just about the same.
>>11916 is it faster compared to >--opt-sdp-attention
>>11913 Guess I'm not running a finetune training until the specifics get confirmed. That just means more tag correction autism... yay...
>>11918 it's just a 10% or less speed boost for the 3000 series users, it's only really 4090 chads that get the absurd speed boosts given their newer gen specs are able to take advantage of the newer versions of pytorch
>>11919 Meh, I was excited to update at first, but seeing all the mixed information and people having trouble with OOM I think I'm just gonna stick with torch 1 until voldy makes the update to torch 2 official.
>>11919 I'm a 4090 chad
>>11917 sdp attention is the same, if not faster, than xformers. It should be the preferred way of doing it now, and is what huggingface diffusers ships. xformers is a thing of the past.
>>11814 I don't know how the fuck I missed that but I will try to do a retrain at some point, will probably take a while since it's a 330 image dataset and some signatures will be easier to remove than others.
>>11922 xformers robots not in disguise...
>>11922 this is not true. xformers on torch 2.0 is still faster than sdp on torch 2.0 for 30 series cards
>>10141 >based65 final mix came back from hiatus looking for this, thanks anon!
>>11925 I'm still trying to build xformers with cu118 just getting stupid errors because I have to correct my environmental paths
I've built a set of xformers once without errors but webui didn't like it so I never tried it again.
I forgot what the other, faster way of running AI image gen was called, diffusers or something? It was something other than pytorch that had faster speeds but required models to be a completely different type of file, so it's basically going back to square one
>>11929 cudnn?
>>11930 no it was something else that was being shilled like a month ago or so, it used something other than torch to boost generation speeds even more. It was long ago and never flew off so I can't recall the exact github page for the project
>>11933 >>11932 I remember this requiring some API bullshit and never touched it. Was this changed?
>>11934 idk no one ever talked about it after the fact that it couldn't use the current models we have
>>11934 >>11935 It's not that the models are different so much as it's a completely different library than torch so it would basically require a complete rewrite of voldy in order to be used. I think some people got it to work from the command line with very limited settings, like no loras or anything. I've never tried it myself.
>>11922 this shit fixed my problems, I updated torch and now I can finally generate images with highresfix with no errors/black images I have gtx1080
Since nobody seems to actually be doing a controlled test, here's what I got from updating on my 4090. I already had the cudnn dlls applied and some other fixes, so I didn't actually get much from the upgrade: The prompt for this is "big drums" with 1024x1024 images, I took the last of 4 gens. >7.5 it/s [torch1.13 + xformers + cudnn dlls] (1024x1024, prompt "big drums") >8.5 it/s [torch2.0.0+cu118 + opt-sdp-no-mem-attention] >7.1 it/s [torch2.0.0+cu118 + opt-sdp-attention + opt-channelslast] >7.5 it/s [torch2.0.0+cu118 + opt-sdp-attention] >8.2 it/s [torch2.0.0+cu118 + opt-sdp-no-mem-attention, attempt #2] Haven't tried compiling xformers, but will do if someone points me to how. But regardless, not the most massive of upgrades if you had your settings already tuned properly.
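For anyone eyeballing those it/s numbers, percent speedup is just the ratio; e.g. the torch1.13+xformers to torch2.0+opt-sdp-no-mem-attention jump above works out to about 13%:

```python
def speedup_pct(old_its, new_its):
    """Percent throughput change from old_its to new_its (both in it/s)."""
    return (new_its / old_its - 1.0) * 100.0

# numbers from the 4090 benchmark above: 7.5 it/s -> 8.5 it/s
print(round(speedup_pct(7.5, 8.5), 1))  # 13.3
```

Note that it/s is throughput, so the same change expressed as s/it shrinks instead of growing; mixing the two up flips the conclusion (as happens later in the thread).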
ok, I managed to install a cu118-based xformers into the pytorch 2.1.0 version of the sd-scripts GUI. I hope this shit makes my Sweep Tosho LORAs bake faster
>>11938 pytorch 2.1.0 did not increase speed over 2.0.0
I have the cu118-built xformers here if anyone wants to pip install it when they switch over to pytorch 2.0.0 or 2.1.0 for the webui or any version of sd-scripts https://pixeldrain.com/u/QfhaELW6
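Before pip installing a wheel someone else built, it's worth checking that the filename tags match your setup (cp310 = CPython 3.10, win_amd64 = 64-bit Windows). A toy parser assuming the standard wheel naming scheme:

```python
def wheel_tags(filename):
    """Split a wheel filename into name/version/python/abi/platform tags."""
    stem = filename[: -len(".whl")]
    # wheel names are '{name}-{version}-{python}-{abi}-{platform}.whl'
    name_version, py_tag, abi_tag, plat_tag = stem.rsplit("-", 3)
    name, version = name_version.split("-", 1)
    return {"name": name, "version": version,
            "python": py_tag, "abi": abi_tag, "platform": plat_tag}

tags = wheel_tags("xformers-0.0.17+c36468d.d20230317-cp310-cp310-win_amd64.whl")
print(tags["python"], tags["platform"])  # cp310 win_amd64
```

If the cp tag doesn't match your venv's python version, pip will refuse the wheel (or worse, install one that crashes at import).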
>>11941 thanks, worked for me >>11938 >>11940 continuing this, using xformers with 2.1.0 on a 4090 is actually slower: >7.5 it/s [torch2.1.0+cu118 + xformers0.0.17] compared to ~8.2-8.5 it/s using opt-sdp-no-mem-attention I guess sdp is better on 4090s
>>11942 lol " --xformers-flash-attention --xformers" gets 8.3 it/s I guess it's not so clear cut, but it doesn't seem necessary anymore (at least on a 4090).
(1.65 MB 1024x1536 catbox_e8vzrt.png)

(1.62 MB 1024x1536 catbox_2m71mz.png)

(1.64 MB 1024x1536 catbox_f3jbmj.png)

(1.57 MB 1024x1536 catbox_ertdv3.png)

>>11943 oh. well then, didn't find that one when I was looking, but sure
don't know why, but the gui is giving off this error >Error caught was: No module named 'triton' shit, after all that trouble I went through to reinstall the GUI with 2.1.0 and a cu118-built xformers
>>11946 This was the exact error I was getting when I tried to build my xformers and failed. What's worse is that when I checked all the related githubs and other channels, no one had a solution other than "try a different build method".
(1.61 MB 1024x1536 catbox_fsuqiv.png)

(1.60 MB 1024x1536 catbox_prr5h6.png)

(1.53 MB 1024x1536 catbox_tvglv8.png)

(1.70 MB 1024x1536 catbox_rav7bj.png)

>>11947 torch 2.1.0 looks for triton regardless of whether it's on windows or linux. triton hasn't been built for windows, so there's nothing we can do about it for now. from my testing, it just disables some optimizations but otherwise works without any issues
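For anons wondering why it's only a warning, libraries typically guard the triton import so a missing module disables the optimization instead of crashing. A minimal sketch of that pattern (not the actual torch/sd-scripts code):

```python
import warnings

def triton_available():
    """Guarded import: report whether triton can be loaded, never crash."""
    try:
        import triton  # noqa: F401  # no official Windows builds at time of writing
        return True
    except ImportError:
        warnings.warn("triton is not installed, disabling some optimizations")
        return False
```

Code paths that need triton-compiled kernels then just check this flag and fall back to the plain implementation.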
(1.68 MB 1024x1536 catbox_55htg3.png)

(1.63 MB 1024x1536 catbox_nshpi3.png)

(1.58 MB 1024x1536 catbox_sjl16w.png)

(1.64 MB 1024x1536 catbox_cbpn01.png)

>>11948 oh actually... this is a full error that's causing it to crash, isn't it. if that's the case, disregard my previous statement
>>11948 the problem I was having last time I did this was that it couldn't find triton and webui would not finish launching, so I waited for someone else to build one back when the cu118 update came out.
>>11948 so we should just use the more stable pytorch 2.0 that doesn't use triton, i'm testing it out and seeing if it works
>>11951 I had no errors with the webui, I just had to force-reinstall xformers to swap in the cu118-built one
(1.54 MB 1024x1536 catbox_wlp64w.png)

(1.67 MB 1024x1536 catbox_xr0aun.png)

(1.68 MB 1024x1536 catbox_qsd8sh.png)

(1.64 MB 1024x1536 catbox_n4scav.png)

>>11951 I would assume it does, i'll have to rebuild the wheel for torch 2.0.0 for the scripts, but that should only be another 25 minutes of watching code slowly scroll by >>11952 I was also able to get it working without throwing any issues past a deprecation warning
I still can't get the Shondo loha or any lohas at all working with my SD
(1.66 MB 1024x1536 catbox_e51v3y.png)

(1.65 MB 1024x1536 catbox_kvdzzi.png)

(1.54 MB 1024x1536 catbox_ju1k8m.png)

(1.64 MB 1024x1536 catbox_6yb6sq.png)

>>11954 sounds very unfortunate, but it's honestly not that much of a loss, most people still aren't making lohas
4chan pytorch 2.1.0 anon was wanking so hard to that version that now I have to find the proper line for the launch.py file to use the 2.0.0 version with cu118 instead lmao
>ERROR: Invalid requirement: 'torch-2.0.0+cu118' 'torch-2.0.0+cu118' you fucking piece of shit... IT EXISTS YOU STUPID SCRIPT >torch-2.0.0.dev20230118+cu118-cp310-cp310-win_amd64.whl
>>11957 trying to install 2.0.0 from the launch.py >torch_command = os.environ.get('TORCH_COMMAND', "pip install torch-2.0.0+cu118 torchvision-0.16.0+cu118 --extra-index-url https://download.pytorch.org/whl/nightly/cu118") get that stupid error fucking hell, I was thinking of doing sd scripts to avoid the triton error but if this doesn't work with the webui then it ain't gonna work with sd-scripts fml
>>11956 >>11957 >>11958 why not just install 2.1 with cu118? i've been using 2.0 for a month or so now and 2.1 has been nothing but upsides
>>11959 2.1.0 has been unstable, at least with sd-scripts, and causes triton errors even with a cu118-built xformers; that doesn't seem to be the case with the webui, but given that 2.0.0 is more stable I just decided to switch over. Also, I fixed my errors: python was making fun of me for not adding "==" after torch and torchvision
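The "Invalid requirement" error above boils down to pip requirement syntax: without a version operator like ==, pip treats the whole string as a package name, and 'torch-2.0.0+cu118' is not a package that exists. A toy illustration of the split (not pip's actual parser):

```python
def split_requirement(req):
    """Split 'name==version' style pip specs; returns (name, op, version)."""
    for op in ("==", ">=", "<=", "~=", "!="):
        if op in req:
            name, version = req.split(op, 1)
            return name, op, version
    # no operator: pip would treat the whole string as a package *name*
    return req, None, None

print(split_requirement("torch==2.0.0+cu118"))  # valid: name + pinned version
print(split_requirement("torch-2.0.0+cu118"))   # looks like a bare name to pip
```

So the working launch.py line needs `torch==2.1.0.dev... torchvision==0.16.0.dev...`, exactly the fix described above.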
(1.62 MB 1024x1536 catbox_xe61pz.png)

(1.69 MB 1024x1536 catbox_2a7b5r.png)

(1.73 MB 1024x1536 catbox_whbzyv.png)

(1.75 MB 1024x1536 catbox_ebxv03.png)

>>11960 sd-scripts only throws "triton is not installed, disabling some optimizations" warnings for me, and others I used as guinea pigs for testing to make sure my wheel worked. So I don't think 2.1.0 is actually that unstable.
>>11961 I'd rather not see any errors like that when running stuff, so I'm just going to use 2.0.0 for sd-scripts
(1.58 MB 1024x1536 catbox_ml2itx.png)

>>11962 yeah, big reason why I'm gonna build a 2.0.0 xformers wheel and allow the option of 2.1.0, 2.0.0, or the original 1.12.1
>>11962 >I'd rather not see any errors like that when running stuff so I'm just going to use 2.0.0 for sd-scripts why? it makes no difference
(1.51 MB 1024x1536 catbox_v9agvx.png)

(1.58 MB 1024x1536 catbox_7jt44a.png)

(1.59 MB 1024x1536 catbox_wkxped.png)

(1.67 MB 1024x1536 catbox_cvoln5.png)

>>11964 people see errors and think things broke, just a peace of mind thing I guess
>>11964 autism unironically, I just don't like seeing errors at all >>11963 the pytorch upgrade is really only huge for 4000 series and I'm not letting nvidia scam me until the market crashes for their shit.
(1.54 MB 1024x1536 catbox_40u2vw.png)

(1.61 MB 1024x1536 catbox_745nwu.png)

>>11966 seems like the 3090 gets a pretty decent speedup as well. either way, I want to make sure my scripts actually provide support for as many things as possible so more people can easily make use of things like a new torch version
>>11966 >>11965 my 2.0 install for the webui had triton installed (i'm on windows 10) thanks to some dude on the ruski ai chan. wonder if that works with 2.1
(1.70 MB 1024x1536 catbox_4jaqv5.png)

(1.74 MB 1024x1536 catbox_rlmasy.png)

(1.64 MB 1024x1536 catbox_z5nq41.png)

(529.24 KB 512x768 catbox_c2sirv.png)

>>11968 can you point me to that? I would much rather just have triton installed if possible. I could have sworn that triton wasn't available on windows though
(1.80 MB 1024x1536 catbox_qa78vn.png)

(1.92 MB 1024x1536 catbox_87b7zx.png)

(1.89 MB 1024x1536 catbox_hi7qy6.png)

(1.89 MB 1024x1536 catbox_yr6535.png)

I gave that animelike-25d model a try but it feels kinda "stiff" somehow, it kinda ignores some poses pretty often. >>11969 Very cute.
>>11969 >I could have sworn that triton wasn't available on windows though it isn't, and he somehow got it working. i will try to find the github discussion with the link in a sec
(1.54 MB 1024x1536 catbox_8wc6is.png)

(1.64 MB 1024x1536 catbox_09a604.png)

>>11970 thank you! it took a bit to get the hair back to decently consistent with my new blend of lora for Hifumi. I think this blend of lora is really good, and leans much less on tab_head. I took a break from genning her yesterday to gen a ton of bunny cunny, but now I'm back to genning Hifumi in full force. >>11971 thanks ya
>>11972 The idea of a Mari lookalike being called hifumi is still so weird to me
(1.67 MB 1024x1536 catbox_4ya2gx.png)

(1.62 MB 1024x1536 catbox_djwgg1.png)

(1.69 MB 1024x1536 catbox_lafc6u.png)

(1.66 MB 1024x1536 catbox_tqnu77.png)

>>11973 fair enough, I get that, but at this point I've already embraced her name, so i'm just gonna stick with it, lel
ok, I'll leave the cu118 torch 2.0.0 build for sd-scripts to easyscripts anon. I've been trying myself, and staring at the list of torch and torchvision builds with cu118 trying to match each other's dates is killing my brain and eyes
>>11975 torch-2.0.0.dev20230301+cu118 was the last build of torch 2.0.0 with cu118 that I could find before it moved on to torch 2.1.0. the torchvision with cu118 that matched it was torchvision-0.15.0.dev20230301+cu118
(1.67 MB 1024x1536 catbox_4ya2gx.png)

(1.66 MB 1024x1536 catbox_xnmw7b.png)

(1.70 MB 1024x1536 catbox_nj1793.png)

(1.69 MB 1024x1536 catbox_43glj9.png)

>>11975 >>11976 oh, great! thanks for that, I'll get everything built soon enough, still working on the new installer
>>11972 https://github.com/microsoft/DeepSpeed/discussions/2694 this isn't the discussion post i was looking for but it is mentioned here. apparently the link is removed so here is a reup i got from when it was shared to me on google drive https://anonfiles.com/fbf5O3f5ze/triton_2_0_0_cp310_cp310_win_amd64_whl
>>11977 >>11978 ok thanks for your info too anon. also, I got the script running by doing >pip install torch==2.0.0+cu118 torchvision==0.15.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118 I think https://download.pytorch.org/whl/nightly/cu118 would work too? idk, I just wanted to see what was the most minimal pip install I could run that would work right away
>>11978 wait, if triton was an issue for 2.1.0 when I was trying to run sd-scripts, doesn't that mean that if I >pip install the filename from your downloads folder, that would fix the error I was seeing? So I can use pytorch 2.1.0 and torchvision 0.16.0 if this does work...
>>11980 well i don't know. i only used that with pytorch 2.0 so i don't know if it would work with 2.1
(1.70 MB 1024x1536 catbox_81rarn.png)

(1.62 MB 1024x1536 catbox_z9rfhx.png)

(1.66 MB 1024x1536 catbox_x539es.png)

(1.57 MB 1024x1536 catbox_h074ba.png)

>>11978 got it, I'll probably upload it onto my releases to make it easy to update >>11979 that sounds like it'll work well enough side note though, I think the installer is done, so once I do a small modification to allow for torch 2.0.0, I should basically be done with the new update. oh actually not quite done. I need to also add a batch file to update to torch 2.1.0/2.0.0 if you already have sd-scripts installed actually, I might just only use torch 2.0.0 if this triton build actually works
>>11981 yeah I'm going to rebuild the gui sd-scripts folder to see if it works with 2.1.0
>>11982 >actually, I might just only use torch 2.0.0 if this triton build actually works i would do some testing before deciding this. at least for the webui, gens are faster with torch 2.1 + no triton compared to torch 2.0 + with triton. i have a 4090, so i don't know how much of that 2.0 -> 2.1 benefit is down to the card
>about PyTorch and xformers Other versions of PyTorch and xformers seem to have problems with training. If there is no other reason, please install the specified version. rumao I guess kohya didn't want to deal with the shit we're dealing with right now
(1.62 MB 1024x1536 catbox_w7787e.png)

(1.54 MB 1024x1536 catbox_ldxols.png)

(1.53 MB 1024x1536 catbox_bqd8ti.png)

>>11984 hmm, fair enough. either way... the installer did some fucky shit I guess, so I am gonna have to do some testing
>>11985 >>11853 trained the relevant lora on torch 2.1, so it works fine. speed increased from like 1.25 it/s to like 1.75 it/s on the final epoch. that's with a batch size of 5 @ 768 res
>>11986 also found the link to the original ruski 2ch post https://arhivach.top/thread/872994/#41194 don't know if it's of any value but can't hurt
(1.66 MB 1024x1536 catbox_zt9of7.png)

(1.60 MB 1024x1536 catbox_zae9nm.png)

(1.58 MB 1024x1536 catbox_7qkit4.png)

>>11988 thank you, if possible, I might try and build triton for 2.1.0 as well
>>11989 I think triton works like building xformers: the last cu I could find is 118, so you can just use that one and pip install it into your sd-scripts directory folder (that's just a guess, because the ruski might have done it for the webui)
(1.59 MB 1024x1536 catbox_pzn109.png)

>>11990 yeah, i'm gonna give it a try, I hope i can get it to build
Kinda feel dumb following the thread and having nearly no idea what's going on Just have to wait for voldy to update I guess...
>>11991 hopefully I'm going to test it out with the 2.1.0 GUI sd-scripts repo I have been using as a guinea pig, if it ends up removing the triton error it's a success
(1.71 MB 1024x1536 catbox_bs3eb2.png)

(1.59 MB 1024x1536 catbox_toxw72.png)

(1.61 MB 1024x1536 catbox_y6e4pu.png)

>>11992 long story short, we are trying to see if we can get something built for 2.0.0 working on 2.1.0, while I'm trying to just see if I can build it for 2.1.0 >>11993 cool, if my attempt to build triton fails then if it works on your end, then we have a lucky break
If CFG 5 tends to make better pictures then why is 7 the default? And why do some people go all the way to 10-14?
(1.65 MB 1024x1536 catbox_faplsu.png)

(1.53 MB 1024x1536 catbox_feaerz.png)

(1.63 MB 1024x1536 catbox_2r0x7x.png)

>>11995 holdover from NAI days probably, I normally gen at 7 though and don't really have issues... I should try cfg 5
>>11994 this is the only error I got >UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() otherwise I got nothing else
(1.62 MB 1024x1536 catbox_z77aqn.png)

(1.84 MB 1024x1536 catbox_xrd1pa.png)

>>11997 that's just a deprecation warning, so it seems like it works, which is a good thing because I'm having very little luck building triton myself
>>11997 but so far, the results of 2.1.0 with cu118-built xformers and the cu118-built triton from our 2chan ruski bro: >sweep tosho LOCON >768x768 >batch 2 >1.01it/s lowest >1.03it/s average also, the deprecation warning was for this >kohya_ss\venv\lib\site-packages\xformers\ops\fmha\flash.py:338: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
(1.70 MB 1024x1536 catbox_lfw6py.png)

(1.76 MB 1024x1536 catbox_efa374.png)

(1.75 MB 1024x1536 catbox_uc2v7e.png)

>>11999 what speed were you getting with the other options, if you've already tested them?
>>12000 unironically it was 1.31, it averaged out to 1.15/it for the current version I am using
(1.76 MB 1024x1536 catbox_iq5lm0.png)

>>12001 hmm, so triton was slower? hmmmm
>>12002 1.15it/s is for the triton and xformers version with pytorch 2.1.0 1.31/it was with the normal build we were using before all of this testing
(1.48 MB 1024x1536 catbox_hv7oi9.png)

(1.52 MB 1024x1536 catbox_aliw96.png)

>>12003 oh I see, I think I mixed up it/s and s/it, so triton is faster, by a seemingly decent amount
>>12004 3080ti with 99% of my VRAM being used, yeah it really is, the headache getting this to work was worth it desu. Thanks Ruskibro for making the triton file for us to install, trying to build my own xformers with CU118 was a pain anyways
>>12004 Can you share some kind of wildcard prompt if I want to generate a lot of a character in different positions or situations without having to change the prompt much?
>don't check the thread for a day and half, maybe two >300 replies aw fuck not again
>>12007 Updates always happen when you're not looking!
>>12008 "UPDATES could be here", he thought
>>12005 cool, I'm gonna scrap building triton and just use the 2.0.0 version for both 2.1.0 and 2.0.0 >>12006 I have a really rudimentary wildcard system that is pretty much backgrounds, outfits, and kemonomimi, i have an even less sophisticated set of wildcards that are just things like hair styles, hair color, eye color, looking direction, among other things, do you want them?
>>12010 Ruskibro built his triton with just cu118, like I did for xformers. he probably labeled it 2.0.0 because that was the PyTorch version he was using for the webui, given that 2.1.0 is still "experimental". or maybe I'm dumb and triton builds against a PyTorch version, unlike xformers builds, but it works anyways so that's all that matters
>>12010 >i have an even less sophisticated set of wildcards that are just things like hair styles, hair color, eye color, looking direction, among other things, do you want them? Sure!
>>12011 Or I am just getting confused because you do have to install torchvision for xformers but you only need to specify the CUxxx you are using
>>12005 nice i'm glad it worked with 2.1. guess when this lora is done training i'll have to install it. >the headache getting this to work was worth it desu dude tell me about it. i went schizo in november trying to optimize voldy's for my new 4090. the rabbit hole was deep
>>12014 Yeah and luckily you get the biggest speed boost from all of this work until the 5090ti comes out
>trying to search how to build triton >messy GitHub page filled with guides for docker and Ubuntu One day the guide will be as streamlined as the xformers building guide
(1.55 MB 1024x1536 catbox_73ilhx.png)

(1.69 MB 1024x1536 catbox_3vao4e.png)

(1.66 MB 1024x1536 catbox_x539es.png)

>>12012 so for the clothes and backgrounds, they are set up in a specific way: kemonomimi_complete has all related tags for that kemonomimi, so ears, tail, and girl. I also have supernatural_complete, which is angel, demon, and dragon so far. I do plan on fleshing everything out much more, which I fully intend to share here when I'm done. It doesn't handle things like full sex prompts, but you can use the wildcards like in my gens to make sex prompts. each section is separated out a bit so that it's easier to call what you want, such as tops having shirts and coats, and bottoms having pants and skirts. I also have full_body (things like swimsuits, outfits, or body suits), accessories (which is woefully empty), panties (because there's a ton of options there, it needed its own), and foot and legwear, which I combined into one folder. each section has at least one activation wildcard, which consists of all combinations of the smaller wildcards. clothing/tops/tops_nsfw for example contains:
clothing/tops/shirt
clothing/tops/tops_mod clothing/tops/shirt
clothing/tops/shirt, clothing/tops/coat
topless, clothing/tops/coat
topless, clothing/tops/tops_mod clothing/tops/coat
clothing/tops/tops_mod clothing/tops/shirt, clothing/tops/coat
topless
this way, you can avoid calling 400 wildcards if you want. I'd suggest you use UmiAi as your wildcard solution because it supports autocomplete, which I wasn't able to get working using dynamic prompts.
(just make sure you know that it is gonna come with a ton of embeds, I'd just delete them all)
I included a readme with some examples, and this text, just so you have an idea of how best to use these
https://files.catbox.moe/srqjuj.7z
just throw it into a wildcards extension that supports folders and recursive tag calling (like UmiAi does)
first image has nothing to do with this explanation, I just wanted to share it
second image is an example of using every prompt
third image is an example of using some sub prompts
you know where to find me if you end up having trouble with it
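For anons curious how the recursive wildcard calling works conceptually, here's a toy expander (not UmiAi's actual code; the table and the __name__ token syntax are just illustrative):

```python
import random
import re

# tokens look like __outfit__ or __clothing/tops__
_TOKEN = re.compile(r"__([\w/]+)__")

def expand(template, wildcards, rng=random):
    """Recursively replace __name__ tokens with a random option from the table."""
    while True:
        m = _TOKEN.search(template)
        if m is None:
            return template
        choice = rng.choice(wildcards[m.group(1)])
        # an option may itself contain tokens, so the loop keeps expanding
        template = template[: m.start()] + choice + template[m.end():]

# single-option tables make the expansion deterministic for this demo
table = {"outfit": ["__clothing/tops__, skirt"], "clothing/tops": ["shirt"]}
print(expand("1girl, __outfit__", table))  # 1girl, shirt, skirt
```

The "activation wildcard" idea above is the same trick: one top-level entry whose options are combinations of the smaller wildcards, so a single token fans out into the whole tree.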
>>11346 >alphonse lora. was on my wishlist, thanks anon!
I don’t really like the "main guys" among AI image YouTubers; they give out terrible info and just parrot whatever r/stablediffusion posts. Thanks anons, you’re the reason I can do the shit I can with training and prompting
Welp, I ordered a PSU. Went for the AI1300P, was out of stock everywhere but on Amazon and I need to import it from their german warehouse but whatever, still over a week faster than waiting for shops to restock it AND 50 bucks cheaper. Hope it'll survive the trip and that it's actually new and not a sneaky refurbished one, last time I bought a PSU from Amazon it was listed as new but got a refurb instead (though it was a 3rd party seller so w/e) Really hoping I can get this shit together by the end of the month
>>12019 I sure love the guy with the thick ESL accent, his forced humor, and how he stretches every little thing into a 10-minute video
>>12017 Can you rezip this as a regular zip file? 7z gets the file flagged as a virus on every antivirus and browser for some reason
>>12022 oh sure, I don't understand why 7z does that sometimes https://files.catbox.moe/2aira0.zip
>>12023 MATING PRESS
>>12023 69/10 you mouse wizard
>>12027 catbox pls
>>12027 no idea tbh but i wish also catbox the flandre please
>>12027 More Reimu pls :'D
(1.45 MB 1024x1536 catbox_icietf.png)

(1.83 MB 1024x1536 catbox_dwqlmj.png)

>>12030 god I love cheese >>12031 Armpit pervert! Who would like such a thing?
>>12030 am i retarded or?
>>12032 "Armpit pervert! Who would like such a thing?" SOMEONE SAID THE EXACT SAME THING YST MAN!
>>12033 I can open the link sooo... yes?
>>12035 First Kemono and now anonfi-- ah whoops I forgot I never set a DNS for the wifi adapter I got. Apparently my ISP blocks both now
(2.32 MB 1280x1920 catbox_1myn0m.png)

(2.80 MB 1280x1920 catbox_7i5z6h.png)

(7.04 MB 1280x1920 catbox_5gs5jz.png)

(2.79 MB 1280x1920 catbox_9fsx1e.png)

Finally some touhou in this thread
(1.66 MB 1024x1536 catbox_99cppb.png)

(1.59 MB 1024x1536 catbox_wp20ff.png)

(1.74 MB 1024x1536 catbox_55e1i5.png)

(1.72 MB 1024x1536 catbox_trscgx.png)

(1.63 MB 1024x1536 catbox_9x7huj.png)

(1.58 MB 1024x1536 catbox_220xir.png)

(1.77 MB 1024x1536 catbox_4ca05k.png)

(1.74 MB 1024x1536 catbox_vc4l0x.png)

>>12039 ah yes, bears
apparently according to normans, drawing loli/posting it is worse than calling a nigger a nigger, holy kek
(1.90 MB 1024x1536 00206-1604522032.png)

(1.72 MB 1024x1536 00229-644830135.png)

(1.57 MB 1024x1536 00215-3106859524.png)

(1.70 MB 1024x1536 00218-3999550931.png)

>>12042 well, that's unfortunate, more cunny for us then.
>>12042 my favorite kind of anti-loli norman is the 13 to 16 years old they/them 300 follower minor who accuses you of being a pedophile exclusively through dbz memes
>>12045 Norman Revival Plan It's a recurring event that we need to do every once in a while
>baked LORA with 2.1.0 GUI sd-scripts >fried as fuck I'm just going to say that's the gui's fault
gonna try making a khyleri style lora because i ain't paying 300-400+ bucks for memes
>>12047 nah I tried again and it's fried, I don't know what's wrong... Easyscripts anon I hope your test results don't end up fried because that means something went wrong during my pytorch upgrading process
>>12050 I don't usually do full tests when updating the scripts, it's because it would take a ton of time with the amount of tests I have to do to make sure every element actually works. that being said, this is enough of a change that I will probably do one full bake before I opt to fully release it.
>>12051 I'm going to try to build another xformers and see if that fixes it, maybe I have to build with the nightly build version of cu118 instead
(65.34 KB 890x683 gitcucks.JPG)

I'm trying to see if it was triton because I'm looking at the LORA results of someone that updated to pytorch 2.1.0 before the triton file was posted here, and their image wasn't fried. I ended up finding this github page and found this, holy kek
>>12053 Got a link to that?
>>12047 >>11985 >>11992 Yeah I don't know what the fuck is going on, torch 2.0, 2.1, sdp versus xformers, builds... I'm just going to sit on the lower speeds, maybe switch to ashen-sensored's two-shot. why does my hobby need devops
>>12057 updating torch to the latest version for the sd-webui is easy. delete your venv folder and put this into the launch.py file before running webui-user.bat
>torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==2.1.0.dev20230317+cu118 torchvision==0.16.0.dev20230317+cu118 --extra-index-url https://download.pytorch.org/whl/nightly/cu118")
after that you need to pip install this and an xformers I'm building again. once you have them in the root of your webui directory, make sure to activate the venv with cmd or powershell before doing this
>.\venv\Scripts\activate
>pip install triton-2.0.0-cp310-cp310-win_amd64.whl
>pip install the name of the xformers I am building right now once I upload it
otherwise, aside from that, make sure your webui-user.bat commandline arg line doesn't have xformers in it and use
>--opt-sdp-attention
instead for now. once I upload the newly built version of xformers you will need to pip install it along with triton, and if you want to use it
>--force-enable-xformers
However, updating pytorch to the latest build with sd-scripts on my side has been a mess. this is something I'm going to need to test again with the newly built xformers and triton, or I'll try a version without triton installed to see if it ends up fried or not
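For anyone following along, the sequence that anon describes boils down to roughly this; treat it as a hedged sketch, not gospel. The wheel filenames are assumptions based on the builds posted in this thread, and the launch.py edit is shown as a comment since it's a Python file, not a shell step:

```shell
:: Sketch of the webui torch-upgrade sequence described above (Windows, cmd).
:: Wheel filenames are placeholders from this thread and may differ for you.

:: 1. nuke the old venv so launch.py rebuilds it with the nightly torch
rmdir /s /q venv

:: 2. before first launch, point torch_command in launch.py at the cu118 nightly:
::    torch_command = os.environ.get('TORCH_COMMAND',
::        "pip install torch==2.1.0.dev20230317+cu118 torchvision==0.16.0.dev20230317+cu118 "
::        "--extra-index-url https://download.pytorch.org/whl/nightly/cu118")

:: 3. after webui has rebuilt the venv, activate it and install the posted wheels
.\venv\Scripts\activate
pip install triton-2.0.0-cp310-cp310-win_amd64.whl
pip install xformers-0.0.17+b3d75b3.d20230321-cp310-cp310-win_amd64.whl

:: 4. launch with --opt-sdp-attention (no xformers) or --force-enable-xformers
```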
>>12057 whoops sorry, replied to the wrong post >>12054 yeah just tons of vitriol for a Ruski anon and the loli pics they generated on 2chan
But it seems like I'm stupid, apparently xformers did install the stable build of pytorch that was compatible with cu118, so it went to torch 2.0.0 and torchvision 0.15. I had to change
>pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
to a version that specified torch 2.1.0 and torchvision 0.16.0 along with the cu118 nightly build. I assume Ruskianon had 2.0.0 with their triton, because when using cu118 it automatically downloaded 2.0.0 and its compatible torchvision. I'm not sure if that's the reason why the LORAs I made with sd-scripts on 2.1.0 were frying, but I'll see; there's nothing I can do to build a triton with 2.1.0 because there's no clear guide at all for windows.
>>12056 More the aspect of which is actually better, although I'm not enthused about downloading random builds either.
Bros I did it. I actually prompted nipple slips and that panty slide in the last one.
>>12060 It’s ok you can just delete the Venv folder to fix everything for the webui, sd-scripts has been more of a mess however
>>12061 I wish partially clothed was a tag that worked because if you do nipples then it's full topless and same for the bottom
>>12063 I totally get you, I also enjoy partially clothed girls, and it's very tough to prompt
Here's the xformers built on 2.1.0 torchvision 16.0 CU118 https://pixeldrain.com/u/fcRjcPyW the one I posted earlier was built with 2.0.0 torchvision 15.0
>>12062 Oh is webui support out? That I can do.
>>12066 It is you just have to delete venv and rebuild it with PyTorch 2.1.0 and torchvision 16.0 CU118 nightly build
>>12057 >>12053 why does any mention of russia make normalfags seethe so hard? Are the psyops really this effective?
>>12068 it is, normalfags are just so brain rotted by the narratives and politics they see on twitter and whatever social media site they like to swarm
>>12061 link to twilight paladin lora
I made another version of the gui sd-scripts with 2.1.0, but this time only installing the 2.1.0 CU118 baked xformers; I want to see how the results turn out without triton installed and whether it fixes the frying issue.
>>12068 I'm not a normalfag by any means but I have a negative opinion of russians because any time they're brought up on 4chins is to shitpost or as a vehicle for shitposting. The media being overwhelmingly against russia certainly doesn't help either and I do not personally know one so my vision of them is colored by these experiences.
>>12071 that triton error list is really annoying but if it does get rid of the frying I had then it means that it was the problem, if so then I don't know what the fuck is going on.
>>12073 but speeds are about the same, 1st epoch is 1.21it/s
Is there any collection of LORAs? I vaguely remember some anon did a rentry or whatever list but I wasn't really caring about them at that time.
adverse noise cleaner extension https://github.com/gogodr/AdverseCleanerExtension so that glaze shit didn't last long kek
>>12076 this is what happens when ML brainlets skip important classes like old-school computer vision and algorithms in grad school. They forget simple things like low-pass filters exist
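As a toy illustration of that point, even a crude low-pass filter (here a 9-tap moving average over a 1-D "image row") strips most of a small high-frequency perturbation; the numbers and kernel size are arbitrary, and this is just a sketch of the idea, not what the extension actually does:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.linspace(0.0, 1.0, 256)            # smooth underlying "image row"
noisy = signal + 0.1 * rng.normal(size=256)    # adversarial-style high-freq noise

kernel = np.ones(9) / 9                        # crude low-pass: moving average
cleaned = np.convolve(noisy, kernel, mode="same")

# compare mean squared error against the clean signal, ignoring filter edges
mse_noisy = np.mean((noisy[8:-8] - signal[8:-8]) ** 2)
mse_clean = np.mean((cleaned[8:-8] - signal[8:-8]) ** 2)
assert mse_clean < mse_noisy                   # filtering moves us closer to clean
```

Averaging N i.i.d. noise samples cuts the noise variance by roughly N, while a linear signal passes through a centered moving average unchanged, so the filtered row lands much closer to the original.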
>>12070 It should be in the gitgud repo
>>12074 nope still fried, I'm going to assume it's the sd-scripts gui repo causing this. I'm going to do the pytorch install on the vanilla sd-scripts repo; anyone have a more recent training command script I can borrow?
>>12061 >>12063 catboxen?
>>12080 Here, not every argument is actually being passed and I'm mid-testing so enjoy the landmines https://pastebin.com/sDUhJ9ud
>decide to try out torch 2.0 with xformers in the lora training UI
>too lazy to compile xformers for windows
>symlink venv from the webui and get the training UI to work in WSL
>almost doubles my training speed
Either windows is fucking me over immensely or torch 2.0 with xformers is really fucking good for training
>>12083 you're not frying shit right? You said pytorch 2.0.0 which is the more stable build but 2.1.0 shouldn't be frying my LORAs outputs, you just updated the base sd-scripts repo hosted by kohya right?
>>12084 >You sorry, I'm not that anon, but based off of my training sample images, it's not frying my lora
>>12085 weird, it fried my two tests, but that was with updating pytorch to 2.1.0 for the gui repo of sd-scripts. I did a 2.1.0 update with xformers built on 2.1.0 and the triton uploaded earlier in this thread for the main sd-scripts repo; if the test baking right now ends up not fried, that means the issue was something within the GUI repo
>>12086 I'm using 2.0, and my xformers was compiled for linux (WSL). I didn't encounter the missing-triton issue.
>>12087 the triton error is something only showing up for the 2.1.0. build, did you get this however? >UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
>>12088 that's just a warning, it just means that they will stop maintaining typedstorage and will possibly remove it for future versions. It doesn't affect anything
>>12089 Gotcha, if for some weird reason this test fries too I’ll just use torch 2.0.0
>>12090 success, the LORA baked with no frying! It was the GUI repo doing some fucked shit then... in that case Easyscripts anon this is all I did for 2.1.0
>follow the basic instructions to install sd-scripts until you reach the pip install torch part
>pip install torch==2.1.0.dev20230320+cu118 torchvision==0.16.0.dev20230320+cu118 --extra-index-url https://download.pytorch.org/whl/nightly/cu118
>pip install --upgrade -r requirements.txt
>pip install -U -I --no-deps (download this xformers I made with 2.1.0 https://pixeldrain.com/u/fcRjcPyW)
>pip install -U -I --no-deps https://anonfiles.com/fbf5O3f5ze/triton_2_0_0_cp310_cp310_win_amd64_whl
and then you just finish the rest of the installation process, it works perfectly
(1.07 MB 768x1120 00008-2174340385.png)

>>12091 here's the result btw, I was able to do batch 3 768 2it/s, I'm usually only able to do batch 2 but that was probably because I was using Locon/LOHA... not sure how the ps1 script I was provided would implement Kohakublueleaf's method
>>12092 using torch 2.0.0 with my xformers baked with 2.0.0 cu118 and triton. Speeds for batch 3 768x is 1.90it/s... so it's unironically faster? Weird as hell
>>12068 >>12069 lmao, fuck normal fags and fuck russians
>>12091 dead link
>>12095 lmao 8chan made the last bracket become a part of the link https://pixeldrain.com/u/fcRjcPyW this one is the xformers baked with 2.1.0 https://pixeldrain.com/u/QfhaELW6 this one was baked with 2.0.0, all done with CU118
>>12092 I hadn't gotten around to adding that yet, was only going to try it if normal lora failed to capture the current details I'm working on, but >Specify --network_args option like: --network_args "conv_dim=4" "conv_alpha=1" Speaking of which is there any ideal relationship between the linear and conv dimensions anyhow
>>12097 some people leave the conv rank and alpha the same as the main ones, some will set the conv rank/alpha to half of what the main ones are.
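For reference, the "half of main" convention could look something like this as an sd-scripts invocation; only the --network_args flags come from the posts above, everything else (paths, script name, other flags) is an illustrative placeholder:

```shell
# Hypothetical sd-scripts launch showing the conv_dim/conv_alpha convention
# discussed above: main rank/alpha 128/64, conv rank/alpha at half (64/32).
# All paths are placeholders.
accelerate launch train_network.py \
  --pretrained_model_name_or_path /path/to/model.safetensors \
  --train_data_dir /path/to/dataset \
  --output_dir /path/to/output \
  --network_module networks.lora \
  --network_dim 128 --network_alpha 64 \
  --network_args "conv_dim=64" "conv_alpha=32"
```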
>>12098 What's that? Set conv rank to 256? Yes sir.
Got off my ass and started working on her after all. Not sure how to get the visible overhead light for a classic chair scene though.
>>12099 not that high, that's just going to make it huge, but if you're doing 128/64 for main some people will make the conv rank and alpha 64/32
>>12101 Was being sarcastic, I read ya.
>>12102 I know but I've seen people do 256 and 360 anyways so I wouldn't be surprised. Not even sure if Kohya is trying to still find a way to allow ranks over 320 like he tweeted he would some time ago.
>>12100 I guess you can copy paste the character into the scene then do a few steps of img2img, there's no way to generate dark scenes otherwise
I got triton and xformers working on the webui nice, hopefully easyscripts anon has an easy time with implementing the pytorch upgrades. It's good to know it doesn't break anything with the main sd-scripts folder, the GUI just got fucked over for some weird reason.
>Updated learning script repository. Speeding up the latents cache, correcting the number of steps when specifying gradient_accumulation_steps, and other fixes. I never used GA but there you go
>>12091 I was testing all of the installer stuff with 20230317, I'll make some changes to be 20230320 just in case that fixes things. Other than that, I actually have wheels for both 2.0 and 2.1 xfirmers built as well as put the triton whl directly into the repo. Thanks for sharing, I hopefully will be done soon. Mainly just trying to make sure the small updater script works as intended. (Though I did end up having to sleep before I could finish it)
>>12107 that's fine. knowing I was able to bake LORAs twice with the sd-scripts main repo on 2.0.0 and 2.1.0 without fried outputs, plus the speed benefits, it's perfect; since you use that repo in your installers everything will go well.
>get that ruby uma musume
>kuudere ojousama that barely smiles
>uma musume devs forget to program all of her animations to be unique so she just does a plain smile whenever you don't tap her in the trainer mode
fucking trash... I love ojoudrills especially on kuuderes... I still wonder how that person on Civitai was able to make a LOHA for her kek
(1.48 MB 1024x1280 00621-1152073194.png)

(1.34 MB 1024x1280 00059-2010697240.png)

>>11435 >>11436 I'm several days late to the party, but thank you for this naizuri locon, Nene anon! Looking forward to the update whenever you get around to it!
nah I did another test with the gui and it fries still even with 2.0.0 pytorch, so that concludes my test with that LORA normalfag trainer. SD-Scripts hosted by kohya and used by Easyscripts has no problems with 2.0.0 and 2.1.0
>>12111 I wonder what is causing the GUI to fail? well, not that much to be honest, so long as the Easyscripts actually works then I don't really care. after all, I don't maintain the gui, I maintain the scripts
>>12112 I don't know honestly, I was baking a Locon with it and it just fried. I only made a normal LORA with my sd-scripts test, however; I doubt that LyCORIS can be that affected by pytorch 2.0.0 or 2.1.0 given Kohya has his own version of locon implemented
>>12113 hmm, I know that with LyCORIS having cp Decomposition on seems to fry easily. I don't turn it off by default, (because it's a pain to have off by default), but I do generally advise people to turn it off. that might be the issue tbh
>>12114 fuck, I think you're right, I see no option to disable it even within the advanced config options for the GUI. What a shit show
>>12115 of course, it always is something stupid like that
>>12116 Yeah even when easyscripts had the option and I tested it... fried as fuck, you know what fuck the GUI I'm not going to even say anything, normalfags can get fucked.
>>12117 I literally only added the option because it would have been a huge pain to have it off by default. I would have otherwise had it silently off and never told anybody about it
>>12118 well, given there were no benefits to it at all and it basically made me think all my countless tests with the GUI were a failure on my side, which basically made me build two xformers... yeah I don't even know when cp decomposition will be good. if someone ever manages to make something good with it we'll find out, but for now it should be off in everyone's script
>>12119 i'm dreading the day kohaku adds sparse bias to LyCORIS, because it's gonna be yet another thing I have to silently modify in the background, like many other things...
>>12120 yeah thanks for all your hard work, my brain was already dying from all the building and testing I had to do with upgrading pytorch for the webui and sd-scripts, the webui was easier because there was no hidden option I couldn't disable ruining shit like cp decomposition kek. Otherwise we have to thank the anon that shared the ruski anon's triton build to get the full benefits of xformers built with 2.0.0 or 2.1.0
But I can still safely finetune models with the gui, since finetuning isn't affected by cp decomposition. speaking of which, I'm headed out to get that 3090 for that type of training; with pytorch 2.1.0 and xformers built on 2.1.0 plus triton, finetuning a model will be less painful
(1.80 MB 1024x1536 catbox_mn3px1.png)

(1.88 MB 1024x1536 catbox_lk8epj.png)

(1.63 MB 1024x1536 catbox_vv8cur.png)

(1.63 MB 1024x1536 catbox_w660g0.png)

While I appreciate the hard work of anons for better and faster gens via python code magic I am still in dire need of 2hus
>>12121 thanks for using my scripts! I'm honestly really happy how many people like my scripts, after all, I aim to make them as good as possible.
I'm still using sd-scripts (from the GUI repo, but not through the GUI itself) to finetune, which of these files do I need to add to the directory to improve training speeds?
Has anyone tried out using comfyui? The latest video by a certain guy on youtube named Olivio looks very interesting.
(1.66 MB 1024x1536 catbox_mk7h7s.png)

(1.85 MB 1024x1536 catbox_6u8z07.png)

(2.00 MB 1024x1536 catbox_h6969w.png)

(1.82 MB 1024x1536 catbox_p8a5vl.png)

Is there some video tutorial on how to inpaint fingers/toes? It keeps making 6 toes no matter how schizo my negative prompts are. On top of that, I've never done inpainting in my life, so I don't really know what to do.
>>12128 is that a lora of the killer of princess connect and redive?
>>12129 Nyes >right feet on left legs Aaaa, stop making the actual artist mistakes.
>>12056 >>12105 Is there anything else you had to change to get xformers to work?
>>12131 Just pip reinstall the 2.1.0-built xformers that was posted earlier and pip install triton, then make sure you pass --force-enable-xformers after you run webui-user.bat without any other command arguments. The posted downloads for xformers and triton have to be in the root directory of the webui when pip installing them, but also ensure you run .\venv\Scripts\activate before all of this
Also did the easy scripts get updated with the PyTorch upgrade option or is that not released yet
>>12133 not released yet, fighting with the new installer just a bit.
>>12134 it should be done in a small bit though
LADIES AND GENTS I GOT MY PSU IT'S ALL HERE i don't have a table big enough and my back hurts :(
>>12114 looking at the code for lycoris, using cp decom lets you represent a higher rank convolution tensor with a lower dim than without using cp decom. What conv dim are you using for non cp decomp training vs cp decomp training?
>>12137 4 or 8 usually
>>12138 i dont change anything between them btw
>>12137 Try explaining what it does in retard terms
>>12114 btw, you can manually disable cp decomp by going into venv/Lib/site-packages/lycoris/loha.py and putting in self.cp = False before each instance of if self.cp:, which occurs on lines 109, 127, 154, 177
>>12141 yeah I know, but the average person will not even attempt to do that when it comes to the easy scripts. I can't rely on a modified version of LyCORIS for installation, so I have to make the change on my side, through the easy scripts, not by directly modifying LyCORIS
>>12142 Pretty sure you can do it through a .ps1 script too but yeah that method is more convenient than me having to open a python file and editing it myself which I will have to do with a GUI
>>12140 the same conv dim with cp decomp enabled can theoretically capture more "features" than with cp decomp disabled. Idk how that would make your thing fry, did you train using locon? I suspect it might be because the locon cp decomp is doing some weird voodoo shit that's not any sort of matrix decomp I've ever seen. Meanwhile the loha cp decomp is still not actually a cp decomp, but it is an actual tensor decomp method called tucker decomposition, which has a theoretical basis for actually working
>>12144 Yeah I was using it for Locon that must be the case then
>>12128 we can be done with the priconne characters soon at this rate
Has anyone done a spitroast lora/image/pose/whatever where the girl is facing up while being spitroasted? I've seen a dangling spitroast lora that is basically that but face down instead of face up
>>12110 Im gonna be kinda busy these days so no uncensoring dicks for a while but i would like to know if you found any issues, overfit, etc so i could fix them aside from trying to get the side naizuri to work better.
>>12144 actually, I think I know why the locon cp decomp is fucked now. It's an order of operations thing. Tensor products are not commutative, so order of multiplication matters. Look at https://github.com/KohakuBlueleaf/LyCORIS/blob/main/lycoris/locon.py#L80 To get the benefit of low rank adaptation you would multiply your tensors together first, then multiply the result with the input; line 80 shows that the locon cp decomp instead applies lora_down to the input, then lora_mid, then lora_up. lora_down is essentially a 1x1 convolution kernel that serves only to reduce the channels of x. From a purely information-theoretic perspective, this neuters the informational density of the input tensor: it doesn't matter what lora_mid and lora_up do, since a bunch of information has already been discarded in the lora_down stage. Since lora_mid and lora_up have a lot of useless degrees of freedom and lora_down has a small number of parameters, it makes sense why it's easy to fry it.
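The bottleneck part of that argument is easy to sanity-check with plain linear algebra: a 1x1 convolution acts as a matrix on the channel axis, and once the input is squeezed through a rank-4 down projection, nothing mid/up do afterwards can recover more than rank 4. A small numpy sketch (the dimensions are made up for illustration, not taken from LyCORIS):

```python
import numpy as np

rng = np.random.default_rng(0)
channels, rank = 16, 4

down = rng.normal(size=(rank, channels))  # 1x1 conv "lora_down": 16 ch -> 4 ch
mid = rng.normal(size=(rank, rank))       # "lora_mid" mixing in the small space
up = rng.normal(size=(channels, rank))    # "lora_up": back up to 16 ch

# applying down first, then mid, then up to an input is the same linear map
# as the composite product, and its rank is capped by the narrowest factor
composite = up @ mid @ down
assert composite.shape == (channels, channels)
assert np.linalg.matrix_rank(composite) <= rank
```

So whether the three factors are multiplied together first or applied to the input one at a time, the update the model sees can never exceed the rank of the down projection; the "useless degrees of freedom" in mid/up live inside that cap.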
>>12149 So basically >just use cp for LOHA since it actually does good shit
>>12150 Should ask /h/ on 4chan then, I hear they're experts when it comes to mixing that in their models.
>>12151 Don’t remind me python scripts usually have the expression >get child ML and python are huge lolicons
Before easyscripts does the PyTorch upgrade what xformers did you use easy anon? There was two posted built with cu118 2.0.0 and 2.1.0 posted earlier
hey all, finally finished up this update. I fucking hate having to support three different torch versions in the installers, but anyways... new installer that is python based with actual checks: https://github.com/derrian-distro/LoRA_Easy_Training_Scripts/releases/tag/installers-v6 if you have it already you can just update using the update.bat. there will be a new bat file called update_torch.bat; just run that and it will ask you what version of torch you want + if you want to install triton if on torch 2. changelog is at the end of the readme https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
>>12153 I didn't use either, I built them both from scratch
>>12155 both were built for cu118 though, the only difference between them was the torch version
>>12156 There shouldn’t really be a difference then since triton 2.0.0 worked with my xformers built with 2.1.0
>>12157 And if I want to change the xformers I can just do pip force reinstall
>>12152 >tfw you'll never perform transitive closure on your family tree
>>12157 that's fair, but long story short, I built both xformers during the time we were still testing, so I just opted to just use both.
>>12159 If this is what I think this is there should be mutated beings in the lower sections
>>12160 Nah it’s fine, 2.0.0 is the stable build xformers and triton will build with cu118 I had to specifically say which torch and vision I wanted for 2.1.0
>>12162 yeah, I have both options, 2.0.0 and 2.1.0, because fuck it, I guess
>>12163 2.1.0 will become the stable build soon; it was only released this month, I think, judging by the dates when the cu118 PyTorch 2.1.0 and Torchvision 0.16.0 builds started showing up
>>12132 I keep getting ModuleNotFoundError: No module named 'xformers.ops'; 'xformers' is not a package. I've tried recreating the venv manually, running webui with and without --reinstall-xformers, but no luck. I'm just gonna stick with opt-sdp-attention for now, but here are the steps I took
1. Backup launch.py and modify the initialization of torch_command to torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==2.1.0.dev20230317+cu118 torchvision==0.16.0.dev20230317+cu118 --extra-index-url https://download.pytorch.org/whl/nightly/cu118")
2. Delete the venv folder and run webui.
3. Download xformers-0.0.17+b3d75b3.d20230321-cp310-cp310-win_amd64.whl and triton-2.0.0-cp310-cp310-win_amd64.whl into the webui root.
4. Activate the venv with .\venv\Scripts\activate and pip install both xformers and triton.
>>12165 Weird it worked for me when I did it, I think I had the same error the first time and redid the whole process but honestly the tests here have shown that opt attention works as good so yeah just use that for now to save a headache
>>12166 Honestly I forgot what I did to get rid of that error I just redid the PyTorch venv rebuilding a lot of times kek
>>12165 did you launch with --xformers or --force-enable-xformers
3090 with RMA under 1k bucks obtained, can’t wait to install this shit and see how fast it is with the PyTorch upgrade
>>12154 thank you, but can you pls explain >Added support for torch 2.0.0 and 2.1.0 along with built xformers for both whats the difference between them ? which one should I use
>>12170 2.0.0 is the more stable build, while 2.1.0 was released this month; 2.0.0 is just considered more stable because it has had more time in development. The speed difference is something I have yet to test
>>12171 alrighty thank you!
>>12170 Also 2.0.0 didn’t have the triton error that 2.1.0 had, however easyscripts will include xformers and triton built with 2.0.0 and cu118 for maximum compatibility between both versions
>>12168 I tried both, neither worked
>>12174 Yeah then stick with opt for now
>>12169 Hello fellow used 3090 chad
>>12126 It's good, if you have experience with Blender it is very comfy like the name implies. I still tend to use voldy's because stuff gets implemented there first though. The VN tutorial is cute. https://comfyanonymous.github.io/ComfyUI_tutorial_vn/
>>12177 That's neat but why can't autists just make an actual tutorial for once
>>12178 probably that was him learning how to use renpy so he can make /h/ games and milk patreon retards
>>11834 prompt?
>>12179 true but jesus christ i don't want to click through dozens of dialogue about muh starving artists vs ai memes just to learn about this shit is there an actual tutorial around?
>>12181 why do you even need a tutorial? i just checked the github and the install is explained there and everything else is pretty self explanatory
did anyone have trouble with the new easy scripts update yet
>>12183 it'd be great if nobody does. I tested update_torch.bat 100 times for updating to torch 2.1.0, and the new installer had a ton of testing too.
git pulled and now my x/y/z plots don't save the x/y/z format in the metadata for png info. any way to turn this back on?
>>12183 cudnn.zip got flagged as a trojan when I tried to update torch version but after I flagged it as safe and ran it again it was fine
>1megapixel training
>3 batch
>18gb of VRAM being used
ah yeah baby this 3090 was worth it
(2.21 MB 1024x1536 catbox_u0f6ha.png)

(2.14 MB 1024x1536 catbox_zpuquk.png)

(2.10 MB 1024x1536 catbox_nvu1gh.png)

>>12186 yeah, it's the only mirror I know of for the cudnn 8.6 version. I do need to see if I can find a better way to host it, but Nvidia will get angry likely if I just directly host it on github like I did with the xformers wheels I built.
Also I had a lot of weird errors when updating torch, just the stuff regarding the cu118 update: saying yes to it leads to it "not being available", but since 2.1.0 and 2.0.0 come with cu118 anyway it should have been installed regardless, and that option was just for people who installed pytorch 1.6
>Been using colab for ages
>Set up my PC to see if it can generate a picture with default webui settings
>1 minute to generate a normal euler 512x512
>CPU and GPU cooking themselves to do so
lmao
>>12185 is anyone else having this issue? it's driving me nuts
I'm gonna get the 3090 in soon (as soon as I find a table I can build this thing on, maybe the 7000D is a bit too big lmao), what can I do to squeeze the most IT/s out of it without needing to use the tensor cores thing?
So apparently yeah cp decomposition doesn't fry LOHAs, it's just Locons
(8.51 KB 440x190 SlightImprovement.png)

So at https://github.com/opparco/stable-diffusion-webui-composable-lora The author makes the claim: "Eliminate the impact on negative prompts With the built-in LoRA, negative prompts are always affected by LoRA. This often has a negative impact on the output. So this extension offers options to eliminate the negative effects." I've been messing around with it a bit, just the not affecting the negative prompt option not using AND or composition stuff. Deselecting "Use Lora in uc diffusion model" noticeably changes the art style - for the worse from my testing, so the author isn't correct on that front (at least for my test lora and prompt). But deselecting "Use Lora in uc text model encoder" does change the overall image composition in a way that, from my limited testing, slightly improved prompt fidelity. However, it's hard to say for sure either way. Anyway, I just wanted to bring this to y'all's attention so if anyone else wants to test it they can.
>>12195 >With the built-in LoRA, negative prompts are always affected by LoRA. This often has a negative impact on the output. So this extension offers options to eliminate the negative effects." What about with kohya's extension?
(877.48 KB 768x1056 UseinUCDiffuserOFF.png)

(926.85 KB 768x1056 LoraUCTextEncoderOFF.png)

(868.68 KB 768x1056 DefaultBothON.png)

(915.91 KB 768x1056 BothOFF.png)

>>12195 Here is a comparison between the default (all check boxes on is the same behavior as if you didn't use the extension) vs with either check box or both check boxes off. As you can see with the uc diffuser checkbox off the image is noticeably flatter and less detailed. However, with just Use lora in uc text model encoder off, the composition is improved, especially the hand/arm position and the coherence of the background. >>12196 From testing the composable lora does NOT recognize and affect LoRAs from kohya's extension. The LoRAs have to be in the prompt for the checkboxes to affect them.
>>12197 This makes me want to play shadowverse but I'm tired of the current meta and I just want the new pack to get here
>>12197 But you'll notice as well that turning off both definitely improves the stripiness of the pantyhose, which is desired. So maybe the composable lora guy was onto something. As for the flatter art style, it's not necessarily a bad thing, and might be able to be compensated for by adjusting the prompt. I will have to test more.
(921.80 KB 768x1056 Default.png)

(995.57 KB 768x1056 TextEncoderOFF.png)

It's hard for me to say for sure which is better - in this example the composition seems better with the default behavior.
>>12193 I bought a full tower for this reason and its a pain in the ass to find a place to place it lmao
(999.16 KB 768x1056 EncoderOff4.png)

(975.02 KB 768x1056 Default4.png)

>>12200 And in this one as well. Note that the stuffed animal, which is an accessory of the character that is present in a lot of training images, is not prompted for in this case - so it showing up is a bad thing. It looks like just sticking to the default behavior may be for the best after all. In any case, the difference is certainly subtle enough that it's not immediately obvious which is better, which means that there's probably no reason to switch from the default.
(1.06 MB 840x1256 catbox_6xuu52.png)

(1.29 MB 840x1256 catbox_9ikqll.png)

(1.23 MB 840x1256 catbox_kfo6s2.png)

(1.19 MB 840x1256 catbox_lti7vt.png)

>>12195 Yeah I've been shilling this extension for a while especially for style loras. I just uploaded some comparisons for one of my loras I just uploaded and without composable lora enabled. Toketou https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/SZsxWTRK Bonus (still need to make sample images): Esearu https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/TUkxRJhS
>>12201 I sorta have the space for it, I just don't know where to actually assemble it. I have a bad back so sitting on the ground is a no-no (plus we got a dog and there's hair everywhere) and my only choice is lifting it up on the kitchen table.. with a bad back.
>>12204 HOLY SHIT CATBOX RIGHT NOW
>>12203 Hmm yeah I've only been testing character and concept loras, not style ones. Might be a very different situation for style LoRAs - though it's too bad you can't decide on a per-lora basis in that case, since it seems good for character loras. One thing about character loras is that I always find them to be far more effective in conjunction with a TI embed, even when trained on the same images. Does anyone know of any TI embed training methods or scripts better than the one already in webui and the monkeypatch extension? I'm still waiting for kohya or whoever to support using TI embeds DURING lora training so I can experiment with "pivotal tuning." I'm extraordinarily curious how a lora/embed combo will perform if the lora is trained with the embed present.
>>12206 https://files.catbox.moe/sb80qz.png inpainted the face a few times
>>12207 Actually, it looks like Kohya SS has a TI training script as well, but I always trained LoRAs using the easy training script from anon. Anon, if you're here, does your easy training script have any support for TI training using kohya? The main advantage I'm after is better control over parameters and also the binning technique for pictures.
(1.35 MB 1152x1152 catbox_m9eg15.png)

(1.43 MB 1152x1152 catbox_kmsmm7.png)

(1.20 MB 960x1152 catbox_qh6f6v.png)

>>12123 i still need to train some LoRAs for the less easily promptable 2hus (maybe joon and shion next?)
(396.63 KB 512x512 catbox_9y458w.png)

Wow! This hutao lora is really good!
>>12211 >joon I sleep. >shion I'M HARD
>>12211 What style lora is this?
>>12214 according to the catbox, its Ryusei Hashida
>>12213 same tbh but I'd have to do both because they're a package deal (and I'm sure a lot of people would appreciate Joon too)
>>12216 yeah that'd make sense since you'll need to crop the images. are you gonna do 2 concepts in 1 lora?
>>12217 probably separate but I'll likely still experiment with multiconcept, will also give me an excuse to go back and also try to make sukuna + seija multiconcept
>>12187 How do you do 1megapixel training?
(3.12 MB 1536x2112 catbox_jhh4bl.png)

>sigh
I'll be honest, up til now I've been generally skimming past those there brown maids. What's the wisdom on consistency in skin tones? Tanned/dark skin is getting some varied results.
It's decent I guess
>>12221 I combine "dark skin" and "tanned", and put "light skin" and "pale skin" in negatives to get more consistency. You can adjust the strengths and tag order to try and control it a bit more, but that's just my method
>>12210 not directly, I planned to add support eventually, but things kept coming out for lora, so I put it on the backburner. I believe that I can change it to support TI pretty easily though, probably won't be immediate though
>>12222 Playing a bit with the coin lora thing. It's not very good to be honest: all it really does is spawn a coin at a random spot, and I need to take care of the eyes and everything else myself.
>>12219 train at a resolution greater than or equal to 1024x1024 pixels? one megapixel is just a million pixels
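Since "one megapixel" is an area budget rather than a fixed resolution, the arithmetic can be sketched like this - a rough approximation of kohya-style aspect-ratio bucketing (the function name and aspect ratios are illustrative, not kohya's actual code; real buckets also snap each side to a multiple of 64):

```python
# Approximate area-based bucketing: keep width * height near one
# megapixel (1024*1024) while snapping each side down to a multiple
# of 64, roughly what kohya-style trainers do internally.
def bucket_for(aspect_w, aspect_h, max_area=1024 * 1024, step=64):
    scale = (max_area / (aspect_w * aspect_h)) ** 0.5
    w = int(aspect_w * scale) // step * step
    h = int(aspect_h * scale) // step * step
    return w, h

print(bucket_for(1, 1))   # square bucket -> (1024, 1024)
print(bucket_for(2, 3))   # portrait bucket under the same ~1MP budget
```

The point is that a 2:3 portrait bucket comes out around 832x1216 - same pixel budget, different shape, which is why "1 megapixel training" covers both square and tall images.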
(1.52 MB 1024x1024 catbox_dnckb1.png)

(1.41 MB 1024x1024 catbox_tjxq9i.png)

(1.52 MB 1024x1024 catbox_xgzddc.png)

(1.41 MB 1024x1024 catbox_6f3w5a.png)

https://civitai.com/models/21365/marionette-lycoris >Find a nice looking lora >It's a loha model >Lohas dont work with my UI still
(2.07 MB 1024x1408 catbox_t9nlwl.png)

this misawa hiroshi lora is fucking incredible
Jesus this 3090 wants more juice when training compared to my 3080ti, it loves to go over the limit I set in afterburner sometimes
(1.89 MB 1024x1536 00033-701204414.png)

(1.85 MB 1024x1536 00011-3530353664.png)

(2.03 MB 1024x1536 00000-1120374850.png)

(2.12 MB 1024x1536 00116-1619748399.png)

(1.32 MB 1024x1280 03536-4177094181.png)

The incoming kiss lora is kino
>>12232 Share
>>12234 10/10 frfr
(1.85 MB 1024x1536 catbox_qu3ufo.png)

>>11276 Im addicted to this position and the Wamudraws lora
(2.12 MB 1024x1536 00116-1619748399.png)

>>12238 glad you like it and thanks for the catbox; didn't even think of this position. gonna use it for future previews
>>12238 Prompt some more of that pose
>>12147 I have been meaning to do this but unlikely to get to it soon. A POV one especially should easily mesh with character LoRA since the girl's on nearly full display and front-facing. >>12154 I assume I can hijack this just to set up venv and then go on my way? >>12195 In my tests it varies, some character/concept LoRA fare better when allowed in the unconditional/negative prompt and some don't. I don't generally use style LoRA so haven't tested that. I think the default is actually that it disables LoRA application to the unconditional prompt, which is uh, an aggressive choice. Last I checked it isn't even recorded in PNG info. >>12225 https://huggingface.co/SenY/LoRA Guruguru is nice.
>>12241 >Guruguru is nice. It never really works for me, the eyes rarely change.
is the anon that made the archive script around still?
>>12241 Nah the default definitely allows it (at least the default < > implementation, dunno about Kohya). I've tested it, the outputs are identical (to within xformer diff) if you have both checkboxes on vs. not enabling the extension.
>>12242 Odd, maybe your model(s) have strong eye preferences? >>12244 Oh, guess I turned it off and WebUI remembered that setting. So I'm the monster making unreproducible catboxes.
>>12245 What models do not?
What does separate TEnc and UNet do? I've set TEnc to 0 on a lot of loras and all it does is make the picture slightly worse?
>>12247 nvm solved my own problem lol. was trying to remember what your github page was so i went through the previous threads and searched for "soup" and found the link to update the script. updated the script and had to install dateutil, but pip kept telling me the package didn't exist. apparently instead of dateutil, it's called python-dateutil. anyways, it's fixed now, but glad to know you're still alive
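For anyone else who hits the same pip confusion: the PyPI distribution name and the import name genuinely differ for this package, which is exactly what tripped it up. A minimal check (assumes python-dateutil is installed):

```python
# Installed as:  pip install python-dateutil
# Imported as:   import dateutil   (the two names really don't match)
from dateutil import parser

dt = parser.parse("03/16/2023 06:38:08")
print(dt.year, dt.month, dt.day)
```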
>>12248 Here's a simple and lazy prompt on B65 that has literally two words. It worked on 2 out of 12 here, and usually doesn't work at all when there are more prompts or complicated stuff
>>12251 NTA but b65 has been pretty shit imho, b64v3 works much better
>>12251 Like most eye stuff I don't think you can do it very well with 512x512 unless it's a close-up, that's why the ones I posted were upscales for the most part. You can see the spiral starting to form in 768x768 though.
(3.07 MB 1536x1536 lost in pip.png)

>>12254 i hope she finds her way
(2.01 MB 1024x1536 00469-4276655415.png)

>>12252 i wouldn't call it shit, but i definitely still prefer based64 over it. to be fair, i'd blame it more on AOM3 being part of based65 than on based65 itself
>>12253 896x896 on B64v3
>>12258 Starting to see it at least. You definitely weren't going to get perfect spirals out of it but I thought it sufficed.
>>12258 Forgot to add that it's still pretty inconsistent. It'll work when it works but it generally does not. The empty eyes lora from civitai was pretty shitty too so the best thing I have still for empty eyes is AOM3 and/or the Musoduki lora until someone makes a proper empty eyes lora. People here got Nahida's eyes and Hu tao's pupils so surely a lora that removes all detail from eyes and flattens them should be easy to make right? >>12259 Meh, I generally prefer just using (Empty eyes) and not throw an extra lora into the mix since guruguru rarely ever works.
>>12260 Ah well. There is an old empty eyes lora on shivitai, but I haven't tried it unlike seny's guruguru.
>>12261 she looks found to me
(2.33 MB 1024x1536 catbox_nl3ywi.png)

(2.16 MB 1024x1536 catbox_90105x.png)

(2.22 MB 1024x1536 catbox_zhi8sb.png)

>>12229 The source material is just one league above most hentai artists
(191.10 KB 768x768 over 8000 hrs in paint.jpg)

>>12263 I tried.
>>12266 Really good stuff honestly, recommended. Just make sure to specify nipples, bra or whatever or else you'll get a girl chewing on her shirt like me when i'm bored
(9.44 MB 3456x4224 68041.png)

>I am the first person to post anything for this character on pixiv in almost 3 years image gen is too cool bros... wish it had turned out better but i was tired of working on it
>>12271 Noice. One of my chars was the first in ~2 years, wonder what the chan record is with stuff like old megaman characters or whatever.
(1.55 MB 1024x1536 00011-2986108883.png)

(1.72 MB 1024x1536 00014-2547298676.png)

(2.32 MB 1024x1536 00046-2200069253.png)

>>12239 >>12240 Forgot to mention, I was a bit lucky with that first post. The pose could really use a lora if it doesn't have one already.
(1.91 MB 1024x1536 00012-1529675979.png)

>>12275 Nice, thanks anon
>Download a lora made by a JP dude >Every image is body horror >Download his sample images >The clown stripped exif data so I cant use them or copy them
>>12280 many such cases
>>12280 I'll never understand the Jap mindset of "muh personal promptos donut embed" like c'mon man the shit's free and costs nothing to keep on there
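The prompt lives in a PNG tEXt chunk named "parameters", which is what the catbox script and WebUI's PNG Info tab read; a quick Pillow sketch shows what survives and what a re-encode strips (filenames and the sample prompt here are made up):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# A1111 WebUI writes generation settings into a tEXt chunk called
# "parameters"; saving without pnginfo is effectively what metadata
# strippers and most re-encoders do.
meta = PngInfo()
meta.add_text("parameters", "1girl, maid\nSteps: 28, Sampler: Euler a")

Image.new("RGB", (8, 8)).save("with_meta.png", pnginfo=meta)
Image.new("RGB", (8, 8)).save("stripped.png")  # no pnginfo, prompt gone

print(Image.open("with_meta.png").info.get("parameters"))
print(Image.open("stripped.png").info.get("parameters"))  # None
```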
(1.07 MB 768x1360 catbox_qnvgnp.png)

Hydrus chads, have any of you ever set up the server function? I have a friend I want to be able to look at and manage my hydrus database.
>Got cocky and started updating every day >Suddenly everything breaks in a million pieces and the only fix is to revert to an old backup install
>>12287 the fuck did you do?
>>12288 Idunno lol Everything's "Transport endpoint not connected"
>>12289 Forgot to mention I'm on colab. It was working fine before, but I switched accounts to get over the limit and the new account started shitting itself no matter what I did.
https://github.com/TheLastBen/fast-stable-diffusion/issues/1805 Looking at github, it seems that the colab guy broke something and i'm shit out of luck until he fixes it Guess no more proompting for me for a while.
>>12291 you should probably sell a kidney or start sucking dicks for a dollar each until you have enough to buy a X090 card.
>>12292 Being poor is rough anon
>>12293 economy is collapsing anon, just debt max everything while you can.
>>12212 link?
So 1088x1088 training is more sharp than x768, 24gb VRAMchads we won!
>>12297 what happened and what do I need to read?
>>12298 I was just testing how much 1-megapixel training I could do before using up all the VRAM on my 3090. With batch 3, 1088x1088 was the fastest res, using 23GB of VRAM
>>12297 >>12298 More sharp is news to me. The ability to see finer details has been demonstrated but I'm skeptical a LoRA can improve general resolution much. 768x didn't come cheap for the base models
>>12300 If anything setting the height to 1 megapixel is something 12gb vram users can at least do, setting both width and height to it is something else that I need to test with a detailed character like Nahida to see if it has actual benefits
I've tried 1280x1280 training and it did work. With buckets at 832x1280 it finally fucking learned the eyes!
>>12301 If this can be applied to finetuning i'd be fucking happy.
>>12301 What do you mean? megapixel is describing the area of ~1024x1024. If you're setting reso to "1024" kohya is automatically making it 1024x1024 for his internal calculations.
>>12303 Pretty sure you can set the height different to the width, unless that doesn’t work and it’s always been 1052x when I was trying to do 768x1052 because that is where I got my most comfy speeds from with 12gbs of VRAM
>>12305 Code dive was here >>10193
>>12306 Ok that makes sense then
(1.62 MB 1024x1536 catbox_tmera1.png)

>>12232 >>12234 Yeah this is CUTE and great.
Wondering if I should sell my 3080ti for another 3090, if I wanted to do the meme SLI I heard that it's easier to do with the same brand of card and that's going to be a needle in a haystack for now
>>12309 you can only NVLink (the "new" SLI they discontinued) 3090s, so yea, might as well do that.
>>12310 Yeah I just have to find ones that will bridge easily right away, I’m only going to do it if I find one under 1k like the one I just got
>>12311 which 3090 do you currently have?
>>12312 MSI ventus, I'm trying to look for others under 1k like it but nah no luck so far, the only one I found with the service I used is the Asus ROG STRIX 3090 >31.85 x 14.01 x 5.78 cm >38.5 × 19.1 × 10.6 cm yeah feel like NVlink wouldn't work without doing BS
>>12313 I really wish there was a list of NVLINK compatible 3090s when mixing up, would make this easier
>>12241 You don't even need to hijack the installer, I created a batch file that just sets up the venv called update_torch.bat. just grab that if you want to update your venv and change the folder location, assuming you aren't using my scripts.
>>12314 or maybe I just wait for black friday and hope 4090s are on sale or shit, kek probably not. I'll probably keep the 3080ti for the current PC I am using and build a new PC to fit in the 3090 I got and 4090 in the future tbh
>>12316 You can't NVLink 4090s, and you sure as fucking hell won't have room to double up 4090s on a normal ATX board.
>>12317 yeah, the 3090 will be used for games on the new pc while the 4090 will be used for AI. That's later on though however if I can find a ventus 3090 for the same good deal I got I'll get it and sell my 3080ti. Having NVLINK would be cool even if it is a dead art.
>>12318 >3 nvida gens later "Hey anon why do you still have such an old gpu for your pc? And what is in that extra room where no one is allowed inside?" "Just my server farm for stable diffusion... it cost a lot..."
>>12319 the electric bill costs a lot knowing how much this 3090 wants to eat even when I undervolt it kek, I heard some dudes recommend 1200W+ PSUs for dual 3090s, and PSUs, especially the current ones, are a pain in the ass price-wise once you go over 1k watts
(1.76 MB 1024x1536 catbox_28ockl.png)

sometimes you find gems between all the shit on civitai
>>12320 I wanted a 1600W ATX 3.0 PSU to future-proof but the only one I know of with a 12VHPWR port is the GF3 1650, and apparently it caused the 4090 of someone on (this) /hdg/ to crash during ML workloads, so uh no thanks. Thor 1600W is gaudy as fuck and uh.. 750€ PSU. Without a 12VHPWR port. Yeeeeeeeeeeeah no. EVGA's current offerings aren't ATX 3.0 certified either iirc. I paid 350€ for the MSI AI1300P and I still feel ripped off but whatever.
>>12322 yeah I think I'll just wait till Black Friday to build a new full tower PC that can support a 3090 and 4090, I know the 4090 won't be on sale but if I can find a 1600w PSU that doesn't have a cancer price I'll keep it in mind.
>>12323 This is my build, just added a random 4090 on top https://it.pcpartpicker.com/list/MVv9gb Should draw 1200W after power-limiting the ryzen and the 3090 slightly, 1300W are probably fine, especially if you don't have NAS drives or retarded shit like screens on AIO pumps
>>12324 didn't want the X3D version of the 7950x? Or did you build the PC before it was released?
>>12325 Both, X3D is slower in ALL non-gaming tasks except 7zip (big deal), you can't power limit it, need to babysit it constantly and play the core affinity game, need to shut down half the CPU if some weird incompatibility arises and your game doesn't want to use the 3d cache cores by default, etc etc As for gaming? Sure the extra frames are nice but it's also 200 to 300€ more than what I paid for my 7950X + extended warranty JUST IN CASE
>>12326 Yeah I heard that's the case and given the only games I still play are the gachas I dug myself into a grave for it doesn't seem worth it to me, gamers getting themselves ripped off kek
>>12327 Lmao you bought the x3d? Can't you just return it and get a non-3d?
>>12328 No I didn't, I was looking at the benchmark videos for it and what you said is the case. Especially given the price point of the 7900x3d vs 7950x3d it's just a scam compared to the non 3D cache versions.
>>12329 I wouldn't consider either scams but they're definitely abysmal value, the 7800X3D is gonna be the one people should buy if they want the 3D cache. If you're the kind of guy who knows what he's doing and doesn't mind wrangling the cores you CAN effectively have two CPUs in one with the 900/950X 3Ds, cache cores can be used while gaming while the others can do something else in the background, maybe rendering video or something. But for the average user? This shit is NOT plug and play (also the install procedure is a mess and I'm sure you've seen all the "requirements") 7950X without the extra shop warranty was 609€ (lowest I could find, usually 650€ +- 10€), 7950X3D is anywhere from 869 to 909
>>12330 Yeah I'm pretty much set on the 7950x for the 2nd PC I'm building later on, I'm just going to hope that the sales are good by black friday so I can have an easier time spending and building.
>>12331 and maybe by that time video AI making will be less shit than what it is right now >shutterstock watermark on one of the current video AIs available
>have 3090 >still making shit LORAs I swear it's just the fact I'm messing up the balancing of this multi concept LORA, it's like 6+ folders
>>12334 When it comes to multi concept lora, you definitely need to be careful about the balance, and especially the tagging, of the dataset. So definitely makes sense if the balancing of the dataset is your problem
>>12331 DO NOT DO NOT GET 4 STICKS OF RAM THEY DON'T WORK AND IF THEY WORK YOU'RE LIMITED TO 3600MHZ GET 2
>>12336 Gotcha, that shit is weird I was hoping to get 64gb with 4 16gb sticks but I guess 2 32x will work better
>>12337 AMD CPUs work better with just 2 sticks of RAM. This was something I learned when mining XMR. basically get the biggest 2 stick set of RAM you can afford.
>>12338 Before all this AI stuff I would have been fine with all the budget stuff but now I want a server room kek
(2.07 MB 1024x1536 catbox_8fkpym.png)

(2.07 MB 1024x1536 catbox_642i55.png)

(2.14 MB 1024x1536 catbox_u9ndos.png)

(2.32 MB 1024x1536 catbox_fpxlx7.png)

>>12333 Yeah that's hot but I wasn't blessed by the gacha gods, it could also be a skill issue.
>>12341 That last one is pretty damn good. Assuming you were going for multiple views. Not that I dislike any of them, I In fact, like all of them
>>12342 Yeah I wanted multiple views but I did get some decent gens at least. The bear is not related.
(1.77 MB 1024x1536 00010-2190023754.png)

(1.71 MB 1024x1536 00061-993781104.png)

(1.63 MB 1024x1536 00120-1394167061.png)

(1.78 MB 1024x1536 00192-4159358084.png)

>>12344 Interesting style, I'll give it a try.
>wanted to sleep 6 hours ago >awake for 25 hours >wanted to get up in 12 hours and spend the entire night building the pc >suddenly find myself RMAing two ram kits because DDR5/AM5 is retarded and ordering a new one
>>12343 Bears are good, but I just can't get away from foxes. Especially my OC fox. Other than that, I always enjoy your posts
Well I got lucky. Found another 3090 same brand and under 1k, guy took off 100 if I pick it up today. My NVLink dreams are coming true!
>>12296 That was a joke, which is why the catbox is there. You can type Hu_tao_(Genshin_Impact) on based65 and it actually draws the character
(2.17 MB 1024x1536 catbox_u17wi5.png)

(2.24 MB 1024x1536 catbox_hp9x9j.png)

(2.11 MB 1024x1536 catbox_e7g66t.png)

(2.00 MB 1024x1536 catbox_d9g791.png)

>>12347 Thanks, I like your fox OC as well.
>>12351 I'm stuck at work for another 3 hours, but I'm definitely gonna gen some more when I'm home. I've got a friend that might die of cunny starvation if I don't give him some.
(2.11 MB 1536x1024 catbox_kmc5ej.png)

>>12352 >I've got a friend that might die of cunny starvation Well that's a problem.
>>12353 Don't you worry, he enjoys looking at the cunny people post here (as do i of course), I wouldn't be surprised if he bitches at me for calling him out on the board
I need to pick new RAM and the best I can get are either CL30-40-40-96 or CL40-40-40-77, both 2x32GB 6000MHz kits, from G.Skill and Corsair respectively. 7950X so more receptive to better RAM. Which should I go for? They're the only two kits I can get here aside from Kingston (lol, lmao) I had two 2x16 kits but I just learned (>>12346) that AM5 supports two sticks at best cause every stick is apparently TWO sticks and dual channel is the new quad channel.. so why the fuck do X670E/AM5 boards in general have four slots? what the fuck?
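For what it's worth, the first timing number is CAS latency in clock cycles, and at the same transfer rate it converts directly to nanoseconds, so the CL30 kit is simply the faster of the two:

```python
# Absolute CAS latency in nanoseconds from CL and transfer rate:
# latency_ns = 2000 * CL / rate_MT_s
# (DDR transfers twice per clock, hence 2000 instead of 1000)
def cas_ns(cl, mt_s):
    return 2000 * cl / mt_s

print(cas_ns(30, 6000))  # G.Skill CL30 kit  -> 10.0 ns
print(cas_ns(40, 6000))  # Corsair CL40 kit -> ~13.3 ns
```

Secondary timings matter too, but a third faster on first-word latency at the same speed and capacity is hard to argue with.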
>>12355 all RAM is made by 3-4 companies and kingston isn't one of them, there's no reason to be snooty if it's a good kit
>>12348 I'm jelly, hope you have the space and the cables for it. >>12355 the 2 stick dual channel shtick has been a thing even with previous gen. DDR5 is a meme but if you got the latest board well then you have no choice. Only one specific >>>>>Intel board can let you use either all DDR4 or all DDR5.
>>12356 >all RAM is made by 3-4 companies and kingston isn't one of them, there's no reason to be snooty if it's a good kit https://www.kingston.com/en/memory/client/ddr5-5600mts-non_ecc-unbuffered-sodimm
>>12359 i don't know anything about ddr5 i'm still using a 4560
>>12356 But they're not good kits
>>12358 Still on ddr3 so I didn't know about the dual channel sticks
>three different wagashi loras on gitgud so which is the best out of them?
>>12260 How do we get any of the eyes people to work on an empty eyes Lora? Anons please....
>>12358 I still don’t know if it will work for LORA training but didn’t voldy add multi gpu support for the webui?
hll4 "beta" is out https://pixeldrain com/u/fSGpbE3j
>>12368 >beta Based66-Betamix
>>12367 Is that a Blame! LoRA? God thank you.
>>12369 troll or not I get some weird gens with that model
>>12372 Did you do a Sakura Miko test
>>12369 what kind of model is that?
>>12371 https://mega.nz/folder/5vAAhCgA#nC9wdlcmN2Z08D2V4M9oQw honestly pretty surprised it turned out as well as it did
>>12368 hll anon or reposting from /vtai/?
>>12374 vtuber finetune model. vtai anon is pretty based and one of a handful of finetuners on 4chan that I'm aware of.
(2.00 MB 1024x1536 catbox_s71tio.png)

(1.63 MB 1024x1536 catbox_0bmomm.png)

(1.61 MB 1024x1536 catbox_wxsp3i.png)

>>12377 a vtuber is fine too I will look forward to the new based mix
>>12375 Oh you already have it uploaded, thank you. It's gonna take a while before I get to trying it out, but I wonder how well this can generate the same environments but with color. Or perhaps it can combine with a different style at lower weight so it's like Nihei combined with X for some interesting types of scenery.
>>12378 Where do I get this artist lora?
(1.54 MB 1024x1536 catbox_25h11f.png)

(1.61 MB 1024x1536 catbox_upc967.png)

>>12380 just something I am currently testing If satisfied I will upload it
with the dynamic thresholding addon i'm finding that the opposite of the wiki advice is good too >cfg 15 >mimic cfg 5 >half cosine down, min value 3
>Reinstall >Doesn't work >Reinstall >Doesn't work >Reinstall for the third time changing literally zero things >Works God I dont understand this shit It's like black magic for real
>>12383 meant cfg 5, mimic cfg 15
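If anyone wants to eyeball what that mimic schedule actually does over a run, here's a hedged sketch of a "half cosine down" curve decaying from the mimic scale toward the minimum value; the function name and exact shape are my guess at what the Dynamic Thresholding extension computes, not its real code:

```python
import math

# Decay from cfg_start to cfg_min over the sampling steps along the
# first quarter of a cosine (weight 1 -> 0), i.e. "half cosine down".
def half_cosine_down(step, total_steps, cfg_start=15.0, cfg_min=3.0):
    t = step / max(total_steps - 1, 1)   # progress 0 -> 1 over the run
    w = math.cos(t * math.pi / 2)        # weight 1 -> 0
    return cfg_min + (cfg_start - cfg_min) * w

# 5-step example: starts at 15, ends at the min value 3
print([round(half_cosine_down(s, 5), 2) for s in range(5)])
```

So the high scale only applies early in sampling and the tail steps run near the minimum, which is presumably why flipping which value is "cfg" and which is "mimic" still gives usable results.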
>>12382 Gotcha, I want to pick his brain on what new stuff and settings he did this time around.
I managed to get Loha/Lycoris working on Google Colab and the best part is that I have no idea what I did! It simply works now and the fact it does makes my head really hurt.
>update python to 3.10.10 from 3.10.8 >training script breaks because python devs are fucking retards >no repo carries the 3.10.8 binaries anymore, again because why WOULD we carry 2 month old binaries just in case our update fucks shit up >downgrade to 3.8.10 (ubuntu 20.04's default python version) >dump unet extension breaks because it uses a python >3.9 syntax >install pytorch and recompile xformers FOR THE THIRD FUCKING TIME >finally working god I fucking hate python. Why do people keep introducing more and more unnecessary and absolutely retarded """"pythonic"""" dogshit syntax
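I don't know which construct that extension actually used, but the usual ">3.9 syntax" trap is PEP 604 union annotations: evaluating `int | None` in an annotation raises a TypeError before Python 3.10, while the `typing` spellings work everywhere, which is exactly the kind of thing that breaks on a downgrade to 3.8. A minimal illustration:

```python
import sys
from typing import Optional

# One common ">3.9" breakage: "x: int | None" evaluates "int | None"
# at function-definition time, which raises TypeError before 3.10.
# The typing.Optional spelling below parses and runs on 3.8 onward.
def halve(x: Optional[int]) -> float:
    return 0.0 if x is None else x / 2

print(sys.version_info[:2], halve(5))
```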
>>12379 i didnt try anything with color, but ill upload some lower epochs too in case those might help
I feel like AI doesn't do glass well. I always try to do girl in a box or girl inside a display case, but it's really hard to get glass that looks like glass, so it always ends up looking like an open case or open box
>>12291 It seems google is changing something and there's drive errors going around. I wonder if they'll just tell both text and image AI colabs to fuck off and then poorfags will have to sell a kidney if they want to prompt again.
>nigger asking for my 3080ti only for 500 dollars because of the 4070ti Fuck all of the 4000 cards that aren’t the 4090
>>11316 >>11317 Where do I get the updated version of this lora?
>>12395 >I feel like AI doesn't do glass well train it on images with glass refracting light and it'll do it perfectly
>>12401 Train what? A glass lora? A Model?
>>12397 Trying to mess around with weights if I can make it look more like the artist
(1.43 MB 1024x1024 catbox_ffvnyh.png)

(1.45 MB 1024x1024 catbox_o9n4hf.png)

(2.07 MB 1024x1536 catbox_6xus97.png)

(2.17 MB 1024x1536 catbox_z5l14r.png)

Corn O clock
>>12409 But can she masturbate with the corn or ride the corn
(1.28 MB 840x1256 catbox_9pyeya.png)

(1.17 MB 840x1256 catbox_ri065v.png)

(1.35 MB 840x1256 catbox_p28hhb.png)

(1.35 MB 840x1256 catbox_xlicu5.png)

>>12368 Huh I thought this was a hololive model but it has tons of vtubers. I even got a bit of Nanahira out of it
(1.06 MB 840x1256 catbox_23uy2x.png)

Any tag/prompt suggestions if I want a picture from the front? Not the sides, not the back, not slightly from a side, directly from the front.
>>12402 i was speaking generally, saying "AI can't do x" is dumb because it can, if trained
(2.80 MB 1344x2016 catbox_wc7f24.png)

my maid keeps setting up dildos near me and sitting on them and giving me this look, is she trying to establish dominance? should i swat her with a magazine?
(1.84 MB 896x1592 catbox_ax0y0d.png)

(1.76 MB 896x1592 catbox_4hmdds.png)

(1.83 MB 896x1592 catbox_takjy6.png)

(1.88 MB 896x1592 catbox_txsuh6.png)

>>12428 and as usual, the more effort i put into it the more fucked up i notice it is once i'm done
Today I have learned that ((horrified, despair, horror, tears, crying:1.2)) makes your characters look like screaming banshees
>>12315 Jury-rigged it in my kohya folder with sketchy triton even (I call it sketchy because we didn't build it, mind you). Got a CUDA kernel image error which is interesting, I'll dig into it for a bit. >>12336 >>12338 Intel chads stay winning. But yeah, infinity fabric is very sensitive to memory timing. For intel I just slapped 4x DDR4-3600 8 GB sticks from the same product listing and they live comfily side-by-side even though they're actually from 2 different lots. For that 5% gain from simulating dual rank or whatever. >>12418 straight-on I have not actually tested it, but it is the appropriate tag.
I feel like every once in a while I will let my degeneracy take over, prompt the most fucked up shit I can conceive, then enter a cooldown period where I don't feel like prompting anything at all for days, almost in a state of 'im so fucked up'
>>12365 Not true multi-GPU AFAIK, you can run them side-by-side though
(836.66 KB 840x1256 catbox_404y35.png)

(965.20 KB 840x1256 catbox_aoax12.png)

>>12431 Opossumachine lora https://mega.nz/folder/2FlygZLC#ZsBLBv4Py3zLWHOejvF2EA/folder/iB1zTQAI Definitely not related to any other similarly named loras
>>12432 Haven't been able to wrap my head around this cuda error, bitsandbytes/kohya github issues suggest it is down to a CUDA version mismatch. Which is weird because I never had CUDA installed back on good old torch 1.12.1+cu116 (which still works fine).
>>12389 I think on google colab you only need to install the extension and use the LoHa in the prompt, since Additional Networks didn't support LoHa yet. Been there too
>>12437 It's just frustrating because google colab has two states >This doesn't work and I have no idea why >This works and I have no idea why
Haven't been paying much attention for the last two or so weeks. Can someone give me a qrd on locon/loha/lycoris/whatever and how it differs from lora? Are they any better or worth bothering for a non hardcore training autist at their current state?
>>12439 Not really worth using unless you like cutting edge stuff or trying new things.
(2.28 MB 1024x1536 catbox_hrwrs3.png)

(2.20 MB 1024x1536 catbox_kfgr6b.png)

(2.21 MB 1024x1536 catbox_aa7wp2.png)

(2.26 MB 1024x1536 catbox_iuavm5.png)

>>12428 >>12429 >>12430 Very nice, also that time again.
So trying to find an AM5 board that has SLI support is a shit show, inb4 NVlink for 3090 is dead for the next form of AMD motherboards
(1.62 MB 1024x1536 catbox_7k7rk7.png)

>>12442 is there a reason for putting such nice pics behind a spoiler? or have we become so prude that a big belly is considered /d/-tier?
(1.42 MB 1024x1536 catbox_tlmubr.png)

(1.50 MB 1024x1536 catbox_xm0d97.png)

(1.90 MB 1024x1536 catbox_q3atnr.png)

(1.73 MB 1024x1536 catbox_afdfv0.png)

>>12446 man I should prompt more maids
I think I fucked up my finetune training and have to start over…
(2.18 MB 1024x1536 catbox_lwd8ek.png)

(2.23 MB 1024x1536 catbox_9ghjca.png)

(2.33 MB 1024x1536 catbox_cuoczj.png)

(2.15 MB 1024x1536 catbox_1yde23.png)

>>12446 >>12447 >have we become so prude that a big belly is considered /d/-tier? I remember getting a bunch of (You)s telling me to go to /d/ a few months back on the other /hdg/ but I guess that was stuck in my mind for some reason and some people just don't want to see preggo. >man I should prompt more maids Yes.
>I lost 14 hours worth of training over a mistake fuck... I hate how long finetunes take....
>>12442 god why can it do outie bellybuttons perfectly but not inverted nipples it's not fair bros
>>12452 Be the change you want to see and make the lora!
>>12453 maybe I will...
>>12439 I haven't trained any myself yet but of the ones I've downloaded I haven't seen any major quality difference vs. plain loras. I get the feeling most of the advantage is theoretical and doesn't really carry over so much in practice.
Time to figure out what to do for the next 35 hours while finetune trains.
>>12456 Pick up a new gacha
>>12457 I’m already taking a break from FGO kek
(1.83 MB 1024x1536 catbox_hfsis1.png)

(1.73 MB 1024x1536 catbox_trgm11.png)

(1.54 MB 1536x1024 catbox_eu9lm6.png)

>>12449 I will never understand the reasoning that the fruit of love should be on the same board as penis vore or worse... >>12458 download and clean the next dataset!
(1.94 MB 1536x1024 catbox_s3i9ts.png)

(1.65 MB 1024x1536 catbox_0tl64q.png)

(1.60 MB 1024x1536 catbox_odv10x.png)

(1.56 MB 1024x1536 catbox_fwp13d.png)

the moment when you prompt for dark skin and you get a drow
>>12368 Someone gen a quick momosuzu nene for me to see if it's even worth downloading.
I think I'm finally going to have to make a sarashi + fundoshi LoRA in addition to a breasts one, the inability of even LoRAs trained on characters wearing them to generate properly has finally gotten to me
(1.71 MB 1024x1536 catbox_a57z4l.png)

>>12462 seems good to me
(1.33 MB 840x1256 catbox_geiydt.png)

>>12462 It's worth using especially if you supplement it with style loras
>>12449 /h/ is for seven different kinds of raceshit and ntr and not pregnancy or lactation. don't ask questions.
(1.24 MB 840x1256 catbox_7eim5x.png)

(1.20 MB 840x1256 catbox_waxwn7.png)

(1.20 MB 840x1256 catbox_b8vktf.png)

>>12460 where can i find the qqqrinkappp lora?
>>12470 thanks
>>12471 >can now make unlimited artworks of this artist with no ballbusting based anon who made this LoRa
(2.57 MB 1536x1536 catbox_uykepn.png)

>>12472 Now I demand some good maids from you anon
>>12473 I'm actually mixing fishine with spacezin but yeah I really love spacezin's style and I'm glad the LoRA came out so well.
24 gbs of VRAM have shown me how much loha/locon loves to eat up lots of VRAM compared to a normal LORA kek
(2.34 MB 896x1592 catbox_6iiqsg.png)

(2.29 MB 896x1592 catbox_bf7zc1.png)

(2.22 MB 896x1592 catbox_nk3530.png)

(2.27 MB 896x1592 catbox_or7k7k.png)

>>12474 a good trade
(2.55 MB 1536x1536 catbox_1g72kw.png)

>>12477 Oh no! Someone locked up your maids!
>>12478 I maid them do it
(1.88 MB 896x1344 catbox_4e3jy5.png)

(1.98 MB 896x1344 catbox_johcvj.png)

>>12478 i hope someone lets them out soon
what is LOHA good for? It seems to not grab as much detail as LORA or LOCON
Anyone here using multi controlnet? Hands just do not work for me, they always come out fucked.
>>12483 Whoops double canny hands, meant for extra depth hands
>>12483 That looks hilarious
(1.99 MB 1024x1536 catbox_rdmzwi.png)

(489.74 KB 512x768 catbox_zaxgq1.png)

(571.51 KB 512x768 catbox_ycde2s.png)

(1.13 MB 512x768 00254-2690144158_cleanup.png)

>>12483 >>12486 >It's an abstract kind of sheepgirl.
>>12285 Found the model but could you link me to the 2 loras used in the image?
(2.09 MB 1024x1536 catbox_89ye23.png)

(1.97 MB 1024x1536 catbox_480gio.png)

(2.21 MB 1024x1536 catbox_qu0hzy.png)

(1.29 MB 1024x1024 catbox_u3dwux.png)

(1.39 MB 960x1152 00012-946715242.png)

(1.31 MB 960x1152 00051-946715242.png)

(1.43 MB 960x1152 00013.png)

(1.33 MB 960x1152 00015.png)

Jannies have forced me over here. Have some flatties.
>>12496 Cute vampire
>find artist I like >autistically train until I'm happy with it >never use it after testing >move on to next artist trainerbros... it wasn't supposed to be like this
>>12496 Very cute. Requesting catbox for that third Remi.
>>12498 Me, except characters. I autistically prompt and train until I get the details probably closer than most fanarts. And then I do nothing with it.
>>12498
>tfw didn't know any artists except prolific and doujin ones
>even then I can't nail every doujin artist
It's kinda suffering since I don't know which one I should pick to prooompt with, and I don't think I can get away with just fooling around mixing artist loras.
>>12498 I've hoarded a fuckload of LoRAs from here, 4chan and Civitai but I barely use any.
>>12493 all of them should be in gayshit
>>12499 the cock was photoshopped in and then inpainted but here you go https://files.catbox.moe/6ct4c1.png
(1.36 MB 1024x1536 00047-1631966908.png)

(1.81 MB 1024x1536 00052-4055850686.png)

(1.35 MB 1024x1536 00049-1214918099.png)

(1.65 MB 1024x1536 00054-1967763285.png)

I've baked the shun + shunny lora that I said I was gonna make a good 3 times so far: bake 1 was missing too many small details, bake 2 was closer but still missing things, bake 3 seemed to get shunny correct most of the time. The halo is fucky, but I don't know if it ever isn't. Here are some of my test gens. I'm gonna go try baking it a 4th time to see if I can get the small details better.
(1.62 MB 1024x1536 00046-595563408.png)

(1.64 MB 1024x1536 00050-1336904087.png)

(1.74 MB 1024x1536 00040-421316278.png)

(1.47 MB 1024x1536 00041-3377047387.png)

>>12505 some things I noticed while baking her... apparently her official eye color is yellow, and apparently she has purple on the ends of her hair. shun is missing the little gem-button thing and the floral pattern on her dress... which is because the dataset is super fucked, for both. So at the very least I'm gonna ignore the floral pattern; the gem-button thing I do want to show up at the very least. gotta say though... I understand BA anon's pain now
>>12505 SEX Are you going to bake a dark arona one too?
I screwed myself by not having my gigantic pixiv folder sorted by artist. Maybe hydrus can help, or maybe there's a script that can look up the image names and sort them into folders by artist. >>12500 There are some I've made that I stick with, some others I just used once and never again. Like I made the crimvael lora but I just prefer to prompt angels
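For the filename-sorting idea above, a rough sketch of the parse-and-group step, assuming the standard pixiv `{illust_id}_p{page}.ext` naming. The `lookup_artist` callback is a hypothetical stub you'd back with the pixiv API or a saved metadata dump — it is not part of any existing tool:

```python
import re
from pathlib import Path
from collections import defaultdict

# Files downloaded from pixiv are usually named "{illust_id}_p{page}.{ext}".
PIXIV_NAME = re.compile(r"^(\d+)_p\d+\.(?:png|jpg|jpeg|gif)$", re.IGNORECASE)

def group_by_artist(folder, lookup_artist):
    """Group pixiv-named files by artist.

    lookup_artist: hypothetical callable mapping an illust id to an
    artist name (e.g. backed by the pixiv API or a metadata dump).
    Returns {artist: [filenames]}; non-pixiv-named files are skipped.
    """
    groups = defaultdict(list)
    for f in Path(folder).iterdir():
        m = PIXIV_NAME.match(f.name)
        if not m:
            continue  # not a pixiv-style filename, leave it alone
        groups[lookup_artist(int(m.group(1)))].append(f.name)
    return dict(groups)
```

hydrus can do the same job more robustly once files are tagged; this only shows the minimal grouping step.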
(1.03 MB 1024x1024 00055-3754388387.png)

(1.27 MB 1024x1024 00056-3754388388.png)

(1.05 MB 1024x1024 00058-3754388390.png)

(1.20 MB 1024x1024 00057-3754388389.png)

>>12507 actually, I have been thinking of that, I know that there are some on civitai that might be good, though... at this point, I've started work on one BA character, so I already have an idea on how to tag and bake them, so probably will, just want to get shun a bit more consistent first
>>12506 >>12505 It's a pain few can understand since honestly they look fine and people will coom regardless. I hate seeing how fucked the halo is and I'm too stubborn to call it good enough.
(1.56 MB 1024x1536 00059-1740388743.png)

(1.93 MB 1024x1536 00060-1740388744.png)

>>12510 it is quite a pain. I don't exactly know why, but missionary seems to make the halo more consistent... I might need to revisit the dataset; it might be too full of nsfw...
>>12459 >>12466 >>12467 Don't think it actually belongs on h/d/g or anything, but personally preggo and hentai lines like
>GET PREGNANT
actually take me out of the mood. Even in fantasy I don't want to deal with the hassle of taking responsibility or being a progenitor, what can I say. Fuck abortion porn much more though.
>>12506 Yeah it's terrible, you start to notice more and more character details you never paid a second's notice to before staring at a full dataset.
>>12512 Details like Arona's pupils being different colors? As a non BA player I would have never noticed if I had not stumbled upon a post about it
>>12514 Part of the tip I learned for the halos is that you want to prune your dataset of any image that doesn't have the halo, and try to get more images where more of the halo is visible. I had a significant quality increase getting rid of any image without a halo, no matter how good it was. That's really only if you care about being a halo autist.
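A minimal sketch of that pruning step, assuming kohya-style sidecar captions (`image.png` next to `image.txt` with comma-separated tags) and a literal `halo` tag; the paths and the `pruned` folder name are made up:

```python
from pathlib import Path
import shutil

def prune_untagged(dataset_dir, tag="halo", pruned_dir="pruned"):
    """Move image/caption pairs whose caption lacks `tag` into a
    side folder (so nothing is deleted outright). Assumes sidecar
    captions with comma-separated tags; stems with glob metacharacters
    aren't handled in this sketch."""
    dataset = Path(dataset_dir)
    out = dataset / pruned_dir
    out.mkdir(exist_ok=True)
    moved = []
    for cap in sorted(dataset.glob("*.txt")):
        tags = [t.strip() for t in cap.read_text().split(",")]
        if tag not in tags:
            # move every file sharing the caption's stem (png/jpg/etc.)
            for img in list(dataset.glob(cap.stem + ".*")):
                if img.suffix != ".txt":
                    shutil.move(str(img), str(out / img.name))
            shutil.move(str(cap), str(out / cap.name))
            moved.append(cap.stem)
    return moved
```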
>>12514 well... I do want the halo to be as consistent as possible, I might do that. did you also cut images with only like half the halo?
>>12515 Half was my borderline, but if it was less than that I cut it. If I kept a few pics that didn't fit those rules it was for variety, so the model wasn't just turbo-baked onto their official outfit. I've been having the biggest aneurysm with Iori's because nobody draws her in anything but her swimsuit and her halo is like 5 different fucking colors in fanart, been losing my shit over it. I think Hanako's is the only one I haven't rebaked, which I will at some point to test; I've done all the others at least 7+ times to get the halos right, and hers is still on v1 because it somehow turned out great and I get her halo fairly consistently.
>>12516 got it, guess I should ask just to clarify, what dim do you usually bake at? I have been baking at dim16, but am thinking of going to dim32 if this 4th bake doesn't improve the small details
>>12517 128:64, I haven't tried 1:1 but I've settled on this as it seems to work.
>>12520 fair enough. I think I want to try and figure out a lower dim to bake at; I might go a bit higher than needed and then resize down with dynamic resizing. I feel like the bake issue is probably more of a dataset issue than a dim size issue though, because most of the details are there. If this 4th bake doesn't turn out well, I'm probably gonna prune the dataset and remove any images without the halo. Hopefully it doesn't completely destroy the balance I had between shun and shunny, not that I can't just change repeats around to make it work
>>12521 You can always resize the lora with the other script after you do the bake, so it's better to start high imo. I haven't tried these after resizing them either, but that'll be the next step once I'm fully comfortable with the ones I've got, since fuck having this file size on these. I've got 150 GB of loras and most are just various rebakes of shit because I can't decide what's best.
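For reference, that kind of post-bake resizing (kohya's sd-scripts ships one as `networks/resize_lora.py`, with dynamic methods along the lines of `sv_fro`) boils down per layer to an SVD truncation of the up×down product. A toy numpy illustration of the idea — not the actual script:

```python
import numpy as np

def resize_pair(up, down, sv_fro=0.9):
    """Toy per-layer dynamic resize: multiply out the LoRA up/down
    pair, SVD the delta, and keep the smallest rank whose singular
    values preserve `sv_fro` of the squared Frobenius norm."""
    delta = up @ down                       # (out, in) low-rank update
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    energy = np.cumsum(S ** 2) / np.sum(S ** 2)
    rank = int(np.searchsorted(energy, sv_fro) + 1)
    new_up = U[:, :rank] * S[:rank]         # fold S into the up matrix
    new_down = Vt[:rank]
    return new_up, new_down, rank
```

This is why starting at a high dim and resizing down tends to be safe: layers that only needed a few singular values shrink a lot, layers that used the headroom keep it.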
>>12516 >I've been having the biggest aneurysm with Iori's because nobody draws her in anything but her swimsuit and her halo is like 5 different fucking colors in fanart, been losing my shit over it. Time to get into the turbo autism that is manual data correction.
>>12523 my experience so far is that you can usually fit everything into dim16. If I bake at 32, I should have plenty of headroom to learn everything and then dynamically resize down. to be honest though... I haven't baked at dim128 in so long that I've literally forgotten how
>>12523 I should test lower stuff, but it takes me like 1.5 hours per lora to bake, and my PC is fairly unusable except for having a stream open, so it's boring if I don't do it either sleeping or at work. I've settled on 128 for now since it doesn't seem bad, just the size is off-putting. I really want to make new stuff though; I haven't made anything new in a while, just keep re-baking and re-tagging older shit to get it better. It's working, so it doesn't feel like time wasted, but it's feeling like a grind.
(2.50 MB 1280x1920 00028-2535398242.png)

(2.12 MB 1280x1920 00029-2535398243.png)

(2.50 MB 1280x1920 00030-2535398244.png)

(2.70 MB 1280x1920 00031-2535398245.png)

>>12524 oh that's 100% fair. with torch 2.1.0 though, my bake times have been cut by a lot: I just baked 2400 steps and it took 1 hour and 20 minutes, when usually that shit is like 2 and a half hours. So at this point I'm willing to try baking at lower dims to get it right. The halo is actually decently consistent now; seems like training for longer at a lower lr did the trick in this case. It's still not entirely correct though, so I'm gonna prune the dataset and try again later, and might also just change to dim32 to make it a bit easier
How many multi-concept/costume folder LORA makers do we have here? I'm wondering how you balance stuff for the best results
>>12525 I'm still hesitant to set up torch 2.1.0, so I'm kinda waiting for it to get folded into automatic1111. Not sure if it will, but I should look into it sooner rather than later since more people than not seem to get far better speeds.
>>12527 it's not hard to do desu, all you have to do is change the torch line in the launch.py file to
>pip install torch==2.1.0.dev20230320+cu118 torchvision==0.16.0.dev20230320+cu118 --extra-index-url https://download.pytorch.org/whl/nightly/cu118
and then run the webui after you delete the venv folder. it doesn't require its own xformers like sd-scripts so you can just use opt-attention
(1.47 MB 1024x1536 00056-1691435234.png)

(1.90 MB 1024x1536 00057-1691435235.png)

(1.78 MB 1024x1536 00059-1691435237.png)

(1.48 MB 1024x1536 00058-1691435236.png)

>>12527 as >>12528 said, it's not hard to install.
>>12526 Not an expert or anything, but basically you need the subsets relatively balanced so the model has ample time to learn from each instead of getting lazy with the smaller ones.
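To make "relatively balanced" concrete: with kohya's `N_conceptname` folder convention, each subset contributes images × repeats steps per epoch, so repeats are the knob for leveling uneven subsets. A quick sanity-check helper, with made-up subset sizes:

```python
def steps_per_subset(subsets):
    """subsets: {name: (num_images, repeats)} -> images seen per epoch.
    This mirrors kohya's folder-repeat convention (e.g. "4_swimsuit"
    means every image in that folder is seen 4 times per epoch)."""
    return {name: n * r for name, (n, r) in subsets.items()}

def suggest_repeats(subsets, target=None):
    """Suggest integer repeats bringing every subset near `target`
    images-seen per epoch (defaults to the largest subset's count)."""
    if target is None:
        target = max(n for n, _ in subsets.values())
    return {name: max(1, round(target / n)) for name, (n, _) in subsets.items()}
```

e.g. 200/50/25-image subsets level out at repeats of roughly 1/4/8 — the smaller costume folders get repeated until the model sees them about as often as the main outfit.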
>>12528 remember to use code tags [code][/code] or else it'll look messed up like that
>>12531 oh I didn't know that, thanks
>>12496 You can uoh in peace here brother
>>12530 On a related note, trying to gen 3 girls at once is proving to be a shitshow even with ControlNet and/or masked Latent Couple and/or multisubject LoRA like penis-on-face. I'm not sure if repeating the data of the 3 of them together, while not enough to generate consistent gens on its own, would help those techniques. Because 2girl sex gens do much better even though there isn't substantially more data and mostly non-sex. 1024x1024 is very expensive for iterative experimentation... I also want to test LoCon...
So is there any recommended method or concept for finetuning? I know hll4 is diverse because there are so many vtubers and different artists, so the model is an ideal finetune, but I just want to know when I want/need to finetune compared to making a LoRA
Went back and trained an embed for the first time in a while. How in the fuck is voldy/monkeypatch's embed training so much slower than Kohya's? It's massively slower: Kohya's embed trainer can finish in about 30 minutes something that takes monkeypatch a whole day. I can't believe it used to be in common use.
>>12535 Hll was always trained with at least 100k images split between chubbas and porn/generic. 3.1 was 75k chubbas + 150k generic/nsfw. He hasn't posted his full spec yet for hll4 beta but did drop a txt file with his tags. He doesn't do that much tag management other than checking for blatantly wrong tags and mostly lets the auto-tagger do the work. When I started my Fate Zero/ufotable finetune I started with 38k images, but results started looking better at 90k images, and hitting 110k started producing stable-looking generations. I'm currently training a model with 116k ufotable + 30k Troyca/El Melloi Case Files + 34k assorted porn images with some more in-depth tagging autism, because WD-tagger will do some of the weirdest fucking shit when tagging TV-resolution images. Hope that gives you an idea if you ever touch the finetune game. I had to start looking at other places for info because, other than hll anon, no one bothers with training anything but LoRAs.
>>12536 I was told Monkeypatch + 4090 gave you 30 minute bakes with embeds when the Monkeypatch update came out in late January.
>>12537 Maybe I can make a finetune on 100k idolmaster pictures, can work... man Hydrus is going to be busy soon
>>12539 If you want it to do porn you are gonna need additional generic/nsfw images, but for a test run 100k should be enough to get you started. What type of images are you planning to work with, and what GPU do you have? Me and hll anon have 4090s, and 100k images with the right settings will take about 18 hours to train.
Dunno if anyone's using it, but I updated https://rentry.org/anonskohyaentrypoint to also support training embeds, and made it so you can queue up various different operations at once in the same queue.
>>12464 >>12465 >>12467 Looks like the same problems still remain from previous version, still have to specify hairstyle and eye color etc i guess ill download it and try with the lora on top
>>12540 two 3090s, no NVLINK yet cause my stupid motherboard didn't come with SLI support.
>>12543 But before that, will the learning rates be vastly different? Given that finetuning requires more VRAM, I assume batch sizes are going to be like 4-5, right?
(1.14 MB 1024x1024 catbox_zviacm.png)

OHAYOU! I find it deeply fascinating that this locon shits the bed as soon as you gen something bigger than 512x512
>>12546 how is that the case? Did they train the LoRA at under 512x or something?
>>12543 You the anon that just upgraded his PC? I believe you can still run multi GPU training without NVlink.
(801.94 KB 768x768 catbox_uud781.png)

>>12547 weirdest example
>>12548 Yeah, I just wanted NVLink to have something cool. Honestly I don't even use up all my VRAM and can still play games with my training settings; it's only really when I do over 1000x res for LoHa/LoCon at batch 3 that the VRAM starts getting used up fully kek
(291.68 KB 512x512 catbox_ceo5ku.png)

(897.64 KB 768x768 catbox_0kmdh4.png)

>>12549 or here first one is 512x512 second is 768x768
>>12551 unironically pickled, where did you get this? 5chan?
>>12498 literally me. I think I only generated like 20 images of Nahida after I perfected the eyes
>>12554 literally wondering what the hell they did to even get a LORA to get weird shit like that

