/hdg/ - Stable Diffusion

Anime girls generated with AI


>>18742 Oh crap I forgot about that.
>>18737 timekiller, background noise. They have to stay in character, so you don't end up with them talking about the same endless mindbroken leftoid topics like a lot of streamers do. Squeaky voices make me crazy, so I'll have to find an ara vtuber
I think the only useful thing the prodigy schizo anon has done was make that tagging batch script that uses three of the better tagging models for more accurate tagging, after how much the extension changed with recent updates
basedmixes have gotten so bad I wouldn't be surprised if the next one has that futa furry model being shilled on 4chan in its mix recipe
>>18746 would at least bring in some new blood
>>18747 that would be bad blood
>>18748 idk, just the fact that it's trained at highres could help, same for the other enhancements like noise, similar to how adding a lora trained at higher res makes quality a bit better, but this time with deeper "knowledge" of it. I just wish people would wake up and start actually doing anime models: use some community volunteers (with some website, like the furries did) to retag everything, then just crowdfund the TPU cluster (I think they use Kaggle?). I mean, they have an SD 2.0 model with vpred and NoPE (no positional encoding, supposed to help with long detailed/schizo prompts, which is something I want to try since I do that), noise offset, and training at 1088x1088, while we have NAI and inbred mixes. I really don't get it
>>18749 Furries had better tagging to begin with afaik. They didn't have a base model so had to do everything from scratch. Anime community got the NAI leak and after a while chose to patch the holes with loras instead of dealing with training whole new models. There were hopes that WD would be able to do something but they simply turned out to be incompetent. It's gonna be pretty stale for a few more months I believe. Maybe someone will be able to do something good enough with SDXL or we get NAI2 leak lol, dunno.
Anyone know where I can get this model? ac3516ac78 Apparently it's a loli model but I can't find a dl anywhere
>>18751 Nevermind, I found it. The original link was on someone's huggingface but they nuked their repos. Luckily, I found a reupload on a random dude's page: https://huggingface.co/Undertrainingspy0014/RandomStuff/resolve/main/loli_A.safetensors
>>18740 yeah all the newer stuff seems to have some shitty lora baked in. b64v3 and aom2-r34-sf1 are basically the only anime models i use.
>>18753 There's newer stuff? I've been permanently stuck on B64, AOM3, and that one ChewyMarshmallow model that got posted here; I liked its general style, so I mixed it with AOM3. Trying out >>18752 right now, and I kinda like it. Since the model is tuned for loli characters, I don't need to use things like the deaging lora or the "Youngest" embed, which should be a lot better for the end result.
What's b64? Mind helping an anon out? I've been downloading a ton of loras off of civitai (and telegram's civitai backup when necessary) and copypasting descriptions to each one. I just use civitaihelper(fork that supports lycoris) to automate downloading a picture sample and it gets the rest of the page info including image metadata. Feels like I haven't even put a dent in my progress.
>>18755 Based64-V3. It's a model someone from here made, and it ended up so good that it spread to a lot of places and became a staple for a lot of people. There's B65, but it's hit or miss tbh, while B64 was pretty much gold.
Cheap and quick examples of the Loli-A model. Using the hypno eyes lora because mind control makes me horny, but the rest is just the model.
>>18754 > I don't need to use things like the deaging lora or the "Youngest" embed >>18757 nigga that's just (loli, flat chest, skindentation:1.2) on b64
>>18756 You wouldn't happen to have a link for b64 handy, would you? Or another anon perhaps?
>>18758 >nigga that's just (loli, flat chest, skindentation:1.2) on b64 Personally, B64 tends to randomly increase the age or just stop drawing loli and go for a flat adult a lot of the time if I add a lot to the prompt, which is why I always use the tools that force loli, like the embed/lora for it. So far I like what it does and its general style, so I'll definitely keep it in my toolbox; was just sharing in case others are interested.
>>18753 Feels a bit sad to still be using b64v3, but I think my stuff is still improving. >>18759 Here you go buddy: https://pixeldrain.com/u/khSK5FBj Use this as your negative prompt: (worst quality, low quality:1.4), (greyscale, monochrome:1.0), text, title, logo, signature, I also add (female pubic hair:1.5) and spread pussy to get the nice pussies. >>18760 Kek, I have an issue with genning "mature" looking flatties, but I think it's my style LoRA mix doing it.
>>18761 Thank you kindly, it's appreciated. Been jumping between a few anime models, and it can't hurt to have another sworn-by model.
apparently this new update came out, interesting stuff even if you don't use the XL models https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12181
What's different about SDXL anyway? Is there a difference between using regular ass B64 and a SDXL model?
>>18764 sdxl is just the new stable diffusion model, as cucked as 2.X, but it was trained at 1024x, so it helps with more coherent text showing up in outputs. However, so far the only way to make this model good is to have someone who can do great danbooru/anime tagging and then finetune the base model on millions of great-looking anime images
Where do the japanese post their LORAs? Do they have some discord, some site or place like civitai for it?
>>18766 at least a few use civit. there was a 5chan index being passed around at some point. i think koreans use arca.live
>>18767 Where do I get the 5chan index? I heard of some osampo group too
>>18768 it's probably in here somewhere https://seesaawiki.jp/nai_ch/
A mega for anything I find, usually Civitai, that needs to be reuploaded and is in the telegram backup channel. Seems most models make it there, and it started like mid April of this year. Currently have a dead rentry link in there too. Will probably add to it as I come across removed models. https://mega.nz/folder/MVQjRRLS#OEfxtfMlsQuVq11B-sgV3g
>>18772 There's a chink civitai clone that reuploads everything that has been uploaded to civitai https://sd4fun.com/ You can find removed civitai stuff there and it's honestly a godsend
Can I toss everything into the lora folder or do I really have to separate the lyco from loras?
>>18773 I know of it, though telegram is much easier to find content on. You can search by uploader name too, and it hits all their stuff. The civitai discord search is also useful for finding stuff, since most of the automated posts for model uploads remain in the discord.
>Huh, I guess this Lora doesn't work with this model because it doesn't know what this is >Change models >INSTANT REGRET
>>18761 Counterfeit is for pretentious-looking artsy shit, and it enforces its own style way too hard; it's pretty bad with loras. Never liked it either, but hentai brats are certainly not what it was made for
I still prefer b66 simply because it interprets loras better. b64 has this "garbage in, garbage out" thing, which makes it hard sometimes if you use civitai loras, which are, you know, trash on average.
>>18777 >enforces its own style way too hard This was my biggest problem when using the mix, felt like I had way less control and I don't like the look of the images it generates anyway.
>work on my own finetune for months building a dataset that has 0% booru images just to distract myself from the depression of covid ruining my life
>only have a 4090 (kek, "only", right?) so the best I could do is try and mimic HLL's process
>can't fathom how Waifu Diffusion keeps dropping the ball every single time when they have the hardware and allegedly the datasets to do a NAI-level finetune
>infuriated that SAI and Civitai are just trying to dupe a bunch of retards into trying to fix XL (for free... oh sorry, for a chance at some prizes) knowing full well that none of them can actually do it, turning it into a rat race by not disclosing how you should be training LoRAs or finetunes for XL
This must be some sort of clown world, I just don't understand what's going on anymore.
>>18780 What was your fine tune about
>>18781 Taking an entire anime studio's production lineup and finetuning it on top of NAI, so you get a more consistent style than from a LoRA and can still use concept and character LoRAs without frying your image from stacking on too many. And because I was using HLL's method, the best look came from taking the Based Mix recipe and swapping HLL for my finetune; it replicated the studio's look about 90%.
Hey all, I'm still as busy as always, but I did find some time this weekend to update the easy training scripts to the newest version of sd-scripts and lycoris. I haven't added all the features yet, as I still need to actually look through everything to get a better idea, but it's the most up-to-date version now. https://github.com/derrian-distro/LoRA_Easy_Training_Scripts/tree/SDXL All of the updates are on the SDXL branch (which, btw, doesn't actually break 1.5 training), so if you want updates, use this branch for now.
>>18782 Ufotable?
(382.86 KB 512x768 46373-47591220.png)

(745.25 KB 608x896 00474-3492909164.png)

(2.17 MB 1920x1080 00881-1736248821.png)

(1.98 MB 1536x1152 01320-1827022607.png)

>>18784 Bingo. The main problem with training on anime rips is that you get an aspect ratio bias, where only landscape images look best on normal t2i gens, and when you start trying to break it by introducing non-1920x1080 images to the dataset, the model starts to break if you don't balance the figures somehow.
Another thing is that I have been very strict in following the Danbooru tag wiki to ensure all manual tagging is done correctly per each tag's definition, but this is where I discovered that danbooru users were tagging stuff incorrectly, with popular tags containing examples of what should and should not have the tag. This made me realize immediately that autotaggers are trained on incorrect information from the get-go, on top of the fact that they were never meant to be used on anime screenshots (they have a higher false positive rate on them for some reason).
When I started working on this finetune again after taking a break, I was working on breaking the aspect ratio bias, and one of the things I immediately learned on review is that all the image composition tags (portrait, upper body, cowboy shot, wide shot) were ALL wrong, and my recent training showed the worst results I've encountered across the roughly 8 revisions I've done of this finetune. These 4 images are from 3 different revisions, the Catacomb one being the latest and the one I like the least for all the problems I mentioned above; it was mostly an inpaint exercise, because for the past 7 months I've dumped all my autism points into finetune trial and error.
>>18785 yeah all finetuning seems like, at best, a drunken stagger in the right direction i can tell you that even if a lora induces biases in image composition, that can be used for interesting things
(1.56 MB 1024x1536 00261-3470132177.png)

(1.72 MB 1280x1024 shiki3.png)

>>18785 Yeah, I remember you. People in sdg were wondering where you disappeared to. Sadly, that place turned into a complete shithole. Admirable persistence, and I hope you'll be able to finish what you've started. I wasn't able to force myself to finish the Takeuchi lora bake cuz lazy, and I still kinda rely on that trashy Lykon lora when I want the style. A coherent finetune would certainly be cool, especially for backgrounds. That bike is also amazing and seems better than what any NAI model usually produces (at least in my experience). Excuse the non-/h/.
>>18780 this field has been full of pajeet grifters and shills right from the beginning and still is.
(1.93 MB 1536x1152 01382-3064671469.png)

(438.57 KB 512x640 52506-904484187.png)

(638.30 KB 576x768 53724-1024681843.png)

>>18786 Drunken stagger, yeah, that is a very good way to describe it, at least with everything 1.5/NAI. XL is... planting face first into a sewer manhole, somehow falling through, and landing in the shit. The amount of XL "finetune" and LoRA shilling (on /sdg/) is so bad, and when you call it out they start kvetching hard.
Regarding your LoRA comment: this ufotable thing started as a LoRA, and originally those aspect ratio biases weren't an issue, but I think LoRAs are more forgiving when it comes to mistakes and slip-ups, so a lot of these problems never popped up until I stepped into the finetune game. In my first attempts with the same + extra data, all sorts of things were blowing up left and right, and I used the tools in the anime pipeline in unintended ways to solve some of the problems in the short term. I'm confident finetuned ckpts require things to be tighter. But at least one of my original theories was that if you can properly tag concepts in a finetune, they can look better than stock NAI; I did a test with the bible black school uniform and it looked good in the Ufotable-like style.
>>18787 I didn't know you were here! Sorry if I "ignored" some of your posts; I was trying to keep a bit of a low profile, especially with how crazy /g/ has become. Posting on /g/ was also a distraction from getting work done, and at some point I just got tired, so I took a break from everything.
>>18788 Finding out Emad was a jeet name and not some nickname was one of those disappointing but not entirely surprising events when dealing with this field.
>>18780 I hope WD does much better this time around with WDXL; there was recently a large change in who is helping work on it, seemingly, so my hope lies with the new members. They seem adamant about getting it right this time, so at the very minimum I have some hope. As for the Civitai bull, yeah, it's pure bull for sure. Though when it comes to training SDXL, I'm pretty sure people just don't know how to train it yet, so learning experience, I guess.
>>18790 >there was recently a large change in who is helping work on it I have a throwaway discord account in their server but haven't been keeping up. Any specifics in who was swapped around?
>>18790 The Civitai shit is because all the pajeets and bottom-feeding normies are desperate for those 4090s, in a contest where the barrier to entry is owning a 4090 or enough cash to burn renting better hardware, so it's all disingenuous hack jobs. They all think they can be the one to fix the model and don't know what they're doing, because it's just a huge clout-chasing grift.
>>18791 Salt and Cafe aren't part of it anymore; guess Salt joined niji, meanwhile haru was by themselves. And a group of people stepped in; Neggles and Mahouko are notable ones, as they got dev status. Seems like there are some more too, but not everybody who joined got the dev role, so it's hard to track.
>>18789 >I didn't know you were here! Sorry if I "ignored" some of your posts, I was trying to be a bit more low profile especially with how crazy /g/ has become I don't post there much and usually avoid being recognizable myself. I was hoping that avatarfagging had gotten jannied from there for a bit, but alas. Now it's avatarfagging + rampant SDXL jeets. >>18790 >I hope this time around WD does much better with WDXL, there was recently a large change in who is helping work on it, seemingly, so my hope lies with the new members. They seem to be adamant to get it right this time. So at the very minimum I have some hope this time. I personally have low expectations but would be happy to be proven wrong. It will still take a shitload of time to get something that's better than what we already have.
>>18792 Can't really deny that tbh
>>18796 >Salt I talked to him directly once and... he doesn't know how to read. I got annoyed at him just for his seemingly airheaded behavior; could be we talked at a bad time. I do know he named himself on /g/ trying to advertise WDXL and got roasted hard >Cafe I haven't seen him since WD1.5 and was wondering why he wasn't the main guy on WDXL. I still have his instagram finetunes in my library. I don't know the rest of the names; I wouldn't mind helping them out, but doing more work on top of my own stuff without compensation is not something I can do. It's not even that I'm being jewish, I'm barely scraping by with what little I have. >>18794 I know a mod finally stepped in but the filth is still there. Hiroshimoot needs to make that /ai/ board so that at least all those fags can make their own containment general and leave everyone else alone. And I'm in agreement on the Waifu Diffusion situation: I want to be proven wrong, but I haven't seen anything that gives me confidence in a breakthrough. If Salt and Cafe were the problem, then maybe these new guys can make it happen. My finetune can be ported to XL since I still have all the original resolutions. What I need more than anything is a tool that can pre-bucket images by training resolution so I can manually adjust resolutions myself, since I have lots of ufotable artbook rips with art that isn't on the boorus that I want to make sure I resize properly for better data.
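For what it's worth, a bare-bones version of that pre-bucketing idea is not much code. This is just a sketch assuming kohya-style buckets (both sides multiples of 64, area capped at 1024x1024); the exact enumeration in sd-scripts differs, so treat the numbers as placeholders:

```python
def make_buckets(max_area=1024 * 1024, step=64, min_side=512, max_side=2048):
    """Enumerate (w, h) buckets whose area fits under max_area.
    Both sides are multiples of `step`, kohya-style (assumed, not exact)."""
    buckets = []
    for w in range(min_side, max_side + 1, step):
        h = (max_area // w) // step * step  # tallest height that still fits
        if min_side <= h <= max_side:
            buckets.append((w, h))
    return buckets

def nearest_bucket(width, height, buckets):
    """Pick the bucket whose aspect ratio is closest to the image's,
    so you can see (and override) where each image would land."""
    ar = width / height
    return min(buckets, key=lambda b: abs(b[0] / b[1] - ar))
```

Run `nearest_bucket` over your artbook rips and dump the assignments to a CSV; anything that lands in a bucket you don't like, you resize or crop by hand instead of letting the trainer decide.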
>>18796 Godspeed with your project anon. I'm glad one of us is doing something at least.
>>18798 HLL anon is also pulling his weight on /vtai/. I wish he were still here on this chan; his PC is similarly spec'd to mine and we have the same 4090, so he was a great resource for making sure I wasn't fucking my shit up.
>>18798 I think he said he isn't planning to do anything SDXL related because it's a terrible resource hog
>>18799 I saw the post in question; he said the most he would do is make a LoRA and that's it. I would never attempt a full finetune of XL; I would just do what I could on a NAI 2.0-like/WDXL+ base if that unicorn ever appeared. My efforts are solely on NAI: try to fix all the problems on that base and have the model be mix-friendly so it carries over all the improved tagging, so long as you like having a slight ufotable face kek
>>18780 >myself from the depression of covid ruining my life Jesus man that sounds rough. Hope it gets better. >>18800 Are you fine tuning bare NAI?
>>18801 Yea just bare NAI-full/animefull-latest in the leak, no point in finetuning a derivative model because it screws up any attempt to mix the models with something else.
>>18780 you can try going to the furry diffusion server there's a LOT of info on training/finetuning, even for esoteric shit like SD2.0 features you'll need to heavily filter what you look at though
>>18774 Copied from user tomyummmm on github, in case anyone else comes across this issue. The workaround is as follows:
1. Shut down the A1111 webui and console.
2. Move the LyCORIS folder into Lora; make sure no folder called LyCORIS exists in the models folder.
3. Open CMD as Administrator.
4. Run the following command, replacing the user and path with your own accordingly:
mklink /D "G:\Users\user\Documents\stable-diffusion-webui\models\LyCORIS" "C:\Users\user\Documents\stable-diffusion-webui\models\Lora\LyCORIS"
You should see the following:
symbolic link created for C:\Users\user\Documents\stable-diffusion-webui\models\LyCORIS <<===>> C:\Users\user\Documents\stable-diffusion-webui\models\Lora\LyCORIS
If you get the following error, read step 2 properly:
Cannot create a file when that file already exists.
Now when you start the A1111 webui again, Civitai Helper can scan the LyCORIS folder inside the Lora folder and automatically download all the information and preview images. At the same time, https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris can also see the LyCORIS link to the actual folder.
>>18804 Ignore the G:, that was mine; I ctrl+z'd my notepad++ without proofreading the message. Simply use your install path in each set of quotations.
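If you'd rather not type mklink by hand, the same step can be scripted. This is just a sketch of the workaround above in Python (folder names assume a stock webui layout; on Windows, creating directory symlinks still needs admin rights or Developer Mode):

```python
import os

def link_lycoris_into_lora(models_dir):
    """Create models/LyCORIS as a symlink pointing at models/Lora/LyCORIS,
    mirroring the mklink /D workaround. Adjust models_dir to your install."""
    target = os.path.join(models_dir, "Lora", "LyCORIS")  # the real folder
    link = os.path.join(models_dir, "LyCORIS")            # the symlink
    os.makedirs(target, exist_ok=True)
    if os.path.lexists(link):
        # same situation as the "file already exists" error above: see step 2
        raise FileExistsError(link + " already exists, move it into Lora first")
    os.symlink(target, link, target_is_directory=True)
    return link
```

Point `models_dir` at your `stable-diffusion-webui/models` folder and both Civitai Helper and the lycoris extension should see the same files.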
>>18803 I’ve seen all the shit one can see in 17 years of browsing 4chan, I’m sure I’ll be fine seeing random furry in their own den. What’s their link?
>>18806 https://discord.gg/furrydiffusion training shit is mostly in the "custom models" forum, most interesting one is fluffyrock but fluffusion is also active there's also info on the "tuning projects" forum (just below custom models) there's even a guy trying to make a new base model wtf
>>18780 How do you imagine the end result of your fine tune? is it a flexible model that just has a ufotable style by default but still is compatible with all nai-trained loras? are you considering nsfw?
(1.29 MB 1024x1536 48250-1935823571.png)

(1.15 MB 1024x1536 45913-1642877175.png)

(528.23 KB 512x768 44706-2131033965.png)

(1.35 MB 1024x1408 41239-229999346.png)

>>18807 thanks, I'll check it out when I can >>18808 Pretty much: a ufotable-style base where images can look like a ufotable anime screenshot at first glance, and where you can still apply loras/TIs without breaking the look of either the model or the LoRA. I also want to bake certain concept LoRAs into the finetune so you don't need to use them anymore, such as certain nsfw concepts, general tags that don't work well like bangs/hair over eyes, better weapons (Fate Zero has a ton by default), and some easter egg outfits (Bible Black school uniform above). I have tested mixing older uncensored hentai rips into the dataset to try and improve the porn and also use their different aspect ratios to break the bias and flesh out the general nsfw data as a whole. It does work, but certain aspects didn't translate that well because it's older animation; still, the nude bodies generated were more fleshed out and retained the ufotable color palette and face despite being 90s/00s animation. Posting some examples.
I DON'T NEED TO GENERATE PORN TO MAKE MY DICK GO THERMONUCLEAR
>>18810 ogey
>startup time: 160s AAAAAAAAAAAAAAAAAAAAA BLOAT BLOAT BLOAT
>>18812 Fucking how? Even on my dinosaur with all kinds of unnecessary extensions I get 40s at worst
>>18810 I usually need to see pantsu for that effect. But yes, it's too easy as a pinup chad
>>18812 Check the startup profile in the webui, bottom left I think
>>18804 What does this do exactly? I did it, but the problem I'm having is that people don't properly tag whether something is a LORA, LyCORIS, or LECO, plus it's a gigantic pain in the ass to try to sort these 3 out. Does moving the Lyco folder inside the Lora folder make the lycoris files work even if SD calls them as <lora:Randomshit:1>? And what's the actual difference between calling a LyCORIS using <Lora:ActuallyALycoris:1> and <Lyco:ActuallyALycoris:1>, since I notice some of the lycos still work even when I call them with Lora tags? Trying to learn how this stuff works.
>>18757 Where did you get that model and lora?
>>18819 the model is like 3 posts above. The eyes are https://civitai.com/models/86294
What's a good style lora with really good skin detail and shading?
>>18816 It doesn't use the lyco handler when selecting from the lora tab, so I guess just name all your lycoris files with a lyco prefix in the filename and change the handler in the prompt field. That way you don't need to remember that x is a lyco and go to the lyco tab to find it. Set up like this, you can still filter on the lyco tab too, but they won't populate without a filter (a folder filter works too, like a Styles folder).
>>18822 If you want an easy method of renaming files, there are programs that do advanced batch renaming. I use Advanced Renamer, but there are likely others out there, possibly better. I just took all the folders in my lycoris folder, dumped them into the program, and told it to prefix all files with Lyco_, and it automated the process.
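That bulk-rename step is also a few lines of Python if you don't want another program. Just a sketch, assuming you want every file under your lycoris folder tree prefixed and the script to be safely re-runnable:

```python
import os

def prefix_lyco_files(root, prefix="Lyco_"):
    """Walk a lycoris folder tree and prefix every file that isn't already
    prefixed, mirroring the Advanced Renamer step above. Returns new names."""
    renamed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.startswith(prefix):
                continue  # already done, so re-running the script is harmless
            new_name = prefix + name
            os.rename(os.path.join(dirpath, name),
                      os.path.join(dirpath, new_name))
            renamed.append(new_name)
    return sorted(renamed)
```

Run it once on your lycoris folder and the prefix shows up in the lora tab, same as the renamer approach.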
>>18774 >>18804 >>18816 you're overcomplicating this. the webui supports all the lycoris shit natively now as of 1.5.0; there's no need for the extension.
I really like what the add detail lora does to pictures
>>18824 I use Vlad, which already had the extension. The extension referenced is Civitai Helper, which is useful for automating the fetching of thumbnails and most of the page info, including image metadata. The creator of the extension refused to support lycoris (there is a fork that does), and this person brought up a way to keep your lycoris files in the Lora folder while Civitai Helper is still able to scan properly. Anyway, I'm unsure how auto has it set up, but if lora/lyco are on different tabs, it's still useful, since you can see both of them from the lora tab and still see them in the lycoris tab. Selecting a lycoris from the lora tab still inserts the lora handler though (unless auto somehow differentiates the file properly and gives a lyco a lyco handler from the lora folder).
>>18826 Whats the fork that supports lyco?
>>18826 well that explains it. that vlad fork has caused so many problems for devs. auto just uses the lora handler for everything now, since it's all implemented natively.
What is with SDXL and why do people keep trying to push it when WD works?
>>18829 For what it is and what you should actually use it for, SDXL is amazing, despite how much more annoying and fiddly it is to use. It's useless for anime and porn though, and WDXL is terrible.
>prompt not working well >reorder tags >working even worse fuck
Have the new commits resolved that unofficial "SD stop giving a fuck about your prompt and needs to be restarted" bug?
>>18832 put one of your old images into png info and send to txt2img to prove to yourself this is a myth
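That test works because the webui writes your whole prompt into a plain tEXt chunk (keyword "parameters") inside the PNG, which is what the PNG Info tab reads back out. A minimal stdlib-only reader, as a sketch (it ignores CRCs and the compressed iTXt variant some tools use):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_parameters(png_bytes):
    """Return the webui 'parameters' text stored in a PNG's tEXt chunk,
    or None if there isn't one. Walks chunks without validating CRCs."""
    if not png_bytes.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    pos = len(PNG_SIG)
    while pos + 8 <= len(png_bytes):
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, text = data.partition(b"\x00")
            if keyword == b"parameters":
                return text.decode("latin-1")  # tEXt is Latin-1 per the spec
        if ctype == b"IEND":
            break
        pos += 8 + length + 4  # header + data + CRC
    return None
```

Handy for checking a whole folder of old gens at once instead of dragging them into PNG Info one by one.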
Is this Gtonero person in civitai someone from *chan? I always see them redo stuff that got posted in either hdg and they're pretty hit or miss. Sometimes they're great and sometimes they're pretty bad but the bad ones seem to get redone later and made actually good to my surprise
>>18834 he's some youtube thailandigger who only makes shitty genshin/honkai green screens and cringe edits and begs for patreon money
>>18831 I guess it's just one of those days, huh. Arms behind head, crossed legs and from below refuse to play nicely today no matter the weights or negs. I can get one, maybe two working together-ish but not all three.
>>18836 Yeah alright, it just doesn't want to work today. Whatever, not the first time it breaks.
>>18836 Stupid workaround that you probably already know: Sloppily mask the part that's not correct and then inpaint in "whole picture" mode before upscaling
>>18838 To clarify: If you do this on high denoise, it's just a way to reroll the gacha for only that part of the image
>>18838 >>18839 It works, but I kinda don't want to do it that way. I know that prompt works; it has worked before, more or less reliably, bar the crossed legs gacha (as usual), on that same exact LoRA mix. I've had shit like this happen a few times with much simpler prompts too. This is the closest I've gotten today, but it took way, way, way, way more attempts than usual. The angle could be lower and a bit more obtuse, and I don't like how the legs turned out; her right leg looks too skinny. And the hair/wings detail isn't there. But it's still more or less close to what I had in mind.
>download some spread pussy lora off civitai >makes my character look bloody and beaten when the lora is on why the fuck? could you like not?
>>18840 That's fair enough. Pure prompting is fun. But it's also sometimes a darn challenge. Controlnet and inpainting and all that jazz exists for when I don't feel like fighting the gacha war
>>18841 Well, have you tried using it at a weight 0.2 points below the recommended one? It's usually far below 1 for sexo loras too.
>>18841 >civitai model is broken Many such cases
>>18842 >That's fair enough. Pure prompting is fun. But it's also sometimes a darn challenge. Controlnet and inpainting and all that jazz exists for when I don't feel like fighting the gacha war I'm stubborn as hell, but it definitely feels like stuff just "breaks" sometimes in some way (and again, I'm talking about stuff that has definitely worked before without needing to play the gacha). I loaned my GPU to a friend right when NAI came out, and I didn't see any reason to DIY SD when NAI (the service) was so much faster, so I tried it out for a month, and stuff DEFINITELY "broke" in the same exact way. It seriously felt like it had good days and bad days, just like right now. Anyway, here's a cute Silence I genned for a friend.
>>18845 Interesting. I don't have those day-to-day changes much, but I only gen like once a week. Here's an angry maid, because I just downloaded this open towel LoRA. BTW, thanks to whoever mentioned CivitaiHelper; manually downloading and sorting that stuff was becoming a hassle
>>18809 do you ever plan to release this?
(524.46 KB 512x768 48939-2677377902.png)

(1.51 MB 1536x1024 42640-314310244.png)

(512.14 KB 512x768 41701-4261076695.png)

(1.59 MB 1152x1536 43363-4056671115.png)

>>18847 Eventually I will; right now this model is nowhere near a state fit for a satisfactory share/release. It's not even a matter of "oh, I feel insecure about my work", I KNOW that this is not a good model to share, and I would be a hypocrite if I released something not even half-baked after shitting on SDXL grifters and the typical seanigger or pajeet hack jobs on civitai for months.
My latest training results reflected very badly on the way I added new data in the current revision, and it showed when I used my old prompts and a sample of catboxes I downloaded from all sorts of 4chan boards to test the model. Stuff that had good results on the previous revision was unsatisfactory on the current one, so I'm dedicating all of this month to nothing but dataset cleanup, which includes the images themselves (with a couple of theoretical tricks I came up with) and fixing the image composition tags, in hopes of correcting the fuck-up I made. I also have a backlog of other Ufotable anime and movies to help with balancing the data. Since it's just me and an artfag friend who helps me tag once or twice a week, I can't give an ETA on progress. When I have something to share, I'll drop it here.
Here are some more images. Since this model is not mixed with AOM2, there is no gape, so the genitalia is based solely on Bible Black and other hentai data I have baked in; the pussies and pubic hair bushes didn't carry over very well, so I can either make adjustments by cropping images already in the dataset in a better way, or do my own "gape" training by being very particular about adding only very detailed, tasteful images of well-drawn genitalia and bush in various forms, states, and positions. If I knew the cutoff year of the danbooru dataset NAI used, I could more accurately pick out images that wouldn't already be in the base model.
Also, surprised I got some decent-looking feet. I'm not a footfag, but I'm sure people would appreciate that their fetish works out of the box.
>>18849 Is there some intent to put in colored bush? Blonde bush is hot.
>>18849 What about innies?
>>18836 it's punishing you for being a coward. the beatings will continue until you have (hands:1.3) in your positive
i'm still on a commit from march 14. anyone that has updated from an old commit recently, is the master branch safe?
>>18853 I pulled to the latest commit, and when I prompted with a LORA it broke my prompting. I kept downgrading until I hit f865d3e11647dfd6c7b2cdf90dde24680e58acd8 and that fixed it, but I think it broke some of my extensions
>>18854 >but I think it broke some of my extensions if it's not too much of a hassle could you elaborate on which ones?
>>18855 I can't tell really, but "3D open pose" only so far. I haven't messed with controlnet nor do I use all its models so idk about that one, I tend to stay away from extensions for that reason. The console is spitting out Gradio error messages so I'm assuming there's more, that's all I can tell so far.
>>18850 I have never gotten colored pubic hair to work even in other models. You may have to add a color mask on photoshop and inpaint the spot to change the color. I’ll still try and gather some colored pubic hair samples. I’m into red heads myself so the curtains MUST match the drapes or else we got issues. >>18851 Innies and cleft of venus work on stock NAI but lack detail so I would only need to get higher quality samples for close up shots.
>>18832 >SD stop giving a fuck about your prompt and needs to be restarted can you post example? do you mean the case where it just sometimes starts ignoring prompt and with >1 batches outputs garbage? anyway if it's the bug, either disable neutral prompt or adetailer
>>18859 I swear to god open towel pics have tainted base NAI, I get so many pics of girls holding random bits of clothes like that as if it's trying to gen open towel
>>18860 Yeah I had that issue as well, it even made me put towel in negatives.
>>18858 >do you mean the case where it just sometimes starts ignoring prompt Yes, as if the CFG has been turned WAY down or it just straight up ignores tokens. >and with >1 batches outputs garbage? anyway if it's the bug, either disable neutral prompt or adetailer None of those.
(1.60 MB 1024x1536 catbox_ldyw0y.png)

I'm trying out the headpat LoRA and I'm pretty sure it's not supposed to be this hit or miss, I only get a headpat once every 5-6 gens or so. Maybe some of my other tags are causing some weird conflict? And when I do get it there's a good chance it's the blackest nigger I've ever seen even with dark-skinned male in the negs.
>>18848 Good stuff. How much inpainting do you do nowadays? The spots that usually could use some inpainting, like face and hands, are pretty consistently looking good on your gens.
>>18860 Had to put "towel" and "white towel" in the negs anytime I had someone partially submerged. It still shows up on occasion >>18863 Have you done some testing of your prompt and the LoRA without your usual add-ons? Whenever I try a new concept LoRA I remove all my style and character ones. Then when I get a solid prompt going I start adding back in the usual stuff. Helps me figure out if I'm the problem or if the LoRA just sucks. Either way, that headpat is cute.
>>18864 Hands, eyes, face, facial expression, the pussy at times and weird backgrounds. I'm spending more and more time on images now but at least I'm having fun.
>>18863 some models have issues with certain LORAs, always test a brand new LORA on the base model it was trained on first if you want to see how well it works. And yeah, the other LORA may conflict, as LORAs not only change styles and characters but postures as well.
>>18866 The results make it look worth it. Mind sharing a bit of your workflow? I'm usually just 512x512 inpanting using "only masked". And then just one small part at a time with prompts like: Hand, close-up, holding towel + style prompts.
>>18868 txt2img>gimp to fix the bigger flaws(missing fingers and weird backgrounds)>img2img with a tiny Denoising strength increase, like 0.01 or 0.02 and after that I do my inpainting. I learned how to inpaint from the "bloodborne-anon" and he inpaints at 512x768, only masked and I keep the style prompts and my LoRAs but I'm not sure if that's really needed. But my actual inpainting workflow is: background(if needed)>pussy(same there)>hands>face>fix wonky grins and expressions>eyes. Probably not the best way of doing it but it seems to work for me and I'm still learning.
>>18865 >Have you done some testing of your prompt and the LoRA without your usual add-ons? Yes... no. I started trusting non-civitai LoRAs semi-blindly out of laziness. Bad habit, I know. I even turned the LoRA up to 1.4 but it wasn't working any better so I turned it down to 1 again. Something VERY fun started happening, whenever I get the hand to show up there's a chance it will fuse with the head wings, wasn't doing that before and I haven't touched a thing. Still getting niggers most of the time so thanks LoRA trainer. >>18867 Will do though I think there might be some conflicting tags anyway even when I turn the LoRA up to 1.4+. Something among "pov, close-up, from above, portrait/face". The shot is framed mostly how I want it (trying to get it even closer but "face" paired with close-up/portrait doesn't seem to help so I ditched it early on) and the tag combination makes sense in my head but you never know with SD. Also I usually order my tags more or less meticulously (usually it's character-related > framing-related > setting-related and I tend to insert concepts right after the character tags if needed) but it's too hot to give a shit and when I reordered the tags for these >>18836 it made the prompt even less consistent. ANUS just un-beta'd the most recent BIOS so maybe it's time to build and upgrade to that 3090.
>>18871 Something that might be nice is to use BREAK in your prompts sometimes. I'm still not 100% certain it does very much, but it's worth a try. If you're already ordering the prompt semi-meticulously, just add some BREAKs in between those subcategories so every category gets its own focus reset.
>>18871 >Something that might be nice is to use BREAK in your prompts sometimes. I'm still not 100% certain it does very much, but it's worth a try. I've never tried that somehow, I've been meaning to but I always forget. I've always thought that's only useful if you're nearing the token "limit" and SD doesn't give enough attention to the tokens near the end, is that not how it works? Do you have to include all the LoRAs again?
it's been nice knowing you bros
>>18872 I'm still not exactly sure if it increases coherency of the image. But I think your explanation of what it does is mostly correct. Whatever you prompt after BREAK is treated with the same attention as if it were the start of the prompt. I've mostly been using it in case it does work (and for placebo), and to keep my prompts neatly organized. Don't need to include all LoRAs again.
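For what it's worth, the mechanics are easy to sketch. The webui encodes prompts in 75-token windows, and BREAK just pads out the current window so whatever follows starts a fresh one; the positive and negative prompts can end up with different window counts, which is what the "Pad prompt/negative prompt to be same length" setting smooths over. A toy sketch of the splitting logic, assuming comma-separated tags stand in for real CLIP tokens:

```python
CHUNK = 75  # the webui's per-window token budget

def split_chunks(prompt):
    """Split a prompt into encoding windows: BREAK forces a new
    window, and any run past CHUNK tokens wraps automatically."""
    tags = [t.strip() for t in prompt.replace("BREAK", ",BREAK,").split(",") if t.strip()]
    chunks, current = [], []
    for tag in tags:
        if tag == "BREAK":
            if current:
                chunks.append(current)
                current = []
            continue
        if len(current) == CHUNK:
            chunks.append(current)
            current = []
        current.append(tag)
    if current:
        chunks.append(current)
    return chunks
```

Each window is encoded separately and the embeddings are concatenated, which is why a tag right after BREAK behaves like it's at the start of a prompt.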
(449.61 KB 1920x1008 msedge_k7S2l2A4Nh.png)

>>18874 I'm trying it right now and I'll be honest, the consistency has gone down A LOT. Is this how I'm supposed to do it? The BREAKs seem to cut down my IT/s by quite a bit. Am I overlooking something? I feel like I'm doing something retarded. Maybe I should just get another headpat LoRA and call it a day.
>>18875 >The BREAKs seem to cut down my IT/s by quite a bit. There's a pad option in settings to fix this (name below). It's due to the negative prompt using less of those 75 token chunks compared to the positive prompt. >>Pad prompt/negative prompt to be same length
>>18875 Prompt looks good. IT/s shouldn't go down much (if at all). Some extension could be pirating your BREAK command (like regional prompt). But I haven't a clue >>18876 Changing this setting did not impact performance for me, but I hope it fixes the problem >>18875 is having.
>>18876 >>>Pad prompt/negative prompt to be same length Don't think I have that, I'm on an ancient commit from before the gradio upgrade and before even torch 2.0
>>18877 >Some extension could be pirating your BREAK command (like regional prompt). But I haven't a clue I have tiled diffusion, tiled vae, addnet and lora block weight installed but they're all disabled.
>>18875 I'd only BREAK stuff like style descriptions e.g. scan, sketch, traditional media etc And if you're below 75 tokens don't use BREAK imo.
Well, shit. I downloaded a random headpat LoRA from Civitai and while the hand is nowhere near as good in terms of anatomy and overall consistency it actually appears every single time and I haven't seen a nigger yet. This is the one I was using before https://mega.nz/folder/ZvA21I7L#ZZzU42rdAyWFOWQ_O94JaQ/folder/pzoyFYqY Needs a good retrain and total nigger death.
>>18882 Very cute, I said earlier that I was gonna take a break from the open towel LoRA but I had to see if it works "from behind" and well it does.
>>18883 If it's from the same mega I grabbed the headpat lora from then I'm surprised it works at all, lemme try it
>>18883 The person who made it has from behind examples on their page. Glad it works for you too >>18884 Probably this one https://civitai.com/models/117027/open-towel
(2.11 MB 1024x1536 catbox_64qzbz.png)

>>18884 It is but expect some bad gens and the good ones will still need a bit of work.
>>18885 >same guy has a blacked lora No.
>>18887 Mental illness, both you and the blacked faggot
wow what a cool funny political meme you underaged nigger
>>18886 Oh crap, sorry, I just assumed we had grabbed the same one. Didn't know Randanon had one. Gonna try it to compare
(1.47 MB 1024x1536 catbox_6qaw40.png)

>>18890 >ugh people who hate niggers are just as bad as niggers themselves11!1!!! lol, lmao even >>18891 The one from the MEGA works well enough I guess. I should play the variation seed gacha more often.
Anyone have or made a good folded lora? This pose here I want to generate characters in this pose but I don't think anyone has made something that can consistently do this pose
>>18892 one more >>18893 Is that what it's called? Not the imminent mating press pose?
>>18894 Boorus call it "Folded" but I just know I want to gen more of that pose
>>18895 If you're feeling lucky there's 3 on civitai. This one looks somewhat promising https://civitai.com/models/72820/manguri-gaeshi-nsfw-folded-pose-nsfw-lora
>>18896 I think I tried the civitai ones and none of them were consistent or good enough. I'll check them out later but the request definitely stays up in case anyone wants to try to get the pose consistently in a LORA/Lyco
>>18891 Oh, I only grab LoRAs from civitai as a last resort but I'm actually testing a Kirin Armor LoRA right now and it seems ok.
>>18871 if you're using the regional prompting extension, BREAK will split your picture into four regions even at 1:1. it gets smoothed over quickly but i assume the extra noise doesn't do good things for the pictures. these were latent gens to make the issue more prominent.
(661.10 KB 960x640 catbox_67wqiq.png)

(706.24 KB 960x640 catbox_gpmglf.png)

>>18897 The manguri one on civit seems like the best. Still not much better than just prompting, but it does work pretty consistently with regional latent, though it tints the image pink for some reason.
sometimes it feels like the negs box is there just for show
>>18902 god i love kirinsluts
>>18903 Extremely based.
>>18900 Lora seems extremely hard to work with and feels like it's burned into one pose and one POV. Is it possible to try to do this pose with controlnet or something?
>>18905 If you only get one pose and POV that means the baker failed to tag their images properly before baking the LORA. It happens commonly: if you don't tag postures and camera angles/views it will hard default to that angle because it thinks it's part of the concept itself. You can try lowering the number/order or messing with LORA blocks but it's still going to want to do that.
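Concretely, the fix at bake time is captions that spell out pose and camera so the trainer doesn't absorb them into the trigger word. A hypothetical caption file for one folded-pose training image (trigger word `folded`, the rest ordinary booru tags):

```
folded, 1girl, on back, legs up, spread legs, from above, pov, bed sheet
```

With the pose and angle tagged like this, "from above" and friends stay promptable separately instead of being welded to the concept.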
>>18783 Blessed script anon, I've had a few ideas for making loras better that I wanted to try running by you. First: An idea I've had that I haven't seen anyone implement is multiple resolution training for loras. That is, training with the same images scaled at multiple levels. I believe that a lora that was trained on both 512*512 and 768*768 (and even 1024*1024) images would be more robust, especially when using hiresfix. For example, if you were upscaling 2x, and your first pass was 512, it should work great because both the first pass (512*512) and the second pass (1024*1024) were directly represented during training. Second: Pivotal tuning for simultaneous creation of a paired lora and textual inversion. Basically, just train a lora and a TI at the same time: the TI is present in the lora prompt, and the lora is present in the TI prompt, and both train simultaneously together. Ideally, the lora and TI would become synergistic and would be even better than a TI and a lora trained separately.
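Not the script anon, but if you want to prototype the first idea in kohya's sd-scripts, the `--dataset_config` toml already lets several `[[datasets]]` entries with different resolutions point at the same folder, so every image gets seen at multiple scales. A sketch under that assumption (paths and repeat counts hypothetical):

```toml
[general]
enable_bucket = true

[[datasets]]
resolution = 512
  [[datasets.subsets]]
  image_dir = "/data/chara"   # hypothetical path
  num_repeats = 2

[[datasets]]
resolution = 768
  [[datasets.subsets]]
  image_dir = "/data/chara"
  num_repeats = 2

[[datasets]]
resolution = 1024
  [[datasets.subsets]]
  image_dir = "/data/chara"
  num_repeats = 1
```

Whether the extra low-res passes actually make hiresfix more robust would need an A/B bake to confirm.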
(865.42 KB 640x960 00794-loli_64_3045452955.png)

>Maybe I'll try making something wholesome today >Me after an hour of messing with vanilla prompting I've degenerated past the point of no return
(3.09 MB 1024x1536 catbox_l2mhni.png)

(2.71 MB 1024x1536 catbox_wx88x8.png)

(2.50 MB 1024x1536 catbox_rp4wln.png)

(2.62 MB 1024x1536 catbox_n3m30n.png)

a brown catgirl nagato for every season
>>18909 Very cute
Has anyone made any loras or stuff of the new vtubers? What happened to all the /vt/ people we had here?
(976.46 KB 3584x858 image.jpg)

>>18881 >https://mega.nz/folder/ZvA21I7L#ZZzU42rdAyWFOWQ_O94JaQ/folder/pzoyFYqY Ayo that's my lora >dark-skinned hand >missing headpat I have to try pretty hard to get dark skinned hand without prompting/not get a hand at all, so something is wrong here. >whenever I get the hand to show up there's a chance it will fuse with the head wings Possible that the lora is having trouble with this, since those wings might look like pseudo arms to the AI. I'd test with your lora, but it doesn't seem like you've uploaded it anywhere and the ones on civitai have different trigger words. Like the others asked, have you tried generating without additional loras? CSR tends to draw their characters with darker skin tones, so I'd look at that first. I don't normally get dark-skinned hands without actively prompting for them, and even then it's a crapshoot unless I do (dark-skinned male:1.2). Picrel generated using your prompt minus all the additional loras. That being said, it is a 128DIM lora. I'll rebake it one of these days and see if using a lower DIM improves other unrelated things.
>>18912 /vtai/ anons made some already with the leaked images and early fanart already
(1.63 MB 896x1280 00176-48175451.png)

>>18915 uooooooh loli fediel erotic
>>18915 I want to see pictures of the cute and funny jewel girl
I wish there was a site like civitai but Japanese or well made I should not have to scroll past 10 3DPD loras, 20 furry loras and Bara just to see the anime ones
>>18918 Just filter the user, it's not like the furry and bara has that many spammer. Usually dime dozen autist.
(2.52 MB 1600x1280 catbox_08thoz.png)

>>18920 nice
(1.95 MB 1280x1600 catbox_6cxgze.png)

>>18921 Thanks
>>18922 The little tooth almost looks like a fang and is a nice touch
What are the chances of Rebecca Miyamoto lora already existing?
(8.92 MB 3072x3072 00001.png)

(9.66 MB 3072x3072 00027.png)

I'm getting started with Controlnet and only using prompts feels like being a caveman.
>>18924 there's a couple other ppd characters on civit but no luck i think a ppd style lora in general would be pretty nice
>>18925 I played around with controlnet for a while and honestly it kinda lacks the gacha nature of pure txt2img which I enjoy
https://civitai.com/models/116155/hearteyes Someone actually made a working heart eyes lora that doesn't need inpaint
(916.91 KB 640x960 catbox_aq4pwl.png)

>>18928 Damn this is actually really good
(4.36 KB 517x221 blacklistedtags.png)

>>18918 All the furry/bara/yaoi are made by a small amount of people who specialize in that, also you can blacklist tags as well
>>18927 I'm not a big fan of the trial and error of pure txt2img. made me take a break from genning for a while
(2.48 MB 1952x2048 AtagoRaceQueen.png)

(381.50 KB 4000x722 xyz_grid-0009-1360118085.jpg)

(1.25 MB 1400x1020 NagatoSleepy.png)

(376.20 KB 4000x743 xyz_grid-0016-3599717343.jpg)

Messing with block weight lora, have a few questions. Unique character features seem to be contained in the MIDD and OUTD layers, as the other layers on the XY plot don't seem to do anything at a glance with regards to how a character looks. Theoretically, does this mean I can block weight merge a new lora with just the MIDD and OUTD layers to make a 'better' character lora? OUTS seems to be doing something with style/shininess/reflections, but I can't quite put my finger on what exactly it is. Included official art for reference.
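If it helps, you can dry-run that theory at prompt time before committing to a merge. Assuming the hako-mikan LoRA Block Weight extension (whose preset names match the MIDD/OUTD/OUTS labels in these plots), something like:

```
<lora:character:1:lbw=MIDD>   # only the mid/out-down blocks contribute
<lora:character:1:lbw=OUTD>
```

If the character survives with just those presets active, a block-weighted merge keeping only MIDD+OUTD is probably worth trying; the exact syntax may differ between extension versions, so treat this as a sketch.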
>>18914 I was pissed so I was too hard on it. Still, the civitai one seems to work better and infinitely more reliably for me so I dunno what to tell you. >Possible that the lora is having trouble with this, since those wings might look like pseudo arms to the AI. I'd test with your lora, but it doesn't seem like you've uploaded it anywhere and the ones on civitai have different trigger words. It happens with the civitai one too but FAR less frequently. Does yours have any triggers other than headpat? As for the source of the niggers I think it might be namako/takorin's fault, my style mix works like this: Fizrot - face/body anatomy, lineart, composition CSRB: shading, breasts anatomy Takorin: colors, poses, clothes (nearly impossible to override even at 0.1/0.2), eyes Out of all those Takorin has the biggest amount of niggers in the dataset by far, I should rebake it after some much needed TND.
>>18936 neuron activation
>>18936 I can't wait to finally fucking play MHW without having my CPU turn into a space heater and without having frametimes measured in eons
>>18937 Kirin sluts erotic... >>18938 Based fellow MHW enjoyer, I still launch Iceborne pretty often but I have completely dropped Rise/Sunbreak. As I'm not enjoying the changes in that game at all, they also butchered my baby.
>>18939 I don't have Iceborne, I've heard that not only is it shit but it fucks up with the balance in the base game too or something like that
>>18939 >>18940 something about the claw more or less breaking and trivializing the base game content
>>18940 Clutch claw and tenderizing are both bad additions but Iceborne is still a 9/10 expansion in my opinion, it has some of the best fights in the series.
>>18942 If CreamAPI still works on it or whatever the new unlocker is supposed to be I'll only enable IB after completing the base game
>>18936 Thanks for reminding me I need to save up for the MH x Arknights collab
>>18943 I think it still works but I could be wrong.
>>18939 rise gunlance is a shitload of fun
>>18946 All the new combo options into stake/EC are nice but I'm not a fan of the silkbind moves at all. What they did to HH and GS makes me seethe but SnS plays pretty well though.
>>18947 you just gotta embrace the blast dash full burst lifestyle
(118.86 KB 531x961 GC.jpg)

>>18948 Done that, dropped the game due to my issues with the game and the gameplay in general. Not planning on playing it ever again tbh, I'd rather go back to GU(which I'm not really a huge fan of either) or just replay the even older games.
>/g/ is being shilled again >/h/ is slow as per normal >/vtai/ full of fatchubba posting make it stop, please
>>18950 >last time I fatposted on vtai someone here complained >don't post for a while >fatpost on vtai again >someone here complains this time too itsover...
>>18951 ever since that Bao thing it's just gotten out of hand bro
>>18952 I mean yeah, there are a lot of fatposters that should go back to bbwchan
What causes certain LoRAs to cut down IT/s more than others?
>>18949 wew, i kneel. i don't think i broke a thousand hunts on any weapon in world.
Found some decent-looking (for SD anyway) car LoRAs but they're still VERY wonky and extremely gacha for "sitting on car" poses. Guess it's time to bust out Blender.
Can't believe there's really good loras for empty eyes, hypno eyes and heart eyes but nobody has figured out how to add a phone or anything similar so I can recreate my mind control doujins
how did this person go from actually pretty nice to holy shit burn it with fire
>>18950 >>18951 I miss the vtuber posting here, the new vtubers look pretty cute. Dunno if i'll stick to it but the jewel girl might be the first corpo vtuber I watch in a while mostly due to a mix of being horny for the design + streaming things i'm interested in watching.
>>18914 >>18935 I experimented for a while and MAN I should've done this a long time ago. OG CSRB LoRA: Darker than I'd like on average, semi-frequently dangerously close to nigger territory. I often get weird grey/dark grey (or brown-grey) hands and arms with it, no idea what's up with that. Less nigger and more clothing store mannequin. Might really fuck up the hand placement but it works fine 90% of the time. Much better when mixed, standalone still gives me those issues at 0.4-0.6 None of those issues show up during my usual gens since I only generate solo pics. Fizrot: It doesn't like my Shoebill LoRA and there's a moderate risk of niggers at >0.6. Occasional wonky hand and failed (no hand) gen. Takorin: the most niggers by far and THIS is what fucks up the compatibility with your LoRA. The higher the weight the more it seems to cancel out your headpat LoRA even at extremely low weights like 0.2 I hadn't realized it until now but I think this is probably what caused me quite a few headaches in the past and it seems to cause the whole hand-headwings issue too. Seems like fizrot+csrb is the way to go at least until the takorin guy either retrains his lora or I make my own. Though I might keep using it just for solo pics if/when I'm not using any concept LoRAs. As a side note having dark skin and dark-skinned male in the negs is either detrimental or it does absolutely nothing.
>>18914 >>18961 >generate without a style (somehow forgot to) >get hairy niggers My dataset has zero niggers in it so whose fault is this, your LoRA or b64v3?
>>18962 Which lora is yours, and which loras are you referring to besides yours?
>>18963 Mine is the Shoebill LoRA and I'm referring to >>18914 's headpat LoRA.
So I finally upgraded my entire rig with a 4070ti (I know a 4080 would have been better but I could not find a used one near me) and I want to train a lora for an obscure character. How would I go about creating the lora?
>>18965 a used 3090 would have been both better and cheaper
Thinking about making a beach chair LoRA
(2.64 MB 1600x1280 catbox_vgdeps.png)

>>18967 That would have been useful yesterday after struggling to get any kind of chair on a beach
>>18968 "beach chair" works but it's a shitshow. Not to mention the signature SD look where everything is misaligned
>>18965 I'm gonna spoonfeed you because I consider anyone who's motivated to find this board worth it (even though it's easier to find now kek): https://rentry.org/lora-training-science https://rentry.org/LazyTrainingGuide https://rentry.org/59xed3 https://rentry.org/lora_train https://rentry.org/2chAI_LoRA_Dreambooth_guide_english
>>18970 gonna add these are old guides btw, civitai has a bunch more up to date but its kind of a mixed bag considering their overall quality when it comes to some of the stuff they have on there.
>>18970 you do realize that having 5 different guides with both redundant and conflicting info isn't very encouraging right?
>>18972 Well, if you got some better guides then post them. I mean I was able to bake my own LORAs when I just sat down and took the time to study what to do or not.
>>18970 Thank you for the guides. For some reason the OPs on /h/ randomly killed the rentry links that used to be there so a lot of stuff I was putting off for months just went up in smoke.
I wish I'd continued my Python reps, because ChatGPT has gotten really bad when I try to get it to generate code for me or help edit it.
>>18958 It might be just the model he's using for examples
(1.63 MB 1280x1600 catbox_jw4d3v.png)

(1.78 MB 1280x1600 catbox_ogr50e.png)

(1.69 MB 1280x1600 catbox_aefqjw.png)

(2.19 MB 1920x1280 catbox_yfp1pn.png)

>>18958 Probably newer one using SDXL I suppose.
>>18954 >questioning the black box
>>18972 >you do realize that having 5 different guides with both redundant and conflicting info isn't very encouraging right? lmao give me a break. the guides didn't even exist at one point and we were still making loras. if anything it's easier than ever to learn
>still no clear cut guides on full model finetunes it's over
>>18981 because no one (but furries) do it
>>18982 >tried looking through the furry discord >can't find what I'm looking for I need to dedicate a weekend of just cutting through their discord to find what I need
>>18983 yeah it's kind of a clusterfuck because the guy training fluffyrock (lodestone) is allergic to documentation you could just ping him and ask him directly
>>18984 >allergic to documentation REEEEEEEEEEEEEEE >t. used to write application and troubleshooting documentation for my IT department
whats better guys, hires fix or upscaling through img2img?
>>18987 it's the same process
>>18986 Which of the 3 Kirin Armor LoRAs are you using? The one for fluffyrock?
>>18986 I feel guilty for asking but may you catbox these?
(deleted last post since I forgot to toggle something)
>>18992 Very cute, what styles are you using?
>>18989 This one: https://civitai.com/models/77389/kirin-armorset-monster-hunter It seems to work ok. >>18990 >I feel guilty for asking Lol, don't feel guilty just ask away. https://files.catbox.moe/boiu3u.png https://files.catbox.moe/bjaq7n.png >>18992 Cute.
>>18994 Could you generate one with the full armor (or as close as you can get) then catbox it? I can't get it to work properly, especially the bandeau and arm pieces.
>>18993 One of the many LoRAs I've baked and not uploaded :^) https://danbooru.donmai.us/artists/250828 + the classic hint of Zankuro >>18994 Thank
(2.20 MB 1024x1536 catbox_u6nad8.png)

(2.11 MB 1024x1536 catbox_wpt3g3.png)

(2.51 MB 1024x1536 catbox_vahs5y.png)

(2.34 MB 1024x1536 catbox_r14b0o.png)

>>18995 Dunno if these are good enough, it tends to fuck something up often but I've had good luck with variation seed rerolling a nice gen and that has unfucked it. Or I just spend some time in GIMP doing it myself...
>>18997 >GIMP Based Now I don't feel alone dealing with it
>>18998 I was too lazy to pirate PS again when I needed an image editor but hey it gets the job done.
>>18996 >baked and not uploaded :^) That makes two of us :^) >>18997 They're MUCH more consistent than what I'm getting, that's for sure. Seems like the kirin armor token is too weak on its own and really needs to be used at 1.2++ I'm using some other tags to help it (horn headband, bandeau, fur-trimmed, loincloth, panties) but I should probably remove them. Exceedingly cute huntress btw.
>>19000 >cute huntress Thanks, my choco wives are already cute but the Kirin armor just makes them even cuter. I'm gonna play around with the Odogaron armor LoRA for the thigh window later.
>>18997 >>19000 Holy shit, it took QUITE a while but I think the gacha just blessed me. Still not that accurate but whatever.
>>18996 What did you do to bake your lora?
>>19004 It happens
>it rained the other day >heat wave finally calmed down >4090 no longer cooking me in my room post rain >today heat wave temps return >4090 cooking me alive again Ahhhhhhhhhhh
>>19006 I'm about to experience the very same thing tomorrow, it finally rained today and my 1080 Ti is hovering at "just" 70c when upscaling. With the fans at 100%, yes.
>>19006 >>19007 Been a early Autumn over here, basically got cursed with British weather.
>we have bri'ish ""people"" here AAAAAAAAAAAAAAAA
>>19009 But I'm not a britbong though, just got cursed with their weather.
>>19009 I'm Italian, I know how to ward them off. Observe: HEY NIGS AND CHIPS, REMEMBER THIS?
>>19009 Fuck no, I hate the Britbongistanians
>>19011 Uhh, sweetie, you are just supposed to chant “MESSI MESSI MESSI MESSI MESSI” and you can piss off all of Europe
(700.89 KB 640x800 catbox_nwyrdn.png)

(2.04 MB 1280x1600 catbox_s8yzsr.png)

Trying to get ahead on this lora before the incoming wave of art
>>19014 Who is she?
Wasn't there a japanese site that hosted their own loras and checkpoints too?
>>19015 Sango, from the latest episode of the Pokemon anime. A mesugaki voiced by Ohtani Ikue (Pikachu's voice). I don't think the episode's even been out for 24 hours yet but there's already a ton of sfw art. Here's a short clip https://twitter.com/pokereo33/status/1687411301335838720
>>19017 Ah she's definitely my type
>>19017 >Here's a short clip RAPE CORRECTION IS NEEDED
>>19019 Kek that was my first thought as well.
>>19017 jesus
Lora trainers, what's a good image size to train on? 768 x 768?
>>19022 768 is good, and you shouldn't go below it. 1024 can work but it's hit and miss. There is no "but I can't run more than 512", because even on 6GB I can run 1024 at dim 128 (painfully) with grad checkpointing, so 768 should be your default
>>19024 >but it's hit and miss Why?
>>19024 What happens at 1024?
>>19025 >>19026 idk, just my experience and my schizo deductions. It's a bit harder to work with than 768 (and I can't really bruteforce the parameters since it takes so much time)
>>19028 The head wing on the opposite side is very hit or miss, will need to rebake with less hair-like wings and see if it improves
>>19023 >>19028 >>19029 Oh another LoRA I have forgotten to try, good stuff. Bratty Low rank huntress needs correction.
>fed up with portrait mode >want to generate in landscape mode >total crapshoot it's all so tiresome
>>19032 I gave up and switched to Anchovy. It's still a crapshoot for the most part.
Does anyone have some toml for easytrainingscripts? Ideally one for adamw and another for dadaptation
>>19034 Here's one for adamw8bit I've been playing with lately. Low dim, trains fast (20-30 mins on a 3060 Ti), works well with 8GB vram https://files.catbox.moe/gcx3dd.toml Additionally you can: - Up the resolution to 1024px and enable gradient checkpointing (warning: doubles training time) - Change the TE learning rate from 1e-4 to 1e-3 (flexibility tradeoff) - Enable noise offset for naturally darker datasets/loras - (In Subset Args) Enable flip augment if symmetry doesn't matter
>>19032 >>19033 I pretty much don't go above 640x512 for landscape mode, the perspectives seem to fucking explode when you do 768x width or higher.
>>19038 Yeah it's pretty good. >epi_noiseoffset That one barely worked for me
noiseoffset is a meme
>>19040 for loras kind of (like most concept loras) for models no
>>19040 The one I linked seems to work fine, epi_noiseoffset is very hit or miss, maybe it works better on non-anime models.
>desire to become better at proompting driven by coomerism >same coomerism that clouds your brain and makes you stop what you're doing if it becomes just good enough if I wasn't a coomer I'd be able to work more on my loras but I would also not be using SD philosophy n shit man
>>19043 skill issue
>>19044 skill too high indeed
>>19035 ty broski ill give it a try later cause later
>>19035 >noise offset how good is pyramid noise compared to noise offset?
>get fatigued from dataset cleaning >look through my saved catboxes to test the model >the prompt as-is generates a schizo mess I hate fucking deschizoing other people's prompts
why can't we have a sfw anime posting general, I'm tired of sharing the thread with normalniggers from /g/
>>19049 I take it it's another episode of avatarniggers and stability shills blowing up /sdg/ again? >why can't we have sfw anime posting general /a/ would be a perfect place if they didn't hate us aichads. That and the fact you can't make generals on /a/.
>>18745 does anyone have a link to that script?
>>19051 NTA and not sure if the same but there is this one https://rentry.org/ckmlai#ensemblefederated-wd-taggers Although for some reason, probably me being a retard, the undesired tags part of the script doesn't work
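Not sure about that script's internals either, but the "ensemble" part is conceptually simple: combine per-tag confidences from the separate taggers before thresholding, and drop anything blacklisted. A toy sketch with made-up scores (the real thing runs the WD tagger models and feeds their outputs in):

```python
def ensemble_tags(predictions, threshold=0.35, undesired=()):
    """Average each tag's confidence across taggers, then keep tags
    above the threshold that aren't blacklisted. `predictions` is a
    list of {tag: confidence} dicts, one per tagger model; a tag a
    model never emitted counts as confidence 0 for that model."""
    totals = {}
    for pred in predictions:
        for tag, conf in pred.items():
            totals[tag] = totals.get(tag, 0.0) + conf
    n = len(predictions)
    return sorted(tag for tag, total in totals.items()
                  if total / n >= threshold and tag not in undesired)
```

For example, `ensemble_tags([{"1girl": 0.99, "solo": 0.8}, {"1girl": 0.97, "hat": 0.4}])` keeps 1girl and solo but drops hat, whose averaged score falls under the cutoff. Averaging is what makes a tag only one model hallucinated fall away.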
>>19050 >I take it its another episode of avatarniggers and stability shills blowing up /sdg/ again? Yeah but I don't see any sane reason to share the general with dudes posting lizards and 3d realistic bimbos. I don't really have much against some funposting like frogs playing on a violin but that should be a separate general. Why can't cuckchan do a fucking /ai/ board already?
>>19053 real answer is probably because /ai/ would have to be a red board by default the frustrating answer is they are too busy making sub variation threads of /vg/ that they won't fucking kill when they are deader than /po/ (papercraft and origami, not pol)
>>19050 >/a/ would be a perfect place if they didn't hate us aichads AIchads should start ignoring the outbursts and post something good there while ignoring the seethe from artniggers and their golems. Every once in a while I visit that place and people only seem to post the most boring fried garbage if they post AI at all.
How do you guys deal with main tags that aren't present all the time with keep token? For example, you tag your images with "Character" and "CharacterOutfitX", but that second tag isn't always there and you can't tell the training script to dynamically keep tokens. For now I just do "Character CharacterOutfitX, other, tags" but idk if it's the best solution sorry if this sounds schizo
>>19056 you're overthinking this either divide them by folder and only do one for each or tag the normal outfit too
>>19057 I'm already dividing my dataset into quality tiers so with multiple outfits this will become a nightmare I guess a better question is: is keep token actually useful, and when should it be used?
>>19058 >so with multiple outfits this will become a nightmare no? tag the default outfit too and be done with it
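Since the question of what keep tokens actually does came up: in kohya-style scripts it just exempts the first N comma-separated chunks of the caption from tag shuffling, which is why cramming both tags into the first chunk works. A rough sketch of the idea (not the actual implementation):

```python
import random

def shuffle_caption(caption, keep_tokens=1, rng=random):
    # the first `keep_tokens` comma-separated chunks stay in place;
    # everything after them gets shuffled each epoch
    tags = [t.strip() for t in caption.split(",")]
    head, tail = tags[:keep_tokens], tags[keep_tokens:]
    rng.shuffle(tail)
    return ", ".join(head + tail)
```

So with keep_tokens=1, "Character CharacterOutfitX, other, tags" keeps both activation tags pinned while the rest shuffle; it only matters if you have caption shuffling on at all.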
>>19059 I meant for folders. >4chan dead itsover
did 4chan die?
well at least the SAI shills are stopped in their tracks for the time being
while you all are here I will inform you I went through about 500 issues in the webui repo and made around a dozen PRs to fix some stupid bugs that were merged in dev. so if you ever feel like contributing this issue list should now be accurate: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3Abug+-label%3Aupstream+-label%3Aannouncement
>>19063 extremely based
>>19063 thanks for your service
why is dataset tag editor so fucking slow (the answer is probably mostly gradio, but still)
>>19066 The built in editor in SD? I use Hydrus
>>19066 Gradio. I also use Hydrus like >>19067 and my own util scripts
>>19067 >built in there isn't one, this is an extension I should start using hydrus tbh >>19068 Yeah I also have a notebook open on the side for dumb batch operations
>>19070 What scripts do you even use? I just bulk upload my shit after running the tagger on it and then manually tag and groupspace shit together
>4chan is dead It's over We won
>>19049 i'm still fucking mad that ATFniggers destroyed the /b/ threads
>>19070 I have one that replaces metatags (like "translated") with "text" and anything rated explicit with "nsfw", which is what NovelAI did in their preprocessing. Also one to put a tag at the very beginning for keep tokens if I really need it.
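NTA but for anyone wanting to replicate that preprocessing, a minimal sketch; the actual metatag list NAI used isn't public, so the set below is purely illustrative:

```python
# Illustrative only; NAI's real metatag list isn't public
META_TAGS = {"translated", "translation_request", "commentary"}

def preprocess_tags(tags, rating):
    # swap metatags for "text", append "nsfw" for explicit-rated images
    out = ["text" if t in META_TAGS else t for t in tags]
    if rating == "explicit":
        out.append("nsfw")
    return out
```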
>>19072 what's atf?
>>19074 all the fallen, a website for the sort of person who gets mad at the idea that someone else respects the 2d/3d barrier
>>19074 allthefallen.moe, pedo site
>>19075 >>19076 sounds like the kind of shit kiwiniggers SHOULD go after (but won't)
>>19072 last time I checked they weren't there, but instead there were fucks who were posting some putrid feet shit
>>19078 it alternates between one of the board bumpniggers, the "it's not cp" realistic AI cunny posters, and a spammer who discovered nemusona and just posts every fried 512x512 gen
It's surprising how most of the bad apples never found their way here I think I've seen maybe two people get banned at most?
>>19080 probably too slow for them. the link is still on gayshit you know so it's not like this place is much of a secret
https://github.com/Mikubill/sd-webui-controlnet/pull/1875 Something very cathartic about the ControlNet author suddenly appearing again to revert the repo after one of the contributors spent days working on something only for it to break shit
>>19082 same energy as auto vs vlad
>run dejpg and deedge with chainner >12mb image takes 8 minutes, ends up being 108mb what the fuck
>>19083 I'm enjoying this new Maid Anon arc
4chan has been down for like 3 hours now
>>19087 Welcome to your new home, we got lots of cunny and not much else
>>19087 IRC says it will be back up in an hour
>>19088 Where's the cunny? Post some
>>19089 IRC also says the bara spammer on /b/ doesn't exist, among other things
>>19086 Another huntress enjoyer. Kirin armor was actually one of the first things I tried to prompt but I got very disappointed quickly. >>19090 Right here.
Can IRC be told to ban Cumfy off /g/?
4chan still dead...
It kind of occurred to me one thing that was never figured out for NovelAI was a separate noise parameter they had. I tried to figure this out months ago (I'm the ghost account here) but never could, even after spending two days on stepping thru and debugging the leaked code. The PR that somebody else made based on what I suggested doesn't produce nearly as good results as NAI's did. I might spend some time again to revisit this but maybe someone here would be interested to take a crack at it. I was specifically debugging with Naifu since that has NAI's UI and everything set up. https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/5351
>>19089 lolmao
>>19095 Well it might have something to do with this I just found. This is probably the most embarrassing regression I've came across yet. https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12354
>>19095 >>19097 I’m a codelet So did they reintroduce that s_noise that was disabled or what?
>>19098 Yes, it hasn't worked since October. It also only applies to Euler (non ancestral), Heun, and DPM2 though, which is why nobody really caught it for so long.
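For the curious, the reason only those samplers are affected is that s_noise only enters through the Karras "churn" step that Euler (non-ancestral), Heun, and DPM2 share; ancestral samplers add their noise through a different path. A simplified numpy sketch of where it lives (modeled on k-diffusion's Euler step, not the actual webui code):

```python
import numpy as np

def churn_step(x, sigma, sigma_next, denoise, s_churn=0.0, s_noise=1.0,
               n_steps=20, rng=None):
    """One Euler step with Karras churn: sigma is temporarily raised and
    s_noise scales the injected noise. With s_churn=0 nothing is injected,
    so a broken s_noise goes unnoticed at default settings."""
    rng = np.random.default_rng() if rng is None else rng
    gamma = min(s_churn / n_steps, 2 ** 0.5 - 1) if s_churn > 0 else 0.0
    sigma_hat = sigma * (gamma + 1)
    if gamma > 0:
        eps = rng.standard_normal(x.shape) * s_noise
        x = x + eps * np.sqrt(sigma_hat ** 2 - sigma ** 2)
    d = (x - denoise(x, sigma_hat)) / sigma_hat   # derivative estimate
    return x + d * (sigma_next - sigma_hat)       # Euler step
```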
>>19099 Ah so you would only see the change in those samplers so it doesn't really do anything in the long wrong huh?
>>19100 (me) >doesn't really do anything for other samplers long run holy fuck im retarded
>>18863 >the blackest nigger I've ever seen Anon, whatever that thing is you keep posting is belongs to the hood now.
>>19102 kek >"ayo cuz, this bird pelican pussy be off the chain nigga"
/sdg/ is back to being a shit show why did it have to be this hobby with the shitty community
>>19104 I now realize that I'd prefer rampant stability pajeets to this garbage. >why did it have to be this hobby with the shitty community Yeah.
>>19105 Yea, Stability pajeets are just funny to dunk on because after a couple posts they give up and retry later, the 1fag shit is unbearable because the nigger won’t stop. And reporting the posters doesn’t help at all.
>>19106 Yeah jannies pretty much only care if you post nsfw or dox.
Is this a sign of overfitting? The prompt is only activation tag and red bikini.
>>19108 I don't know who this character is but if the attempt is to prompt the character with only the bikini portion of the outfit and nothing else, it very much is overfit
>>19109 I feel like 16/8 Dim is low enough and I don't have enough dataset to lower the repeats. Do I reduce the learning rate or go back to few epochs before?
>>19112 >food >soup >soufflé >cup >bowl
>>19112 goyslop
about to go screencap an anime to get more material for a character lora all that for the fucking anime style since the character itself is done fine by the existing material
(428.50 KB 500x266 5K4d.gif)

>>19102 >>19103 Total nigger death.
>>19052 Thanks. Trying it right now but it OOMs for some reason and idfk how I would make it less retarded about memory
>>19117 nvm it was batch size, didn't think they had that
>>19115 Good luck, I’ve been fixing my anime screencaps all week
>>19050 >/a/ would be a perfect place if they didn't hate us aichads Anons generally don't mind but /a/ has a long-established cabal of artshitters and their orbiters. The orbiters are some of the most pathetic fucks I've seen on cuckchan, honestly. They absolutely are monitoring threads for any sign of AI art to immediately start cancelling anyone who posted it.
>>19120 I used to never pay attention to those fucks, ironic how fate plays out
Today is the day I get that new rig built. ... probably.
>>19122 What's the last non-fucked jewidia driver btw? I've heard the new ones will swap to system ram if your vram is full
>>19122 Do you have the parts already or waiting for stuff to come in? >>19123 Nani the fuck? When did that happen?
>>19123 531.79 is the latest non kiked drivers "you don't NEED more VRAM, see, we can just tank your ram and rape your pagefile instead!"
>on 531.18 based im safe
>>19124 >Do you have the parts already or waiting for stuff to come in? Minus the RAM kits I had to send back (because I didn't know about DDR5 being the worst yet with all 4 slots filled) I've had all the parts since March/April... >When did that happen Recently IIRC, driver version 532 or 535 >>19125 I wouldn't mind it if it worked well enough tbh, is it a shitshow even on high speed DDR5? If it doesn't I can always try to snag an identical 3090 in case we get multi-gpu support for prompting but 4090tards are NVLinkless
>>19127 >is it a shitshow even on high speed DDR5? I don't know I'm a craptop vramlet so this change makes SD practically unusable, I don't exactly care about marginal benefits on high-end systems to make jewvidia able to sell you less vram
>>19127 Unless you are gonna be doing HLL/Ufotable finetunes or god forbid train on SDXL (kek) having that 2nd 3090 doesn't really do you much good
>>19129 SIR but what about amazing prices 4090 very good try training sdxl please sir
>>19129 I can't imagine jewidia selling >24gb GPUs anytime soon and I don't plan on buying a 4090 even if it's used. I don't even know if it's gonna fit on my motherboard tbh
>>19129 at that point just rent a gpu/tpu cluster and get it over with
>>19130 kek >>19131 I wasn't saying to buy a 4090 instead, it was more of a if you have a 4090 already just stick with that unless you wanted to jump into the next level of autism. I have the MSI Suprim X with water cooling so the card is normal sized and I had space for the AIO, the normal Suprim would not fit my PC.
>>19133 >if you already have a 3090**** I can't type today
just realized I used adamw and not adamw8bit for my last lora lmao
>>19132 if you are gonna be forever a vramlet then maybe this is the only option, or what someone mentioned the other day was everyone pulling some cash together to rent a cluster in the meanwhile that distributed training is still not an option >>19135 who is this?
>>19133 eh you're probably right. But again, what if we get multi-gpu inference and not just training? By the time we get to it I could probably snatch one up for very cheap
>>19137 >if you are gonna be forever a vramlet I mean not being able to finetune models is a bit outside vramlet territory, but yeah I just wish AMD woke up and started banking on consumer AI demand, cards with loads of VRAM and a better software stack
>>18785 oh welcome back. i distinctly remember that finetune and was wondering where you were for like months. do you think it's is in a respectable state to publish yet or are you still working on it? i highly appreciate your work, anon.
>>19138 How many 3090s could one SLI/NVL together? Because you could also run those larger LLMs that need 30-40GB VRAM or whatever the requirement is. I just know a single 30/4090 isn't enough
>>19120 I couldn't imagine that ai art would cause so much seethe back when it all started
>>19141 I think it's up to 4? I'd only have 2 at most assuming the second one will fit on my motherboard Mine is the FTW3 Ultra non-Ti
>>19142 Yea, it's so fucking weird. Lots of insecure "artists" really get their panties tangled up. I am kind of surprised that we already accelerated to (((hollywood))) actors kvetching about not wanting to be replaced by AI because of the strike. Actors shouldn't be afraid of Hollywood replacing them, it's autists like us not needing them anymore when making our own entertainment that can be distributed and also side stepping their forced propaganda bullshit. >>19143 yea my board can't SLI 3090s so I'm stuck being a 4090kek. More than likely if I wanted to step up I would start looking at those A100s or whatever.
>>19145 I have a friend that for some reason gets completely disgusted as soon as I post anything that I have genned and touched up. The shit he posts himself that's made by "real human beans" looks like pajeet 1girl gens with shit hands, dude's a bit of a clown.
(744.65 KB 768x1152 60133-892474572.png)

(1.36 MB 1024x1536 60561-4203842168.png)

(2.04 MB 1536x1024 59112-2739756415.png)

(1.91 MB 1280x1536 58689-23857752.png)

>>19140 >get mentioned on /g/ randomly with HLL anon yesterday >people find me here too lol... but yea, real life, being sick, and fatigue kept me away for a few months. Latest training was a dud, have a couple other posts explaining the issue above, spending the entirety of this month doing a deep clean. Did another ffmpeg extract of the heaven's feel movies, almost done doing a first clean of HF3 before me and my friend start manual tagging some of HF1. Got some other ufotable backlog that I may shove in with a not so cleaned dataset just to see how the model could evolve after all the ufotable fate/type-moon stock dries up for the foreseeable future after Mahoyo and the potential OPs that could be animated for the other two Tsuki remake routes when Nasu finally stops fucking around and finishes that. So yea, I'll see at the end of the month if I got something worth sharing. Speaking of ffmpeg, I noticed that if you do a full frame extract running the simple command, the pngs are massive, like 4~8MB an image (a 2 hour anime movie is around 173k images) but if you run ffmpeg with mpdecimate, the saved/filtered frames are like a 1/4th the size but you get pinged with lots of false positive removals, a problem which plagues my current dataset. I tried all sorts of settings on the command line but I still get those huge filesizes, anyone got some suggestions? I'm trying to conserve HDD space while I wait for my NAS to come in next month and almost filled a 16TB drive with my datasets and models.
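NTA but for the PNG size issue: ffmpeg's PNG encoder defaults to a fairly fast zlib setting, so cranking the compression effort and predictor usually shrinks frames noticeably. Worth double-checking these flags against your ffmpeg build (newer versions renamed -vsync to -fps_mode):

```shell
# mpdecimate drops near-duplicate frames; -vsync vfr stops ffmpeg from
# re-duplicating them on output. -compression_level / -pred trade CPU for size.
ffmpeg -i movie.mkv -vf mpdecimate -vsync vfr \
       -compression_level 9 -pred mixed frames_%06d.png
```

If that still isn't enough, lossless webp (`-c:v libwebp -lossless 1`) is another option, at the cost of a conversion step later if your tooling expects PNGs.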
>>19147 Some dude on g just keeps mentioning you and wondering where you disappeared lol. Seen that several times already I think
>>19147 >make a finetune out of god knows how many episodes and even movies >hands are still as fucked up as ever pain
>>19149 They certainly look better than NAI hands. But even midjourney still struggles with hands somewhat, it's just hard for AI to do
>>19148 I notice that every time someone posts a really good fate image that poor anon asks if it's me lol I feel bad because the rare times I do post on /g/ no one fucking notices me lmao >>19149 >god knows how many episodes and even movies uhhhhhh All of Zero, UBW, HF, Knk 7 8 9 9.5, plus all of bible black except new testament, and lady innocent for porn data. Still don't have key art, posters, or scans in the dataset which can also help. The images I'm fixing right now are for HF, and I have to fix all the image composition tags across all the images. I am doing so much fucking work.
>>19150 >tfw AI can sometimes do feet better than hands What did SD mean by this?
>>19151 >I notice that every time someone posts a really good fate image that poor anon asks if it's me lol Yeah, I know, happened 2 times to me already kek. >I feel bad because the rare times I do post on /g/ no one fucking notices me lmao Probably unlucky timing dunno.
>>19152 It means we're about to win.
Sorry for the OT but I'm having a total blackout at the moment, is there anything I should back up on this current W10 install/drive that I'm not gonna be able to access later even after backing it up? I'm thinking just saved passwords rn
>>19155 uhhhh what exactly are you doing?
>>19156 Procrastinating instead of building the new rig... I'm gonna repurpose my OS drive + another identical drive for MacOS but I'll make a backup of the entire drive first and idk what I might not be able to access later
>>19157 As long as you know all your passwords I don't see the problem with what you are doing. If you were to load the drive as an external it will just ask you for the admin/user's password whenever trying to access files on it. If you are paranoid, make a separate backup of your most used items on a different external
>>19123 It'll eventually be fixed apparently. But for now, downgrade.
>>19159 >it specifically affects SD jesus christ lol
(679.64 KB 640x800 catbox_c3f9vo.png)

(692.98 KB 640x800 catbox_r1b5fe.png)

(2.43 MB 1280x1600 catbox_a73q9u.png)

Retraining fiz rot again this time with 2 tagged characters. Moving up to 16 dim because I'm really starting to feel the limitations of 8 on this one. Plus having a lower te lr seems to work well on 16 dim whereas before having all 1e-3 lr had some weird results. Though I'm sure I could also benefit moving from lora to some other lycoris variant
>>19162 based tag dark-skinned male so i can take the occasional nigger out, thanks
if the easy training scripts anon is here, do you know if dadaptation is on its v3 version or do you plan on implementing it?
>>19160 it probably affects anything ML-related, it's just that other victim cases are niche enough that nvidia could gloss over them as 'just buy a bigger card you poors'
>>19166 >activation token for a style lora https://youtu.be/pQzCptidLmw
>>19163 I cropped most of the males out but I'll tag whatever's left
>>19168 based, thanks. I can take the takorin out whenever it gives me problems (btw if you have nothing to do it'd be cool if you could do namako/takorin, I've been working on a deniggerized dataset and the current one is very overfit) but fizrot is essential to the style mix I've settled on
>>19169 Sure I'll give it a shot
how does the WD team still have a following they've literally never delivered on the promises they made
(155.89 KB 960x960 Starbound-Game-Logo.png)

>>19171 >how does the WD team still have a following >they've literally never delivered to the promises they made I got extremely angry after reading this and I couldn't figure out why for a few mins. I just remembered.
>>19172 The greatest scam of our lifetimes
>>19173 I'll preface this by fully admitting that I enjoyed launch NMS for what it was but I still can't fucking believe how hard NMS and Hello Games got shat on (and still do all these years later thanks to le funny jewtube grifters like crowbcat) and how EVERYONE (especially journos because the only thing they love more than a shooting star is one that's about to crash and burn unless it's pozzed - SB absolutely is) conveniently memory-holed Scambound.
>>19174 The excuse for this is "but starbound didn't cost as much, you need to have higher standards for AAA-priced games" but it means absolutely nothing coming from the very same faggots that never had standards to begin with.
>>19176 If you need a dataset with no niggers I'll get to work, I've only done a few so far since I don't gen non-solo stuff. If you've already scraped everything post it here and I'll start up the TND machine (photoshop)
>>19177 I'm pretty happy with the dataset I've got, just trying other training settings. If anything I would want to prune some of the pixel art that looks more like sprites than illustrations or that use techniques like dithering. I just grabbed the torrent from here https://e-hentai.org/g/1951059/924d042aea/ and added a few more illustrations from their twitter. I also made sure to only use pixel art that is at native resolution and not upscaled
>>19176 folder's still empty for me
>>19179 Whoops should be good now
>>19178 >the pixel art that looks more like sprites than illustrations or that use techniques like dithering I love those tho :( Guess they're just too different from the rest of the dataset?
>>19178 >>19181 Also do you want me to scrape his kemono page? Tons of new stuff there
(2.25 MB 1600x1280 catbox_6bszme.png)

>>19181 They tend to accentuate aliasing and add weird patterns >>19182 Already got it just sorting through it
>>19183 Alright, thank you
>>19178 think you swapped dagasi for ogipote by accident btw
>>19178 >>19185 jcm is also missing
>not range banned on 4chan anymore damn I guess the shutdown wasn't so bad
(2.36 MB 1280x1920 00106-4092915998.png)

(3.09 MB 1280x1920 00068-472821130.png)

(2.60 MB 1280x1920 00069-273251441.png)

(2.65 MB 1280x1920 00065-2598249304.png)

(2.48 MB 1280x1920 00100-273780709.png)

(2.74 MB 1280x1920 00000-2672915640.png)

(2.52 MB 1280x1920 00915-1141534038.png)

(2.87 MB 1280x1920 00194-138434930.png)

>>19186 Both fixed, thanks. Did a folder size check and Jungle was the only other one missing.
>>18783 Was about to ask / complain if LoKR was functional yet. Guess I'll see with this, thanks.
>>19166 holy crap that's fucking bad lmao
Anyone have a good suspension/hanging Lora or a good bondage lora in general? I think I tried everything in civitai and it sucks
>>19161 This is pretty good and on model looking! You made this anon?
(1.60 MB 1024x1536 catbox_szmqsn.png)

>>19176 If you want some feedback from some very quick tests compared to this one https://mega.nz/folder/JhlQwIxR#sunbu0qeSPhP2REqylIhxg/folder/Rw81lZLY Yours seems blurrier but way less noisy and seemingly less prone to getting random artifacts (this is probably a good tradeoff), clothes are much less (but still) overfit when it comes to characters present in the dataset (it's still over parseebros) but characters and character features are way underfit/less consistent now. Eg the other LoRA had some impressive consistency when it came to Cirno's wings or Momiji's hat (not perfect by any means but WAY better when compared to base models) and you could prompt most popular 2hus by name without much help if any was needed at all. Pretty much can't do that anymore, for some reason even popular 2hus - yes, even Marisa and surprisingly enough even Reimu sometimes (she's the most consistent but it can absolutely fuck it up) - can make it shit itself and create some Cookie☆ level OCs (EXCEPT Reisen somehow. She doesn't need a LoRA anymore, something that irritated me since the og takorin LoRA wouldn't stop mangling her features) Have a burnt Cirno for your troubles (I replaced the old takorin with yours but it took a while to get the wings to be in the same ballpark as the old one)
>>19196 >you could prompt most popular 2hus by name without much help if any was needed at all with little help* I was typing something, changed my mind, typed something else, went back on myself and forgot to correct it anyway
>>19178 >I also made sure to only use pixel art that is at native resolution and not upscaled >>19196 >Yours seems blurrier You should be pre-upscaling all takorin's stuff with nearest neighbor since you're otherwise relying on kohya's code to do the upscaling for you which is going to make stuff blurry in preprocessing.
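e.g. something like this before feeding the folder to the trainer (numpy sketch; 768 as the target side is just an example bucket resolution):

```python
import numpy as np

def nn_upscale(img, target_min_side=768):
    """Integer nearest-neighbour upscale so the training script never has to
    resize the pixel art itself. `img` is an HxWxC uint8 array."""
    h, w = img.shape[:2]
    scale = max(1, -(-target_min_side // min(h, w)))  # ceil division
    # repeat each pixel `scale` times along both axes: crisp, no interpolation
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)
```

Integer factors plus nearest neighbour keep the pixel grid exact; any fractional or bilinear resize is where the blur comes from.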
>>19196 >>19197 This is good feedback, thanks. The dataset is definitely lacking some 2hus but in general I don't try to have style loras be better at prompting specific characters. That's why I have a different category (multi), where I go into more effort sorting, pruning, tagging, etc. individual characters. If the tagger tags characters I do still leave them in though. In the case of Cirno, though, I think it's just to do with how lora interacts with the base model because I don't think the other lora has any extra Cirno images that I wouldn't have found somehow. I would suggest trying the te-1e-4 version if you haven't already because it's more flexible so prompting Cirno should be easier. I also just uploaded v2's which should be a marginal improvement. I included some stuff from fantia, removed some images with bad dithering/banding, and the few images that had patterned backgrounds. >>19198 I would agree with you if I was trying to make a lora that was meant to look pixelated. I've done testing on this previously and yes the built in upscaling is bad and should be disabled by default >>18309
>>19199 >The dataset is definitely lacking some 2hus but in general I don't try to have style loras be better at prompting specific characters I respect that though I think it should be handled on a per-artist basis imho, in cases like these where 2hus make up most of the dataset I don't think the extra consistency would be bad. You could probably get away with putting "touhou" in the negs later on if it ends up causing problems. At the very least it shouldn't be worse than the base model (which it definitely is right now, tested on b64v3 and AOM2-corneos7thHeavenMix_v2_40, which kinda sucks tbh, the results are MUCH noisier and grainier on it but I tested it anyway 'cause that's what the old takorin one was tested on apparently) >In the case of Cirno, though, I think it's just to do with how lora interacts with the base model because I don't think the other lora has any extra Cirno images that I wouldn't have found somehow. I would suggest trying the te-1e-4 version if you haven't already because it's more flexible so prompting Cirno should be easier. Will try that and the V2 but the consistency definitely isn't placebo. Judging by how it absolutely refused to let me change Parsee's clothes I guess the OG one is just overfitting but when it comes to the wings and hat and other character-specific details that might be a good thing since there's usually so much gacha involved.
>>19200 Looking at the og lora I don't see any tags for cirno and only 11 with "blue hair" which lines up with mine. But I'm still gonna try some Cirno prompts of my own just to see what happens
>>19201 Maybe it's placebo or maybe I've been getting very lucky but the wings are definitely more consistent than base b64v3 with the og takorin. About to try your V2. te-1e4-dim8-alpha8 seems to be the better one judging by the previews I'd say that aside from weird issues like a very mangled reisen and parsee's clothes overfitting to hell and back (lmao get it?) the most irritating issue about the og one is colors being outright and consistently wrong when prompted, especially blues and cyans, they tend to turn green/aquamarine for no reason.
(1.51 MB 1280x1600 catbox_lwfgk9.png)

(464.09 KB 640x800 catbox_2oei9x.png)

(1.17 MB 242x243 feet.gif)

>>19203 goddammit we're so close
(483.34 KB 640x800 catbox_e9je6x.png)

>>19204 so close...
(122.73 KB 219x182 whyMelt.gif)

(774.72 KB 640x800 catbox_2xyrjd.png)

(760.53 KB 640x800 catbox_1aiu5v.png)

(830.54 KB 640x800 catbox_bkgoar.png)

(670.76 KB 640x800 catbox_8jbv0r.png)

Some initial comparisons
(486.69 KB 4000x1175 catbox_6k6x9c.jpg)

(2.44 MB 2560x939 catbox_5ff5nl.png)

>>19207 Latent off vs on. Latent really wants to keep the panties huh
>>19172 fuck bros I didn't need to remember this how do you make your game worse with a 1.0 release ffs also the puppy It was indeed memoryholed hard.
>>19209 >how do you make your game worse with a 1.0 Considering how every single update (and I mean actual update, not the auto-built nightlies that contained zero changes most of the time) to the beta (btw there was like a full year? year and half? with literally zero stable updates, remember that?) redesigned the entire combat system to make it as painful and tedious as possible it's not really a surprise tbh
(756.39 KB 640x800 catbox_ljdiol.png)

Last one for the night because I had to
>>19209 >>19210 The fact that I felt LESS robbed by the CubeWorld alpha for the same amount of money really says something about Shitbound.
>>19212 I mean at least Starbound is playable and has mods (dev of biggest one is an insufferable retard though). Cubeworld is basically empty.
>>19214 >is playable debatable >has mods I really don't care about the furry scat vore mpreg shit.
>>19213 I meant if you made the LoRA…
>>19216 Lol. No. There's plenty of Madoka loras already. I used some civit lora, probably this one. https://civitai.com/models/73593/homura-akemi-madoka-magica
>>19218 disgusting 3d garbage
>>19211 >spread armpit Where did you get this?
not quite what i wanted or where i wanted but fuck it, close enough
>>19223 Based gym girl enjoyer. Genning chocos squatting heavy ass weights mite b cool.
>>19224 Prompt was on bed with gym shorts, kept rerolling cause it kept drawing the shorts as panties and/or forgot the tail, got tired and settled for that despite having gym in the negatives. Oh well.
goodnight, /hdg/
>>19225 That's funny but I've had similar issues with gym shorts in the past. >>19226 Very hot and thanks for reminding me that I have to correct this brat myself.
>>19227 I'd have kept rerolling but I got extremely stubborn earlier while watching something and just batch genned like 150 pics total in the background with a shitty hugging own knees lora + variation seeds so that burned me out. Still not happy with it so maybe I'll inpaint it later but it definitely wasn't worth the burnout lol. I'm too stubborn for my own good
Do we know if NAI used that gwern scrape of Danbooru2021 or if they procured their own? Trying to gauge a timeline of their dataset.
>>19229 latter
(1.03 MB 1600x1280 catbox_zh5pud.png)

smug
>>19231 Kek Did you make all those Kirby ones on /vtai/ or just that one
>>WDXL failed because LR way too low and no caption on 80% of the dataset >I don't even know how they manage to be this incompetent >I get that the search space for hyperparameters is more difficult on this one (not that they mastered base SD training either lmao) but still Can anyone confirm this /hdg/ post?
>>19233 Yeah I did those too
>>19234 that's my post >They said i) LR was far too low. ii) They didn't put captions on 80% of the data source: OP of SDXL Tuning thread on (sorry not sorry) furry diffusion d*scord He has been talking a bit about SDXL training and seemed to know things about WDXL ("WD says caption dropout is essential, but kohya doesn't support it") so the info seemed trustworthy enough >>19236 oh well better source
>>19237 I thought Kohya used to support captions but one update got rid of them? It was part of the JP only readme of the finetune guide in the original sd-scripts GitHub. My first ever Fate Zero/Ufotable finetune attempt was freaking out because some files didn’t have captions but I noticed in future attempts that mid training the script would recreate the meta json file and remove all the captions. Always thought that was strange.
i don't get the animosity towards furfags when they seem so much more competent at this shit in general can't we just exploit them instead until the WD team gets its shit together? (never)
>>19236 For some reason I felt bad after reading this
>>19239 4chan “yiff in hell furfag” culture. Especially among giga elder oldfags from SomethingAwful pre-4chan days. And you always had those super high profile furries who were just batshit insane like with that infamous furry convention. I can tolerate them as long as they don’t go out of their way to piss me off but it doesn’t mean I’m down with their shit. But in the case of stable diffusion, I’m willing to play nice with them towards a common goal of improved models.
>>19241 oh nah i meant in the context of stable diffusion exclusively
>>19239 furfag "shill" here I do get some of the animosity, it's not a pleasant community to be associated with and they're weirdos. Also the (shunned in furry communities too) actual pedos and zoophiles. It'd probably be less tiring to "shill" if the effort came from idk, realistic models or something, since that's more acceptable (though they'd still tell me to fuck off). but yeah at this point we could just try properly asking them for help, BUT we'd need to come with a rock solid dataset in hand, which we definitely don't have. First step would probably be retagging boorus with a new autismo tagging system, but A. that'd need a large community project and cooperation by booru owners (for streamlining, and coherency) B. it can't garner too much attention either because shitters could try derailing the process because muh AI bad (but I could be overthinking how much AI is hated)
>>19243 refer to >>19242
>>19243 >B Essentially you need to either keep shit on a separate 8chan board for this or do it on a closed-off platform like a private Matrix
>>19245 Yeah but then you'd lose the only advantage the anime community has: numbers and seeing as it's not just a few harmonization projects we need (like the furries have), a few tards doing it will go nowhere
>>19243 Point B doesn't matter if you frame it in a way that it just makes searching for images useful. Basically something even non-AI folks can take advantage of. In fact if you propose this to booru owners you should frame it exactly like this too, because if your personal end goal is for AI purposes I'm sure they would not allow it. It should be something both sides can benefit from.
>>19247 Good point, yeah, that's a better way.
>>19243 >muh AI bad (but I could be overthinking how much AI is hated) You aren't overthinking.
>>19246 Numbers don't matter, as WD is the perfect example of both having many people that either do shit poorly or nothing at all, or having a small group of "quality" ""devs"" that don't know what they are doing. It's honestly just a matter of finding the right people for the job and hoping they have the spare time (or neet free time that needs occupying) and generosity to commit to the work at a regular interval. If it happens that most of them are furry adjacent, so be it, so long as they know what the common goal at hand is. >>19247 The only flaw with this is that at this stage of the game, you'd be hard pressed to find anyone who could be asked for this kind of help who hasn't already picked a side in the AI wars bullshit. Any attempt at cleaning up boorus will be met with "but the AI scrapers". Hell, even in some danbooru topics/threads(?) they were bitching about ways to fight scrapers and trying to manipulate the tags. This battle is at a stalemate until a culture shift happens.
>>19249 lmao all those e-hentai galleries with half star ratings prove that too well.
Forgive my naivete but what's stopping us from gathering somewhere private like Matrix/discuck (just for easier archival and faster communication), scraping a booru (or multiple, then de-duping) and then having some big dick GPU anon autotag the result (I'm pretty sure that at this point the AI can tag better than the average non-furry booru user), splitting the dataset up, distributing it and having people clean it up manually bit by bit? If a pic is bad just put it in a separate pile and go back to it when the good shit has been retagged. I get that some people think that it'd be a useless endeavor if you don't have the hardware to train a model but we gotta start from somewhere, no? Just establish some easy to follow rules (eg scrap guro and scat, scrap sketches if the lineart isn't pristine, etc) that you can't possibly misinterpret.
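The scrape-then-dedupe step above is the easy part to automate. A minimal sketch, assuming exact duplicates only (boorus key files by MD5, so cross-booru dupes of the same file get caught; near-duplicates like resized reposts would need a perceptual hash instead):

```python
import hashlib
from pathlib import Path

def dedupe_by_hash(image_dir):
    # keep the first copy of each exact-duplicate file, judged by MD5
    seen, unique = set(), []
    for path in sorted(Path(image_dir).iterdir()):
        if not path.is_file():
            continue
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(path)
    return unique
```

The surviving files would then go to whatever autotagger the GPU anon runs.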
>>19252 Most of them are legit fucking trash though, which really doesn't help.
>>19252 autotaggers are inherently at best equal to the tagging system (statistically). This works for loras, but not for finetunes and certainly not for full training. Another problem is that the tagging system of boorus is very wonky and not descriptive enough (see images on e621 with easily 4-5x the tags of images on boorus), which is bad for AI. There are ways to augment the results from taggers I believe but it's still nowhere close to having a strictly tagged dataset with "tag what you see" rules
>>19253 oh yeah there's a lot of oversaturation of the galleries with half dogshit gens.
>>18756 I'm out of the game for months and b64 still gold lol
>>19251 I think there are only one or two exhentai galleries I've seen, advertised as AI, that have 4/4.5 star ratings. And I only found them while I was looking for cg sets lol.
>>19255 That and fuckloads of this 3dpd garbage and fried trash. Honestly every time I see AI anime gen outside of the containment threads it's just a fried AOM2 super generic-looking shit.
>>19254 Which is why I said "split the dataset up, distribute it and have people clean it up manually bit by bit". The autotagging is just to get a headstart, it's easier to clean up and work with if it's already half done. I didn't mean to imply "lol autotag it and wing it"
>>19256 only so much improvement you can squeeze out of a <1b parameter model
>check e621 for the first time
>look for their tag wiki
>was harder to find than the danbooru one
>also harder to navigate the wiki pages in general
https://e621.net/wiki_pages/1671 This page is safe, it's only not safe if you click the tags themselves because it will show example images. Only linking it so we can get a more accurate take on their tags. Like the Laion 5B problem with Stable Diffusion, just because the tag exists doesn't mean the tag will do anything if there aren't enough hits associated with the keyword. As for danbooru: https://danbooru.donmai.us/wiki_pages/tag_groups Me and my friend have been using this tag wiki as a reference to manually tag the fate shit and also double check the autotagger's work. In fact we just finished a tagging session on the HF1 stuff. My friend is a photographer/art fag so it's easy for him to see concepts beyond just the obvious 1girl background big boob shit, so long as I can translate art concepts to the nearest booru tag for him (if applicable) or fill in any gaps in his weeb knowledge if he sees something autotagged that he doesn't understand. It also helps that he's also a fatefag so it's probably a lot easier than if we were tagging something we weren't fans of. This is realistically how this shit is gonna play out. In my case, my friend remotes into my nigger rigged hack job VM set up with Hydrus, he starts doing work for an hour or two and then hops off. I made some custom rating sets in hydrus to use as a sign off/needs review system on images he finishes and also wants double checked to see if there wasn't anything that was missed. If you can apply a system like this where the images can be centralized and accessed by everyone, everyone can do pieces of work from time to time. The only flaw with Hydrus is that only one instance of Hydrus can run at the same time, and centralizing the client_db files to get around this won't work as then you would have competing versions of the db files.
If you split the dataset up into several chunks then MAYBE, but I am not a hydrus expert and don't know how to set up an actual hydrus server. I trust my friend so he just logs in directly with my VPN tunnel set up.
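For the "split the dataset up into several chunks" idea, a round-robin split is probably the simplest scheme: each volunteer's chunk samples the whole set instead of getting one contiguous artist/series block. A minimal sketch (chunk count and how you hand the chunks out are up to whoever organizes it):

```python
def split_dataset(paths, n_chunks):
    # deal files out round-robin so every chunk is a cross-section of the dataset
    chunks = [[] for _ in range(n_chunks)]
    for i, path in enumerate(sorted(paths)):
        chunks[i % n_chunks].append(path)
    return chunks
```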
>>19258 thought I was going to be hopelessly behind the curve, so feels ok man >>19261 seems NAI are going to train a new SD model after they dropped their new text models. too bad the last leak was an insane freak occurrence that'll never happen again
>someone posted the gelbooru tag wiki page on /h/ right after the danbooru and e621 ones were posted here kek
>>19264 bruh.
>>19262 maybe something like a parallel booru that only stores tags? With a first pass done by the three-autotagger script, a second pass to prune that and add missing ones, and then 1-2 rounds of quick validation? that and maybe something like https://tagme.dev/ but for boorus (tagging projects)
>>19264 It’s not a secret but shit that timing was not coincidental
>"Beta" is tagged as spam on 4chan for actually what reason? How the fuck am I supposed to refer to beta server changes, beta testing or beta branch then???
>>19266 I know that Hydrus has some booru feature but I don't know shit about setting one up. I also need help setting up that triple autotagger script because I have a list of unwanted tags that me and my friend are tired of having to mass delete every time I add new images with auto tagging, and the undesired tag portion of the script does not work for me. I could share that list so it's less work to do. In terms of the idea, it's not entirely different from what I'm doing. It's all about finding a way to centralize the data and allow people to connect to it and do manual work after the autotagging is done. I'm not familiar with how tagme.dev works, got a bit overwhelmed looking at the page. I assume this is a bunch of tagging script types? Also I was under the impression based on the previous posts that we were gonna try to keep this more underground, away from prying eyes that may want to fuck with a potential project attempt? A public facing booru wouldn't be a good idea if that is the case.
>>19268 beta -> dev?
>>19268 try beta lowercase? I know that when the spam detector flags "reddit" it only does it when it's in a certain capital or lowercase spelling depending on how you use reddit in the sentence. Like "back to reddit" gets flagged as spam unless certain words are in a certain capital/lower case, and I always forget which every time so I just default to "/r/eddit"
Since I have collaborator access to the webui repo I just learned I can see traffic statistics. GitHub only stores the last 14 days for some reason but still interesting to see these
>>19272 >yandex thats interesting lol
fuck I've been away longer than I remember, the latest commit on my a1111 is from 2023-02-20
>>19263 yep. i was hoping they'd train a 33b text model first but their new 13b model has the spark.
>>19275 yeah, pity they're not going to train a bigger text model. but I've burned so much time on text since kayra dropped, it's definitely good enough to waste a few months.
>>19269 >unwanted tags I've been hearing about that issue, haven't tested that feature so I couldn't really tell you unfortunately. >I assume this is a bunch of tagging script types? Nope, these are small manual tagging projects aimed at fixing some inconsistencies/omitted tags. You basically give a tag query, from which users then iterate through images, plus options for fixing it (remove tag X and add tag Y, add tag Z, remove tag W, ...). Picking one at random: https://tagme.dev/projects/hoof_tagging The project is aimed at tagging different kinds of hooves correctly. It gives you a base query (hooves -unguligrade -hooved_toes) and a few options to choose from for each image that encompass every case, along with tagging instructions. It uses your api key to do these actions when you press the button corresponding to the option, to help automate the process. Haven't really used it though, but the source code is available. We wouldn't have the same need for these at first since our tags are already vague and muddy compared to their autistic tagging system, but still nice to keep in mind. >A public facing booru wouldn't be a good idea if that is the case If we just don't broadcast the link, keep out any reference to AI and only say it's a retagging effort, that could be enough? idk various ideas: special tags for official art for series? I'm not sure how SD would interpret a prompt "series, official art" differently than a special token "series_(official art)" or something. Since a large benefit of training like this is embedded artist styles (which includes anime styles if enough pics) it could be worth looking into. Largely a detail though. Finetuning autotaggers along with dataset fixing: retag a statistically representative chunk of the dataset, send it for a tagger finetune, do another chunk (maybe while training? idk), repeat. Idk how tagger training/finetuning works so this may or may not work well.
>supermerger trainDifference oh god gpuchads figure this out
>>19274 whalecum back
>>19277 >I've been hearing about that issue I think those have just been my posts kek >tagme.dev explanation Ok I see. This could be something that can be done later when AI text encoding understanding matures and it becomes easier to tag and caption concepts or specifics better >If we just don't broadcast the link and keep any reference to AI and only say it's a retagging effort, that could be enough? I guess we can try, and then just lie if enough people try bitching/brigading. We also can't be public about the dataset if training the model can actually happen. >special tags; official art, somethingsomething_series I remove these, at least in my case. Because of gacha artwork being in demand, lots of artists that have done fan art of series get hired on to then do official art for the series and then you don't have a baseline of what is main artist vs rotating guest artists but they are all official art. Would be best to just do it how HLL has it and just make sure you have artists correctly tagged with their work and maybe have the anime/game series and related tags.
>>19278 what are you talking about?
>>19281 He's talking about a new merge setting in the supermerger extension. it takes forever when I tried it so you probably need a big dick 4090 to use it at an effective speed.
>>19282 >>19283 shit I don't even know how to get supermerger working period >pic related what the fuck is this?????
>>19284 idk, the maintainers of that ext (and most jap exts in general) aren't really great at explaining things I noticed this because some retard is using that to merge furry models with realistic/general models + messing with clip using model toolbox to make uncanny realistic furries (I will not sleep well tonight, I regret lurking)
>>19285 I've been wanting to use Supermerger and the other offshoot bayesian merger but instructions unclear. As I've said in other posts, my skill points were dumped in training, not merging or using the other cool shit I'm behind on.
>>19285 >>19286 as a longtime merger (for SD anyways kek) its a lot of trial and error, so this realistic furry guy was probably autistically merging and testing models until he got the result he wanted.
>>19286 Yeah bayesian merger seems very cool but even if I was full time neet (which I am regretfully not) I wouldn't have enough time to figure this out. I only use supermerger to quickly whip up a merge in the rare cases I need one. >>19287 most likely, but it does seem to make reintegrating concepts easier idk, haven't used it because gpulet
>>19284 Finally! A program to turn every LORA into Jimmy Carr.
(1.74 MB 1024x1536 00010-2128071200.png)

(1.53 MB 1024x1536 00087-20648222.png)

(1.63 MB 1024x1536 00018-4237149154.png)

(1.79 MB 1024x1536 00033-3871192271.png)

(1.59 MB 1024x1536 00093-3399522588.png)

(1.78 MB 1024x1536 00006-1755472217.png)

(1.84 MB 1024x1536 00113-2374576055.png)

(1.85 MB 1024x1536 00006-1616998851.png)

>>19291 >>19292 good stuff
(13.76 KB 181x181 1683192786714170.png)

>>19290 Damn brat! Needs Correction!
(2.93 MB 1280x1600 catbox_v2md2a.png)

(3.85 MB 1280x1920 catbox_ofy3vo.png)

>>19294 I want her to correct me
haven't seen umbrella anon, fishine anon and seraziel anon in a while
(912.31 KB 1280x1920 catbox_22pk0m.png)

Trying to do the Kirby thing but with Pikachu and Sango. Works pretty well with furry models but difficult to make it non-sexual
(1.16 MB 1280x1600 catbox_zm9rtp.png)

>>19297 and of course the moment I post this I get a much better one
>>19298 how horrifying
fucking dire when windows search works better than civitai's
>>19300 Did you use the new UI? You can just revert it.
>>19301 using the old one, you need to completely match names or it won't find anything
Why is Stable-diffusion so BAD with Glass?
>>19303 i mean, you gotta give more context than that. Glasses? Glass cups? Glass windows?
>>19304 Glass in general. SD can't make glass to save its life, it just looks like there's nothing there
>>19305 >It just looks like there's nothing there don't act like you've never walked face-first into a glass door
>>19306 I chuckled >>19305 try adding reflective glass maybe? I know it isn't a tag but using "reflective" usually works wonders on seethrough objects in my case
/vtai/ has some cool gens, sadly it's filled with the fat fetish shit that turns me off... man if I can't take that I imagine I could never take the shit that the furries post but I am interested in their methods for training and such knowing how it feels like we're at a stalemate here
>>19308 they probably use discuck, you can just turn images off when you're in their servers
>furries succeeded in mixing a vpred model with a non-vpred mix
>requires vpred yaml
the vpred infestation has begun
Is there a sleeping at desk/table LoRA? (even if it's not a PoV, just from the front is fine) Specifically with the hands hidden for obvious reasons
>>19311 I thought the yaml is only necessary for SD2.0?
>>19313 nta but since v-pred is part of SD2.0 it would make sense
what is v-pred anyway?
>>19313 yep with traindifference you can kinda mix vpred and non-vpred models, resulting in a vpred model >>19315 short answer: supposed to follow your prompts more closely or something long answer: https://arxiv.org/pdf/2202.00512.pdf
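For anyone who won't read the paper: v-prediction only changes what the network is trained to output. Instead of predicting the noise eps, it predicts v = alpha_t * eps - sigma_t * x0, from which x0 can be recovered exactly on a variance-preserving schedule (alpha_t^2 + sigma_t^2 = 1). A toy sketch of the algebra, not any particular trainer's code:

```python
def v_target(x0, eps, alpha_t, sigma_t):
    # v-prediction target: v = alpha_t * eps - sigma_t * x0
    return alpha_t * eps - sigma_t * x0

def x0_from_v(x_t, v, alpha_t, sigma_t):
    # invert it: with x_t = alpha_t*x0 + sigma_t*eps and alpha^2 + sigma^2 = 1,
    # alpha_t*x_t - sigma_t*v collapses back to x0
    return alpha_t * x_t - sigma_t * v
```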
>>19316 Yea I aint reading that shit, I'm just gonna assume this is a good thing when it comes to getting good shit from 2.0 into 1.5 or vice-versa or whatever.
>>19317 it's generally a good thing but you need CFG rescale, and you need to adapt your proompting style to it since it's different from regular eps-pred NoPE (no positional encoding) is another thing they've done which is good, seems to basically remove the 75 token chunk retardation and also makes it so the model doesn't pay more attention to the first tokens
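The CFG rescale mentioned here is simple enough to sketch: do normal classifier-free guidance, then scale the result so its std matches the conditional prediction's, and blend with a factor phi. This follows the rescale trick from the "Common Diffusion Noise Schedules and Sample Steps are Flawed" paper; variable names and defaults here are my own:

```python
import numpy as np

def cfg_rescale(pred_pos, pred_neg, guidance=7.5, phi=0.7):
    # plain classifier-free guidance
    x_cfg = pred_neg + guidance * (pred_pos - pred_neg)
    # rescale so the guided output's std matches the conditional prediction's
    x_rescaled = x_cfg * (pred_pos.std() / x_cfg.std())
    # phi=0 -> plain CFG, phi=1 -> fully rescaled
    return phi * x_rescaled + (1.0 - phi) * x_cfg
```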
>>19318 ok, guess we will let the furries beta test this shit
>>19316 >>19318 Not reading it either but I can't wait to try it out. No more ordering + more rigid prompts sounds like pretty big breakthroughs
>>19310 wait warmly, she's gonna fix the UI
>>19321 >she
>>19176 >>19199 By the way, if you're ever gonna redo it could you check the color tags manually please? Especially the hair-related ones. Seems like most colors want to turn green-blueish (not picrel)
>>19323 oh man I did NOT notice the eyes, I shouldn't have anime6b'd
>>19324 >he doesn't adetailer ngmi
>>19325 >adetailer I have a gambling addiction and I'd rather roll here than on Arknights.
>>19325 I prefer to manually inpaint tbh
>>19324 yes you shouldn't use that shit
>>19328 When it works it works.
>>19326 I'm a gpulet who maximizes yield and I haven't had much trouble with adetailer after spending some time tuning the parameters for my case.
>>19312 I took a quick look at danbooru since this seemed fun to do, frontal art of sleeping at desk is really rare. It's unlikely to be enough to train a conventional lora with.
>>19290 >>19295 that's a really good style, what lora?
>>18738 SDXL. If your PC is too weak to handle XL you can use AnyLora.
>>19333 go back
>>19333 Are you lost?
>>19336 >>19334 I'm sorry if your pc is too slow or you can't prompt but that's your problem. SDXL is the most advanced checkpoint at the moment but I understand that some people have weak GPUs. AnyLora is a standard checkpoint for anime pre-XL, that's a fact.
>>19337 You better be trolling because we actually ban people here
>>19338 Dude if you can't prompt for life or just poor it's not my fault. If you ban people for truth go ahead. The results speak for themselves, XL is just superior and de-facto new standard for anyone who can afford using it. Adapt or perish.
>>19339 >obvious stabilityjew trying to shill through reverse psychology yawn seen it before
>>19339 blah blah blah you are in the wrong neighborhood keep talking im just gonna report and ignore you
>>19341 >>19340 No argument as expected, of course.
>>19342 I'll bite. >poorfags >same exact recommended requirements as regular SD
Guess he's off the clock because the brigading on /sdg/ also stopped
>>19344 yeah it's 8PM in india
>>19344 >>19345 Off the clock and onto emad's pajeet cock.
These shills must be having a hard time if they have to come here of all places to try and fill their quota. Things in SAI HQ must be grim.
>>19347 (((they))) noticed we were trying to train instead of making more (((loras)))
>>19348 heh, my secret is that I'm always training
>>19348 >(((loras))) What's talmudic about (loras:1.3)?
>>19350 I suspect that their only option right now to bait people into XL is to just LoRA merge a ton of shitty civitai shits on supermerger.
>>19351 If you told me that the SDXL dataset had no niggers in it I would probably jump ship ngl.
>>19352 Good news, my dataset has no niggers
>>19353 This is a good day for White Jamahiriya.
GOOD MORNING SIRS
What the fuck is it with NAI mixes not being able to prompt Reisen?
>>19332 No style lora actually, just b64 and the Sango character lora. Also get the catbox script >>18742
>another day another insufferable shitshow on cuckchan
>>19358 can't you set up a script to scrape for catbox/mega/civ*t every few mins and check once a day to see if there's anything good?
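A scraper like the one suggested mostly comes down to one regex over the thread dump, since catbox file links always look like files.catbox.moe/<id>.<ext> (the extension list below is an assumption, extend as needed):

```python
import re

CATBOX_RE = re.compile(r"https?://files\.catbox\.moe/\w+\.(?:png|jpe?g|gif|webm|mp4)")

def extract_catbox_links(text):
    # unique catbox links, first-seen order
    seen, links = set(), []
    for match in CATBOX_RE.finditer(text):
        url = match.group(0)
        if url not in seen:
            seen.add(url)
            links.append(url)
    return links
```

Run it on each thread's HTML on a timer and diff against the previous day's list.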
>>19359 There already is a catbox scrape Gayshit gets updated regularly with the new megas And there are 3 chink mirrors of Civ
>>19360 problem solved then, no need to go on cuckchan
>>19361 only reason im on cuckchan atm is for the scrape or pestering people to post catboxes Other than I'm cleaning up my images as I check in
>>19362 how much bbc/unrelated bara-looking monster men shit is on there atm?
>>19363 lots of bitching, not enough image posting
>looking to the side
doesn't look to the side, just gives you from side gens
bullshit
>>19361 I still have compulsive desire to check those threads sometimes.
>>19365 Look to the side and look away and look at object all fucking do nothing for me
>>19365 "profile"
>>19367 >Look to the side and look away and look at object all fucking do nothing for me Looking away also either gives me from side gens or moves the eyes ever so slightly
>>19343 The requirements aren't really the same. SDXL certainly can't run on 4GB GPUs and requires higher VRAM for comfortable genning and especially training, but that's the cost of a vastly superior model. Listen man I have no stakes in this game. Compare SDXL to 1.5 and see for yourself. XL already does amazing anime and more fine tunes are coming. You can just admit that your hardware is outdated, no shame in that. But your angry outbursts are just sad and undermine the future of our community.
>>19368 Looking to the side should be the heading looking to the side, profile is an entire side view of the character.
>>19370 >Listen man I have no stakes in this game. lol, lmao >The requirements aren't really same. People have been recommending AT LEAST 8GB for SD since the inception of the OG Voldy guide. Maybe all the noxious fumes from Delhi's shit-flooded streets have started to affect your already negroid-level brain. >You can just admit that your hardware is outdated, no shame in that. I have a 1080 Ti and a 3090.
Don't even reply to him, just ignore him and wait for the mod to come around and take the trash out
(1.61 MB 1024x1536 catbox_nc0pux.png)

ANGRY WOLF PUSSY ANGRY WOLF PUSSY ANGRY WOLF PUSSY ANGRY WOLF PUSSY ANGRY WOLF PUSSY ANGRY WOLF PUSSY ANGRY WOLF PUSSY ANGRY WOLF PUSSY
>>19375 Now show her butthole
>>19360 i wish gayshit anon would divide links by date, it's hard to tell what's new
>>19376 No because I'd be endlessly rerolling for a tail that's actually attached to her lower back
>>19373 >>19372 Pretty much nothing but purely emotional responses and vitriol. >I have a 1080 Ti and a 3090. Well you obviously fall under promptlet category.
>>19379 And you're obviously a nigger. A 4090 has the same amount of VRAM as my 3090, you have no argument.
>>19380 He was posting the same bullshit on /sdg/, just ignore him and report him
>>19381 Calling him a nigger is way more fun tho.
>>19380 You just want to reuse same dumb prompts forever. Sorry you're not able to adapt to novelty.
ok I love vpred+NoPE. shit lets you cut your prompts down by half if like me you were doing large schizo prompts to get the model to do what you want. no more breaks, no more duplicate/stronger tags, no more absurd weights. it actually follows what you ask it to do
>>19384 Ok so how do we get this shit to work?
>>19385 FUCKING BASED
>>19386 gotta train from scratch lmao, since the only sd2.0 models I know of are base and furry. or you could try the trainDifference trick to port a model, but that would be a tedious process and even if "theoretically" it's v-pred, it's like the 22B text models: it's a frankenstein monster you should avoid if possible
>>19388 also, loras and embeddings (though less affected I believe) will prolly need to be retrained but since SAI moved on to SDXL it's fair to say SD2.0 will stay relevant for a while
>>19385 >not super recommended what did he mean by this >that is not going to happen holy fuck >>19388 I wanted to see how trainDifference would work. Couldn't you in theory trainDifference NAI/NAI derivative and 2.0 for the v-pred and see how it operates?
>>19390 >Couldn't you in theory trainDifference NAI/NAI derivative and 2.0 for the v-pred and see how it operates? yeah but from the little I understand of the vpred paper, proper vpred needs training on the v-loss because at terminal SNR eps-loss is zero or some bullshit like that. It'd work, but I doubt it'd do well. Still would be interesting to see
>>19391 it would be the closest thing to a NAI SD2 rebase you could get, even if it ends up being a hilarious dud. Would be even crazier if you could then still merge models into that frankensteined concoction
>see anyLoRA in a post >instantly worthless opinion simple as
>>19393 What is it based on? Counterfeit or something?
>>19394 Literally a bunch of mystery goyslob ingredients mixed together.
>>19395 Sounds like something I don't even have to test myself, not like I want to from the stuff I have seen people make with it. I still can't wrap my head around why people like Counterfeit-v3 so much.
>>19396 Counterfeit is such garbage, and I remember that people had problems with LoRAs on AOM3 versions and Based Mixes that had Counterfeit mixed in. And people STILL use it despite being told not to. I will never understand idiots.
(886.05 KB 640x960 catbox_75eehm.png)

Damn low res gen💢💢💢💢 needs upscaling correction💢💢💢💢
>>19397 Yeah I don't like it at all but I was able to make something decent with it but the hands gacha manages to be worse than NAI with latent upscaling. >>18172 >>19398 kek
>>19399 yea there is something super funky about Counterfeit, the guy doesn't even explain what his parameters are other than it being a "trained checkpoint". It's crucial information I would want to know because it's also stuff I've got to look out for when pushing my dataset number higher on my finetune as things start getting cleaned up and ironed out. Last thing I want is some setting or something in the way I trained the model making LoRAs not work. I wish HLL Anon was still here, I don't wanna bug him on /vtai/ out in the open.
>>19399 >>19400 it's a model made for women
>>19401 kek women (male;eunuchs)
>>19402 yeah maybe those ones as well, but counterfeit always gave off that "female artcel" feel to me, dunno
>>19403
>Bright saturated colors
>Excels at making handsome males but can do females as well
>can be a little sexy but not oversexualized, just "cute sexy"
>very artistic details
Yeah it's very much fem artcel
(74.36 KB 235x509 CIVIFUC.jpg)

>Civitai not allowing fun again Someone managed to get this Lora before it was deleted?
>>19405 How odd, it's not even on the chinese reupload site
>>19407 >you like krabby patties don't you Sqidward?
>>19405 It got uploaded like a few hours ago, didn't last long
How do you guys usually go about getting details right? Cropping the details and adding proper tagging or just upping the resolution in training could do the trick?
>>19360 >3 Links for every mirror please. sd4fun is kinda wonky more often than not.
Anyone burning the witching hours/early morning oil? I slept through the evening so I'm wide awake working on my images.
Civit AI just began a new giveaway for PROOMPTING to win a 4090 - no training required. Amazing chance for anyone to win a 4090 or other cool hardware to use SDXL to its full potential.
>>19413 oh interesti- >SDXL to it's full- go back
>Can't train SDXL? Prompt SDXL images to win in our SDXL 1.0 image generation contest! It's almost as if the shills have been reading everything we've been fucking saying and are now tripling down.
>>19412 woke up an hour ago, mornin' >>19413 don't you have a street to shit on
>SDXL shilling starts again >sdg is flooded with ugly realism shitskins GOOD MORNING SIRS
>"Civitai? Emad and Joe Guitar here, make an SDXL training contest with 4090s as prizes and we will give you more VC money" >"Sure but don't you need a 4090 to get people to train models?" >"No its easier to finetune XL! No worries" >bitching and seething on reddit about not being able to train on a 3060 or 4070 >"Quick! make a contest that only needs you to prompt to win a 4090!" man this is getting wild
Did we ever figure out why NAI's vae always fucking breaks?
>>19250 Well one of the Waifu Diffusion devs training SDXL is in contact with Danbooru admins now sooo... (cap from the AIBooru Discord)
>>19420 oh hey that's one of my friends
>>19419 Don't think so, is a fix even possible at all?
getting to latest a1111 was a pain. got fucked trying to update with git pull so started with fresh clone. only to get fucked there too with
statvfs = wrap(os.statvfs)
AttributeError: module 'os' has no attribute 'statvfs'
aiofiles released a new version 3 hours ago that is clearly bugged. had to uninstall that and manually pip install aiofiles==23.1.0. seem to be getting somewhere now.
>>19419 Just bad training that ends up not liking certain latents I assume. Even casting to bf16 on pytorch nightly doesn't completely resolve it.
>>19420 It’s a start, I suppose it’s still gonna take some time to clean every single one of those images. The other thing would be to try and look for other sources of images so we aren’t stuck with NAI quasi-1.5. >>19421 Which one and are they useful?
>>19425 fredgido, the AI booru admin We don't talk much but he's cool
>>19419 It only breaks when I use controlnet reference and certain loras like Lowra for me. Obviously using --no-half-vae.
>>19427 based lowra user
Using remote desktop to prompt from the couch but it's a miserable experience, is there a better way to do it without enabling the share startup argument? Gradio breaks at random whenever I try that
>>19424 So, are we gonna actually try to do this dataset thing we were discussing the other day or was it just game theory craft talk?
>>19430 Ignore the reply, click you accidentally
>>19430 isn't the real problem paying for compute?
>>19429 --listen? If you're doing it locally there's no reason to use --share. If you are doing it remotely look into ZeroTier or Tailscale, don't use Gradio's unknown binary.
>doing the whole board for your deranged porn tech addiction Get help fuckers. https://www.youtube.com/watch?v=UO4N2qQdwuI
>>19434 Oh look, a kiwinigger.
>>19432 Dataset work doesn't require compute, plus it would be a good bargaining chip to get one of many potential third-parties to help train it, assuming you don't plan to outright sell the dataset to everyone.
>>19433 That worked, thanks. Somehow didn't know about --listen. Are there downsides to it?
>>19436 So you get a dataset together and then ... profit? Don't think anyone will line up to train for us.
>>19438 if you are not going the shekel route, someone said Haru at WD is open to training datasets, so he would be your Free Space on the bingo card.
>>19437 christ gradio is absolutely raping my phone
>download a char lora for funsies >its toasty as fuck
>>19442 >>19443 Was testing out little lyrical loras from civitai Only found Kyouka/Mimi with Misogi not existing and the two that exist need to be set at like 0.4 because of how toasty they are
>>19423 It's fixed now. Funnily enough the issue stemmed from Gradio. https://github.com/gradio-app/gradio/issues/5153
>>19444 >0.4 You can never fucking trust a civitai shitter's work >>19445 Gradio strikes again
>>19445 just my luck lol trying out some of the new extensions with this fresh clone. want to use the regional prompting
>>19426 Considering the type of stuff fred posts in the *booru servers this does not surprise me someone else here knows him lmao
I don't know any of these people so I don't know shit
>>19448 I mainly know him from the early NAI days, we formed a touhou-themed AI dickscord with a bunch of people with good gens from the very early /hdg/ threads but it's been inactive for a while
if all you use models for is merging and generating, is there any downside to pruning everything to 2GB f16?
>>19452 I would say prune your end mixes only
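For reference, the fp16 downcast itself is trivial; a minimal sketch assuming numpy-style tensors (the commented safetensors calls are from its numpy API, double-check against your install):

```python
import numpy as np

def to_fp16(state_dict):
    """Downcast float32 tensors to float16; leave ints/others untouched."""
    return {k: (v.astype(np.float16) if v.dtype == np.float32 else v)
            for k, v in state_dict.items()}

# With the safetensors numpy API this would look roughly like:
#   from safetensors.numpy import load_file, save_file
#   save_file(to_fp16(load_file("model.safetensors")), "model-fp16.safetensors")
```

Halves the file size for the float32 weights, which is most of a checkpoint; actual "pruning" (dropping EMA/optimizer keys) is a separate step.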
>>19450 Now I'm curious if any of my gens landed in there :^)
>>19451 Cute >>19454 Probably as mkbin or whatever his name is yoinks stuff from here. Personally I'm not in any AI related discords as I'm too lazy to have 2 accounts.
>>19455 >Probably as mkbin or whatever his name is yoinks stuff from here. That is me. I've been here since the SD leaks on /g/ last year, been around for every /hdg/ thread. Also just one of my aliases
>>19456 I see.
><lora:LoRA_youmukun:1> Anyone got this? Couldn't find on civit or gayshit
>>19456 >just one of my aliases I see, another fellow multi internet identity user schizo, we may even be the same person!
(1.57 MB 1024x1024 catbox_y34xjl.png)

(1.51 MB 1024x1024 catbox_fyqlc0.png)

(1.79 MB 1024x1024 catbox_pm6bn2.png)

see if this catbox thing works...
>Use lycoris >It works >Add a lora to the prompt >Lycoris stops working completely I don't get it.
the real addiction is trying to inpaint a hand for 30 minutes straight only to give up in disgust
>>19462 >inpainting hands on everything when I picked up SD after a break, loras were pretty new, and after spending god knows how long on the lora, I inpainted a single pic (well, more like a set based on that pic) for 11 hours straight after this repeated for a few days, I just decided to avoid inpainting if possible
>>19463 that's some dedication right there. but yeah, it gives me an ocd itch. I went away to make tea and I'm back trying to inpaint this fucking hand again. just going to try the blender rig to pose this hand out, if that doesn't work in a few rolls, I swear I'm done.
I found the solution!
>>19454 >>19455 Nah, it's a private dickscord and we only share our own gens. It's not THE touhou ai dickscord. I think the only other /hdg/ "e-celeb" we have is the sdupdates rentry anon
>>19464 >>19462 Even when the anatomy is kinda decent SD hands look... wrong. They look far, far too sinewy for female characters and I have no idea how you'd fix that. Same for arms at times.
>>19461 Seems like it's a mix problem? I made some shitty mix between Based64 and a model I found here using the "Checkpoint merger" options in WebUI and for some odd reason Lycoris does not work at all on the mix. This is some ultra bizarre error.
>>19466 they will never know I was the one who first implemented clip skip in voldy... I do miss the very early NAI days though, cozy times.
>>19469 How does clip skip work anyway? I always stay at 2 and never touch it
(2.27 MB 1024x1536 catbox_2mufeh.png)

(1.53 MB 1040x1280 catbox_y8wq4y.png)

(1.58 MB 1040x1280 catbox_6d9edq.png)

(1.59 MB 1040x1280 catbox_lj23wc.png)

>>19469 clip blend chads where you at
>>19471 that was a great read, thanks anon
>>19471 oh i get it i don't get it
>>19475 >trying to understand the multimodal classification jew
>>19466 >private /hdg/ dickscord was it run by that kimono foxgirl anon that was running the /hdg/ rentry?
>>19477 uuuh no? At least I don't think so. I'm technically the founder but I let the co-founder (who goes by manabe apparently) manage it. Again, it's not active anymore and there's only like 9 of us. oh and i think that guy either trooned out or is trying to pass for some artoid e-girl lmao
>>19477 >>19478 There are a couple of fatchouli anons from the early /hdg/ days, fred, the sdupdates anon and the chad Yukari anon.
Still using the degenerate onahole lora I grabbed from this place Surprised how it was made like 4-5 months ago but it continues to be better than anything else out there or in civitai (The Gtonero dude has a similar lora but its objectively worse in every aspect)
>>19480 Forgot about that one, I should make some gens with it later
>>19480 I'm still using the breasts on glass and the reverse bunny embeddings, they just work.
>the old shit, crafted with technique and a desire for good fap, still works better than rushed hackjobs on civitai many such cases
>>19480 Yeah, there's like a wide skill gap between the chan imageboards and civitai; I have loras and embeds from here and cuckchan that work better than civitai's despite being made months earlier.
>>19481 The sole issue I've had with it is that sometimes it gets this HARDCORE desire to draw bedsheets, pillows and sheets on like everything, and it's really hard to get it out of that, but it's extremely good and consistent in general.
Prompt to win an RTX 4090! Can't train SDXL? Prompt SDXL images to win in our SDXL 1.0 image generation contest!
just to make sure, what's the final verdict on quality tags (masterpiece, best quality) and which neg embed is the least bad?
(1.87 MB 1024x1536 catbox_4peqys.png)

ASSERT DOMINANCE
(1.98 MB 1280x1600 catbox_zdk7f1.png)

The merge has begun
(1.56 MB 1024x1536 catbox_i8sjdj.png)

need
>>19487 >those curves on first GET PREGNANT GET CORRECTED BRAT PLAP PLAP GET CORRECTED
>browsing pixiv/exhentai without the AI filter >99% of posts are fried/NAI/AOM2/3dpd unlikable garbage Honestly, sometimes I understand the normalfag aversion towards AI. Good AIshit is more the exception. This stuff is so fucking flooded with grifters and pajeets
>>19493 Don't go to deviantart, holy shit (I was there collecting for a personal lora dataset). It's as bad as you think.
>>19494 deviantart was bad long before AI but yeah it's even worse now. Honestly /hdg/ threads are the only places where I see good/passable anime AI stuff somewhat consistently.
>the normalnigger wannabe janny strikes again
>>19497 NUOOOOOOOH you are trying to avoid bans by giving the "loli" A cups!
>>19497 tranny janny hates cunny
Good morning sirs! I hate women!
>>19497 >>19498 i didn't think the xaxaxa lora would cause so much seething. it honestly makes me want to train even more similar artists to stoke the flames
>>19501 please don't make the janny too angry or xher gushing axewound is gonna explode
>>19501 Based, well I will keep on using it for a bit. Really digging the style.
>>19503 Catbox for this one pls?
>>19503 Thank you.
>>19503 This LoRA is literally printing gold right now and my backlog of gens that needs a bit of work is huge already.
>Deliberately UOHposting to trigger the troons basado
>>19508 Seems like it worked.
>>19509 they didn't even put up a fight lol
(2.45 MB 1152x1664 00044-3289717446.png)

>>19487 Catbox for first?
(2.11 MB 1024x1152 catbox_8eba2x.png)

(2.05 MB 1024x1152 catbox_08zkau.png)

(2.14 MB 1024x1152 catbox_0ak4tx.png)

(1.46 MB 1040x1280 catbox_v3fntx.png)

(1.47 MB 1040x1280 catbox_wsy0mr.png)

>>19513 ><lora:style-sunshine:1:0.0,0.0,0.25,0.25,0.25,0.25,0.25,0.25,0.25,0.25,0.0625,0.0625,0.25,0.25,0.25,0.0,0.125> <lora:pastelmix-lora:0.21> The hell?
>>19512 >Token merging ratio hr: 0.4 Is this with an extension or built-in? Not sure how it works. A quick google shows that it 'speeds up gens by merging redundant tokens', but I don't really know how it works beyond that. >>19515 Looks like lora block weights.
>>19516 >Looks like lora block weights. I know, I just haven't seen anyone make their own. Would just show the name if it was a preset but this madlad seems to have made it himself.
>>19515 >>19516 What practical uses does this have?
>>19518 In layman's terms it allows you to unfuck a lora as a last resort if you don't want to retrain it and/or make it more compatible with other loras. In non-layman's terms: I have no fucking idea
(28.65 KB 1845x170 token.png)

>>19516 Built-in and yeah it speeds up genning at the cost of a small loss in quality.
>>19516 >Is this with an extension or built-in? Not sure how it works. it's built-in NTA, but if you want to use it, only do it on the hr pass. It speeds up genning by sacrificing a bit of quality, so it is counter-productive on regular pass, but it's a nice speedup on the hr pass. I set it at 0.5, higher than that and it starts becoming noticeable.
>>19517 >I know, I just haven't seen anyone make their own Not sure what you mean, anyone can apply block weights to any lora, they just need the extension. Problem is most people aren't certain what layers do what exactly and which ones to adjust. I have no idea either, I tried asking before in >>18934 and didn't get a response. >>19518 Theoretically, different layers learn different things. Artstyle, concepts, poses, etc. That's a very simplified way of putting it though, layers do more than one thing at once. Hypothetically, you could train a character lora and adjust the block weights so that it applies lora weights related to the character's features, thus reducing the impact of artstyle, poses and whatnot. ...I think. See >>18934 for an example of what layers do what. I've yet to try this with artstyle/concept loras though.
>>19522 >>19519 Interesting. I had some chara loras with fried-in style, they didn't go well with style loras. Maybe I could try this
Lora block weights just stopped working for me like a month ago. Changing the block weights drastically made practically no change to the image
>>19522 >Not sure what you mean, anyone can apply block weights to any lora, they just need the extension. I meant that people just use the block weight presets that ship with the extension and very few people actually mess with the values themselves
>>19524 Did you remember to enable the extension itself? People forget to enable it and addnet all the time
>>19526 I did, I have that reflex because I used addnet exclusively until recently
>>19524 Did you pull? Syntax has changed somewhat for the newer versions of the extension.
>>19528 I pull everything everytime I start, so yeah could be the syntax, but I remember that the presets I made didn't work either, so...
>>19508 >he took the bait AGAIN
>out of space again >remember i have a c*mfy and invoke install i only used once >free 20gb
>be me a few days ago >want to look up an artist on gayshit or civ*tai because either he has no lora or the current one is shti >too busy genning so i don't check >be me now >spent the last 2 hours trying to remember what it was >can't it's over
>>19426 can you ask him to auto-tag uploads without metadata as metadata_request so i can filter all these glue-sniffers uploading their own stripped work
>>19533 done, he'll talk to the other admins first but it's probably gonna happen
>>19534 >>19533 until then you could probably also blacklist tags like watermark, username, etc
>>19532 i just remembered, it's letdie1414
>>19462 the secret is to photoshop the hand into about the correct form first >>19488 depends on your model :^) >>19534 nice, thanks dude
>>19522 >different layers learn different things Afaik, the layers are similar to a convolutional neural net (CNN), and I kinda understand those. If you open an image in GIMP/PS and run edge detection, that's like a level 1 convolution, which finds all the edges in the image by detecting where there's a sharp change in value. CNN does something similar, except it learns the convolution, so maybe it'll detect edges, maybe something else. It has a stack of different convolutions that it runs at every level to detect different things. With a CNN, you repeat this convolution process again (on your first level edge detected image) then you start to find "higher order" features. So in layer 1, you found edges and texture, layer 2 finds fine-grained details like parallel lines (eyelashes) or dimpled surfaces (skin), and this builds up into larger chunks -> facial features -> faces -> people -> composition. Picrel is a horse in a forest, with each layer of convolutions picking up different features.
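The edge-detection analogy above can be sketched with a naive convolution; this is a toy illustration of how one "level 1" filter finds edges, not what SD's conv layers literally run:

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'valid' cross-correlation, like a single CNN filter pass."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical step edge: left columns dark, right columns bright.
img = np.zeros((5, 5))
img[:, 2:] = 1.0

# Sobel-style kernel: responds strongly to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = conv2d(img, sobel_x)
# Nonzero response only where the brightness actually changes.
```

A trained CNN learns kernels like sobel_x instead of being handed them, and stacks the outputs so later layers see "edge maps" rather than pixels.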
>>19538 I hate NNs so much it's unreal (because I love that tech but it's the complete opposite of my field so I can't really do more than light reading)
>>19537 >photoshop the hand I can't draw for shit
>>19540 Maybe he means literally copypasting a hand from somewhere else then doing img2img/controlnet?
>>19539 NNs aren't my field either but the layer thing is pretty intuitive. More and more abstract things live in the higher layers. Knowing more than that doesn't help with the block merging anyway, because the shit is so trial and error. Just know that if you want big compositional things from a model/lora, then you need the deepest layers. If you want things like line style, then you need the shallow layers.
>>19540 neither can i, doesn't matter. you just need roughly the right shape and color of blob and inpainting does the rest.
>>19543 you've shown me the light, I will try to unfuck my hands in future
>>19528 >Syntax has changed I'm retarded. I have been genning with the old blockweight syntax and it's been ignoring my lora. new syntax is: <lora:AAAA:1:lbw=INALL> or <lora:AAAA:1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1>
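A toy parser for the numeric form of the new syntax (named presets like INALL live in the extension's own preset table, so they just come back as None here):

```python
import re

def parse_lbw(lora_tag):
    """Extract the per-block weight list from an lbw lora tag.

    Only handles the explicit numeric form, e.g.
    <lora:AAAA:1:lbw=1,1,1,...>; named presets return None.
    """
    m = re.search(r"lbw=([0-9.,]+)", lora_tag)
    if not m:
        return None
    return [float(x) for x in m.group(1).split(",")]

weights = parse_lbw("<lora:AAAA:1:lbw=1,1,0,0.25>")
```

In real use the list is 17 values (one per UNet block group for SD1.x); this sketch doesn't validate the count.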
(1.32 MB 1024x1152 catbox_y98j5l.png)

(1.01 MB 1024x1152 catbox_4hs5ca.png)

(1.17 MB 1024x1152 catbox_51hvdv.png)

lowra is still kind of bright, tried adding epinoiseoffset to help it out
>>19546 >he resorted to boomerprompting
>>19544 clone stamp, magnetic lasso -> new layer from copy, patch tool, and liquefy are all i know how to do in photoshop and they've never failed me
>>19547 lol but yes, because normal tags invoke light
>>19533 use "has:metadata has:seed exif:PNG:Parameters"
I'm on a quest to find the least fried meme LoRAs on ci*itai
(4.25 MB 1536x2304 catbox_0fhv49.png)

only need three things in life my boots my truck my elf can i get a yee haw
>>19553 yeeeee
(2.33 MB 1664x1216 00091-1266706323.png)

>>19553 Four for me. Yee haw.
I finally fucking finished this (first) clean up of my HF 3 data. This took longer than it needed to, but that's why I was gonna give myself a month to fix problems. Here are some random gens I made throughout the week in between my clean ups. Have been enjoying the artbook guide kind of theme.
(2.73 MB 2048x2048 catbox_h1etwf.png)

Any decent coom models other than B64?
>>19548 catbox for these?
>>19558 >yassy lora Hand that shit over and nobody gets hurt.
>>19561 it was done by somebody in a previous thread https://files.catbox.moe/rd2ltx.safetensors
>>19562 Yeah, there are a bunch of dead anonfiles links on there; seems like there's one from ATF and one that was made here
>>19259 Raw NAI/AOM stuff is genuinely offensively bad
Should I crop anime screencaps? Never trained a lora like that before so idk if they too are rigid in aspect ratio
>>19565 for LoRAs the aspect ratio is not a big deal, unlike my finetune that is currently 100% anime screencaps, where it can become a problem. I assume you aren't just using screencaps, right?
>>19566 nope, I have a dataset of about 800 pics after a bit of pruning (not including anime screenshots); with repeats that comes to almost 1.8k (idk why I always do big datasets when I know full well my 2060 will explode out of spite one day)
>>19567 I would say this: do a training with the images uncropped, then do one where they are cropped, and compare results. If it's a character LoRA, make sure crops focus on the face and key details.
>>19568 Will try that, then. Yep, it's a character lora, and I'm trying to add a tag that would use the anime style.
>>19569 wait what are you trying to make?
>>19570 character lora + outfits (on tags) + anime style (on tag)
>>19571 That many things on 1 LoRA? I mean what you could try doing for the style is just use the studio name as your activation tag and tag all the screencap images with that and see if you have enough images to prompt the style amongst your total dataset.
>>19572 It all seemed to fit on the previous test, even with few images tagged for anime style there was a very noticeable style change when adding that tag. I'll harmonize with repeats if need be
>>19573 Hoping to see your success
>>19574 Thanks.
(1.60 MB 1024x1152 catbox_sktydb.png)

(1.68 MB 1024x1152 catbox_u7pibd.png)

(1.57 MB 1024x1152 catbox_hptv4a.png)

>wanted to prompt >divegrass tag team cup starts >kiki is the mvp well shit i guess i'll prompt football uniform kiki later
>>19575 finished screencaping, but I've added 3 outfits/character state to the list, all of which aren't really present in fanart, so this is going to be rough
has science gone too far?
>>19578 You just got to work with what you got. Maybe make additional images of just crops only? I'll admit that making LoRAs (or getting specifics in a LoRA in your case) with little data is not in my skillset.
>>19580 has to be controlnet, right?
>>19580 Saber became sluttier...
>>19582 not controlnet but img2img, would probably get much better results with controlnet in general and on some more complex CGs, as seen on the three way scene. Soon I will be able to ufotable-ize the Fate route because they sure as shit won't do it now post HF. Oh, and Rin defenseless anus sex (True) hopefully, that will need controlnet for sure >>19583 she was always a secret slut
>>19584 Have you tried style loras with your finetune?
>>19585 The only thing I know that works without issues is TIs, and that was by accident due to accidentally prompting an embed in one of my test prompts; I haven't really done any complex additional network testing since it's just been trying to craft and QA this model. But I currently have the finetune mixed in the Based64 configuration (so the entire B64 recipe but AD'ing my model instead of HLL3.1), and the finetune is trained with the exact settings as HLL 3.1 (hll anon provided his script in a much older thread here), so in theory anything that works on B64 should work on this; you are just trading vtubers for Type-Moon characters and ufotable style. Oh hey, it worked on an Unlimited Codes CG too
>>19581 will try to see what I can do, yeah finished adding the main tags to a few of the previous images and all the anime screencaps, I'll do the autotagging + tag cleaning tomorrow What's the consensus on tag pruning? No for character features, Yes for outfits?
>>19587 >What's the consensus on tag pruning? No for character features, Yes for outfits? I think it's the opposite...
>>19588 Really? Intuitively, you'd want to have flexibility on the character, so no tag pruning, and less flexibility for special outfits, hence pruning.
(4.69 MB 2048x1576 09875-combo.png)

(1.51 MB 1024x1152 catbox_4rup36.png)

>>19589 Yes, really. It's always been the opposite. Or at least that's what everyone parrots. Just apply some common sense and only prune tags that you will NEVER change under any circumstance. For example, if you're doing a specific flavor of animal girls (eg Kemono Friends) you should absolutely prune tags relevant to features that are immutable to said characters - eg the animal ones.
>want to train a character lora >virtually no art of her, not even a booru tag >she's from a single doujin >the doujin got popular enough to receive a 13 min video adaptation > ... by that nigger godoy pain peko
>>19584 this is really fucking good dude. could probably mod the game with these replacing every CG
>>19584 >>19586 how much time do you think you need to finish this?
>>19593 Thanks. And just for transparency's sake, this was my first time attempting to ufotable-ize the original VN CGs on the suggestion of my friend helping me while we were on call, so I did do actual tests after the initial ones I posted to see how good it currently is, using a previous revision model. The original 800x600 resolution images are kind of tricky to work with. I was using a source from ExHentai containing upscales of only the h-scenes to do the ones I showed off. When I had to use the original rips (because an upscale doesn't exist or isn't on Panda) and picked images at random, results were not good or consistent. This could be a job for controlnet, or additionally upscaling the source first before doing work. If one were to use this model to update the CGs for an "HD update", you'd probably have to upscale the source, then controlnet the image in sections, do manual touch ups in inpaint and photoshop of the pieces, then downscale the finished product back to 800x600 to slot into the VN. The Kotomine Unlimited Codes CG that I used was 2888x2137, so the results from the priest smiling on a raw img2img were great. The attached images are some testing with 800x600 images. If they are close-ups, they can work; if the images are more wide-shot (a battle scene, for example, or a very FX-heavy CG), things get messy. The model can still be improved from a training parameters side as well, since I'm using outdated settings compared to what current HLL is at. I just have to dig through the furry diffusion discord more or see what I can find on the WD discord, but one thing I can do is bump the training resolution from 512x512 to 768x768 (HLL3.1 was trained at 512, but it has since been bumped up to 768); I would just need to see how I should adjust learning rates for the change in resolution.
>>19594 I will probably have a better answer at the end of the month after me and my buddy finish the current tagging, clean up, and image quality control that I scheduled out for this month, to see how close we get to a stable build. Even with that, without adding any new images, I could continue working on and improving the dataset all year round just manually tagging, recropping, adding/removing frames, touching up images themselves, and redoing older parts of the dataset. This is pretty much a 2-man-team attempt at a mini NovelAI-quality operation. I do still have a backlog of ufotable material, such as the rest of the KnK movies and a ton of artbook scans and key art pieces ripped from official Aniplex/Fate sites, to sort through. For the latter, I just really wish there was a tool to help you pre-sort your images into the aspect ratio buckets so I can crop them manually into a divisible resolution size before they get resized down in training; that would help out so much when sectioning off scan pages of artbooks. Someone recommended the Kohya GUI but that function seems broken, as it doesn't respond to launching the script, and I can't take time away from cleaning the dataset to figure it out, so I just put it aside.
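A rough sketch of what that bucket pre-sort could look like; the 64px step, max resolution, and aspect limit here are assumptions modeled on kohya-style bucketing, not its actual code:

```python
def make_buckets(max_res=768, step=64, max_ar=2.0):
    """Enumerate (w, h) buckets with area <= max_res^2, sides multiples of step."""
    buckets = set()
    w = step
    while w <= max_res * max_ar:
        h = (max_res * max_res // w) // step * step
        if h >= step and max(w / h, h / w) <= max_ar:
            buckets.add((w, h))
            buckets.add((h, w))  # both orientations
        w += step
    return sorted(buckets)

def nearest_bucket(w, h, buckets):
    """Pick the bucket whose aspect ratio is closest to the image's."""
    ar = w / h
    return min(buckets, key=lambda b: abs(b[0] / b[1] - ar))

buckets = make_buckets()
# A 1200x900 artbook scan (4:3) lands in a wider-than-tall bucket.
bw, bh = nearest_bucket(1200, 900, buckets)
```

Run over a folder, this groups files by their target bucket so you can pre-crop each group to a divisible resolution before training ever resizes them.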
(2.13 MB 4096x3152 catbox_f7jndu.jpg)

have you guys tried multidiffusion-upscaler? I'm really impressed with the results of using it together with the tile resample controlnet.
>>19596 I don't know how to use either multidiffusion or controlnet
>latest firefox nightly completely breaks 4chanX (instant crash of 4chan tabs) I fucking hate this nigger browser. Had to switch to the nightly version because base firefox ignores the flag to not check extension signature. I don't even know if you can easily downgrade nightly updates, or even download specific versions to downgrade.
>>19598 4chanx works fine on FF 116.0.2
>>19599 I know, but I'm on Nightly (118.0a1 2023-08-11) because otherwise FF completely ignores the flag that lets you install unsigned extensions (which I use for the captcha solver on 4chan)
(1.56 MB 3544x2304 image.jpg)

>>19596 Been using it for a while. It works well for certain gens but fails in others in terms of details. Some upscales come out sharp with good detail whereas others look blurry and lose detail. Still figuring out what settings to use on a case by case basis. Overall, it's a great extension but damn I need to figure out what to do when the latter happens.
>>19600 oh, thought you were saying 4chanx didn't work without nightly. my bad. >>19601 nice bar concept there, that looks great. yeah, after using it a bit more it did fail quite badly on some of the gens. I'm thinking it was if the auto-tile led to unlucky crops? playing with denoise strength and tile overlap has sometimes helped. but when it works, damn it's great. have you tried different samplers? the docs said euler a and so far it does seem to be the most reliable.
(2.57 MB 1792x1152 00011-2387883077.png)

>>19602 I've been fiddling around with noise inversion, which only supports the euler sampler. I should turn it off and give other samplers a try as you suggest. Image I posted was actually an example of a 'failed' gen, as the original while noisier had better detail. I inpainted a fair bit of stuff, but you can still look at the outfit's fabric, as well as the wood grain/wine bottles at the back. The bottles had a better feeling of depth and more complex shapes, post upscale it gives a very flat feeling. Wood grain is self explanatory.
>>19603 ah, looks good for a failed gen lol. what's a successful gen look like (before&after)? good hand inpainting, damn. I see what you mean about the blurry bottles and lost wood grain. faces look great though imo. seems you're wanting to keep a bit of the painterly look. maybe neg blurry to hell. or upscale the whole thing with ultrasharp first then multidiffuse. yeah, not sure. DDIM was absolutely cursed when I tried it. DPM++ 2M Karras is occasionally good but noisy.
(1.15 MB 2304x1536 image(1).jpg)

>>19604 >good hand inpainting, damn Thanks. Those were pretty easy fixes since the general shape was already there. >what's a successful gen look like https://imgsli.com/MTk4MDk4 I like how this one turned out, but tbh it just doesn't lose any important details I care about and highlights the ones I like. This was with img2img colour correction disabled, the increased saturation can be good or bad depending on the original image.
>>19605 nice, yeah that one definitely worked out super well. fabric and hair highlights came out really nice. I think the tiling might be the difference. In previous image, there's more space between characters, so the model might end up just looking at the counter top, or just the bottles. are you using the tile controlnet? I haven't really compared head-to-head with and without it but it does seem to work well with multidiffusion.
>>19606 Yeah, using tile resample. It's necessary to avoid lil' people and random faces popping up in the gens.
>>19578 fuck, I'm going to have to check eye color tags because artists can't be arsed to be consistent
spring, summer, autumn, winter regional prompter is cool
>>19609 Pretty
>>19608 autotagging done (very quick with batch size 6 and the 3 tagger script) now gotta check the tags 1.3k images, 20 folders
>>19612 Yummy
how do I get two girls kissing or interacting with regional prompter? I am only getting cursed gens
(1.80 MB 1024x1536 catbox_85ti27.png)

>>19614 You could try my prompt, shits a bit wonky but it works.
(1.56 MB 1280x1920 00237-3062105739.png)

(2.14 MB 1280x1920 00002-165533387.png)

(1.77 MB 1280x1920 00988-1388779631.png)

(2.67 MB 1280x1920 00075-3191025407.png)

(2.59 MB 1280x1920 00012-2251244723.png)

(2.99 MB 1280x1920 00057-533736136.png)

(2.76 MB 1280x1920 00002-2551102226.png)

(3.09 MB 1280x1920 00027-3648049926.png)

>>19615 thanks, anon! ah, I see you're using a different extension. I need to get the regional prompting extension. I was trying with the regional prompting option inside tiled diffusion. >>11399 insanely late, but catbox? going through old threads I missed
is it just me or is regular img2img kinda shit? has everyone just moved on to controlnet?
from /h/ > New lycoris shit releasing soon, supposed to be a good way to train styles apparently > https://github.com/KohakuBlueleaf/LyCORIS/commit/48f0836f1e46650419faf7cd37744f10a48292a9 > https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/bd4da4474bef5c9c1f690c62b971704ee73d2860 someone responded with picrel and > Normalization layers were not touched during training up until now In theory, the more of the architecture you can alter, the better for training. In practice, locon/dylora/lohas are kinda bad even if you train the conv layers. > LoKr is good but I don't think it shares much of the training logic of the previous network types. > Hoping for the best
>>19621 So hold on, is that guy saying that LoRA COULD perform as well as finetuning in the future?
>>19622 In the context of LLMs, yeah. Not sure how much of an impact this would have on imagen though
>>19621 That tweet and quoted tweet is nothing new because the original implementation makes very clear it does not train every module. https://github.com/cloneofsimo/lora#lengthy-introduction >Also, not all of the parameters need tuning: they found that often Q, K, V, O (i.e., attention layer) of the transformer model is enough to tune. (This is also the reason why the end result is so small). This repo will follow the same idea.
would a properly trained sdxl-based anime model solve or at the very least greatly help our anatomy issues? do the extra parameters even matter for our use case?
>dataset tag editor doesn't recompute tag frequency when you add a tag filter >gonna have to whip up yet another script for that to get the most common tags associated with one of the master tags (for easy consolidation/pruning if needed) might just be the old version of it I keep because all newer ones are unusable (thanks gradio)
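A sketch of that script, assuming kohya-style comma-separated .txt captions sitting next to the images (paths and caption format are assumptions):

```python
from collections import Counter
from pathlib import Path

def tag_counts(caption_dir, filter_tag=None):
    """Tag frequency over a caption folder, optionally restricted to
    captions that contain filter_tag (the 'master tag')."""
    counts = Counter()
    for txt in Path(caption_dir).glob("*.txt"):
        tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",")]
        if filter_tag is None or filter_tag in tags:
            counts.update(t for t in tags if t)
    return counts

# counts = tag_counts("dataset/10_mychar", filter_tag="school uniform")
# counts.most_common(20)  # tags that co-occur with the master tag
```

That gives you the per-master-tag frequency list the extension won't recompute, which is exactly what you want for deciding what to consolidate or prune.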
>>19625 Maybe? But a properly trained SD1.5/2.x model would also probably make it better, and wouldn't bring all the problems associated with SDXL
>>19620 You can combine both. Roughly speaking, img2img is good to preserve colors and controlnet is good to preserve shapes although it has bunch of other stuff now as well.
Hey guys, did you know that in terms of LoRA and base model compatibility, B64v3 is the most compatible model for LoRAs?
>>19630 shit's exploding on twitter rn
>>19630 Are you lost?
(1.12 MB 1024x1024 00516-2140620402.png)

(1.20 MB 1024x1024 00109-3700958666.png)

(853.26 KB 1024x1024 00069-495580690.png)

(954.83 KB 1024x1024 00131-254403935.png)

already posted some of these in /ss/, but reposting some /ss/ generations I made.
(1.15 MB 864x864 00644-426136846.png)

(893.64 KB 864x864 00643-2620262181.png)

(844.48 KB 896x896 00699-2788658604.png)

>never inpainted before >decide to try it out >watch a tutorial >reproduce the settings (inpaint masked, original, only masked) >inpaint >nothing happens, just blurs the masked part >??? >realize i have to randomize the seed >do it >still doesn't work >use the original image's positive prompt >manage to break it completely and it won't gen ??????????????????
>>19630 Do you live with Patrick Star?
>>19626 going crazy checking tags I mostly check false negatives, can't be arsed to check for false positives for some common tags, otherwise I would have kms sooner
>>19633 loras and catbox?
(1.92 MB 1024x1024 00121-2539604096.png)

>>19639 Can't upload to catbox, against the rules to upload anything relating to minors. Here are some of the LoRAs I use though, and I can provide info on a specific image. https://mega.nz/folder/2N8znSqQ#euCPytnVagodSMHeZZVDeg Here is also a checkpoint with some LoRAs mixed in, so don't add them to the prompt if using it, or it makes some crazy shit, like the attached image. https://mega.nz/folder/zQ11AKBS#N6M7u3TksjzCQdxsKCLGAQ
>>19640 people post cunny stuff on catbox 24/7 though. Can I at least ask if Latent Couple is involved?
>>19630 it's at the point i'm starting to mix artstyles instead of wasting drive space with specific checkpoints
>>19640 Trigger words?
>>19641 I have never used Latent Couple, but maybe I'll try it out now that I am aware of it. I will try and upload some to catbox here in a minute. >>19643 I usually try and add "1boy, shota, onee-shota, young male child, 1girl, adult woman" but with the LoRAs added it seems like you can usually get away with just "1boy, 1girl". The shofel loras are just fellatio but they seem to work with doggystyle as well for whatever reason; shofel_alt is definitely less stable but sometimes fun to throw in at low strength levels (0.4-0.6). I really am not sure what I'm doing, just sort of winging it as I go.
fuck off shotaniggers
repoasting my poast from /h/ Made two PRs to reduce a shit ton of VRAM for free. For batches the webui was encoding the latent all at once which potentially wastes 4-6GB of VRAM, and for decoding the latent it was leaving the sampler object around which wastes 500MB-2GB (depending on batch size). I think these were the root causes for what ruined hires fix for users many months ago. voldy also made a few improvements on the dev branch about a week ago but I think this resolves the rest of them. These should get merged soon I imagine. https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12514 https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12515
>>19646 based, doing gods work
>>19646 Thanks!
>>19640 >Can't upload to catbox, against the rules to upload anything relating to minors. HAHAHAHAAHAH faggot I've been uploading lolis to catbox since day one and I'm using it as a CDN too more or less
>>19649 'ased I have 120 pages of catbox images
>>19646 From /h/
>Merged in dev now, along with a few other QoL fixes. Also a new sampler.
Github
>AUTOMATIC1111 approved these changes
>AUTOMATIC1111 merged commit 127ab91 into AUTOMATIC1111:dev
>>19651 all me btw :^)
>>19652 not sure if Voldy
>>19651 Does this mean anything for people who don't batch gen?
>>19654 There was an issue with single images too where the Torch garbage wasn't cleared before decoding the latent to an image. You should be saving at least 1GB of VRAM. Basically it should behave like the oldskool hires fix did months ago now.
>>19655 >Basically it should behave like the oldskool hires fix did months ago now. I never updated and I'm still on a commit from March kek
>>19656 Hires VRAM was broken in January by the rework of it. Your version is going to have the same issue.
>>19657 Fine fine I'll update
when should it be ok to pull on main?
>>19659 When it's pushed to master, aka when a new release comes out. I have no idea when voldy plans to do that though. Maybe you can peek in the dev Discord linked in the wiki, I ain't joining that since I don't have a reason to.
>>19660 guess I'll just try the dev branch
>>19638 finished tagging, sanity checking tags, setting repeats and caching latents to disk since it's getting hotter, I'll train during work tomorrow (6-7h train)
>>19662 whats your tagging work flow like?
>>19663
>manually put main tags (outfits/char/style...)
>autotag
>remove retarded tags
>see most common tags for each main tag
>iterate through all main tags and each common tag, fix false negatives (search with pos: main tag, neg: common tag)
>quickly fix false positives (takes a lot more time usually for not much benefit since it doesn't happen often)
>quick sanity check of some important tags to fix issues
I use a python notebook for dumb batch tasks and an old webui (before the gradio update) with dataset tag editor for granular control.
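the false-negative pass (search with pos: main tag, neg: common tag) is basically a filter like this, assuming captions as a filename → tag-set dict (names made up):

```python
def find_false_negatives(captions, main_tag, common_tag):
    """Images carrying the main tag but missing a tag that's usually
    present alongside it -- candidates for a missed tag."""
    return sorted(f for f, tags in captions.items()
                  if main_tag in tags and common_tag not in tags)
```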
>>19664 I think you may want to invest into a learning a bit about Hydrus.
>massive ComfyUI shill going on in /sdg/ >around the same time a lot of Auto commits being pushed and tested interesting
>>19666 also a lot of them in /trash/
I really wonder if it's jeets or actual underage fucktards doing this
>>19668 shilling is done by jeets using c*mfy is by underage retards (not smart enough to just use python scripts directly)
>Got muted for "spam" on /g/ lol lmao, if I'm the only getting banned again I will be pissed
>>19668 one is a mentally ill underage, but some of the other anons posting share traits with pajeet crypto scammers unironically
I got an almost instant 3day for posting my test sdxl gen with a simple "fuck niggers" text yet avatarfaggots blatantly shart the general with their garbage for fucking months castrate cuckchan jannies for making me restart my router
>>19672 lol I saw that, I replied to that post with the greentext about them unable to ban avatarfagging
>>19672 >castrate cuckchan jannies They're all axewound trannies, how about we just shove them feet first into a woodchipper?
>>19674 put them in a saw trap and have them kill themselves
>>19673 I checked archives, post is marked as deleted, did you get banned for that too?
>You are temporarily blocked from posting for violating Global 8 - Complaining About 4chan. This rule is fucking stupid
>>19676 probably see >>19677
ok I was able to post after the block finished, I have never been hit by two blocks in a row before
>>19678 >>19677 >You are temporarily blocked from posting this sounds like a temporary janny ban that needs to be approved by a mod
I'm pretty sure a janny/mod just replied to one of my posts, my spidey sense went off and I remembered a time a rogue /vt/ mod banned me and I had to ban evade just to go off on the idiot, as well as post that image when /vt/ mods were samefagging their shilling. >>19681 Short answer, no.
>/sdg/ OP is one of my gens ok now this is very suspicious lmao
>>19681 the difference between avatarnigger and anon who just likes to waifupost is intent. avatarnigger always needs to direct discussion towards himself and it is always obvious
I would probably be more productive if I got banned from cuckchan
>>19682 >>19684 I'm here to waifupost, complain about ANUS and AM5 voltage shenanigans, learn about things and encourage people to edit niggers out of their datasets or at the very least tag them properly so we can diminish the chances of SD genning them, thank you friends.
>>19686 all you have to do is see the discord quality garbage posts on /g/ to know you aren't avatarfagging. >encourage people to edit niggers out of their datasets I don't have any niggers in mine
I think a bunch of people got banned, it quieted down fast and I somehow managed to slip through the cracks
nvm I got banned for this now >>19677 lmao Welp now I can be more productive
>>19690 it's funny because after that 9 hour shutdown 4chan had, a lot of banned people got unbanned. It's the reason I can post again, but the majority of the ai threads are not worth posting in anymore outside of this one
>>19687 >I don't have any niggers in mine My wife and I thank you for your service.
>>19691 On another note, uh, don't upscale pixel art. Whoops.
>>19690 I just wish we had a little bit more activity from quality posters. Not suggesting we need to advertise the link or anything, but it gets tiring having to bounce around cuckchan for discussion. /hdg/ remains the superior board but /vtai/ has the best image frequency.
It's a slow day so here's something to think about.
>>19693 vtai is pretty dead lately. But yeah it used to be pretty comfy, sadly I don't really care about vtubers
>>19696 I usually save the /vt/ catboxes and remove the chuubas from the prompt. It's a 50/50 split between the model/LoRAs carrying the prompt and the prompter being clever, and I can carry that over to my stuff and test.
>>19695 A classic
>>19697 yeah they use a bajillion models merges I just plug in some rando not fried lora of a chuuba into my prompt+style loras and coom away
>4gb+ dataset is barely 200mb in latents huh, interesting
>>19700 >5 billion images is barely 2gb in latents
>the anime pipeline face/crop scripts don’t work anymore Fuck
What are actual fucking storage limits of huggingface? I keep fucking dumping loras and models there, there's probably somewhere around 80gb of my shit there, maybe more damn lol
adetailer pls...
>>19706 post the Taiga lora and nobody gets hurt
>>19706 kek took a while to see it
(1.89 MB 1280x1536 catbox_4785v8.png)

>>19707 taiga-toradora-12: 07a54c966c39 https://litter.catbox.moe/vbf3wq.safetensors idk where I got it adds a style to the picture unfortunately
my webui gen counter just clocked over past 10000. half impressed, half sad.
>>19710 20k soon here
>>19710 im at 65k, and I lost a chunk of gens and was awol between April to early July
>>19711 >>19712 hot damn. respect
>>19710 i'm at about 90k going all the way back to the beginning
>"Rei Ayanami taking a shower #artstation", pixray, december 2021
>"Rei Ayanami taking a shower by wlop", dall-e mini, june 2022
>something something emma watson something something alphonse mucha and greg rutkowski, sd 1.3, august 2022
>"masterpiece, absurdres, solo, {{takagi-san}}, long brown hair, parted bangs, brown eyes, karakai jouzu no takagi-san", novelai, october 2022
it's wild how fast the takeoff was once we had a model to work with
>>19714 Do you gen sfw?
>>19716 Very rarely.
>>19717 Damn. I envy your libido in a way
>>19718 Thanks squats and oats
>>19709 >idk where I got it Probably this one, the name matches https://civitai.com/models/111165/taiga-aisaka-toradora
(282.28 KB 662x937 msedge_0Atevhkcvv.png)

What the fuck is this thing called? Boorus just classify it as a crop top (just like how they classify spats as bike shorts even though they're not the same)
(693.44 KB 1200x1150 105989167_p2.png)

I wonder if training lora for this kind of position would be even possible without creating terrible deformities
>>19722 if only the "from below" tag actually went that low i hate how it's just the average asian guy cam instead
>>19721 that is, let me check my notes, a [leotard|(sleeveless turtleneck:1.2)], crop top, midriff
>>19724 You absolute fucking legend. I was trying it out with a Parsee LoRA and it was working wonders. Unfortunately the sweater texture is extremely common outside of said LoRA, would love to know if you can somehow prevent that.
>>19725 try ribbed sweater in your negatives, failing that just shifting emphasis around. maybe add latex or spandex in there somewhere. it's a pretty squishy set.
>>19726 >try ribbed sweater in your negatives >maybe add latex or spandex will try asap, just waiting for my dick to recover from this Another issue is that sometimes it doesn't adhere to the skin so it looks more like a regular crop top, hope that the latex/spandex or skin tight tags will fix it.
>>19724 What's the difference between [, |, and )?
would be kinda fun to redo the tom's hardware SD benchmark with all the "lossless" (ie no slightly faster but non-deterministic stuff, no half precision, etc) optimizations that have been introduced this far
>>19728 prompt editing. [A|B] alternates between the prompt containing A or B every step. you can also do [A|B|C|D] etc and it rotates every step. there's also [A:10] which will add A to the prompt on step 10 and [A::10] which removes it. (A:1.2) is the usual strength control.
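a toy model of those rules, just to show the scheduling (the exact step boundaries here are assumptions, the webui's prompt parser is the source of truth):

```python
def active_subprompt(step, options):
    # [A|B|C]: cycles through the options, one per sampling step
    return options[step % len(options)]

def is_active(step, add_at=None, remove_at=None):
    # [A:10] adds A starting at step 10; [A::10] removes it at step 10
    if add_at is not None and step < add_at:
        return False
    if remove_at is not None and step >= remove_at:
        return False
    return True
```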
>>19726 Adding spandex in the positive and ribbed sweater in the negative seems to help quite a lot in terms of texture (though I'm getting spats and pantyhose without prompting for them lmao, baby steps) Skin tight doesn't seem to help, will experiment with its position in the prompt tomorrow.
>>19405 damn nobody snagged this one?
(2.79 MB 1024x1536 catbox_g17fv4.png)

(2.49 MB 1024x1536 catbox_s99gtc.png)

(2.57 MB 1024x1536 catbox_jsqw9j.png)

(2.50 MB 1024x1536 catbox_i30zjj.png)

>>19616 cute style, well behaved mix element, 10/10 nice work dude
>>19733 i will take that mug of whisky and both glasses please and thank you
>>19723 Which is a shame, I always enjoyed juicy from below shots like this, or just simple upskirts.
>>19723 >>19736 https://danbooru.donmai.us/wiki_pages/from_below
>Perspective that has the viewer looking up toward a character.
>Traditionally used to emphasize a character's importance, as a child looking up to an adult. Just as often used simply to look up a skirt.
>The horizon line will be very close to or below the bottom of the picture.
Dataset issue
Man, sad panda's galleries are really badly tagged. I have found shit by accident that is really useful, but because it has like 3 tags it's practically unsearchable if you're actually trying to look for it.
>>19737 Exactly. I only get shots that look like they were taken from a chair ever so slightly shorter than the subject at idk, a 10° angle at most?
(3.06 MB 1536x1536 grid-1466.png)

>>19740 Could you post a Catbox or two? I don't really have a problem achieving a shot that's from below below. Here's just a set of random - not cherry picked seeds
>>19662 finished bake, testing a bit >some weird frying in some cases >try the usual tenc/unet weight grid with additional networks >tenc doesn't change So, what's the syntax for doing that natively? Can't find it in the wiki, but I know it's been added in the last update
>>19741 I think the issue isn't the from below shots in the form you are showing, but more of a "from the ground" view, as if looking up at the ceiling
>>19743 >I only get shots that look like they were taken from a chair ever so slightly shorter than the subject at idk, a 10° angle at most? They should probably post an image of what they are trying to achieve vs what they manage to achieve. Maybe controlnet + the right tagging can help 'm out
>>19744 They were talking about something similar to this >>19722
>>19745 That does look a lot more challenging indeed! Maybe regional prompting could help. Having the fellatio and male-related tags in the top third only?
>>19715 been there since the beginning, anon! >>19714 nice, been refining the maid technique for a long while there
>>19742 What I'm seeing I should change for next bake:
-lower tenc, higher unet
-maybe more steps so I can shove in another repeat? (currently 850 batches, batch size 4, 5 epochs, so 17k steps lmao, more than my ol' seraziel lora), but tensorboard says to stop earlier (never used that for training before, idk if it's to be trusted)
-lower repeats on some main tags (currently baking a grid to see which ones are overrepresented, but casual seems underbaked, monster overbaked, anime maybe creeping up on other gens...)
-test pruning? the testing prompts are getting wild but I still like the granular control
-see how much effect the token weights have (my gpu will explode)
Everything would've been easier if I had ignored the extra outfits, but I can't just do that...
full grids:
tag comparison: https://files.catbox.moe/l6b0h9.png
better tag comparison: https://files.catbox.moe/iq1fvy.png
unet/tenc on fried: https://files.catbox.moe/9jmmsi.png
>>19748 Nice work
>>19095 Damn, I actually figured it out. The webui might have been doing img2img wrong this entire time. Gonna make a PR.
>>19750 what was the issue?
(461.78 KB 800x600 01889-4232854045.png)

>>19747 And rivers.
(1.45 MB 1024x1536 00240-3940237710e.png)

>>19746 Regional prompter won't be able to help with broken AI anatomy from such angle as on >>19722 but yeah it's a dataset issue
(895.16 KB 1024x1024 1.png)

(1.06 MB 1024x1024 2.png)

(4.66 MB 5120x1295 tmpdh89njnw.png)

>>19751 >>19752 https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12564 I still need to talk to voldy a bit more, but basically the way img2img currently works is it takes the image and obviously noises it up a bit. The problem was it wasn't introducing any noise from the latent representation of the image, so it loses some finer details. People sometimes prefer latent upscaling because it introduces more details back, but it requires a shit ton of denoising (>0.65 or something). This is essentially doing the same thing, but using the GAN upscaled version (like 4x-UltraSharp or whatever you prefer), converted to a latent, and using that latent to introduce just a tiny bit more noise. This technically was possible with the existing "Noise multiplier for img2img" option (initial_noise_multiplier), but that represents it in a not very understandable way because it's an extremely sensitive parameter. Basically the realistic value range for that parameter is 1 to 1.1. For this new option, the range is basically 0 to 1, something more graspable. It follows the same rule that NovelAI suggests, which is that it should be lower than your denoising strength. In other words, it's identical to what NovelAI offers. Picrel is a hires fix image at 0.45 denoising, without the new noise setting, and with it (at a value of 0.2). Also posting the grid for reference. tl;dr it's introducing latent upscale details without actually using latent upscaling
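conceptually it's something like this (just my sketch of the idea, not the PR's actual math; the blend formula and the names are assumptions, the linked PR is the source of truth):

```python
def blend_extra_noise(init_latent, upscaled_latent, extra_noise, denoising_strength):
    # The new option's value should stay below your denoising
    # strength, same rule NovelAI suggests.
    if not 0.0 <= extra_noise < denoising_strength:
        raise ValueError("extra noise should stay below denoising strength")
    # Pull a little detail from the GAN-upscaled latent into the init
    # latent, instead of doing a full latent upscale.
    return [(1.0 - extra_noise) * a + extra_noise * b
            for a, b in zip(init_latent, upscaled_latent)]
```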
>>19755 If this change does get pushed through, I'll be sure to try it out and redo all the Ufotableifcation img2imgs I was doing. Speaking of, my friend asked me to make this. Truly science has gone too far.
>>19756 Btw can your model do Shiki yet?
>>19757 It could always prompt Shiki, but I only have 3 movies in the data so I can't do the img2img stuff with KnK content like I can with Fate. I have a few Takeuchi doodles of Shiki I wanna test it on after the next training. I found a new KnK artbook I wasn't aware of on Sad Panda because it only had two fucking generic ass tags.
>>19755 very cool!
What do you guys use for noise offset? Normal/Pyramid? What settings? Can't find much info on pyramid noise
>>19757 In fact I gave it a go with what I thought would be an easier image, still needs work on the KnK dataset department, had downscale the source and inpaint the face to fix it. But it can be done.
>>19755 So you're saying we're about to get much closer to latent upscale level of detail without playing the latent upscale ghosting/smear/anatomy lottery?
>>19760 Looked around a bit. Is EasyScripts anon here? It seems kohya added --zero_terminal_snr not long ago, which is supposed to make regular/pyramid noise offset obsolete
>>19761 Looks nice even if lowres for now! Was wondering it's gonna be able to do mystic eyes, the loras weren't able to do that but if knk is not finished then probably not
I hate niggers
>>19761 Oh boy this looks great.
>>19765 Same
>>19764 I will need to tag every instance of the Mystic Eyes in my data for it to start working, since the base NAI training alone doesn't know what that is with only 100 booru hits. And from the quick skim I did, some images are incorrectly tagged without the eyes being present, or the images show generic "glowing eyes" slop which triggers me. I would assume that the Civitai LoRA/LyCORIS wasn't trained on anything too KnK-specific if it can't do Mystic Eyes, other than maybe Rider's if it was trained on some chunk of HF screencaps.
>>19763 fuck, it's only available in a release after the last known working one (bitsandbytes problem). Idk if I should just push my luck and try the latest version of the bmaltais/kohya_ss repo, since training takes a lot of time and I really don't want to get actual noise as a lora
>>19770 A cat is fine too
fuck I'm using my fan to cool my craptop down during training I'll have to sleep in the heat
(8.72 KB 369x114 screen.png)

>>19768 >I would assume that the Civitai LoRA/LyCORIS wasn't trained on anything too specific of KnK content if it can't do Mystic Eyes There are 2 Shiki chara loras and they're both unable to do the eyes too, dunno. >I will need to tag every instance of the Mystic Eyes in my data for it to start working since the base NAI training alone doesn't know what that is with only 100 booru hits Yeah I just wondered, thought it would be cool to see if it worked.
>furry managed to shard models what in the niggerloving fuck (in jax)
>>19774 English?
>>19775 multi-gpu/tpu training, mostly for SDXL since it's otherwise a pain to train
>>19772 get a usb fan or two and point them at the craptop
>>19772 >Training on a laptop in this heat you have my condolences anon
>>19733 Which murata range lora is this?
>>19780 no idea, i downloaded it in january and it's 150mb lol let me know if you want me to upload it
>>19781 yeah, the ones i've got seem pretty bad especially mixing with other styles
>>19772 that's dedication
>>19769 trying kohya for zero_terminal_snr and it's 2x slower as expected may even shit out a broken lora All of that for what? SDXL compatibility lmao I hate SDXL so much it's unreal
>>19786 and easyscripts fucking removed "load weights" why are they all retarded
>>19783 can we appreciate that file name for one moment?
>>19773
>that pic
lol, poor anon
>Shiki LoRAs
I tried browsing Civitai and there are 5 or 6? But the site is so slow and bad I can't even load the pages to read the descriptions, so I'm just gonna assume there is nothing of note other than a "How to use" or "Recommended settings". I can only assume this comes back to being a dataset issue: from what I saw on the boorus, the "authors" aren't using images with the mystic eyes active, either because not a lot are available or because they focus on art of Shiki's other outfits from the FGO Saint Graph and Craft Essence art, which generally doesn't include her eyes active, and the fan art depictions of the mystic eyes are inconsistent across the board, even in some of takeuchi's own drawings kek. That and lots of ufotable key art from that pre-2010 era for KnK just wasn't preserved online because it wasn't a big ticket item unlike Fate Zero that came out shortly after; it also doesn't help that their theme cafe and comiket items (most of the time art pieces and books respectively) are absurdly hard to find in the wild.
>>19788 >>19783 OUOOHD 😭😭😭
>>19790 CATBOX LINK EROTIC 😭😭😭
>>19755 voldy was not entirely convinced it improves results but it's merged now. FYI, if you do pull from dev there's still some upstream issues needing to be worked out, one with ultralytics (which breaks adetailer), and Gradio had a major frontend performance regression that's going to be fixed in an upcoming version, hopefully whichever one they release next. Full changelog thus far. This could be written better but whateva. https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/dev/CHANGELOG.md
>>19792 >Gradio problems again Why can't we use something other than gradio? Anyway, nice job on the PR, I'll do some more cursed img2imgs later when I finish up this batch of image crops.
>>19793 >Why can't we use something other than gradio? Sunk cost (extensions) fallacy Catbox guy is a faggot so I won't use that but Invoke seems solid and you don't have to use the nodes if you don't want to. People should start making extensions for that instead.
>>19794 wat why is he a faggot?
>>19795 Extremely spiteful towards voldy, he's either endorsed by or works for JewbilityAI, apparently worked with them so that voldy wouldn't get SDXL support for a while, etc.
>>19794 >Catbox guy is a faggot uhhhh? >so I won't use that its part of the Auto1111 update, its not "his" thing
>>19796 you sure you aren't confusing him for C*mfyAnonymous?
>>19799 >>19797 >>19796 >>19795 .................fuck i don't have my glasses on, sorry catbox guy. I MEANT COMFYANON, FUCK
>>19800 KEK Get some tea or coffee buddy.
>>19798 at this point I wanna see some heavy smut doujin with your maids
>>19802 why stop at doujin when we can have AI generated smut hentai?
>>19802 Textgen is already writing the plot for the VN.
>>19801 I'M WIDE AWAKE, I JUST HAVEN'T CLEANED MY GLASSES YET SO I HAVE TO SQUINT AND MY EYES ARE BURNING
>>19800 No worries, anon. Working with voldy is actually cozy because I get the vibe he still treats it as his own little project he works on in his spare time, which I do likewise. There's no pressure involved. I could easily make a bunch of improvements to other UIs but they don't line up with my own philosophies or whatever you'd call them. >>19802 Gotta bring up this classic thread https://archiveofsins.com/h/thread/7333493/#7335671
>>19806 hey i'm in that thread
>>19806 >I have the catboxes from this thread saved guess I was in this thread lol
>>19803 I honestly prefer doujin hentai to animated, mostly because anime hentai looks and sounds really bad imo
(65.70 KB 669x500 49a5w7.jpg)

>>19806 >a shoebill and two anchovies
>/sdg/ has been autosaged >claim general is now dead oh thank fucking god
>>19812 >>/sdg/ has been autosaged LMAO
>someone makes a new /sd/g after the last one hits page 11 from autosage >gets deleted LMAO
>>19814 OOOOOOOOOH NONONONO SIRS NOT LIKE THIS WE CAN'T REDEEM ANYMORE
If the character has 2 hairstyles for training loras, should you absorb one of them into the activation tag or should you just let it be?
On another note, Steve "tech Jesus" Burke is figuratively beating the shit out of (((Linus Sebastian))) and I'm enjoying every second of it.
>>19816 Default then alternate hair? >>19817 Part 2 video just dropped and its amazing
>>19817 >>19818 Oh shit, I have to watch that.
>>19818 >Part 2 video just dropped and its amazing Watching it right now. I'm trying to come up with a way to describe (((Linus)))' behavior but it's a waste of time, this is textbook jewish behavior, he cries out as he strikes you.
>>19820 what you were looking for is: >*kvetching intensifies*
>>19821 Kvetching AND tripling down. The whole prototype tripling down thing is insane, even the most brainwashed zoomer SHOULD be able to see how that non-apology is bullshit. "Yeah we massively fucked up in every conceivable way while testing it and we knew we fucked up and we did nothing to fix it BUT LOL IT'S STILL A TERRIBLE PRODUCT LOLOLOLOLOL"
>>19822 >Yeah we massively fucked up in every conceivable way while testing it and we knew we fucked up and we did nothing to fix it This is what fucking made me mad the most about this situation. He gets told that the prototype ONLY works on a 3090, and yet he uses a 4090 because... big number card, UNIRONICALLY? The mouse thing and forgetting to take off the stickers was also a slap in the face to that company.
>>19823 From what I've seen THEY SENT LMG A 3090 TO TEST IT WITH. AND DIDN'T RETURN IT EITHER.
>>19823 >The mouse thing and forgetting to take off the stickers was also a slap in the face to that company. I love how people defended that with "uhm if a pro journalist/reviewer didn't notice the tape then uhm what hope do YOU, the consumer have???"
>>19824 Wait, is that GPU from where that Linus clip of "How did we have an extra undocumented 3090" come from?
(600.98 KB 1920x1080 msedge_OaELK0LHMb.png)

>>19826 I think so, yeah. They even mentioned it while telling Billet they'd have their prototype back soon™
>>19827 so not only did they sell off (auction my ass) the prototype, they kept the 3090 Ti.
>>19827 >the good new is that it isn't just sitting on a shelf How the fuck is that good news???? I'd be fucking fuming. Not to mention that a competitor was the winner of that auction and is cloning the thing for a future product, something Billet Labs loses out on. I'd fucking sue the shit out of LMG.
>>19829 >Not to mention that a competitor was the winner of that auction Was it confirmed? Steve just said "WHAT IF A COMPETITOR GOT A HOLD OF IT?", I don't recall him saying it happened outright but seeing how both Corsair and ASUS are deeply involved with (((LMG)))...
>>19818 Uh, so you absorb one and tag alternate hair for the other?
>>19830 I don't trust that some random nobody would drop money on a water block.
>>19832 On one hand, some redditor would buy it immediately for bragging right and just to keep it as a conversation piece whenever he talk to other redditors, on the other hand you have god knows how many competitors...
>>19778 panty pull is a classic
Bros my collab added SDXL
(1.04 MB 1024x1280 00430-2082000296ed.png)

(1.31 MB 1024x1280 00272-3485551837.png)

(2.30 MB 1280x1600 00000-576129575_cleanup.png)

(1.37 MB 1024x1280 00267-2807367434.png)

https://huggingface.co/JujoHotaru/lora
Reposting it in case people didn't see inpaint anon's post on cuckchan. Pretty cool loras, although some of them might require pushing weight to 1.5-2. Glownigger under spoiler.
>>19789
>I tried browsing Civitai and there are 5 or 6?
Only two of them are worth mentioning, other ones are completely fried unusable trash
>But the site is so slow and bad I can't even load the page to read the descriptions to see creator notes
Yeah it felt bloated from the start but now it's straight up painful to use.
>I can only assume that this just comes back to being a dataset issue
Yeah it is. I doubt they bothered getting screenshots from the movies, so it's just whatever averaged from fanarts, and they were aiming for style-neutral looks anyway.
>>19785 just realized I ran these on based65 (not even based64) and not NAI fucking forgot to change models after coomgens
>>19835 Yeah but it seems to oom with upscales. Base gens look fine though, it's not a bad general-use model for shit like memes or historical stuff I guess, but fuck pajeets that shill it for anime.
SON OF A FUCKING BITCH I WENT OUT FOR DINNER IN BOXERS AND SHORTS AND A MOSQUITO JUST BIT ME ON THE BALLS WHAT THE FUCK IT ITCHES SO MUCH FUCK
>>19834 Yes, very enjoyable. >>19839 Kek
>>19839 Fucking kek, does your ballsack stretch out of your pants like the old man johnny knoxville jackass prank or what?
>>19841 i use those...
>>19843 welp, now we got to kick you out of the kool kids klub
>>19843 how do you make non-cursed gens with that why would I want to make my anime models more 3D-like
>>19842 I wear baggy shorts and the restaurant had perk bench style seats and shitty picnic tables so i propped my feet up on one of the table's support beams/legs fucking hell it itches
>>19836 AAAAAAAAAAAIIIIIIIIIIEEEEEEEEEEE RUN THE CIA NIGGER OVER TERRY-SAMA! >Shiki LoRA again yea I'll do the needful and tag mystic eyes on my KnK stuff Guess I'll probably be the only autist doing full movie/anime screencap trainings other than the dude that shoved the entire Dark Side of Dimensions movie into a LoRA months back kek
>>19837 Wait, the melting only happens on NAI wtf, even on gens without the lora I'm going insane
>>19844 >>19845 3D converter helps when genning fizrot...
>>19847 I'm certainly not pushing, do whatever you like anon, I certainly wouldn't have the patience for shit like this, there's already too much shit to do with a full-blown fine tune.
>>19848 but did you train the stuff on NAI?
>>19851 Trained on NAI, no external VAE, yes
>>19785 "new" bake, actually just using the last one and resuming from last since it looked undercooked >>19852 here's the grid, one of them doesn't have the lora applied (did one less epoch but forgot to take that out from the grid) https://files.catbox.moe/dwv06f.png last run is light blue on tensorboard
>>19850 I was gonna do it anyway, I just know now that there is a want to be able to prompt details like that. And trust me, I don't have patience, I was hoping to take a break again but then I ran into my second real issue since starting this project so I got to take care of it.
>>19853 I mean, it looks good to me in terms of being able to do all sorts of stuff. I assume #5 is the one without the LoRA?
(730.36 KB 768x1024 catbox_u3189u.png)

(1010.53 KB 768x1024 catbox_auded3.png)

(685.91 KB 768x1024 catbox_58tq3m.png)

(991.22 KB 768x1024 catbox_j08ybn.png)

>>19855 Yep, #5 is without the lora. It's quite random and not that great compared to the previous version I made months ago. First two are latent, last two are remacri. Alternatively old and new. Noticeably meltier (though less than genning at a higher base res) and less well defined
Got bored, have another meme forgot to check dev branch img2img... will try it later
Is there a way to make the wd tagger actually include ratings in the captions? I want to use it to tag my hydrus stuff, but it only uses normal tags. There's https://github.com/abtalerico/wd-hydrus-tagger that imports into hydrus right away (which is neat), but running it on the gpu gives me missing cuda dependencies errors.
>>19857 release a closed alpha
>>19858 >ratings in the captions what do you mean by this? Like the Booru rating from General to explicit? As for tagging, I just use the standalone taggers and manually import into hydrus because the API shit has never worked for me.
>>19860 >Like the Booru rating from General to explicit? Yeah, they have them in the UI for single-image interrogation, but they don't get included in the .txts. I also tried saving .jsons alongside the .txts and those contain the ratings, but they also contain every single tag with a score and I don't think hydrus can parse non-strings.
>>19861 In the json is it listed as "rating:general" or just plain text "general" because if its the later it would be added as a keyword tag and not create its own hydrus groupspace even if you got the string included in the txt.
>>19862 It's like this: the ratings are separated at the top and the rest is in the second block, except imagine like thousands of tags there.
my balls don't itch anymore
>>19863 yea I'm not sure how you would be able to translate that into something for hydrus. You'd need something in the tagger to choose the rating with the highest score and write that word into the txt. Then on the hydrus end, a string filter that converts "general"/"explicit" into "rating:general"/"rating:explicit" so it becomes a Hydrus namespace for filtering purposes. And when exporting the images back out for training, another string filter that removes "rating:anything" from the outgoing text so it doesn't get trained into the data.
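both ends of that filter chain in sketch form, assuming the tagger JSON gives a rating → score dict like in the screenshot (function names made up):

```python
def pick_rating_tag(rating_scores):
    # Take the highest-scoring rating and namespace it so Hydrus
    # groups it under its own "rating:" namespace.
    best = max(rating_scores, key=rating_scores.get)
    return "rating:" + best

def strip_rating_tags(tags):
    # Drop rating:* tags when exporting captions for training so the
    # namespace doesn't get baked into the model.
    return [t for t in tags if not t.startswith("rating:")]
```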
>>19864 That's good, make sure you don't wake up tomorrow with a swollen lump where it bit you or it's gonna itch again
>>19856 okay, after some testing, I'll have to do some fixing 1- eye color tags are fucked, it tagged literally all but one of the orange eye pictures as red eyes. I can see some are ambiguous but others clearly aren't. 2- Some tags need exclusivity; applying the standard InoriYuzuriha tag onto the Monster and Mana versions just isn't going to fly (the first has a vastly different haircut, the second has a different haircut, special eyes that I will never be able to replicate, and different accessories) 3- I'll prune outfit tags, because it fails to dissociate their uses in similar outfits (VoidDress and WhiteVoidDress), leading to crosstalk 4- I may even prune character tags, since there isn't much variation otherwise if I dissociate Monster and Mana from the regular version Idk if --zero_terminal_snr worked though; I don't really like having to use kohya for this, but Easyscripts didn't let me do all of what I wanted either (dynamic keep tokens: even though you can now set them by subfolder, it's a pain being unable to do that for a multi-concept lora) full grid: https://files.catbox.moe/4w9ikd.png
>>19867 Of course the only mentions of zero terminal SNR are this sentence: >fix noise scheduler betas to enforce zero terminal SNR and this error message: >zero_terminal_snr is enabled, but v_parameterization is not enabled. training will be unexpected This is ESL so egregious even I as an ESL will call it out. Why are people so allergic to documentation?
>>19868 If it's from the Kohya readme, then it's because Kohya just runs DeepL on it because he doesn't engrish.
>>19869 Not even the readme, just comments in the code. DeepL on the jap text gives a better translation lmao. Reading the paper (the same one suggesting cfg rescale for vpred models), terminal SNR seems to fuck with eps-pred, but idfk how that actually affects eps-pred loras. I just wish we'd have a vpred model so we could actually start using this shit that's been out for months instead of actual bandaids (noise offset)
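For anyone curious, the actual fix from that paper is tiny: it just rescales the noise schedule so the cumulative signal at the final timestep is exactly zero. Here's a plain-python sketch paraphrasing the paper's pseudocode (my own rewrite, not kohya's actual implementation):

```python
import math

def enforce_zero_terminal_snr(betas):
    """Rescale a beta schedule so the final cumulative alpha is exactly 0.

    Sketch of the method from the zero-terminal-SNR paper (the same one
    that proposes cfg rescale); only meaningful when training with vpred,
    since eps-pred is undefined at SNR = 0.
    """
    # sqrt of the cumulative products of (1 - beta)
    abar_sqrt = []
    prod = 1.0
    for b in betas:
        prod *= 1.0 - b
        abar_sqrt.append(math.sqrt(prod))
    a_first, a_last = abar_sqrt[0], abar_sqrt[-1]
    # shift so the last value hits exactly 0, scale so the first is unchanged
    scale = a_first / (a_first - a_last)
    abar = [((a - a_last) * scale) ** 2 for a in abar_sqrt]
    # recover per-step alphas from the cumulative products, then betas
    alphas = [abar[0]] + [abar[i] / abar[i - 1] for i in range(1, len(abar))]
    return [1.0 - a for a in alphas]
```

Betas in, betas out; the last beta comes out as exactly 1 (i.e. the final step is pure noise), which is exactly why that kohya error message tells you to turn on v_parameterization with it.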
>>19870 Oh sheeeeeeit What’s the secret?
>>19870 Good position.
>>19870 Hot, LoRA?
>>19873 Agreed. I'm most glad it works with my beloved Chitose. >>19872 A lora from civit of all places. Found it during my futile attempts to replicate this >>19722 >>19874 I used this one https://civitai.com/models/65342 . But the position is referred to as "Sword swallowing position". Maybe there's a better one out there
>>19875 >oh that looks inter-- >page loads >nigger in the first preview >close tab
>>19875 Fair warning, it does require some gacha, but it probably doesn't help that I'm combining other LoRAs. Oh and the guys lack a butthole, but that's a good thing as far as I'm concerned.
>>19877 welp, it completely kills my style LoRAs, uhm, I'll work on my backlog of gens for a bit longer.
>>19876 >>19878 There are multiple LoRAs floating around for the same concept (with similar names). Hopefully there's a better one out there that does suit all our needs. For now, I think it's as close as I can get to anything that resembles >>19722 , so hey, the LoRA did its job to a certain extent
>>19879 I'll give it another go later, I have like 200 gens or more in need of touching up... It never ends...
some retard trying to justify selling loras on /trash/ these people need to suffocate immediately
>>19881 Some fag on Civitai not only had to pay for >>19735 but he shilled the creator too. The guy sells LoRAs for like 5 bucks and trains for 15, what the fuck? I wanted to start my own service for like 5 bucks but it's probably too late.
>>19836 Thanks for the new toys
Tried combining the glowing eyes with the hypno lora that was posted a while back and it really does capture that empty eyes look I always wanted
>>19884 you do you, but that looks like a shitty PS edit. Would look better if the glow wasn't so in your face
>>19885 I mean it's a quick test gen, I imagine it'd look better if I tried somewhat harder but I personally like the glowing eyes look so I'll get good use out of it even outside hypno shenanigans.
>Download lora that makes 2D models into 2.5D >Set it to negative values >Now converts my 2.5D models into 2D the big brain strats
>>19385 lllyasviel mentioning in the dev Discord that utilizing cumfy's backend would be equivalent to Microsoft Edge using Chromium (he's saying this as if it's a good thing), two SAI employees suddenly chiming in, and then cumfy posting this on 4chan is one of the saddest chains of events I've seen. Literally only a single dev was interacting during this, and never at any point did they say they "want" to do this. voldy never incited this. They did this to themselves.
>>19889 hope voldy never cucks out
>>19889 >>19890 thank god he came online...
>>19889 It's incredible how fucking annoying and unlikable this faggot is.
>>19891 so... fvcking... zased...
>>19889 40 bucks he stalks the discord under a false name. Voldy should kick the SAI employees out
>>19894 Cumfy is an obsessed insecure autistic faggot, I can almost guarantee that he samefags on chans and has multiple discord accounts
>>19895 Pretty sure Comfy and Debo are the same person
>>19889 >look him up >he's a chang >made a "new UI" and says he won't embed metadata in images genuinely retarded
>>19863 >>19865 So I wrote a small python script that picks a rating with the highest score from the .json and appends it to a corresponding .txt file with tags for every .json in a folder, so it should be importable. I still need to test it on real data. I might post it later if everything goes well.
(442.14 KB 1728x1248 catbox_i6r1ra.jpg)

(2.40 MB 1728x1248 catbox_qfy2s5.png)

(2.48 MB 1728x1248 catbox_ocdroa.png)

>>19763 Haven't had time to look around, only checking the thread for the first time in almost 2 weeks! Sorry I haven't been keeping people up to date. I'll be adding that and a mode that just generates config files soon. I've also been keeping up on updates for the most part, so if you're following my sdxl branch (which has become my dev branch as it stands right now), it's got plenty of patches and bug fixes. I'll push it to main once kohya pushes sdxl to main for sd-scripts. The time I have to work on the scripts has basically plummeted to weekends only, so bear with the slower updates! I'll try to keep up with the thread a bit more often now that I have a small bit more time here and there to check. Here are some definitely scuffed gens from the definitely fucked tab_head lora I tried to make on sdxl. It's not good enough to release or anything, but better than expected. These aren't my gens, because I had already written it off as a failure, but apparently it was still possible to make it look decent.
sd/g/ is such a whiplash of a thread, it went from bitching about comfy, to some nigger crying about catboxes to talking about trying to pitch in for a finetune model
>>19899 >sdxl to main are people making the switch to that now actually?
>>19901 people are deepthroating SAI by trying to train a SDXL lora, so a lot of loras demand support
>>19902 a lot of people* brain hurt
>>19902 >>19903 I can't wait till that 4090 contest is over
ok which one of you was it
bros we should invite our friend cumfydev to here since everyone likes him so much
>>19906 go back or kill yourself nigger if you choose the later, do it outside so we don't got to clean up your brain matter
>>19907 wow thats not comfy at all
>>19909 is the dildo up your ass comfortable?
>>19910 are you 12? stop being trolled
>>19910 no its not I always find it really hard to prompt anal dildos that look natural :(
>anime >*give solid red eyes to girl* >artists >*add gradient orange tint to eyes, making some right in the fucking middle* AAAAAAAAAAAAAA
>>19914 Nani?
are you faggots neets or something? get a job
>>19916 Hook me up with a shilling gig I wouldn't mind shilling cumfy on /g/ if I got a check for doing so
>>19898 >>19865 Alright, here's the script. It picks the rating with the best score among the four and appends it to the end of the tag file, or creates a new one containing only the rating. I also added an option to reformat the tag file to one tag per line, for easier hydrus import. https://files.catbox.moe/ibp68e.py I'm not really sure it's the best way to do it, as I'm a codelet and did anything python-related for the first time today, but it works.
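For anyone who'd rather not run a random .py off catbox, the core of that idea is just this (my own sketch, not the posted script; the rating key names and the flat json layout are assumptions about the wd-tagger output, so check your own .json first):

```python
import json
from pathlib import Path

# assumed wd-tagger rating names; verify against your own .json output
RATING_KEYS = ("general", "sensitive", "questionable", "explicit")

def append_rating(json_path):
    """Pick the highest-scored rating from a tagger .json and append it to
    the matching .txt tag file (creating the .txt if it doesn't exist)."""
    json_path = Path(json_path)
    data = json.loads(json_path.read_text())
    # assumed layout: rating scores sit at the top level next to the tag scores
    ratings = {k: data[k] for k in RATING_KEYS if k in data}
    best = max(ratings, key=ratings.get)
    txt = json_path.with_suffix(".txt")
    tags = txt.read_text().strip() if txt.exists() else ""
    txt.write_text((tags + ", " if tags else "") + "rating:" + best)
    return best
```

On the hydrus side you'd still want the string filter trick from earlier in the thread to turn plain "general"/"explicit" into a proper Rating: namespace.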
>>19916 Getting BTFO'd on /g/ so hard that you came here thinking we're easy targets or something? Are you out of tendies? Doesn't seem like this shill gig of yours is any good.
>>19917 >>19919 >shill shill shill lol absolutely delusional retards masturbated all their brain away kill yourselves vermin
>>19920 >implying cumfy is used for anything but porn lmfao
>can only get here through an obscure link on a gitgud >”no it’s not shilling, it’s just being informative” I guess SAI got PR training from Linus’s Trash Tricks
>>19922 Linus Talmudic Tricks*
>>19916 /workfromhomecrew/ ww@ modest hours, modest money, generating loli while on business calls
(2.60 MB 1280x1600 catbox_883dwc.png)

(2.21 MB 1280x1600 catbox_9uxzb9.png)

(402.33 KB 4000x802 catbox_58h1on.jpg)

(402.70 KB 4000x916 catbox_hmmrar.jpg)

It's nice when loras work well with models instead of fighting with them. Hard to find a good balance with some styles
>>19922 >>19923 Didn't he put out a full-on video apologizing? He went from 'we won't talk about this on WAN show' to a full-blown video with the CEO apologizing
(57.25 KB 1107x288 msedge_NMfBaUk9VT.png)

>>19922 Can you believe some people still shill him cause >s-steve must be jealous!1!1! >>19927 I don't know and I don't care, kike tears and kike apologies have negative value.
>>19927 It was a huge corporate lawyer speak non-apology, and he had a sponsored ad and LTT store plugs in it. For comparison, Steve’s video where he grilled Linus was not monetized
>>19915 all of these were autotagged red eyes. The last one is the anime version, which is the de facto correct one. I have to deal with artists putting fucking gradient eyes and wrong eye colors everywhere
>>19930 just tag them all as red bro, the fuck else ur gonna do, its fine
>>19931 Are you retarded?
>>19932 That guy is some seething troll from /g/, ignore him. And yea, I see the issue; I'm not sure what you could do other than color correct the fan art and retrain.
At last, I'm unbanned from /g/, probably gonna get banned again in 20 minutes though lol
>>19931 >it's fine here are ur red eyes bro (1st pic) >>19933 funny thing is the 1st iteration didn't have this problem (see 2nd pic), most likely the pruning + mistagging being the problem. retagging > pruning, but both should work
>>19935 I don't really do tag pruning honestly, I just make sure stuff has the right tags and run with it
shit's exploding on twitter rn so basically linus tried to use linux and like when he installed steam shit just stopped working and the computer became unusable lol now explain yo me why would anyone ever use dis instead of just enjoying stable work of windows and macos you guys praise linux like its 2nd coming or smth lol while actually its plain unusable Imagine typing shit in terminal just to install stuff fr man
anonfiles is done
>>19939 something fishy about his statement
>>19940 dunno. maybe it's true, maybe he actually got fucked by feds. most likely it was feds uploading cp there, they do this kind of shit. don't think the site had a canary
>>19942 yea, I suspect he was blasted by the feds as well and got ID'd, as people pointed out that some of the things he was complaining about are basic CYA SOP
>>19943 imagine if catbox gets glow'd
>>19944 shut up shut up shut up shut up shut up
>>19938 >even someone who types like a niggerzoomer figured out that linux is a cancer
>>19945 It's a matter of time before it all comes tumbling down
>>19947 >FUDposting get out of here you glowing kike
>>19948 dumb nigger
>>19948 Do you even know what FUD means dumbass
Catbox seems to be airtight when dealing with adversaries, litterbox is pretty bulletproof, and I think catbox's small size limits and its quick takedowns of questionable material help it avoid the issues anonfiles could have run into with a potential glowop