/ais/ - Artificial Intelligence Tools

"In the Future, Entertainment will be Randomly Generated" - some Christian Zucchini



8chan.moe is a hobby project with no affiliation whatsoever to the administration of any other "8chan" site, past or present.

Use this board to discuss anything about the current and future state of AI and neural-network-based tools, and to creatively express yourself with them. For more technical questions, also consider visiting our sister board about Technology.

Hardware for hosting local A.I. Anonymous 04/19/2025 (Sat) 02:19:00 Id: 7f90d8 No. 7340
Since we lack a thread to discuss hardware, I made this. What's the best GPU out there in terms of price, performance, and availability for different uses, from LLMs to image generators? The documentation on this seems to be spotty at best.
(68.16 KB 1999x1124 macmini.jpg)

Looking at prices right now, it seems I will never be able to afford a top-of-the-line card. I am quite happy with my 4060 Ti 16GB; those 16 GB are very valuable for image generation, and for video I just go to sleep and leave it working overnight, so time isn't that important. I am very curious about Mac minis because of the unified memory; is a Mac an option for imagegen and LLMs?
>>7571 I am not sure about Mac minis, don't quote me on this, but Nvidia GPUs have CUDA cores and are usually what LLMs are trained on. With other hardware such as Intel or AMD, you get lower performance and also need to jump through some hoops to use any kind of AI. Been thinking of upgrading my GPU; how good is a 4060 Ti 16GB at generating images or video? Thought of getting a 5060 Ti 16GB, but it's been a paper launch so far.
(460.67 KB 832x480 Video_00017.mp4)

(464.53 KB 832x480 Video_00012.mp4)

>>7719 For images it's enough, I use Flux.D without issue. For video it's underpowered. I could probably squeeze more out of it, but I have only made a couple of video projects, and those I did by leaving it working overnight.
OP, I'd highly recommend trying a service like RunPod (ass) or Vast.ai (slightly better). You can pick a specific piece of hardware and see if it can run the models you want at the speed you need. I have a shitty 3060 8GB and honestly? It's fine... I can do most any image generation I want; SDXL is a bit slow, but SD 1.5 is still fast. Then once my workflow is set up, I can rent a 4080 from a shell-shocked Ukrainian for $0.15 an hour and generate all night long without burning my own chips to death.
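For anyone weighing renting against buying, the trade-off above is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch (the card price and the $0.15/hour rate are illustrative assumptions, not quotes from any provider, and it ignores electricity, resale value, and rate changes):

```python
def break_even_hours(card_price_usd: float, rental_rate_usd_per_hour: float) -> float:
    """Hours of rented GPU time you could buy for the price of owning the card.

    Purely illustrative: ignores power costs, resale value, and price drift.
    """
    return card_price_usd / rental_rate_usd_per_hour


# At $0.15/hour, a hypothetical $900 card buys 6000 hours of rental time,
# i.e. 750 overnight 8-hour generation runs before owning breaks even.
hours = break_even_hours(900.0, 0.15)
print(hours)       # 6000.0
print(hours / 8)   # 750.0
```

Which side wins depends entirely on duty cycle: occasional overnight batches favor renting, while a card that sits at 100% load every day pays for itself.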
>>7340 If I tell you, the price will double, and since I'm a poorfag I won't be able to afford my other one. I'm going to request that knowers of this information don't post it yet, because it's cheap enough that if you know, you know.
What can I do with a 5080?
>>7973 Very fast for images, but you will run into memory limits for video. These should have had 24GB, considering the 5090 has 32.
I remember coming across some logs from this so-called prototype.essence that used a custom GPU setup back in the early 2010s to "map cognitive patterns." It referenced a company called Fluvium Technologies, which has no trace now, same as the prototype. I've read that it was part of a bigger system that never fully shut down; there's footage and audio fragments that belong to this laboratory, and code that pops up in weird corners of the web. Anyone here ever worked with legacy GPU clusters or AI systems from that time? Curious to know new info about this.
The 3060 is the most cost-effective 12GB card. Given Nvidia is still releasing 8GB cards in 2025, you may as well go with this instead of hoping something changes. More VRAM doesn't matter for image generation, because the models themselves have limited resolution and therefore limited memory requirements. For models to support higher resolutions, they would have to ignore the majority of their source images, drastically lowering their capabilities. Image generation takes around 2 seconds. Video is pointlessly short until they solve keeping the entire thing in VRAM at once; you can't get enough VRAM to matter. LLMs I haven't bothered with, because uncensored models are around 300GB, so good fucking luck in our lifetime.
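To put rough numbers on the LLM VRAM claims above: weight size scales linearly with parameter count and bytes per parameter, so the arithmetic is trivial. A minimal sketch (the model sizes are illustrative, and it deliberately ignores activations, KV cache, and quantization format overhead, which add on top):

```python
def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate size in GB of the model weights alone.

    1e9 parameters at N bytes each is N GB per billion parameters.
    Ignores activations, KV cache, and per-format overhead.
    """
    return params_billions * bytes_per_param


# A 70B model: ~140 GB at fp16, ~35 GB at 4-bit quantization,
# still too big for any single consumer card.
print(weight_gb(70, 2))    # 140
print(weight_gb(70, 0.5))  # 35.0

# A 13B model at 4-bit is ~6.5 GB, which fits a 12GB 3060
# with room left over for context.
print(weight_gb(13, 0.5))  # 6.5
```

This is why quantization, not raw VRAM, decides what's runnable locally: halving bytes per parameter buys far more headroom than any realistic card upgrade.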
>>7571 All "AI-oriented" mini computers are utter trash. The self-advertised TOPS is only a fraction of what your average GPU can attain, and the real value is probably even lower. Macs are especially shitty and overpriced.

