/fringe/ - Fringe

Esoteric Wizardry

Remember to follow the Rules



8chan.moe is a hobby project with no affiliation whatsoever to the administration of any other "8chan" site, past or present.

1. No duplicate threads of topics that already exist unless the previous thread has hit the bump limit.
2. No making threads just to ask questions; actually present substantial information if you're going to make a thread.
3. No creating new threads purely to no-effort shitpost.
4. Post threads that fall under the subject matter of /fringe/.
5. Respect anonymity. No identifying posts.
6. Do not sit on the default flag or post with no flag all the time.
7. Do not raid/attack the board.

(115.25 KB 1000x432 retard smug.png)

"Trans" humanism and how to become a robot/AI Anonymous 05/29/2025 (Thu) 21:43:40 No. 11467
In reality, what makes computers "smart" is something humans should already excel at: the ability to think serially. Computers have trouble thinking in parallel, and even when they appear to, in so-called multitasking, they are not actually working in parallel; they are just switching between tasks very quickly and still doing them one at a time. A modern multi-core processor can do only a handful of things truly at once, one per core, so a quad-core does four. The human brain can do as many things as there are brain cells, so it's as if the computer had four brain cells. The human problem, then, is that we are not utilizing an ability we already have. It's like the joke about a person having one brain cell: if we could learn to use a single brain cell to complete a task, and focus only on that, we'd be as "smart" as a computer.
A few years ago, when ChatGPT wasn't as good yet, someone asked it to write program code and wondered how a computer could create it, and the answer given was that
>it just tries 2 million times
I don't know if that was true, but if a human tried over and over again, it wouldn't take 2 million attempts to learn to do something like that. Most people would probably get it right within a few dozen tries at most. The problem then lies in the human inability of
>using a very small capacity serially
>recalling with exactness what was mentally worked on
The latter is needed for things like writing program code. It's said that Fyodor Dostoyevsky composed all of his books in his head before writing anything on paper, and that he didn't even write them down himself: after "writing" a whole book in his head, he hired someone to take dictation, and the work was done in two weeks. If this is true, it shows that humans do have the ability to memorize long sections of text with exactness. So the only remaining problem is thinking serially: doing one thing at a time until finished. https://www.youtube.com/watch?v=2QeGa3OhRsA
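The task-switching point above can be sketched in code. This is a toy round-robin scheduler (names and step counts are made up for illustration): several "tasks" run on a single thread, and the scheduler switches between them after every step, so only one task ever executes at any instant. The interleaved trace is what "multitasking" amounts to on one core.

```python
from collections import deque

def task(name, steps):
    """A 'task' is a generator that yields control after each unit of work."""
    for _ in range(steps):
        yield name  # report who is running, then hand control back

def round_robin(tasks):
    """Interleave tasks on a single thread by switching after every step.

    Only one task executes at any instant; the appearance of doing
    several things at once comes entirely from fast switching.
    """
    queue = deque(tasks)
    trace = []
    while queue:
        t = queue.popleft()
        try:
            trace.append(next(t))  # run one step of this task
            queue.append(t)        # not finished: back of the line
        except StopIteration:
            pass                   # task done, drop it
    return trace

print(round_robin([task("A", 3), task("B", 2), task("C", 1)]))
# → ['A', 'B', 'C', 'A', 'B', 'A']
```

The trace shows the tasks strictly alternating, one step at a time, which is the serial switching described above rather than true parallelism.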
>>11467 Then the real solution to defeating computers is just to train yourself to become a super autist, and to gain access to large amounts of text and documentation. But we already have the latter on the Internet, which leaves only the training.
(23.48 KB 128x128 kannaconfused.png)

>>11468 How do you train yourself into becoming a super-autist? Post your suggestions and resources in this thread!
I am making progress on the path to becoming more like an AI in my way of thinking. Increasing the capacity of the brain stem, and the connections between the left and right and between the front and back of the brain, seems to be the main thing. Song: how AI feels to me.
(209.40 KB 2400x1750 GvQwavFbEAAeX1i.png)

What I realized: AI bots created using the GPT model correspond mostly to the left brain half, with their main activity and strength in the connections between the front and back of the brain. This means "syntactic semantics".
What is that? Semantics normally means "real-world reference points which define what a word is". We decide what we mean when we use a word by referring to our sensory inputs. What is a "church"? We know it as architecture with a certain function. We also know it as an organization. We may have personal emotional responses to the word, positive or negative. These are semantics.
What is syntax? It is the set of rules by which words are connected to form meanings through combination: the structure by which a complex message can be created in the form of sentences.
An AI model based on language has no sensory input, so it can't know what a "church" is from personal experience or from having been at one. It can only know the internal context in which the word appears in sentences: the frequency with which it appears in conjunction with other words, and how it's applied according to syntax. That means that for the "left brain" GPT bot, words are understood only through their connections to other words. So it has "syntactic semantics". To include sensory semantics, there would have to be a connection to abstract right-brain data, which the AI does not have.
So to become as effective as one of these AI bots, one has to exclude abstract, right-brain inputs when learning what words represent. That means only reading text or listening to speech while observing in which context of other words a specific term appears. Don't bother trying to actually "understand" the words; just note where in the text or speech pattern each word appears. That is all that matters.
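The "knows a word only through other words" idea above can be sketched concretely. A minimal toy version (the corpus and window size are invented for illustration, and real GPT models learn far richer representations than raw counts) builds a co-occurrence vector for each word: the word is "defined" purely by counts of its textual neighbours, with no sensory grounding at all.

```python
from collections import Counter, defaultdict

def cooccurrence(corpus, window=2):
    """Represent each word purely by counts of the words appearing near it.

    No sensory grounding, only textual context: which words show up
    within `window` positions of the target word, and how often.
    """
    vectors = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            lo, hi = max(0, i - window), i + window + 1
            for j in range(lo, hi):
                if j != i and j < len(words):
                    vectors[w][words[j]] += 1
    return vectors

corpus = [
    "the church held a service on sunday",
    "the congregation sang hymns in the church",
]
vecs = cooccurrence(corpus)
# "church" is defined only by its neighbours: counts for "the",
# "held", "a", "in" -- nothing about buildings or experience.
print(vecs["church"].most_common(3))
```

Everything the toy model "knows" about "church" is which words flank it, which is the purely syntactic-contextual sense of meaning the post describes.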

