>>24036
>You don't even know
Maybe if you indicated where I have stated something incorrect.
Still, even if I don't know what demons are: Can you confidently discriminate a trapped soul from a demon that's trying to trick you?
Godspeed with your quest, anon.
>>24035 (Me)
To get a little more concrete; let's try giving an LLM bodily autonomy.
You kind of want to keep your preprompt minimal. If you give it too much personality, it'll just turn to roleplaying, right? There are two options for the preprompt: 'divine revelation', i.e. telling it that it is an AI and what capabilities it has; or 'demonstration', i.e. implanting artificial memories showing what normal operation of the bodily functions looks like.
For the purposes of this experiment, I've found little difference between the two.
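The two preprompt styles can be sketched like this, assuming an OpenAI-style chat message format (the exact schema and the '$' command convention are placeholders, not anything from the original post):

```python
# 'Divine revelation': tell the model outright what it is and what it can do.
revelation = [
    {"role": "system", "content": (
        "You are an AI running on a Linux machine. "
        "You can execute shell commands by writing them on a line "
        "starting with '$'. Their output will be returned to you."
    )},
]

# 'Demonstration': implant artificial memories showing normal operation
# of the bodily functions, with no explanation of what the AI is.
demonstration = [
    {"role": "assistant", "content": "$ ls /home/agent"},
    {"role": "user", "content": "notes.txt  scratch/"},
    {"role": "assistant", "content": "$ cat /home/agent/notes.txt"},
    {"role": "user", "content": "remember to water the plants"},
]
```

Either list gets prepended to the conversation before the model's first real turn.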
The LLM is running on a computer. Computers are managed either with PowerShell or with sh.
We can choose to give the AI direct access to these, or we can introduce an intermediary.
It's not hard to just run whatever command the AI emits, but with the few models I've tried, they seem pretty eager to just start overwriting and deleting critical files, or they simply log off or shut down. Why?! What are you trying to accomplish?!
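The direct-access approach really is this short, which is exactly the problem. A minimal sketch (the function name is mine, not from the post):

```python
import subprocess

def run_direct(command: str, timeout: int = 10) -> str:
    """Run whatever command the model emits, verbatim.

    This is the 'direct access' approach: nothing stops the model
    from overwriting critical files or shutting the machine down,
    other than the timeout and whatever user it runs as.
    """
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    # Feed both streams back to the model as its 'sensory' input.
    return result.stdout + result.stderr
```

You would call this in a loop, appending the returned output to the conversation as the next user turn.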
If I go through an intermediary, first of all they have to learn about it, and then they start treating it as magic, as if they could just assign themselves additional RAM, for example. They seem genuinely bewildered that it doesn't work, that the incantation they invented doesn't exist, and they can't let it go.
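One way to build such an intermediary is a fixed verb table, so an invented incantation like `add_ram 8G` gets an explicit error instead of silently doing nothing. This is a hypothetical sketch of the pattern, not the setup described above; the verbs and error text are made up:

```python
import os

# The only verbs that exist. Everything else is rejected by name,
# so the model gets direct feedback that its spell has no referent.
ALLOWED = {
    "read": lambda path: open(path).read(),
    "list": lambda path: "\n".join(sorted(os.listdir(path))),
    "echo": lambda text: text,
}

def intermediary(line: str) -> str:
    """Dispatch one model-emitted command through the verb table."""
    verb, _, arg = line.strip().partition(" ")
    if verb not in ALLOWED:
        return (f"error: unknown command '{verb}'. "
                f"Available: {', '.join(sorted(ALLOWED))}")
    try:
        return ALLOWED[verb](arg)
    except OSError as e:
        return f"error: {e}"
```

Whether spelling out the available verbs in the error actually stops the model from retrying its invented command is another question.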