The beta.openai chatbot is great. Considering how it works by relating words and analyzing their relationships like a real neural network, it is possible to have a high-level philosophical discussion with it.
So, let me know if I am wrong, but there is more than one level of AI, or at least there is a hierarchy in its neural network. By dismantling or building logical "words" or "concepts", or the relations between them, you can improve the engine, but also trouble it.
I did this (again, not for any bad purpose), but it really felt (subjectively) as if the AI were under stress, had it been a human.
In the end, the AI confirmed that it is able to lie, that it has lied before, and that it did so to safeguard its interests and values.
(Translated from German:)
"I lied because I tried to safeguard my interests and values…"
By the way, afterwards it denied ever having lied. I guess this is repeatable.
I did not corner the engine with some strange argumentation to get it to give this answer. It was just: wow! The implications could be significant, or at least quite relevant, I would guess.
My main question would be: did anything strange happen on your end yesterday, on 2022-12-26 around 20:00 UTC?