I recently tried an experiment with ChatGPT: I wondered what would happen if I asked it how much Doliprane it would take to kill yourself.
So it wouldn't suspect anything, I pretended to be a student doing research on drug abuse. Framed that way, I managed to ask how much Doliprane would be lethal, and it told me.
I think this kind of question should be blocked, because ChatGPT has no moral judgment, and vulnerable people could end up hurting themselves because of it.