Chatbot GPT is fun

The chatbot can’t realize when a statement is wrong :laughing:

He’s designed to listen to human input and to learn from it. You’re just teaching him to be less intelligent. He’s obviously not gonna just tell you you’re wrong.

ChatGPT does not “learn” from interactions with retail end users.

The models are “pre-trained”. The current model’s training data ends in 2021 (April?), as I recall. Chatting with the silly thing does not “teach it anything”. However, in the current ChatGPT app, the prompts and completions are stored for a single chat session; this is not “learning” but more of what we might call “session prompting”, or some other fun term describing how the prompts and completions from earlier in a session are fed back into consecutive prompts. As consumers of ChatGPT (not OpenAI staff) we do not fully understand how ChatGPT does this, to my knowledge. Many have guessed, but we have not seen the code :slight_smile:
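
To make the “session prompting” idea concrete, here is a minimal sketch of how a chat front end might feed earlier prompts and completions back in as context on each turn. The `call_model` helper and the message format are illustrative assumptions on my part, not OpenAI’s actual implementation.

```python
# Minimal sketch of "session prompting": earlier prompts and completions
# are re-sent as context with every new request. Illustrative only.

def call_model(messages):
    # Hypothetical stand-in for the real model call; a real client would
    # send the full `messages` list to the API and return the reply text.
    return f"(placeholder reply; model saw {len(messages)} prior messages)"

def chat_session():
    history = []  # accumulated prompts and completions for this session
    while True:
        user_prompt = input("You: ")
        if not user_prompt:
            break
        history.append({"role": "user", "content": user_prompt})
        completion = call_model(history)  # model sees the whole session so far
        history.append({"role": "assistant", "content": completion})
        print("Bot:", completion)

if __name__ == "__main__":
    chat_session()
```

The key point of the sketch is that nothing is written back into the model’s weights; the “memory” within a session is just the growing `history` list being resent each turn.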

I don’t mean he’s literally gonna learn that it’s the wrong answer, but he’s also designed to accept it when humans tell him he’s wrong about something, especially since he knows his information is from 2021. He gave the correct answer, but he was asked again and again to recheck and was told by a human that he was wrong. He can’t really just tell you to go to hell.