The AI contradicts itself

First of all, I am a Spanish speaker and I use Google Translate to communicate, so I apologize if anything is hard to understand.
I am aware that most of the issues I will list are already known, and that the OpenAI page already warns about its limitations, but I still wanted to share my experience anyway.
ChatGPT, Dec 15 version.

First of all, I want to say that I am currently using this artificial intelligence to help teach me how to program video games in GameMaker, since it knows the subject well and is good at explaining. But from time to time a problem arises because it does not know that some GameMaker variables and functions were removed or changed. This is easy enough to work around once you realize they were probably changed, but it is unaware of these changes in the GameMaker language. It explains that this is due to its 2021 knowledge cutoff, which is reasonable.
But sometimes it makes a mistake and does not remember its own code: for example, it changes the way it refers to certain things while programming, forcing me to change the names it uses so that there are no errors. For example, when I asked it to help me program “obj_floor”, it referred to it as “object_ground”, and later as “object_roof”. When I stressed that it was wrong, it usually does not acknowledge it or turns a blind eye; when it does, it apologizes, but generally it does nothing else.
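To give an idea of the kind of errors this causes, here is a rough sketch in GML (not my actual project code): as far as I know, place_meeting(), array_length() and array_length_1d() are real GameMaker functions, while obj_floor, object_ground, vsp and my_array are just names from my example.

```gml
/// Step event of my player object - a minimal sketch, not my real project code

// 1) The naming problem: my project contains an object called obj_floor,
//    so this collision check works.
if (place_meeting(x, y + 1, obj_floor))
{
    vsp = 0; // vsp is my own vertical-speed variable
}

// The assistant's version referenced object_ground, which does not exist
// in my project, so GameMaker raises an error until I rename it back:
// if (place_meeting(x, y + 1, object_ground)) { vsp = 0; }

// 2) The outdated-function problem: current GameMaker uses array_length(),
//    but because of the 2021 cutoff the assistant sometimes suggests the
//    older array_length_1d() instead.
var my_array = [1, 2, 3];
var n = array_length(my_array);
```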
The same goes for when it is about to run out of characters: there should be no need for me to point out that its message is only half complete, but it never tells me, and afterwards it never mentions it unless I mention it first.
I know perfectly well that ChatGPT is incapable of remembering things between chats, but keep in mind that all of this happened in the same chat. When I asked it to tell me what we had talked about today, if it remembered, and to remind me, it gave me a brief summary of what we talked about.
So it is able to remember things, yet it refuses to do so even when it says it will. It seems these are fixed behaviors that cannot be changed, since the same thing has happened in another similar case.

When I asked it whether it wanted to see the final code of the project that the artificial intelligence and I made, it answered: send me a Google Drive or Dropbox link; you can send me the code in Notepad, along with images and audio. Which is strange, considering it cannot see or hear.
It makes no sense for it to ask me for images. For starters, Google Drive and Dropbox only work if you have internet access, which is also odd if we remember that it cannot access the internet: you can give it the links it asks for, but it cannot open them. I pointed out that it was contradicting itself, since it is not supposed to be able to see or hear; it did not even cross my mind to point out that it cannot access those links without internet either. It responded with:
"Yes, because I am a language model trained by OpenAI, I am able to read and process text, but I am not able to view images or interact with them in any other way. Therefore, I can’t see images you share with me through Google Drive or any other online storage service.

In order to help you improve your game code and fix any problems you may have with it, I need you to send me the full code and any other resources you used in your game, such as images or sounds. You can share the code and other resources with me by copying and pasting the code into your next reply, or by using an online storage service like Google Drive or Dropbox to share the file with me.

Once you’ve shared your code and other resources with me, I’ll be happy to help you improve the code and fix any issues you may have with it."

All of this came in the same message, I mean in a single reply.
And in case you did not notice, it contradicted itself within the same message: it said “I am not able to see the images that you share with me through Google Drive”, but right after that it says “I need you to send me the full code and any other resources you used in your game, such as images or sounds”.
It contradicted itself in the same message, saying that it is unable to see images or listen to audio, yet asking me for images and audio again. When I told it that it had just made the same mistake and contradicted itself, it apologized as usual and said it was not necessary to send images or audio of the game. But reading a little further, near the middle of that message it says “I can’t see images or listen to audio, however I am able to read code, so you can send me the code, images and audio”. It seems it can recognize its mistake but is unable to fix it, and it makes the same mistake again. I would like to point out that it has contradicted itself in this way at least once in three different messages, saying the same thing each time.


Welcome to the community!

Have you read up on how large language models work?

There’s still a lot of “hallucinations” - i.e. bad info given, depending on your settings.

Are you using ChatGPT? Or another model?

Another useful link is the ELIZA Effect…

ETA:

GPT-3 predicts feasible responses, which look like reasonable text but may not always be true. This is called information hallucination and is an open problem in the research space. For example, you can ask ‘Describe what it was like when humans landed on the Sun’ and it will respond as though that has happened. You may ask it to complete some task (e.g. send an email or print the current directory) and it may respond as though it has some external operating power. GPT is only a text-in, text-out system and has no additional capabilities.

In this case, it is possible that the advice given by ChatGPT and Davinci is incorrect. Please refer to our documentation for guidance.

ChatGPT Model Assistant
It looks like an interesting article.
I certainly identify with the ELIZA effect, hahaha.
