Why second person pronouns is such a huge issue for chatGPT?

I use chatGPT as a translating assistant for a few languages. But in every session, chatGPT is going nuts whenever there’s a phrase with a 2nd person pronoun. Instead of processing this phrase as it’s instructed to do, it starts treating it as a personal address.

I have very specific prompts in every session and I always instruct it what to do with such phrases. I’m telling it not to treat it in its favour, not to take it as a personal address. But it’s all in vain.

Folks, do you have a clue or secret prompt formula to get this fixed?

Could you give an example of what doesn’t work? It seems fine for me. Normally I just separate them in some manner. It understands markdown easily, like ``` and =========

So you could do something like

This is a phrase:

(phrase here)

Convert the phrase into Italian, but try to maintain the poetic meaning.

edit: ironically the forum is markdown too, so it doesn’t show the ``` I put at the top and bottom of (phrase here)

As for markdown, it does not seem to work:

Now back to translation.

Here is my new “Ru > Tr” translation session.

Since you barely can read Russian, let me translate.

After I prompted it with the necessary instructions, asking it to translate my phrases from Russian to Turkish, it started to mess things up.

Me: “Could I make copies?” (in Russian)
chatGPT: “Could you make copies”? (in Russian)
Me: “Could I make copies of the documents?” (in Russian)
chatGPT: “Could you make copies of the documents”? (in Russian)

No Turkish. What is going on? There is no any sense in what it is doing.

Yeah, sorry, my Russian is horrible, but it could be that first prompt isn’t so clear to it.

Maybe try something like

Translate the following to Turkish

Could I make copies?

Or simply Translate: "Could I make copies?"

It seems to be confusing the line to mean that you’re asking it a question.

Strange. It should interpret any language the same way because that is my understanding of what a language model is. But ok, maybe it understands now prompts only in English. I can give it a try.

As for pretending every request with a command “Translate :” - well that doesn’t look like a good idea, because that’s what prompt is for. Also, wrapping in quotes is a real pain on mobile.

My possibly wrong assumption is that it doesn’t store everything you say to it perfectly. It stores a summary of the instructions for optimization.

So if you tell it something like “You are a translation bot. You convert all Russian text to Turkish. Please respond with ok if you understand these instructions.”, it might actually be stored in the database as “I have been given instructions to convert all Russian text to Turkish”

Some details here could be lost in translation, including the pronouns, and dealing with multiple languages may make it worse. So sometimes you have to keep repeating it for clarity. What I end up doing is just editing the first instructions instead of using it as a chat.

You might also want to look at the API, which gives you finer control over how it works.

Could you please clarify what you mean here?

I mean if I wanted to translate several sentences, I’d edit the new sentences to translate into the original prompt instead of more conversations with it. It seems to lose context as a conversation goes on.

Also on another note, GPT-4 seems to be facing this problem of losing context more than 3.5 recently.

See the solution for a similar problem using the API here,

You can modify this to work in ChatGPT by putting the system message into a Custom Instruction.