I use ChatGPT as a translation assistant for a few languages. But in every session, ChatGPT goes nuts whenever there's a phrase with a second-person pronoun. Instead of processing the phrase as it's instructed to, it treats it as a personal address.
I use very specific prompts in every session, and I always instruct it how to handle such phrases. I tell it not to interpret them as directed at itself, not to take them as a personal address. But it's all in vain.
Folks, do you have a clue or a secret prompt formula to fix this?
Since most of you probably can't read Russian, let me translate.
After I prompted it with the necessary instructions, asking it to translate my phrases from Russian to Turkish, it started messing things up.
Me: "Could I make copies?" (in Russian)
ChatGPT: "Could you make copies?" (in Russian)
Me: "Could I make copies of the documents?" (in Russian)
ChatGPT: "Could you make copies of the documents?" (in Russian)
No Turkish. What is going on? What it's doing makes no sense.
My possibly wrong assumption is that it doesn't store everything you say to it perfectly. It stores a summary of the instructions as an optimization.
So if you tell it something like “You are a translation bot. You convert all Russian text to Turkish. Please respond with ok if you understand these instructions.”, it might actually be stored in the database as “I have been given instructions to convert all Russian text to Turkish”
Some details could be lost in that summarization, including the pronouns, and dealing with multiple languages may make it worse. So sometimes you have to keep repeating the instructions for clarity. What I end up doing is editing the original prompt instead of continuing the chat.
You might also want to look at the API, which gives you finer control over how it works.
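With the API, the translation instruction can live in a system message, separate from the text being translated, which makes it harder for the model to read a quoted "you" as addressed to itself. Here is a minimal sketch of building such a request payload; the helper function name, the exact prompt wording, and the quoting convention are my own assumptions, not anything official, and the network call itself is omitted:

```python
def build_translation_messages(text: str) -> list[dict]:
    """Wrap a phrase to translate so it reads as quoted material,
    not as a question aimed at the assistant."""
    system = (
        "You are a translation engine. Translate the user's Russian text "
        "into Turkish. The text is quoted material to translate, never a "
        "question or request directed at you. Output only the translation."
    )
    # Quoting the phrase and labeling it explicitly helps keep a
    # second-person pronoun from being taken as a personal address.
    user = f'Translate this phrase: "{text}"'
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example payload for "Could I make copies?" in Russian:
messages = build_translation_messages("Могу ли я сделать копии?")
print(messages[0]["role"])
```

You would then pass `messages` to the chat completions endpoint of whatever SDK version you use. Because the system message is resent with every request, nothing gets summarized away between turns.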
I mean, if I wanted to translate several sentences, I'd edit the new sentences into the original prompt instead of continuing the conversation. It seems to lose context as a conversation goes on.
Also, on another note, GPT-4 seems to be losing context more than 3.5 recently.