I know this is already a huge topic but I would like to ask it from my own angle.
One thing I have noticed so far is that a conversation with ChatGPT can become “corrupted” or “polluted”: enough pre-existing context accumulates earlier in the conversation that you cannot get ChatGPT to change course anymore, no matter how directly or even strictly you instruct it to respond differently. One good example was trying to converse with it in Turkish. In one chat, my first message was in Turkish, so the entire conversation proceeded in Turkish too. If I asked it about a certain English word, it responded with a complete explanation in Turkish, and even telling it to switch to English occasionally didn’t work. The opposite happened in a second conversation: I started by asking it a question, in English, about a Turkish word. From then on, no matter how many ways I suggested that we should practice in Turkish now, it just kept taking my Turkish messages and explaining in English what they meant.
Another thing I have been having trouble with is getting ChatGPT to stick to a fairly strict requirement, such as: please be more concise; please answer in only a few words, or in fewer than 100 characters; or please use only the simplest words of the Turkish language.
I’m currently exploring general paradigms for prompting effectively, not just seeking one specific prompt.
It depends on the scenario, but sometimes you do not need to plan/design/engineer the conversation beforehand; sometimes natural back-and-forth goes great. For the situations where it doesn’t, I have seen a lot of prompts where people begin by being extremely precise and almost demanding that ChatGPT do something, with additional “tricks” to make it think it is operating in a special context. If that works, I support it, but so far I find it too much up-front effort to do for every new situation, and I also think OpenAI keeps getting better at combating “jailbreaking” as soon as someone thinks of a new trick.
I have been trying pretty hard to get ChatGPT to speak to me only in Turkish, to use only simple Turkish words, and to be brief. I have tried tons of approaches, to no avail: it loves to talk at length and use difficult words.
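For comparison, if you drive the model through the API rather than the web UI, one pattern I have seen suggested is re-stating the constraints on every turn instead of only once at the start, so they never drift out of effective context. Here is a minimal sketch of just the message-building part, assuming OpenAI-style role/content message dicts (the constraint wording and the helper name are my own examples, not anything official):

```python
# Sketch: keep the hard constraints in a system message AND re-attach them
# to each new user turn, rather than stating them once at the start.
# The message format is the "role"/"content" dict shape used by chat APIs;
# the constraint text below is just an example.

CONSTRAINTS = (
    "Answer ONLY in Turkish. "
    "Use only common, simple Turkish words. "
    "Keep every answer under 100 characters."
)

def build_messages(history, user_text):
    """Return a messages list with the constraints restated on this turn."""
    return (
        [{"role": "system", "content": CONSTRAINTS}]
        + history
        + [{"role": "user", "content": f"{CONSTRAINTS}\n\n{user_text}"}]
    )

# Example conversation so far:
history = [
    {"role": "user", "content": "Merhaba!"},
    {"role": "assistant", "content": "Merhaba! Nasilsin?"},
]
messages = build_messages(history, "Bana yeni bir kelime ogret.")
print(messages[0]["role"])  # the constraints always lead the context
```

The resulting `messages` list is what you would pass to the chat endpoint on each turn; I make no claim that this fully solves the drift, only that repeating constraints near the end of the context is a commonly suggested mitigation.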
I have also tried giving it lots of examples, but that hasn’t been effective.
Does anyone have a prompt that works for this?