I would like to make a suggestion: use the System prompt instead of the User prompt for context rules.
Write the context instructions in the System field like this (Italian example):

```json
[
  { "role": "system", "text": "Follow the three instructions below for your outputs:" },
  { "role": "system", "text": "1. Use the Italian language only;" },
  { "role": "system", "text": "2. Check spelling and grammar for the Italian language;" },
  { "role": "system", "text": "3. Make sure all expressions are compatible with the Italian language;" }
]
```
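As a minimal sketch, the same idea in Python: build the message list with the system rules prepended to every request. (This only constructs the payload; the field name `"content"` and the exact client call depend on which chat API you use, so treat both as assumptions.)

```python
# Sketch: reinforcing context by prepending system-role rules to each request.
# The {"role": ..., "content": ...} shape follows the common chat-message
# format; adapt the field names to your actual API.

RULES = [
    "1. Use the Italian language only;",
    "2. Check spelling and grammar for the Italian language;",
    "3. Make sure all expressions are compatible with the Italian language;",
]

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the system rules so they apply to the whole conversation."""
    messages = [{"role": "system",
                 "content": "Follow the three instructions below for your outputs:"}]
    messages += [{"role": "system", "content": rule} for rule in RULES]
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("Scrivi una breve e-mail di benvenuto.")
print(len(msgs))        # 4 system messages + 1 user message = 5
print(msgs[0]["role"])  # system
```

Because the rules are sent with every request, they keep acting as context reinforcement across the conversation rather than competing with the user's own prompt.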
We may consider the System prompt as context reinforcement: it keeps the model on the context path more reliably than the User prompt does. It can serve as a set of general rules applied to an entire conversation on the same context. Please let us know about your results.
Let’s keep in mind that most of the model’s training data, possibly more than 50%, is or was in English, with the rest distributed among many other languages. There is an interesting thread in this forum (I lost the title): someone asked for 5 words without the letter “e” in several languages. English was the only language in which the model completed the task fully; it failed in every other language tested, with one or two letters “e” appearing across the 5 words for each language.
The most interesting part: if we translate all 5 words from any tested language into English, the model is correct for every word, with no “e”. They concluded that the model works internally in English and then translates into the desired language.
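The check described in that thread is easy to automate. A minimal sketch (the word lists below are only illustrative examples, not the thread’s actual outputs):

```python
def words_without_letter(words: list[str], letter: str = "e") -> bool:
    """Return True only if none of the words contain the given letter
    (case-insensitive)."""
    return all(letter.lower() not in w.lower() for w in words)

# A hypothetical English answer that passes the constraint:
print(words_without_letter(["sky", "hand", "wolf", "salt", "moon"]))   # True
# A hypothetical Italian attempt that fails ("cielo" and "sale" contain "e"):
print(words_without_letter(["cielo", "mano", "lupo", "sale", "luna"])) # False
```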
Maybe they are right, but I am not quite sure. I suspect the model’s tokenization is better developed for English than for any other language, and that is also a major influence.