I am working on an assistant that generates answers to customer emails based on a knowledge base. I have instructed it in the prompt to answer strictly in the predominant language of the email:
Always reply in the same language used predominantly in the customer’s query. The language of the response should not vary based on the sender’s name or other details. If the email is predominantly in English, reply in English; if it’s in French, reply in French, and so on.
The issue is that if, for example, the person’s name is French, or the last word of the email is in French, the generated answer will be in French…
I can’t find a way around it; I’ve tried rewriting the prompt several times and even telling the assistant to reply in English only, but no luck.
My first recommendation would be to try the same prompt while switching between models; they differ quite a bit in their strengths and weaknesses, I would say.
Then I would look at the details of the prompt and check how precisely you describe what to look at in “the email”; you can go quite far in defining exactly where to look. And since you’re dealing with email: have you looked closely at the process of going from the “raw” email to the text you feed the Assistant? In my own email processor I’m still not completely done deciding between feeding HTML, grabbing text from the HTML, or a combination.
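To make the preprocessing point concrete, here is a minimal sketch of stripping an HTML email body down to plain text before it reaches the Assistant, using only Python’s stdlib `html.parser`. It assumes you’ve already pulled the HTML part out of the MIME message; real emails are multipart, so you’d first select the `text/html` (or `text/plain`) part with the `email` package.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text from an HTML email body,
    skipping <script> and <style> content entirely."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside script/style

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


def html_to_text(html: str) -> str:
    """Return the visible text of an HTML fragment as one line."""
    extractor = TextExtractor()
    extractor.feed(html)
    return " ".join(extractor.parts)
```

Feeding the model this cleaned text, rather than raw HTML full of boilerplate and tracking markup, also removes stray non-English tokens (CSS class names, footer links) that can skew its language guess.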
One fix might be to send the message to a cheap classifier model that identifies the dominant language, then send the original message to the main model with “Respond using only the X language” appended at the end.
@noor025 yes, I was working on a Make scenario, so I split the process into two steps: one query to GPT-3.5 to detect the language of the message, then the answer generation, passing along the language to use.