Prompt Language: English or == Response Language?

I am building a custom assistant in a non-English language and wondering, from your experience, whether it is better to write the prompt in English or directly in the desired response language. An English prompt might be advantageous, since the API may have more robust language understanding in English; prompting directly in the response language could be better for industry-specific vocabulary, etc.
What are your experiences?



I have never explicitly measured the performance difference, but when introducing a second language to the conversation, you run into the problem of ChatGPT responding in either of the two languages. To keep things simple, I would try to avoid unnecessary complexity and later work towards adding a translation dictionary/knowledge file separately if required.
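A minimal sketch of that single-language approach, assuming the OpenAI Python SDK: the system prompt is written entirely in the response language (German here, purely as an example), and the translation dictionary is folded into the same system message so the instructions never push the model toward English. The glossary entries and helper name are illustrative, not from this thread.

```python
# Illustrative glossary mapping industry terms to their English counterparts.
GLOSSARY = {
    "Anlagenbuchhaltung": "fixed-asset accounting",
    "Instandhaltung": "maintenance",
}

def build_messages(user_input: str) -> list[dict]:
    """Build a messages list whose instructions stay in one language."""
    glossary_lines = "\n".join(
        f"- {term} (engl.: {english})" for term, english in GLOSSARY.items()
    )
    # System prompt entirely in the target response language.
    system_prompt = (
        "Du bist ein Assistent für Industriekunden. "
        "Antworte ausschließlich auf Deutsch.\n"
        "Fachbegriffe:\n" + glossary_lines
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Was umfasst die Instandhaltung?")
# The actual request would then be something like:
# client.chat.completions.create(model="gpt-4-turbo", messages=messages)
```

The point of keeping the glossary inside the system message rather than in a second (English) instruction block is exactly to avoid giving the model two languages to choose from.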

What is your experience with the common issue that industry-specific terms often do not have direct counterparts in every language?


Aside from the issue of token count (non-English text often consumes more tokens), I think it is fine to create a custom GPT (or custom assistant) directly in the response language. However, gpt-4o may have weaker understanding of some languages.

In the case of a custom GPT, you can’t choose the language model, so there’s nothing you can do about that.

If you are using the assistant via the API, gpt-4-turbo seems to capture the nuances of a wider range of languages best.
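On the API route the model is just a request parameter, so comparing how different models handle your language is a one-line change. A sketch, again assuming the OpenAI Python SDK; the prompt text is illustrative:

```python
def make_request_kwargs(model: str, system_prompt: str, user_input: str) -> dict:
    # Everything except the model stays fixed, so runs with gpt-4-turbo and
    # gpt-4o differ only in the "model" field and can be compared directly.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    }

kwargs = make_request_kwargs("gpt-4-turbo", "Antworte auf Deutsch.", "Hallo!")
# client.chat.completions.create(**kwargs)
```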
