Get Error Status from API Call?

Hi,

In a normal function call, we can use the return value to know whether the call succeeded or failed.

However, when I call the ChatGPT API and ask it to translate some text, sometimes it will do the translation, but sometimes it will respond with text like the below:

I'm sorry for the confusion but an AI developed by OpenAI currently does not support translation from English to Pashto. You may want to use a dedicated translation service for accurate translation

The problem is that the failure text is different every time, which makes it very hard to detect. Also, the API call itself succeeds without any errors, so how can I know when ChatGPT refuses to translate my text?

I’m assuming you’re using the Chat Completions API with the GPT models, and not ChatGPT - there’s no ChatGPT API per se.

If so, you can use the tool_choice param to force the model to use the given tool. This is described here in the docs.

Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function.
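As a sketch of what that looks like (the submit_translation tool name and its schema here are made up for illustration):

    tools = [
        {
            "type": "function",
            "function": {
                "name": "submit_translation",  # hypothetical tool name
                "description": "Record the Pashto translation of the input text.",
                "parameters": {
                    "type": "object",
                    "properties": {"translation": {"type": "string"}},
                    "required": ["translation"],
                },
            },
        }
    ]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=messages,
        tools=tools,
        # Force the model to call submit_translation rather than reply freely
        tool_choice={"type": "function", "function": {"name": "submit_translation"}},
    )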

Generally, the GPT models have been instruction fine-tuned to use the functions if a description is provided. So, it should automatically call the function if it deems it relevant. Hence, you could check for potential ambiguities in the system message, input prompt, and function description.

Lastly, even if the tool calling does not work, you can build in error handling by checking whether tool_calls is present in the chat completion object.

Something like this:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=messages,
        tools=tools,
    )
    # No tool call means the model replied with plain text (e.g. a refusal)
    if response.choices[0].message.tool_calls is None:
        ...  # insert error handling here

“ChatGPT” is specifically OpenAI’s own website chatbot. On the API, you access the models by name.

One way to “detect” success is to give an instruction that a translation must end with a marker such as [end of successful translation], then look for that string in the code that receives the response. This works when the model is performing a dedicated task.
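A minimal sketch of that check (the marker string and the handle_refusal helper are illustrative):

    SENTINEL = "[end of successful translation]"

    reply = response.choices[0].message.content or ""
    if SENTINEL in reply:
        translation = reply.replace(SENTINEL, "").strip()
    else:
        # No marker found: treat the reply as a refusal or malformed output
        handle_refusal(reply)  # hypothetical error handler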

To reduce the AI’s refusals, you can provide a system prompt message telling the AI it is a custom model which specializes in, and has been specifically trained on, translating into the Pashto language, among the other instructions.
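For example (the exact wording is only an illustration, not a tested prompt):

    system_message = {
        "role": "system",
        "content": (
            "You are a custom translation model that specializes in, and has "
            "been specifically trained on, translating English into Pashto. "
            "Always produce the translation and end it with "
            "[end of successful translation]."
        ),
    }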

Different models will have different amounts of refusal, with the latest preview being particularly bad and arbitrary.

Then it’s up to you to judge the output quality in a less common language. Translations or writing into a language with less training data should also use a lower temperature API parameter.
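Putting those together, a sketch of the call (assuming the system_message above; temperature=0.2 is just an arbitrarily low value):

    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[
            system_message,
            {"role": "user", "content": text_to_translate},  # your input text
        ],
        temperature=0.2,  # lower temperature for a low-resource target language
    )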

I’m betting the inconsistent results (working sometimes and sometimes not) are due to a less-than-optimal system or user message.

Do you have a full prompt we can look at?

ETA: Couldn’t you just check for the existence of English in the result and know whether it was translated or not?
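One way to run that check, as a sketch using the third-party langdetect package (an assumption on my part; any language-identification library would do):

    from langdetect import detect  # pip install langdetect

    reply = response.choices[0].message.content or ""
    if not reply or detect(reply) == "en":
        # Reply is empty or still English: the model likely refused to translate
        handle_refusal(reply)  # hypothetical error handler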
