Efficiently translate a single document into multiple languages?

In the playground, I can create an implicit context and refer to it as “it”. Is something similar possible via the API (4-turbo)?

E.g., from my interactive session:

Me: Translate [sentence] into German.
Response: [translated sentence]

Me: Translate it into French.
Response: [translated sentence]

etc.

I’d like to do this via the API with 500-word Markdown texts. The reason is to save on input tokens. I’m translating into 16 or so languages.
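Back-of-envelope numbers (using the rough rule of thumb of ~1.3 tokens per English word — an estimate, not an exact tokenizer count):

```python
# Rough estimate only; ~1.3 tokens per English word is a rule of thumb,
# not an actual tokenizer count.
words = 500
tokens_per_word = 1.3
languages = 16

tokens_per_doc = words * tokens_per_word   # roughly 650 input tokens per request
total = tokens_per_doc * languages         # if the source is resent for each language
print(int(tokens_per_doc), int(total))     # 650 10400
```

So resending the source with every request would cost on the order of 10,000 input tokens per document, which is what I'm hoping to avoid.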

Thanks in advance.

I asked GPT, and here’s what it told me. Haven’t tested it yet:


Yes, in the context of the OpenAI API, you can carry out a conversation where the context is maintained across multiple requests without having to repeat the original text in every request. The OpenAI API maintains a conversation state in so-called “sessions,” where it remembers the context of the conversation, much like a human would.

You can simply continue the conversation where you left off, and the model should be able to understand and refer back to previously mentioned subjects or topics without their needing to be explicitly restated. For this to work, you must use the same session identifier across multiple requests.

Here’s a simplified example of how you might use the API in such a scenario:

  1. You start a session and send a prompt “Translate ‘Edit on GitHub’ into French.”
  2. The API responds with “Modifier sur GitHub.”
  3. Later in the same session, you can reference the earlier translation by asking, “How would you say that in Spanish?” without repeating the original text “Edit on GitHub.”
  4. The API would use the context of the session to understand that “that” refers to the English phrase “Edit on GitHub” and provide a translation into Spanish, such as “Editar en GitHub.”

For technical implementations using the OpenAI API, you would structure your API calls accordingly, using whatever method the API provides for maintaining session state or context.

This is weird. Here’s the code it produced, and check out the commentary it gave.

```python
import openai

# Make sure to set your OpenAI API key here
openai.api_key = 'your-api-key'

# Start a translation session (dummy session_id for illustration purposes)
session_id = 'translation-session-1234'

def translate(prompt, session_id):
    # Make the API call
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.3,
        max_tokens=60,
        stop=None,
        n=1,
        stream=False,
        logprobs=None,
        echo=False,
        stop_sequences=None,
        presence_penalty=0.0,
        frequency_penalty=0.0,
        best_of=1,
        logit_bias=None,
        user=session_id  # using 'user' as a stand-in for 'session_id'
    )
    return response.choices[0].text.strip()
```

```python
# Provide the original translation prompt
english_sentence = "Edit on GitHub"
first_translation_prompt = f"Translate '{english_sentence}' into French."
first_translation = translate(first_translation_prompt, session_id)
print(f"French translation: {first_translation}")

# Now, without repeating the full original sentence, ask for a translation into a different language
second_translation_prompt = "Now translate that into Spanish."
second_translation = translate(second_translation_prompt, session_id)
print(f"Spanish translation: {second_translation}")
```

Please note that the OpenAI API doesn’t provide a native session_id functionality for carrying context directly. In practice, user is used only for tracking user statistics and not managing conversational context. However, I’ve added it here in the spirit of the example.

The OpenAI API expects each request to be stateless. For real conversational context management, you would need to manually manage the context, either by sending the whole conversation as a prompt or using a workaround to emulate statefulness.

Replace 'your-api-key' with your actual OpenAI API key before running the script. If you need the API to maintain context across calls, you will need to track the context in your code and provide it with the prompts you send to the API.

Keep in mind, API features and available models may change over time, so what works as documented at the time of writing this could be subject to future updates. Always refer to the most current documentation for the OpenAI API for the best practices and most up-to-date instructions.
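For what it's worth, the stateless workaround that commentary alludes to ("sending the whole conversation as a prompt") can be sketched without any API call. Everything below is illustrative, not from the thread:

```python
# Sketch of manual context management: each turn is appended to a transcript,
# and the FULL transcript is resent as the prompt on every request.

def append_turn(transcript, speaker, text):
    """Return the transcript with one more 'Speaker: text' line added."""
    return transcript + f"{speaker}: {text}\n"

transcript = ""
transcript = append_turn(transcript, "User", "Translate 'Edit on GitHub' into French.")
transcript = append_turn(transcript, "Assistant", "Modifier sur GitHub")
transcript = append_turn(transcript, "User", "Now translate that into Spanish.")

# The whole transcript -- not just the last line -- would be sent to the model,
# which is the only reason it can resolve "that" to the original English phrase.
```

Note that this is exactly why it doesn't save input tokens: the original text rides along in every request.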

Yes, it is possible. The playground is just a GUI over the API with some added features.

The chat functionality you are referring to uses the chat completions endpoint.
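A minimal sketch of that approach (model name and prompts are my assumptions; I haven't run this against the live API): keep the message history yourself and resend it with each request, so "translate it" resolves against the original document. Be aware, though, that the history — including the source text — is billed as input tokens on every call, so this phrasing trick does not actually reduce input-token cost.

```python
def make_turn(history, user_text):
    """Return a new history list with one more user message appended."""
    return history + [{"role": "user", "content": user_text}]

def translate_into_many(document, languages, model="gpt-4-turbo"):
    # Import here so the helper above is usable without the SDK installed.
    from openai import OpenAI  # requires the openai >= 1.0 Python SDK
    client = OpenAI()

    history = [
        {"role": "system",
         "content": "You are a translator. Preserve Markdown formatting."},
        {"role": "user",
         "content": f"Translate this into {languages[0]}:\n\n{document}"},
    ]
    translations = {}
    for i, lang in enumerate(languages):
        if i > 0:
            # "it" works only because the original document is still in the history
            history = make_turn(history, f"Translate it into {lang}.")
        resp = client.chat.completions.create(model=model, messages=history)
        text = resp.choices[0].message.content
        history.append({"role": "assistant", "content": text})
        translations[lang] = text
    return translations
```

If input-token cost is the real concern, it may be cheaper to send 16 independent requests that each contain only the source document, rather than an ever-growing history that also carries 15 previous translations.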

OP also created a second, newer thread. Continue there for a coding example: