Does this actually work? In my case, I want to translate a markdown doc into 16 languages. I’d like to avoid resending the document with every prompt. Instead, I’d like to write, “Translate it into French”, etc.
Here’s the code and cagey commentary that gpt-4-1106-preview gave me:
```python
import openai

# Make sure to set your OpenAI API key here
openai.api_key = 'your-api-key'

# Start a translation session (dummy session_id for illustration purposes)
session_id = 'translation-session-1234'

def translate(prompt, session_id):
    # Make the API call
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.3,
        max_tokens=60,
        stop=None,
        n=1,
        stream=False,
        logprobs=None,
        echo=False,
        stop_sequences=None,
        presence_penalty=0.0,
        frequency_penalty=0.0,
        best_of=1,
        logit_bias=None,
        user=session_id  # using 'user' as a stand-in for 'session_id'
    )
    return response.choices[0].text.strip()

# Provide the original translation prompt
english_sentence = "Edit on GitHub"
first_translation_prompt = f"Translate '{english_sentence}' into French."
first_translation = translate(first_translation_prompt, session_id)
print(f"French translation: {first_translation}")

# Now, without repeating the full original sentence, ask for a translation into a different language
second_translation_prompt = f"Now translate that into Spanish."
second_translation = translate(second_translation_prompt, session_id)
print(f"Spanish translation: {second_translation}")
```
Please note that the OpenAI API doesn’t provide a native `session_id` functionality for carrying context directly. In practice, `user` is used only for tracking user statistics and not managing conversational context. However, I’ve added it here in the spirit of the example.
The OpenAI API expects each request to be stateless. For real conversational context management, you would need to manually manage the context, either by sending the whole conversation as a prompt or using a workaround to emulate statefulness.
Replace `'your-api-key'` with your actual OpenAI API key before running the script. If you need the API to maintain context across calls, you will need to track the context in your code and provide it with the prompts you send to the API.
Keep in mind, API features and available models may change over time, so what works as documented at the time of writing this could be subject to future updates. Always refer to the most current documentation for the OpenAI API for the best practices and most up-to-date instructions.
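In other words, the “workaround” it hints at is just resending the document yourself on every call. For reference, here’s a rough sketch of what that could look like using the chat completions endpoint instead of the legacy Completion one, on the same pre-1.0 `openai` Python package as the snippet above. The document rides along as a system message in every request, but at least it only appears once in the code; the model name, file name, and language list below are placeholders rather than anything the API requires.

```python
import openai

openai.api_key = 'your-api-key'

# The document is the only context worth carrying; it is resent with every
# call because the API holds no state between requests.
with open("doc.md", encoding="utf-8") as f:
    document = f.read()

languages = ["French", "Spanish", "German"]  # ...extend to all 16 target languages

# A fixed system message plays the role the session_id was hoped to play:
# it pins the document as shared context for every translation request.
base_messages = [
    {
        "role": "system",
        "content": "You are a translator. Here is the Markdown document to translate:\n\n" + document,
    }
]

translations = {}
for language in languages:
    response = openai.ChatCompletion.create(
        model="gpt-4-1106-preview",  # any chat-capable model works here
        temperature=0.3,
        messages=base_messages + [
            {
                "role": "user",
                "content": f"Translate the document into {language}. "
                           "Return only the translated Markdown.",
            }
        ],
    )
    translations[language] = response.choices[0].message.content

for language, text in translations.items():
    print(f"--- {language} ---\n{text}\n")
```

Chaining “now translate that into Spanish” off the previous reply would mean appending each assistant message to the `messages` list, which only makes every subsequent request longer; asking for each language independently keeps the per-call context down to the document plus one instruction.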