Gpt-4-1106-preview told me to hack the `user` attribute to store a session id

Does this actually work? In my case, I want to translate a markdown doc into 16 languages. I’d like to avoid resending the document with every prompt. Instead, I’d like to write, “Translate it into French”, etc.

Here’s the code and cagey commentary that gpt-4-1106-preview gave me:

import openai

# Make sure to set your OpenAI API key here
openai.api_key = 'your-api-key'

# Start a translation session (dummy session_id for illustration purposes)
session_id = 'translation-session-1234'

def translate(prompt, session_id):
    # Make the API call
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.3,
        max_tokens=60,
        stop=None,
        n=1,
        stream=False,
        logprobs=None,
        echo=False,
        stop_sequences=None,
        presence_penalty=0.0,
        frequency_penalty=0.0,
        best_of=1,
        logit_bias=None,
        user=session_id  # using 'user' as a stand-in for 'session_id'
    )
    return response.choices[0].text.strip()

# Provide the original translation prompt
english_sentence = "Edit on GitHub"
first_translation_prompt = f"Translate '{english_sentence}' into French."
first_translation = translate(first_translation_prompt, session_id)
print(f"French translation: {first_translation}")

# Now, without repeating the full original sentence, ask for a translation into a different language
second_translation_prompt = f"Now translate that into Spanish."
second_translation = translate(second_translation_prompt, session_id)
print(f"Spanish translation: {second_translation}")

Please note that the OpenAI API doesn’t provide a native session_id functionality for carrying context directly. In practice, user is used only for tracking user statistics and not managing conversational context. However, I’ve added it here in the spirit of the example.

The OpenAI API expects each request to be stateless. For real conversational context management, you would need to manually manage the context, either by sending the whole conversation as a prompt or using a workaround to emulate statefulness.

Replace 'your-api-key' with your actual OpenAI API key before running the script. If you need the API to maintain context across calls, you will need to track the context in your code and provide it with the prompts you send to the API.

Keep in mind, API features and available models may change over time, so what works as documented at the time of writing this could be subject to future updates. Always refer to the most current documentation for the OpenAI API for the best practices and most up-to-date instructions.
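If I'm reading that last paragraph right, "manually manage the context" would mean keeping the whole conversation in a list and resending it every time. Here's my own sketch of that (not something the model gave me), using the current chat completions client:

import openai

client = openai.OpenAI()

# Keep the full conversation in a list and resend it with each request.
history = [
    {"role": "user", "content": "Translate 'Edit on GitHub' into French."},
]
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
history.append({"role": "assistant", "content": response.choices[0].message.content})

# The follow-up works because the earlier turns are resent, not remembered.
history.append({"role": "user", "content": "Now translate that into Spanish."})
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(response.choices[0].message.content)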

That’s a classic case of model hallucination.

The user param is passed to the API so that OpenAI can contact you if one of your users violates the ToS; it provides a reference to that user, nothing more.

The chat completions API is stateless. However, if you want your conversation to be managed by OpenAI, you can use the Assistants API, as it is stateful.
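For completeness, here's a minimal sketch of that stateful route (v1 Python client; the model, name, and instructions are placeholders, and real code would want timeouts and error handling):

import time
import openai

client = openai.OpenAI()

# One-time setup: an assistant, plus a thread that holds the conversation state
assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    name="translator",
    instructions="Translate the user's text as requested.",
)
thread = client.beta.threads.create()

def ask(text):
    client.beta.threads.messages.create(thread_id=thread.id, role="user", content=text)
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
    while run.status not in ("completed", "failed", "cancelled", "expired"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    # messages.list returns newest first by default
    return client.beta.threads.messages.list(thread_id=thread.id).data[0].content[0].text.value

print(ask("Translate 'Edit on GitHub' into French."))
print(ask("Now translate that into Spanish."))  # the thread remembers the prior turns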


Also note that the AI hasn't been trained on code for the latest AI models and API methods. The AI model shown in that code no longer exists. You'll have to resort to reading the API Reference, and learning from the parameter descriptions and examples shown there, to write your own code.

This is more of a "text processing batch process" than a "chat history" task, so you can make a for loop that substitutes different languages from a list into your "translate the user's text into {output_language}" instruction. I did just that as an example:

import openai

client = openai.OpenAI()
translated_outputs = []  # Python list to store the translated AI responses

messages = [
    {"role": "system", "content":
     'You are an AI-based language translator. Output will be only a full translation of user text. '
     'Preserve formatting. Preserve code.'},
    {"role": "system", "content": 'output language placeholder'},  # list item 1, overwritten per language
    # user provides the text in the original language
    {"role": "user", "content": 'The function `get_weather` retrieves the local forecast from an API.'},
]

for language in ["French", "Spanish", "Indonesian"]:
    messages[1]['content'] = f"As response, translate input into {language} language."
    response = client.chat.completions.create(
        messages=messages, model="gpt-3.5-turbo", top_p=0.1, max_tokens=900,
    )
    translation_format = {"language": language, "content": response.choices[0].message.content}
    translated_outputs.append(translation_format)

print(translated_outputs)  # whatever you want to do with the list of translations

This produces a Python list of dictionaries:

[
  {'language': 'French', 'content': "La fonction get_weather récupère les prévisions locales à partir d'une API."},
  {'language': 'Spanish', 'content': 'La función get_weather recupera el pronóstico local de un API.'},
  {'language': 'Indonesian', 'content': 'Fungsi get_weather mengambil ramalan cuaca lokal dari sebuah API.'}
]

Having the AI produce multiple outputs at once from one instruction and one input text is also possible, but the quality starts to degrade, and you're limited to around 800 tokens of total output.
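If you do want to try it, one sketch is to ask for all languages in a single JSON object (the model name and JSON-mode support here are assumptions; check the current docs):

import json
import openai

client = openai.OpenAI()

languages = ["French", "Spanish", "Indonesian"]
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",  # assumed: a model that supports JSON mode
    response_format={"type": "json_object"},
    top_p=0.1,
    max_tokens=900,
    messages=[
        {"role": "system", "content":
         "You are an AI-based language translator. Respond with a JSON object "
         f"with one key per language in {languages}, each value being the full "
         "translation of the user text. Preserve formatting. Preserve code."},
        {"role": "user", "content": 'The function `get_weather` retrieves the local forecast from an API.'},
    ],
)
all_translations = json.loads(response.choices[0].message.content)  # {"French": "...", ...}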

This is interesting, as I've been using chat completions for this as a batch process with GPT-3.5: starting with a system prompt, then running a batch for each line (1,000 or so lines per batch). But in my case I have been doing it per output language and saving the results as {locale}_translation.json.
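The saving step is just a json.dump per language; a minimal sketch, assuming a list of {language, content} dicts like the example above (the locale mapping is illustrative):

import json

locale_map = {"French": "fr", "Spanish": "es", "Indonesian": "id"}  # illustrative

for item in translated_outputs:
    locale = locale_map.get(item["language"], item["language"].lower())
    with open(f"{locale}_translation.json", "w", encoding="utf-8") as f:
        json.dump({"text": item["content"]}, f, ensure_ascii=False, indent=2)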

I'm going to see if this is any better.