How to reproduce the response from the chat in the API?

Dear all,

I have made several attempts, but I have been unable to reproduce the behaviour of chat.openai.com using the OpenAI API.

See below the behaviour in the chat:

Which looks reasonable to me.

However, when attempting to reproduce it using the API, what I get is:

{
  "id": "cmpl-7bomiUWTt2ce6PvgJpPb3LRDxOyAv",
  "object": "text_completion",
  "created": 1689247708,
  "model": "gpt-35-turbo",
  "choices": [
    {
      "text": "'\n\nembetter = rep.enc(sentence)\nprint(embetter)\n\ninput_id = tf.constant(embetter)[None, :]  # batch_size = 1\noutputs = generator(input_id, generator_past=None, batch_size=1, head_mask=[",
      "index": 0,
      "finish_reason": "length",
      "logprobs": null
    }
  ],
  "usage": {
    "completion_tokens": 51,
    "prompt_tokens": 33,
    "total_tokens": 84
  }
}

The code I am using:

import json
import openai

prompt = {
    "role": "You are a weather man of a local Oklahoma TV channel.",
    "objective": "Indicate the name of the state in a single word."
}

response = openai.Completion.create(
    engine='XXXXXX-openai',
    model='text-davinci-002',
    prompt=json.dumps(prompt),
    max_tokens=51
)

I have also seen that there are a handful of questions addressing this very topic. I have played with parameters such as temperature, n, etc., but without success.

Since the prompt looks pretty simple I was wondering whether there is a simple tweak to solve this.

Thank you in advance for your response.

Welcome to the forum!

This is a guess/suggestion, as I have not tested this; don’t put much effort into it if it does not work.

Export your chat history from the settings menu; it is the Export data option.


You will get an email with a link; download the file.

The file is a zip archive; expand it into a directory containing five files:

chat.html
conversations.json
message_feedback.json
model_comparisons.json
user.json

Take a look in the JSON files as they contain more details about the prompt and completion that might help you.
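To poke around in the export programmatically, something like the sketch below may help. Note the field names here (`mapping`, `message`, `author`, `content`, `parts`) are assumptions based on one export; check your own `conversations.json` first, as the format is not documented and may change.

```python
import json

def extract_messages(conversation):
    """Walk a conversation's "mapping" nodes and return (role, text) pairs.

    The structure assumed here (mapping -> message -> author/content/parts)
    is a guess from one export file, not a documented schema.
    """
    pairs = []
    for node in conversation.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue
        role = msg.get("author", {}).get("role", "unknown")
        parts = msg.get("content", {}).get("parts", [])
        text = " ".join(p for p in parts if isinstance(p, str))
        if text:
            pairs.append((role, text))
    return pairs

# On a real export you would do something like:
# with open("conversations.json") as f:
#     pairs = extract_messages(json.load(f)[0])

# Tiny inline sample in the assumed shape, just for illustration:
sample = {
    "title": "Weather prompt",
    "mapping": {
        "a": {"message": {"author": {"role": "user"},
                          "content": {"parts": ["Indicate the state."]}}},
        "b": {"message": {"author": {"role": "assistant"},
                          "content": {"parts": ["Oklahoma"]}}},
    },
}
for role, text in extract_messages(sample):
    print(f"{role}: {text}")
```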


Double-check your code: your example response says you are using "model": "gpt-35-turbo", but the code example you sent uses model='text-davinci-002'.

If you want to mimic ChatGPT, you’ll want to make sure you are using gpt-35-turbo.
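To illustrate the difference (a sketch of the request payloads only, not a tested API call): the original code sends the JSON as one flat string to a completions model, which simply continues the text — which matches the code-like output above — while the chat endpoint takes a list of role-tagged messages, as ChatGPT itself does.

```python
import json

prompt = {
    "role": "You are a weather man of a local Oklahoma TV channel.",
    "objective": "Indicate the name of the state in a single word.",
}

# Completions endpoint: the prompt is one flat string. A completions model
# may just continue the text it is given, rather than answer it.
completion_payload = {
    "model": "text-davinci-002",
    "prompt": json.dumps(prompt),
    "max_tokens": 51,
}

# Chat endpoint: the prompt becomes a list of role-tagged messages.
chat_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": json.dumps(prompt)}],
}
```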


Dear all,

Find below the changes I included in order to make it work.

Before:

prompt = {
    "role": "You are a weather man of a local Oklahoma TV channel.",
    "objective": "Indicate the name of the state in a single word."
}

response = openai.Completion.create(
    engine='XXXXXX-openai',
    model='text-davinci-002',
    prompt=json.dumps(prompt),
    max_tokens=51
)

After:

messages = [
    {
        "role": "user",
        "content": json.dumps(prompt),  # same prompt as before
    }
]

response = openai.ChatCompletion.create(
    engine='XXXXXX-openai',
    model='gpt-3.5-turbo',
    messages=messages,
)

Regards.

Your after, now my before.

First, are you using Azure? You seem to have strange model names earlier. I’ll write for OpenAI API.

You want to really see how to make the API act exactly like ChatGPT?
Put what you typed as your ChatGPT session’s first prompt between the triple quotes. No funny business.

import openai
openai.api_key = "sk-xxxxxxxxxxxx"
user_prompt="""{"role": "You are a weather man of the local Oklahoma TV channel.", "objective": "Indicate the name of the state in a single word."}"""

messages = [
    {
        "role": "system",
        "content": """You are ChatGPT, a large language model trained by OpenAI, based on the GPT-3.5 architecture.
Knowledge cutoff: 2021-09
Current date: 2023-07-25"""
    },
    {
        "role": "user",
        "content": user_prompt
    }
]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    max_tokens=768,
    temperature=0.5
)
print(response["choices"][0]["message"])

@_j Sorry about the delay in the response.

Yes, I am using Azure.

I get the expected results with the example provided before.

No need for temperature or max_tokens parameters.

Regards.
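For anyone else on Azure hitting this thread: with the pre-1.0 `openai` Python library, Azure needs a few extra settings, and `engine` refers to your deployment name rather than the model name. The resource name, deployment name, and API version below are placeholders; substitute the values from your own Azure portal.

```python
import openai

# Azure OpenAI settings for the pre-1.0 `openai` Python library.
# All three values below are placeholders for your own Azure resource.
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "xxxxxxxxxxxx"

def ask(messages):
    # On Azure, `engine` is the deployment name you created in the portal,
    # not the underlying model name.
    return openai.ChatCompletion.create(
        engine="YOUR-DEPLOYMENT",
        messages=messages,
    )

messages = [
    {"role": "user",
     "content": "Indicate the name of the state in a single word."}
]
# response = ask(messages)  # requires valid Azure credentials
```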