Dear all,
I have made several attempts, but I have been unable to reproduce the behaviour of chat.openai.com using the OpenAI API.
Below is the behaviour in the chat, which looks reasonable to me:
However, when attempting to reproduce it using the API, what I get is:
{
  "id": "cmpl-7bomiUWTt2ce6PvgJpPb3LRDxOyAv",
  "object": "text_completion",
  "created": 1689247708,
  "model": "gpt-35-turbo",
  "choices": [
    {
      "text": "'\n\nembetter = rep.enc(sentence)\nprint(embetter)\n\ninput_id = tf.constant(embetter)[None, :] # batch_size = 1\noutputs = generator(input_id, generator_past=None, batch_size=1, head_mask=[",
      "index": 0,
      "finish_reason": "length",
      "logprobs": null
    }
  ],
  "usage": {
    "completion_tokens": 51,
    "prompt_tokens": 33,
    "total_tokens": 84
  }
}
The code I am using:
import json
import openai

prompt = {
    "role": "You are a weather man of a local Oklahoma TV channel.",
    "objective": "Indicate the name of the state in a single word."
}

response = openai.Completion.create(
    engine='XXXXXX-openai',
    model='text-davinci-002',
    prompt=json.dumps(prompt),
    max_tokens=51
)
I have also seen that there are a handful of questions addressing this very topic. I have played with parameters such as temperature, n, etc., but without success.
Since the prompt looks pretty simple, I was wondering whether there is a simple tweak to solve this.
Thank you in advance for your response.