Hey,
Following the March 1st release of the ChatGPT API, I would like to perform batching as explained in the OpenAI API docs (Example with batching).
It currently seems there is no option to do so.
I am aiming to send multiple prompts and receive multiple answers, one per prompt, as if each were in a separate conversation.
e.g.
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {'role': 'user', 'content': 'this is prompt 1'},
        {'role': 'user', 'content': 'this is prompt 2'},
    ]
)
I get back:
...
"choices": [
    {
        "finish_reason": "stop",
        "index": 0,
        "message": {
            "content": "Sorry, there is no context or information provided for either prompt 1 or prompt 2. Can you please provide more information?",
            "role": "assistant"
        }
    }
]
...
However, I would like ChatGPT to treat each message separately, not as part of the same conversation.
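Until the API supports true batching of independent conversations, one workaround is to send each prompt as its own single-message request and dispatch the requests in parallel. A minimal sketch, assuming a `send_prompt` function that performs one API call (the names `batch_chat` and `send_prompt` are my own, not part of the OpenAI library):

```python
from concurrent.futures import ThreadPoolExecutor

def batch_chat(prompts, send_prompt, max_workers=8):
    """Send each prompt as its own one-message conversation, in parallel.

    `send_prompt` is whatever function performs a single request --
    in practice something like:
        lambda p: openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": p}],
        )["choices"][0]["message"]["content"]
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves the order of `prompts` in the results
        return list(pool.map(send_prompt, prompts))
```

Each prompt gets its own conversation, so the answers can't bleed into each other; rate limits are the main thing to watch with a high `max_workers`.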
Yes @sps, but OpenAI has made the hilariously bad product decision to steer all of its GPT users towards the ChatCompletion API.
At 1/10th the price, all GPT users should and will be writing their own best attempts at utility classes to get around ChatCompletion's ugly abstraction. But batching (and `n`) warrant a better approach than "dispatch many requests asynchronously".
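For what it's worth, the "dispatch many requests asynchronously" pattern is short to write with `asyncio`. A sketch, assuming `send_prompt` is a coroutine that performs one request (with openai-python 0.27+ that could be built on `openai.ChatCompletion.acreate`, but check your client version; `batch_chat_async` is my own name):

```python
import asyncio

async def batch_chat_async(prompts, send_prompt):
    """Fire one request per prompt concurrently and gather the answers.

    `send_prompt` is a coroutine performing a single request; results
    come back in the same order as `prompts`.
    """
    return await asyncio.gather(*(send_prompt(p) for p in prompts))
```

It works, but as noted above it is a client-side workaround, not real server-side batching.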
Hi @yotam.martin, thanks for pointing this out. I'm suffering from the same issue at the moment, and here's the workaround I discovered, which I hope will be helpful to you. My problem is using text completion to auto-classify hundreds of thousands of responses (sentences or paragraphs) against a specific code, e.g. does this response describe 'xxx' or 'yyy'?
prompt=[
    'this is prompt 1',
    'this is prompt 2',
]
)
To make it work with ChatGPT, I adjusted it to:
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    max_tokens=256,
    messages=[
        {'role': 'user', 'content': 'here is the background, and what I want to achieve'},
        {'role': 'user', 'content': '''here are the xxx responses list:
1: sentence1.
2: sentence2.
3: sentence1.
4: sentence2.
5: sentence1.
....
100: sentence2.'''},
        {'role': 'user', 'content': 'Please determine whether each sentence relates to xxx. Your response should take relevant details from the background, the response, and the label. The output should only contain the sentence index number and the short answer yes or no.'}
    ]
)
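Since the prompt asks the model to answer with only an index number and yes/no per line, the reply can be split back into per-sentence labels. A minimal sketch (the function name is mine, and it assumes the model actually followed the output format; real replies may need looser parsing):

```python
import re

def parse_indexed_answers(reply):
    """Parse a reply like '1: yes\n2: no\n...' into {index: 'yes'/'no'}.

    Accepts ':' or '.' after the index and ignores letter case, but
    otherwise assumes the model stuck to the requested format.
    """
    answers = {}
    for match in re.finditer(r"(\d+)\s*[:.]\s*(yes|no)", reply, re.IGNORECASE):
        answers[int(match.group(1))] = match.group(2).lower()
    return answers
```

This makes it easy to spot sentences the model skipped (missing indices) and rerun just those.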