I am trying to send multiple prompts in a single request using the API, like:
import openai
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=['Say this is a test', 'Hello'],
    temperature=0,
)
print(completion)
And I got openai.error.InvalidRequestError: Invalid request, please check your input and try again.
In the API Reference, prompt "can be a string, array of strings, array of tokens, or array of token arrays".
What is the right way to do so?
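For reference, the four accepted shapes look like this (the token IDs below are made-up illustrative values, not real encodings):

```python
# The Completions "prompt" parameter accepts four shapes;
# the two list-valued forms send a batch of prompts in one request.

prompt_string = "Say this is a test"                # a single string
prompt_strings = ["Say this is a test", "Hello"]    # array of strings (batch)
prompt_tokens = [2648, 428, 318, 257, 1332]         # array of tokens
prompt_token_arrays = [[2648, 428], [15496]]        # array of token arrays (batch)
```

A batched request returns one choice per prompt, matched up by each choice's `index` field.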
Hi and welcome to the Developer Forum!
That model is due to be shut down in a few months, so I would not advise making use of it. There are newer and more capable models available now; please see:
https://platform.openai.com/docs/models
Thanks for your quick reply.
I also tried gpt-3.5-turbo-instruct (which should be compatible with the legacy Completions API), but that still did not work.
It would be worth your time to have a read of the documentation on the link provided in the post above, the API syntax and use has changed a fair bit and refreshing your knowledge with the latest updates will probably solve your issue.
Feel free to ask here if you have further issues.
_j
December 6, 2023, 9:01am
Noted: you are using the old calling conventions, but with an unknown version of the python library installed.
A quick version check over here:
>>> import openai
>>> openai.__version__
'1.3.3'
Then let’s code for that:
from openai import OpenAI
client = OpenAI()
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=["Hi", "Hello"],
    max_tokens=1,
    top_p=0.5,
)
print(response.model_dump().get('choices'))
We get our two answers:
[{'finish_reason': 'length', 'index': 0, 'logprobs': None, 'text': ','}, {'finish_reason': 'length', 'index': 1, 'logprobs': None, 'text': ','}]
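Since each choice carries an index field, it is safest to match completions back to their prompts via that field rather than assuming positional order. A minimal sketch, using plain dicts shaped like the dumped choices so the mapping logic is self-contained:

```python
prompts = ["Hi", "Hello"]

# Choices as plain dicts, shaped like response.model_dump().get('choices');
# listed out of order here to show why matching by 'index' matters.
choices = [
    {'finish_reason': 'length', 'index': 1, 'logprobs': None, 'text': ','},
    {'finish_reason': 'length', 'index': 0, 'logprobs': None, 'text': ','},
]

# Pair each prompt with its completion via the choice's index field.
by_index = {c['index']: c['text'] for c in choices}
results = [(p, by_index[i]) for i, p in enumerate(prompts)]
print(results)  # → [('Hi', ','), ('Hello', ',')]
```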
Thanks for your example!
But it still fails for me.
Code:
import openai
from openai import OpenAI

print(openai.__version__)
client = OpenAI()
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=["Hi", "Hello"],
    max_tokens=1,
    top_p=0.5,
)
print(response.model_dump().get('choices'))
Result:
1.3.7
Traceback (most recent call last):
File "/mnt/home/openai_service/test.py", line 18, in <module>
response = client.completions.create(
File "/mnt/data/conda/envs/openai/lib/python3.10/site-packages/openai/_utils/_utils.py", line 301, in wrapper
return func(*args, **kwargs)
File "/mnt/data/conda/envs/openai/lib/python3.10/site-packages/openai/resources/completions.py", line 559, in create
return self._post(
File "/mnt/data/conda/envs/openai/lib/python3.10/site-packages/openai/_base_client.py", line 1096, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/mnt/data/conda/envs/openai/lib/python3.10/site-packages/openai/_base_client.py", line 856, in request
return self._request(
File "/mnt/data/conda/envs/openai/lib/python3.10/site-packages/openai/_base_client.py", line 908, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Invalid request, please check your input and try again. (request id: 20231206171918158672760yDjEQedc) (request id: 20231206091918152610810ysMb3K9S)', 'type': 'invalid_request', 'param': '', 'code': None}}
OK! Finally!
It was because I had changed the base_url to an endpoint that doesn't support this usage.
Thanks for your kind help.
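For anyone hitting the same wall: if a proxy behind a custom base_url rejects list-valued prompts, one workaround is to send the prompts one at a time and collect the results yourself. A sketch, where `send_completion` is a hypothetical stand-in for the real `client.completions.create` call:

```python
def batch_via_loop(prompts, send_completion):
    """Fallback for backends that reject list-valued prompts:
    send one request per prompt and collect the texts in order."""
    texts = []
    for p in prompts:
        choice = send_completion(prompt=p)  # one prompt per request
        texts.append(choice['text'])
    return texts

# Stand-in for the API call, purely for illustration.
fake_api = lambda prompt: {'text': prompt.upper()}
print(batch_via_loop(["Hi", "Hello"], fake_api))  # → ['HI', 'HELLO']
```

This loses the efficiency of a true batched request, but it produces the same prompt-to-completion pairing regardless of what the backend supports.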