I’m using Python.
I got the following code to work a month ago, then didn’t use it. Now I’m using it again and this time it doesn’t work. I’m trying to get OpenAI to return the result as a JSON object. Here is my code:
prompt3 = """You are a helpful assistant that translates sentences from
Ancient Greek into English. For each sentence provide a translation
in JSON structure like this {'sentence':'<The Ancient Greek sentence>',
'translation':'<The English translation>'}"""
prompt2 = f'These are the sentences: {txt}'
prompt3 = prompt3 + prompt2
messages = [{"role": "assistant", "content": prompt3},
            {'role': 'user', 'content': {'sentence': "διαμερίζομεν δ' αὖ τοὺς ἀριθμοὺς ὧδέ πως.",
                                         'translation': 'We divide numbers in the following way.'}}]
And
obj = client.chat.completions.create(model=model,
                                     messages=messages,
                                     temperature=temperature,
                                     frequency_penalty=frequency_penalty,
                                     response_format={'type': 'json_object'})
I’m using model: gpt-4o
The error message I’m getting is:
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid type for 'messages[1].content[0]': expected an object, but got a string instead.", 'type': 'invalid_request_error', 'param': 'messages[1].content[0]', 'code': 'invalid_type'}}
This is not valid. You are sending an object and not a string.
If you are intending to send the object AS A STRING, you would ideally process it first by serializing/stringifying it, and THEN add it to the prompt.
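For example, a minimal sketch using the standard-library json module (the dict is just the example pair from the original post; ensure_ascii=False keeps the Greek readable in the serialized string):

import json

example = {'sentence': "διαμερίζομεν δ' αὖ τοὺς ἀριθμοὺς ὧδέ πως.",
           'translation': 'We divide numbers in the following way.'}

# Serialize the dict to a JSON string before putting it into the message content
messages = [{'role': 'user', 'content': json.dumps(example, ensure_ascii=False)}]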
Do you mean: The rules require you to send an object but you are sending a string, or do you mean: the rules require you to send a string but you are sending an object? Whatever the answer is, how do I change my syntax to make it correct?
The error message itself is misleading. The API assumes you are trying to send an array of content parts, which is one of the two formats content can take (the other being a plain string).
Either way, you cannot send a raw object. You need to serialize it first.
What do you mean? Do you mean something like this?
json.loads(messages)
obj = client.chat.completions.create(model=model,
                                     messages=messages,
                                     temperature=temperature,
                                     frequency_penalty=frequency_penalty,
                                     response_format={'type': 'json_object'})
something like that?
The opposite:
serialized = json.dumps({'sentence': "διαμερίζομεν δ' αὖ τοὺς ἀριθμοὺς ὧδέ πως.",
                         'translation': 'We divide numbers in the following way.'})
[...]
{'role': 'user', 'content': serialized}
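Putting the pieces together, a minimal end-to-end sketch along those lines (not the exact code from the thread): it assumes the current openai Python SDK, an OPENAI_API_KEY set in the environment, and puts the instructions in a "system" message, which is the usual convention.

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt3 = ('You are a helpful assistant that translates Ancient Greek into English. '
           'Reply with JSON like {"sentence": "<Greek>", "translation": "<English>"}.')

serialized = json.dumps({'sentence': "διαμερίζομεν δ' αὖ τοὺς ἀριθμοὺς ὧδέ πως.",
                         'translation': 'We divide numbers in the following way.'},
                        ensure_ascii=False)

messages = [{'role': 'system', 'content': prompt3},
            {'role': 'user', 'content': serialized}]

obj = client.chat.completions.create(model='gpt-4o',
                                     messages=messages,
                                     response_format={'type': 'json_object'})

# The reply content is a JSON string, so json.loads turns it back into a dict
result = json.loads(obj.choices[0].message.content)
print(result.get('translation'))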
Thanks, that worked, but it only translated the first sentence. It says there were 122 completion_tokens but 1424 prompt_tokens. Do you know why that happened? And do you know how I can check whether more than 122 completion tokens would have exceeded my daily limit?
I don’t want to translate the sentences one by one, because each call to the server costs money. Sending 100 sentences at a time should be cheaper, I think, and certainly faster. At the same time, the sentences are disconnected; they don’t form a narrative, so I need them all translated and returned in a dictionary format.
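For what it’s worth, the prompt_tokens / completion_tokens counts quoted above are exposed on the usage field of the response object, which you can inspect directly, and you can cap the reply length with max_tokens. A minimal sketch, assuming obj is the response returned by client.chat.completions.create and model/messages are as defined earlier:

# How many tokens the request and the reply actually used
print(obj.usage.prompt_tokens)      # tokens in the messages you sent
print(obj.usage.completion_tokens)  # tokens in the model's reply
print(obj.usage.total_tokens)       # sum of the two

# Optional: cap how long a single reply is allowed to be
obj = client.chat.completions.create(model=model,
                                     messages=messages,
                                     max_tokens=1000,  # hypothetical cap, pick your own
                                     response_format={'type': 'json_object'})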
Never mind. I solved it. I revised the messages like so:
prompt3 = """You are a helpful assistant that translates sentences from
Ancient Greek into English. For each sentence provide a translation
in JSON structure like this {'sentences':['<First Ancient Greek sentence>','<Second Ancient Greek Sentence>'],
'translations':['<First English translation>','<Second English Translation>']}"""
lst = txt.split('.')
lst = [f'{z.strip()}.' for z in lst if z.strip()]
prompt2 = f'These are the sentences: {lst}'
prompt3 = prompt3 + prompt2
serial = json.dumps({'sentences': ["διαμερίζομεν δ' αὖ τοὺς ἀριθμοὺς ὧδέ πως.", "οἷον τὸ φυλλορροεῖν ἅμα ἀκολουθεῖ τῆι ἀμπέλωι καὶ ὑπερέχει καὶ συκῆι καὶ ὑπερέχει ἀλλ’ οὐ πάντων ἀλλ’ ἴσον."],
                     'translations': ["We divide numbers in the following way.", "Just as the shedding of leaves accompanies the vine and surpasses it, and the fig tree, and surpasses it, but not all equally."]})
messages = [{"role": "assistant", "content": prompt3},
            {'role': 'user', 'content': serial}]
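And a minimal sketch of reading the batched reply back into a Python dict, assuming client, model and messages are defined as above, json is imported, and the model follows the requested structure with parallel 'sentences' and 'translations' lists:

obj = client.chat.completions.create(model=model,
                                     messages=messages,
                                     response_format={'type': 'json_object'})

# Parse the JSON string in the reply and pair each Greek sentence with its translation
data = json.loads(obj.choices[0].message.content)
paired = dict(zip(data['sentences'], data['translations']))
for greek, english in paired.items():
    print(f'{greek} -> {english}')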
Thanks for helping me out, I really appreciate it.