Yes, but you should add a stop sequence for good measure, something like "####".
Also, you should print your completion response as follows, per the Python docs:
# print the completion
print(response.choices[0].text)
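For context, here is a rough sketch of what the request parameters might look like with that stop sequence added (this targets the legacy Completions endpoint; the model name and max_tokens are just placeholder assumptions):

```python
# Sketch of request parameters for a legacy Completions call.
# Model name and max_tokens are illustrative assumptions only.
def build_request(prompt):
    return {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": 256,
        "stop": "####",  # stop sequence, as suggested above
    }

params = build_request("Say hello.")
print(params["stop"])
```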
HTH
Note: If you still get a blank response, please post the prompt you are using so I can test it for you. My code works fine, so it’s easy to test, and I’m happy to test it for you.
OR
You can first try this to see if there is an error message (also wisely suggested by @dhiaeddine.khalfalla earlier, BTW):
Extract the location, category, and keywords from the following sentence:
I'm looking for a good italian restaurant in New York City.
Location: [input location]
Categories: Abruzzo Restaurant|Accessories|Accountant|Acupuncturist|Aerospace Company|Afghan Restaurant|African Restaurant|Agricultural Service|Agriculture|Airline Company|…
Keywords: [input keywords]
Well, I’m a great coder (hahaha) and wrote a test lab that exercises the entire OpenAI API, so I can help folks here who have problems and need “real” help in a public OpenAI developer community (like you).
First of all, you started off with OpenAI API code written by ChatGPT; that was a mistake.
Second, you did not follow @dhiaeddine.khalfalla’s correct advice to print your entire response out so others can see what the full response looks like (part of debugging).
Run your code again and do not use the print line you got from ChatGPT. Use this and post back what you get
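To illustrate the "print the entire response" advice, here is a hedged sketch of what that debug print can look like. The `response` dict below is a made-up example shaped like the API's JSON, not real output:

```python
import json

# Made-up response shaped like the Completions API JSON, for illustration
response = {
    "choices": [{"text": "", "finish_reason": "stop", "index": 0}],
    "usage": {"prompt_tokens": 5, "completion_tokens": 0, "total_tokens": 5},
}

# Print the whole object, not just choices[0].text, so fields like
# finish_reason and token usage are visible when debugging a blank reply
print(json.dumps(response, indent=2))
```

Seeing `finish_reason` and the token counts often explains a blank completion immediately.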
I ran into this issue and never figured out why, but it seems to have something to do with the question asked after the initial prompt.
In my case I get around it by checking for an empty response and having the code ask the user to rephrase, so the code doesn’t fail in place. Here is an example of how I am getting past this in JavaScript:
var s = oJson.choices[0].text;
// Empty response handling (trim first so whitespace-only replies count as empty)
if (s.trim() === "") {
  txtOutput.value += "Eva: I'm sorry, can you please ask me in another way?";
} else {
  txtOutput.value += "Eva: " + s.trim();
}
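For anyone doing the same thing in Python, a rough equivalent of the fallback check above might look like this (the function name and fallback message are just illustrative):

```python
def safe_text(response):
    """Return the completion text, or a fallback ask-again message if empty."""
    # Treat whitespace-only completions as empty, same as the JS trim check
    s = response["choices"][0]["text"].strip()
    if not s:
        return "Eva: I'm sorry, can you please ask me in another way?"
    return "Eva: " + s

print(safe_text({"choices": [{"text": "  "}]}))      # falls back
print(safe_text({"choices": [{"text": " Hello! "}]}))  # prints "Eva: Hello!"
```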
OK, I am retiring from this topic, @lee19619, because I asked you for the exact prompt you used, and you sent a very tiny version to test without telling us it was not the actual text (until just now, when I spotted the huge difference in token count).