Difference in responses for the same dataset in playground and command line for Q&A

When using the answers API endpoint, the responses in the Playground are fairly accurate, whereas the command line returns an error. The engine used in both cases is davinci.
Screenshots below -

Does anybody know why this could be happening?

Hi @blucrys ,

What you’re doing in the Playground is a simple completion using text-davinci-001 at temperature 0.7. In the terminal, on the other hand, you’re calling the answers endpoint with a JSONL file specified (which presumably contains the text shown in the Playground) and with search_model and engine set to ‘davinci’. Those are two different endpoints, so different results are expected.
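To make the difference concrete, here is a rough sketch of the kind of request the answers endpoint expects. The question, file ID, and example values below are placeholders, not your actual data; this only builds and prints the request body, it doesn’t call the API:

```python
import json

# Hypothetical parameters mirroring the terminal call described above.
# "file" would be the ID of a JSONL file you previously uploaded.
answers_params = {
    "model": "davinci",           # engine used to generate the final answer
    "search_model": "davinci",    # engine used for the document-search step
    "question": "<your question>",
    "file": "file-abc123",        # placeholder file ID
    "examples_context": "<context for the few-shot examples>",
    "examples": [["<example question>", "<example answer>"]],
    "max_tokens": 16,
}
print(json.dumps(answers_params, indent=2))
```

Note that the answers endpoint first searches the uploaded documents for relevant passages and only then generates an answer, which is why a malformed JSONL file or a missing parameter can produce an error even though the same text works fine as a plain Playground completion.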

To replicate what you did in the Playground, use the completions endpoint with the same parameters as in the Playground, i.e. engine, temperature, etc. Set the prompt to all the text you have in the Playground, along with the question.

Also, OpenAI changed the engine nomenclature, so it’s better to use the new names.

As a best practice, keep your OpenAI API key private and blur/redact it when sharing any screenshots. I’d advise you to deactivate the current API key if you haven’t already done so.
