Prompt parameter debugging for the Answers API

I have been using the /answers API and occasionally see very strange results. When that happens, I set the option to return the computed prompt so I can paste it into the Playground for debugging.

The problem is that the Playground returns different results for the exact same prompt that the /answers API call uses.

I don't know what parameter settings the Answers endpoint uses (aside: many of them I can't see how to set with /answers at all).

For example, what are the top_p and presence_penalty settings, and are there any stop sequences beyond the ones I pass in? I can't use the prompt to debug if it isn't reproducible outside the /answers API.

I am wondering if I should stop using the /answers API and instead use Search over my "knowledge" combined with more hand-engineered prompts.

(The specific problem I am having is a truncated completion even though there are plenty of tokens left and plenty of examples in the prompt; when I try to debug it in the Playground, it answers reasonably.)
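For what it's worth, one way to make the debugging loop above reproducible is to replay the prompt that /answers returns through /completions with every sampling parameter pinned explicitly. This is only a sketch; the values below are illustrative guesses, not the Answers endpoint's actual (undocumented) defaults, and the model name is an assumption.

```python
# Sketch: build a /completions payload that pins every sampling knob,
# so the same request can be compared in the Playground or sent via
# the API. All default values here are illustrative, NOT the real
# defaults used internally by /answers.

def build_replay_request(returned_prompt, stop=None):
    """Build a completions payload with every parameter made explicit."""
    return {
        "model": "davinci",          # assumed completion model
        "prompt": returned_prompt,   # the prompt /answers reported back
        "max_tokens": 100,
        "temperature": 0.0,          # pin to 0 for reproducibility
        "top_p": 1.0,
        "presence_penalty": 0.0,
        "frequency_penalty": 0.0,
        "stop": stop or ["\n"],      # make any implicit stop explicit
    }

request = build_replay_request("Q: What is our refund window?\nA:")
# Every knob is now explicit, so any difference between the Playground
# and your API call has to come from the payload itself.
```

With temperature pinned to 0, any remaining divergence between the Playground and your code points at a parameter mismatch rather than sampling noise.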

Any help appreciated.

Hi there, the Playground is an interface to the Completions endpoint, which is not the same as the Answers endpoint.

With different settings, you’ll naturally see different results.

Parameters like top_p and presence_penalty are not listed in the Answers documentation: OpenAI API.

That’s totally doable! Some users use Search and the Completions endpoint separately.
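A minimal sketch of that Search + Completions pattern, assuming you already have scored documents back from the Search endpoint: stitch the top hits into a hand-engineered prompt, then send that prompt to Completions with whatever parameters you like. The template and field names below are made up for illustration.

```python
# Sketch: hand-engineered prompt assembly from Search results.
# Assumes scored_docs is a list of {"text": ..., "score": ...} dicts,
# as you'd get from a document search step.

def build_prompt(question, scored_docs, top_n=2):
    """Stitch the highest-scoring documents into a hand-written prompt."""
    top = sorted(scored_docs, key=lambda d: d["score"], reverse=True)[:top_n]
    context = "\n".join(d["text"] for d in top)
    return f"Context:\n{context}\n\nQ: {question}\nA:"

docs = [
    {"text": "Our refund window is 30 days.", "score": 215.1},
    {"text": "Shipping takes 3-5 business days.", "score": 87.4},
]
prompt = build_prompt("How long do I have to request a refund?", docs, top_n=1)
# prompt now embeds only the highest-scoring document; you control the
# template, the context budget, and every completion parameter.
```

The upside over /answers is exactly what the original question asks for: the full prompt and all parameters are yours, so anything you see in the API is reproducible in the Playground.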

Yes, as mentioned, I was using the Completions endpoint for debugging (as I see it, the Answers API is a front end on Search + Completions with a computed prompt, which I wanted to alter but can't seem to be able to).

That’s right, you can’t directly alter the prompt used in the Answers endpoint.

That said, you can use examples to steer the model towards the tone and answer format you’d like, alongside examples_context to provide the contextual information used to generate the answers for the examples you provide.
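To make that concrete, here is an illustrative /answers request showing how examples_context and examples fit together. The field names follow the Answers endpoint; the question, documents, and example Q/A pairs are invented for the sketch.

```python
# Illustrative /answers payload: examples_context holds the background
# the example answers were drawn from, and each entry in examples is a
# [question, answer] pair demonstrating the tone and format you want.
# All content values here are made up.

answers_request = {
    "model": "davinci",
    "question": "What is the return policy?",
    "examples_context": "Acme sells widgets. Orders ship within 2 days.",
    "examples": [
        ["How fast do orders ship?", "Orders ship within 2 days."],
    ],
    "documents": ["Returns are accepted within 30 days of delivery."],
    "max_tokens": 50,
    "stop": ["\n"],
}
```

The model imitates the style of the example answers while pulling facts from the documents, so short, complete example answers are the main lever you have over format and truncation.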