I have been using the /answers API and occasionally see very strange results. When that happens, I set the option to return the prompt so I can debug it in the Playground.
The problem is that the Playground returns different results for the exact same prompt that the /answers call used.
I don't know what parameter settings /answers uses (and, as an aside, many of them I can't see how to set via /answers).
For example, what are the top_p and presence penalty settings? Are there any stop sequences beyond the ones I pass in? I can't use the returned prompt for debugging if it isn't reproducible outside the /answers API.
I am wondering if I should stop using the /answers API and instead combine search over my "knowledge" files with more hand-engineered prompts sent to the completions endpoint, where every parameter is explicit.
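To make concrete what I mean by "hand-engineered prompts": something like the sketch below, where I assemble the prompt myself from the retrieved documents and my Q/A examples, so the exact string and every sampling parameter are under my control and trivially reproducible in the Playground. The prompt format here is just my guess at a reasonable layout, not what /answers does internally.

```python
def build_prompt(context_docs, examples, question):
    """Assemble a question-answering prompt by hand from retrieved
    documents and few-shot Q/A examples. The layout is a hypothetical
    sketch, not the internal format used by the /answers endpoint."""
    parts = ["Answer the question using the context below.", "", "Context:"]
    # One bullet per retrieved document from my "knowledge" search step.
    parts.extend(f"- {doc}" for doc in context_docs)
    parts.append("")
    # Few-shot examples, same Q:/A: framing as the final question.
    for q, a in examples:
        parts.append(f"Q: {q}")
        parts.append(f"A: {a}")
    # The real question, ending with "A:" so the model completes the answer.
    parts.append(f"Q: {question}")
    parts.append("A:")
    return "\n".join(parts)
```

I would then send the returned string to the completions endpoint with temperature, top_p, max_tokens, and stop all set explicitly, so pasting the same prompt and settings into the Playground should reproduce the behavior.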
(The specific problem I am having is a truncated completion even though there are plenty of tokens left and plenty of examples in the prompt; when I try to debug the same prompt in the Playground, it answers reasonably.)
Any help appreciated.