API not responding like it does in the Playground

I feel like I am missing something fundamental.
I have been playing with the Playground for ages, and today I tried the API for the first time.
I ask questions, and it keeps completing with more questions.

For example, in the Playground I will type:
‘what is the capital of the uk’
and it will respond on a new line: London

When I do this via an API call, I get a huge response:
"choices":[{"text":"\n\nlondon\n\nhow old are you?\n\n"

And if I increase max_tokens (so the answer will not be cut off if it is longer), it just adds more chat scenarios…

What am I missing? I want to limit it to responding rather than trying to complete more text, just like the Playground detects that I have asked a question rather than, say, asked it to generate a poem.
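
For context, here is roughly the call I am making: a minimal sketch in Python using the requests library. The model name and parameters are placeholders for what I happen to be using, and the key is redacted.

```python
import requests

API_KEY = "sk-..."  # placeholder; your real OpenAI API key

# Bare completion request: the prompt is sent as-is, with no
# formatting hints, so the model simply continues the text.
resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "davinci",  # assuming a base completion model
        "prompt": "what is the capital of the uk",
        "max_tokens": 20,
    },
)
print(resp.json()["choices"][0]["text"])
```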

The newer models like 002 and 003 are more apt to give you Q-and-A-type responses. I don't use them much, so I have little experience with them, but the idea is that it's far simpler to get what you want out of them. So my guess is that you've got model 003 running, and when you ask a question it automatically assumes that you want an answer instead of generating more similar questions.

The davinci 001 model is the best for auto text completion: whatever you put in, it will try to do more of. So give it lots of context and examples (ideally repeated) and it will generate more of them. Five questions about capitals in a row (with *** or another separator) will likely give you a bunch more questions about capitals.

So just try the 001 (oldest) model instead; see the model list in the OpenAI API docs.
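
Here's a minimal sketch of that few-shot pattern, assuming the plain `davinci` base model and illustrative parameters. The `stop` parameter is an extra standard option of the completions endpoint, not something covered above; it cuts generation off at the separator so you get one item back instead of a long run.

```python
import requests

API_KEY = "sk-..."  # placeholder; your real OpenAI API key

# Five repeated examples with a *** separator: the base model will
# try to continue the pattern, i.e. generate more similar questions.
prompt = """what is the capital of the uk
***
what is the capital of france
***
what is the capital of spain
***
what is the capital of italy
***
what is the capital of germany
***
"""

resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "davinci",  # oldest base completion model
        "prompt": prompt,
        "max_tokens": 30,
        "stop": ["***"],     # optional: stop after one generated item
    },
)
print(resp.json()["choices"][0]["text"])
```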