Hi everyone,
I’ve just started experimenting with the OpenAI API and I’m running into a small issue: sometimes when I send a prompt, I get only a few words or a much shorter reply than I expect.
Here’s what I’m doing:
- Using the `gpt-4` model via the chat/completions endpoint
- Prompt example: “Write a 200-word article on climate change”
- `max_tokens` set to 300, `temperature` 0.7
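In code, it’s roughly this (a simplified sketch using the official `openai` Python client, v1 style; parameter values match what I listed above):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Write a 200-word article on climate change"}
    ],
    max_tokens=300,
    temperature=0.7,
)

print(response.choices[0].message.content)
# finish_reason is "length" when the reply was cut off by max_tokens,
# and "stop" when the model finished on its own
print(response.choices[0].finish_reason)
```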
Sometimes I get good output, but other times the response seems to get cut off mid-thought. Is there something I might be doing wrong with the token settings or the prompt structure?
Any guidance would be appreciated!
Thanks