Completions API response always considerably shorter than ChatGPT response

I am having a hard time getting the Completions API to return a detailed response. I have tried asking for specific word counts, asking for 3-4 paragraphs per section, and every variation of requesting a verbose, detailed, informative response.

Every single time, it comes back with a response of around 3,000 characters. If I copy and paste the exact same prompt into ChatGPT (chat.openai.com), it generates a response that is typically twice the size of the one I get back from the Completions API.

I have tried using gpt-3.5-turbo, gpt-4, and gpt-4-turbo as the model in my request, but nothing makes it return a longer response. Here is the core request I send, where the model is a variable based on a dropdown and the transcript is the prompt. Is there anything about this request that could be causing the short responses?

			model: document.getElementById("gptModel").value,
			max_tokens: 4000,
			temperature: 0.7, 
			top_p: 0.9, 
			frequency_penalty: 0.5, 
			presence_penalty: 0.6, 
			messages: [
				{
					"role": "user",
					"content": transcription
				}
			]

ChatGPT wraps your conversation in another, hidden prompt. Maybe consider adding a system role message to instruct the model how to behave?
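Something like this, for example (just a sketch — `buildRequestBody` and the system instruction text are illustrative placeholders, not your actual code):

```javascript
// Sketch: prepend a system message that tells the model how verbose to be.
// The helper name and the instruction wording are placeholders -- tune them.
function buildRequestBody(model, transcription) {
  return {
    model: model,
    max_tokens: 4000,
    temperature: 0.7,
    messages: [
      {
        role: "system",
        content:
          "You are a detailed writer. Produce 3-4 full paragraphs per heading; do not summarize.",
      },
      // The user message now carries only the actual prompt/transcript.
      { role: "user", content: transcription },
    ],
  };
}
```

The system message plays the same role as ChatGPT's hidden wrapper prompt: it steers style and length so the user prompt doesn't have to keep repeating "be verbose".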

Also, do you know what you’re doing with your parameters? :thinking:

So for the messages array, I'd add a new record at the start with role: system and content along the lines of "please return a detailed response with X paragraphs per heading, etc.", and then in the role: user content, give the standard prompt without needing to specify how verbose I want the response to be?

I asked ChatGPT for the best parameter suggestions to get a longer output, and this is what it gave me.

Yeah, the GPTs don’t really know how to use themselves.

Something like that, yeah.

Have you read through this?

I’m also running into this issue and it’s frustrating. I don’t necessarily want longer responses; I just want better responses.

Oftentimes I’ll get a really shorthand answer from the OpenAI API, while ChatGPT gives this awesome bulleted or numbered detailed response that is exactly what I wanted.

I have no idea how to fix this…