gpt-3.5-turbo-16k with long context not working

I’m testing gpt-3.5-turbo-16k with a long system message (long context), but the response doesn’t use the full context. The model seems to limit the context it actually uses to a certain number of tokens.

Is anybody else having this problem with gpt-3.5-turbo-16k?

When I use gpt-3.5-turbo, it works fine!


Hi hascdev.
I’ve tested gpt-3.5-turbo-16k. The model has a maximum context window of 16k tokens, and that limit covers both the input tokens (the prompt and conversation history) and the output tokens (the generated response).
Assuming about 6 characters per token, 16k tokens corresponds to roughly 98,304 characters.
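
If you want exact counts rather than a character-based estimate, the tiktoken package can tokenize your text directly. A minimal sketch (the context and query strings are placeholders):

```python
import tiktoken

# cl100k_base is the tokenizer used by the gpt-3.5-turbo model family
enc = tiktoken.get_encoding("cl100k_base")

def num_tokens(text: str) -> int:
    return len(enc.encode(text))

context = "your long system content here"  # placeholder
query = "your question here"               # placeholder

used = num_tokens(context) + num_tokens(query)
# Whatever is left of the 16,384-token window is available for the answer
print(f"input tokens: {used}, room left for the response: {16384 - used}")
```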


Are you seeing cut-off responses or responses that just aren’t as long/detailed as you want?


Hi @novaphil.

The answers aren’t as long or as detailed because the model doesn’t use all of the given context.

I’m comparing the answers from gpt-3.5-turbo-16k and gpt-3.5-turbo with the same question and the same context, but the answer from gpt-3.5-turbo-16k is shorter and states “I need more information to answer”.
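
For anyone who wants to reproduce the comparison, something like this is enough (a sketch using the pre-1.0 openai Python package; the context and question strings are placeholders):

```python
import openai

openai.api_key = "sk-..."  # your API key

messages = [
    {"role": "system", "content": "your long context here"},  # placeholder
    {"role": "user", "content": "your question here"},        # placeholder
]

# Same messages for both models; only the model name changes
for model in ("gpt-3.5-turbo", "gpt-3.5-turbo-16k"):
    response = openai.ChatCompletion.create(model=model, messages=messages)
    print(f"--- {model} ---")
    print(response["choices"][0]["message"]["content"])
```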

Hi @dennis.cline

I understand. My token counts in the test are the following:

Context: 2430 tokens
Query: 85 tokens
Answer: 314 tokens

This works fine when I use the gpt-3.5-turbo model.
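
For what it’s worth, with those counts the prompt fits comfortably in either model’s window. A rough check, assuming 4,096 tokens for gpt-3.5-turbo and 16,384 for gpt-3.5-turbo-16k, and ignoring the small per-message overhead of the chat format:

```python
# Token counts from the test above
context_tokens = 2430
query_tokens = 85
prompt_tokens = context_tokens + query_tokens

# Assumed context windows for the two models
windows = {"gpt-3.5-turbo": 4096, "gpt-3.5-turbo-16k": 16384}

for model, window in windows.items():
    left = window - prompt_tokens
    print(f"{model}: {left} tokens left for the answer")
```

Both leave plenty of room for the answer, so the window size itself shouldn’t be the bottleneck here.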

A longer context window doesn’t automatically mean the model will produce longer answers. The primary use case for more context is that you can send more data in your prompt. If you want longer answers, you’ll need to adjust your prompt to explicitly ask for them.
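
For example, you can state the desired length in the prompt and reserve room for it with max_tokens (a sketch using the pre-1.0 openai Python package; the instruction wording and the numbers are just illustrative):

```python
import openai

openai.api_key = "sk-..."  # your API key

context = "your long context here"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Always answer in depth."},
        {"role": "user", "content": "Using the context below, give a detailed answer of at least 500 words.\n\n" + context},
    ],
    max_tokens=1500,  # explicit room for a long answer
)
print(response["choices"][0]["message"]["content"])
```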