What happens if the input tokens exceed what GPT-4 can handle?

Here I’ve sent 23k tokens to GPT-4, which can only handle 8k tokens. What happens?

This window simply shows you cumulative usage across requests. You made 4 requests, each probably near 6k tokens, so you didn’t send 23k in one go.

I hope the way I explained it makes sense :slight_smile:


If, however, you attempt to feed too many tokens into a single request, I believe an error is returned that looks something like: “This model’s maximum context length is 8192 tokens. However, you requested 9136 tokens (5136 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.”
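To avoid that error, one option is a pre-flight check before sending the request. Here is a minimal sketch, assuming a rough 4-characters-per-token estimate (a library like tiktoken would give exact counts) and the 8192-token limit of the 8k GPT-4 variant; the function names are hypothetical, not part of any API:

```python
# Hypothetical pre-flight check: estimate prompt tokens plus the requested
# completion budget, and fail early if the total exceeds the context limit.
MODEL_CONTEXT_LIMIT = 8192  # gpt-4 (8k context variant)


def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    # For exact counts, use a real tokenizer such as tiktoken.
    return max(1, len(text) // 4)


def check_request(messages: list[str], max_completion_tokens: int) -> int:
    """Return the estimated total token count; raise if over the limit."""
    prompt_tokens = sum(estimate_tokens(m) for m in messages)
    total = prompt_tokens + max_completion_tokens
    if total > MODEL_CONTEXT_LIMIT:
        raise ValueError(
            f"Estimated {total} tokens ({prompt_tokens} in the messages, "
            f"{max_completion_tokens} in the completion) exceeds the "
            f"{MODEL_CONTEXT_LIMIT}-token context limit."
        )
    return total
```

Note that the context limit covers the messages *and* the completion together, which is why the error message in the reply above counts both.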
