I am sending requests to the new GPT-4.1 model via the LangSmith playground (see the attached screenshot).
There are around 330,000 tokens in the message.
From this I get an error with the following message:
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 300000 tokens. However, your messages resulted in 330294 tokens (including 57 in the response_format schemas.). Please reduce the length of the messages or schemas.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
According to the documentation, the context window should be 1M tokens, am I right?
I get the same issue with gpt-4.1-mini: "This model's maximum context length is 300000 tokens. However, your messages resulted in 320732 tokens (including 109 in the response_format schemas.). Please reduce the length of the messages or schemas."
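In the meantime, as a workaround I pre-check the message size before sending and truncate it to fit under the 300000-token limit reported in the error. This is only a rough sketch: the 300000 figure comes straight from the error message above, and the ~4 characters-per-token ratio is a crude heuristic for English text, not an exact tokenizer count.

```python
# Rough pre-flight check: estimate token count and truncate the prompt
# so the request stays under the limit the API is actually enforcing.
# MAX_TOKENS is the limit from the error message; CHARS_PER_TOKEN is a
# rough heuristic (assumption), not the real tokenizer.
MAX_TOKENS = 300_000
CHARS_PER_TOKEN = 4  # rough average for English prose


def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN + 1


def truncate_to_fit(text: str, max_tokens: int = MAX_TOKENS) -> str:
    """Cut the text to a character budget implied by max_tokens."""
    budget_chars = max_tokens * CHARS_PER_TOKEN
    return text if len(text) <= budget_chars else text[:budget_chars]
```

For an exact count you would run the actual tokenizer over the messages instead of this heuristic, but even the rough check catches requests that are far over budget before they hit the API.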