What is the token context window size of the GPT-4 o1-preview model?

Hi everyone, I’m working with the GPT-4 o1-preview model and would like to know its context window token limit in conversations. If anyone knows the maximum token capacity it supports, I’d appreciate your input. Thanks in advance!

Hi @luisdemiguel !

According to the docs, the context window is 128,000 tokens for both o1-preview and o1-mini. The maximum output is 32,768 tokens for o1-preview and 65,536 tokens for o1-mini.
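One practical implication: the context window is shared between your input and the model’s output (including its reasoning tokens), so a long prompt leaves less room for the response. Here’s an illustrative sketch of that budget arithmetic using the figures above; the `fits_context` helper is my own, not part of the OpenAI SDK:

```python
# Illustrative budget check (not official SDK code): does a request fit
# in the model's context window? Figures are from the docs cited above.
LIMITS = {
    "o1-preview": {"context_window": 128_000, "max_output_tokens": 32_768},
    "o1-mini":    {"context_window": 128_000, "max_output_tokens": 65_536},
}

def fits_context(model: str, prompt_tokens: int, desired_output_tokens: int) -> bool:
    """True if prompt plus the (capped) requested output fits in the window."""
    limits = LIMITS[model]
    output = min(desired_output_tokens, limits["max_output_tokens"])
    return prompt_tokens + output <= limits["context_window"]

print(fits_context("o1-preview", 100_000, 32_768))  # False: 132,768 > 128,000
print(fits_context("o1-mini", 60_000, 65_536))      # True: 125,536 <= 128,000
```

In practice you’d estimate `prompt_tokens` with a tokenizer such as tiktoken before sending the request.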
