Preprocessing chat history with TransformMessages: it thinks I'm using gpt-3.5-turbo-0613, but I'm using gpt-4o

I'm trying to set max_tokens so the message transform truncates the chat history and I avoid hitting a context-overflow error.
Here is my code (imports included for context):

from autogen.agentchat.contrib.capabilities import transform_messages, transforms

context_handling = transform_messages.TransformMessages(
    transforms=[transforms.MessageTokenLimiter(max_tokens=127000)]
)
I'm getting the following warning:

Max token was set to 127000, but gpt-3.5-turbo-0613 can only accept 4096 tokens. Capping it to 4096.

I’ve set OAI_CONFIG_LIST like so:
os.environ["OAI_CONFIG_LIST"] = '[{"model": "gpt-4o"}]'
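For what it's worth, the env var itself parses cleanly and does contain gpt-4o; here's a quick stdlib-only sanity check I ran (no AutoGen involved):

```python
import json
import os

# Set the config list exactly as in my setup
os.environ["OAI_CONFIG_LIST"] = '[{"model": "gpt-4o"}]'

# Parse it back the way any consumer of the env var would
config_list = json.loads(os.environ["OAI_CONFIG_LIST"])
print(config_list[0]["model"])  # gpt-4o
```

So the configured model really is gpt-4o, not gpt-3.5-turbo-0613.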

So I don't understand why it's giving me this warning. Any ideas?