As far as I know GPT-4o still has an 8k token limit, so yes, this is expected.
You might find this thread interesting:
TL;DR: See if you can prepend line numbers and have the model return start and end line numbers instead of full text.
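To illustrate the idea, here's a minimal Python sketch (the function names are just placeholders, not any real API): number the lines before sending the text, ask the model to reply with a start/end line pair, then map that pair back to the original text locally so the model never has to echo the full content.

```python
def number_lines(text: str) -> str:
    """Prefix each line with its 1-indexed line number, e.g. 'L3: ...'."""
    return "\n".join(
        f"L{i}: {line}" for i, line in enumerate(text.splitlines(), start=1)
    )

def extract_span(text: str, start: int, end: int) -> str:
    """Recover the original text for an inclusive (start, end) line range
    that the model returned instead of the full text."""
    lines = text.splitlines()
    return "\n".join(lines[start - 1 : end])

doc = "alpha\nbeta\ngamma\ndelta"

# Send the numbered version in the prompt:
prompt_body = number_lines(doc)

# Suppose the model answers with {"start": 2, "end": 3};
# reconstruct the span locally from the original document:
span = extract_span(doc, 2, 3)
print(span)
```

Since the model only emits two small integers, the response stays well under any output token limit regardless of how long the selected span is.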