Rate limit increase for researchers

So I am working on a paper for which I need to run thousands of files through GPT to create training data. Each file is in the range of 10k–100k tokens… and you can see the problem here.
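To make the problem concrete, here is a rough throughput estimate. All of the numbers below (file count, average size, rate limit) are illustrative assumptions, not figures from my actual setup:

```python
# Back-of-envelope throughput estimate.
# All numbers are hypothetical: "thousands of files" taken as 5,000,
# average size as the midpoint of the 10k-100k range, and a
# tokens-per-minute (TPM) limit typical of a low-usage-tier account.
n_files = 5_000
avg_tokens = 50_000
tpm_limit = 150_000  # assumed rate limit, varies by account tier

total_tokens = n_files * avg_tokens      # 250,000,000 tokens
minutes = total_tokens / tpm_limit       # wall-clock lower bound
print(f"{total_tokens:,} tokens -> about {minutes / 60:.1f} hours "
      f"at {tpm_limit:,} TPM")
```

Even ignoring per-request limits and output tokens, that is on the order of a day of continuous requests for the input side alone, which is why the default limits on a fresh account are a real blocker here.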

Now, for billing purposes I have this on a separate account specifically for that study, and of course that puts a heavy rate limit on the project.

I am not sure who to turn to, but I would love to use OpenAI over a local model. Though if this is the throughput I can get, I may not have a choice.