GPT-4 API parallel computing

I’m trying to use the GPT-4 API to do some text-analysis work. I created functions that loop over the rows of a data frame of text. Do you happen to know how I can use parallel computing across multiple cores to accelerate the process? It currently takes about one minute per row, so it is almost impossible for me to complete a task with millions of rows. I tried .apply_parallel from multiprocesspandas, Parallel from joblib, and multiprocessing, but I got different errors each time. Thank you so much! I would really appreciate any ideas you have about this.

The processing is not happening on your computer; each row is a remote network call. So you need to look into parallel or asynchronous HTTP calls in your language/framework. If you are trying to process millions of rows, you’ll probably want some sort of job/queue system with multiple workers (serverless functions can be good for this).
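To make the asynchronous-HTTP idea concrete, here is a minimal Python sketch using asyncio. The `analyze_row` coroutine is a hypothetical stand-in for your real API request (e.g. an async chat-completion call); the point is the pattern: many requests in flight at once, capped by a semaphore so you stay under the API's concurrency and rate limits.

```python
import asyncio

async def analyze_row(text: str) -> str:
    # Hypothetical stand-in for the real async API call.
    # Replace the body with your actual request; the `await` keeps the
    # event loop free to run other requests while this one waits on the
    # network, which is where all the time is being spent.
    await asyncio.sleep(0.01)  # simulates network latency
    return text.upper()        # simulates the model's output

async def analyze_all(rows, max_concurrent=10):
    # A semaphore caps how many requests are in flight at once.
    sem = asyncio.Semaphore(max_concurrent)

    async def bounded(text):
        async with sem:
            return await analyze_row(text)

    # gather preserves input order, so results line up with your rows.
    return await asyncio.gather(*(bounded(r) for r in rows))

rows = ["first row", "second row", "third row"]
results = asyncio.run(analyze_all(rows))
```

With one-minute rows the speedup comes entirely from overlapping the waits, not from extra CPU cores, which is why multiprocessing-based tools weren't the right fit here.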


Be sure to keep rate limits in mind as well…
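When you do hit a rate limit, a common pattern is exponential backoff with jitter. This is a generic sketch, not the official client's behavior: `fn` is a hypothetical zero-argument callable wrapping your API request, assumed to raise an exception when the API returns a rate-limit error.

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    # Retry `fn` with exponentially growing waits between attempts.
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # Random jitter keeps many parallel workers from retrying
            # in lockstep and hammering the API at the same instant.
            time.sleep(delay + random.uniform(0, delay * 0.1))
            delay *= 2
```

In production you would catch only the rate-limit exception class rather than bare `Exception`, so genuine bugs still fail fast.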


You’re right, Paul. After 200 lines, the model stopped working for me :hot_face: I don’t know how they assign the quota. I may have to wait until after 12 am.

Thank you so much, Phil. I understand it now. :heart: