Location of OpenAI Servers

Where are the servers located? I want to improve my latency and am looking at moving my app’s code closer to the OpenAI API servers.

They are distributed globally on the Azure platform, and you will be connected to whichever one Azure routes you to. In any case, the latency of the model itself is an order of magnitude greater than any ping latency you will encounter.
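If you want to put rough numbers on that gap for your own setup, here is a minimal sketch, assuming the official `openai` Python client (v1.x), an `OPENAI_API_KEY` environment variable, and `gpt-4o-mini` as a stand-in for whatever model you actually call. It times a cheap non-model endpoint as a proxy for the network round trip, then times how long the first streamed token of a chat completion takes:

```python
import time
from openai import OpenAI  # official client, v1.x style

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1) Cheap, non-model endpoint: a rough proxy for pure network round trip.
t0 = time.perf_counter()
client.models.list()
network_ms = (time.perf_counter() - t0) * 1000

# 2) Streamed chat completion: time until the first token arrives (TTFT).
t0 = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # substitute the model you actually use
    messages=[{"role": "user", "content": "Say hi"}],
    stream=True,
)
first_token_ms = None
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        first_token_ms = (time.perf_counter() - t0) * 1000
        break

print(f"models.list round trip: ~{network_ms:.0f} ms")
if first_token_ms is not None:
    print(f"time to first token:    ~{first_token_ms:.0f} ms")
```

Typically the first number comes out in the tens of milliseconds and the second in the hundreds of milliseconds or more, which is the order-of-magnitude gap being described above.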


The documentation says otherwise:
https://platform.openai.com/docs/guides/production-best-practices/improving-latencies#:~:text=Our%20servers%20are%20currently%20located%20in%20the%20US

Yeah, it says US only. However, which city specifically should I aim for? For example, I can choose the data center for my Google (Firebase) hosted app: California, Oregon, etc. I am assuming OpenAI is probably in the San Francisco area? I'd like to place the API call as close as possible to OpenAI's data center.


Why?

I mean I get why, I just don’t get why.

Realistically, the difference in latency between the nearest and farthest server is going to be somewhere under 100ms.

Although the servers are all in the US, they’re going to be well-distributed throughout, so the maximum latency penalty is realistically going to be much less than half that.
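As a back-of-the-envelope check on that (the numbers here are rough assumptions, not measurements): light in fiber travels at roughly two-thirds the speed of light, and the continental US is on the order of 4,500 km across, so even a worst-case coast-to-coast round trip is only a few tens of milliseconds before routing overhead:

```python
# Rough physics-only estimate; real routes add switching and queuing overhead.
speed_in_fiber_km_s = 200_000   # ~2/3 the speed of light in vacuum
coast_to_coast_km = 4_500       # rough width of the continental US

one_way_ms = coast_to_coast_km / speed_in_fiber_km_s * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way propagation: ~{one_way_ms:.0f} ms")   # ~22 ms
print(f"round trip:          ~{round_trip_ms:.0f} ms") # ~45 ms
```

Real-world routes are less direct than that, but it gives a sense of why the worst case stays well under 100ms.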

The biggest latency you’re ever going to experience calling the OpenAI models is going to come from the time to first token, not network latency.

So, yeah, sure, it’s good to reduce latency where you can, but even if you were able to make your requests from within the same data center as the GPT-4 servers, the impact on total observed latency would be relatively minor.

Beyond which, unless you plan on physically being at your server location, you still need to wait for everything to be relayed to you…

IDK, maybe you’ve got some crazy use-case I’m incapable of imagining where reducing the total latency between your application and OpenAI’s servers by 5% is make-or-break.

¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯

If so I’d be interested to read about it.

Unfortunately, for now, the answer seems to be there’s no way to know exactly where these servers are.

I’d just pick a logical major city in Virginia, Texas, California, or Washington.

You’re never going to observe a functional difference between being 1 mile from the data center and being 100 miles from it.

But, knowing that their servers are on Azure, I suspect the best thing you could do is simply host your application on Azure if you’re really that concerned about it.

Let’s say a datacenter is in Ohio and you try to find nearby colocation…

Where is the Cloudflare firewall datacenter that you are actually communicating with?
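You can usually see which Cloudflare point of presence handled a given request: responses proxied through Cloudflare normally carry a `CF-RAY` header whose trailing three-letter suffix is the IATA airport code of the edge data center. A minimal sketch, assuming OpenAI's edge still returns that header (no API key needed, since even the 401 error comes back through Cloudflare):

```python
import requests  # third-party: pip install requests

# Responses proxied through Cloudflare normally carry a CF-RAY header,
# e.g. "8c1f2a3b4c5d6e7f-IAD"; the suffix after the dash is the IATA code
# of the Cloudflare data center that handled the request.
resp = requests.get("https://api.openai.com/v1/models", timeout=10)
ray = resp.headers.get("cf-ray", "header not present")
print(f"status: {resp.status_code}, cf-ray: {ray}")
```

That only tells you which Cloudflare edge you hit, not where the model servers themselves sit, but it does answer the "which firewall data center am I actually talking to" part of the question.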

Good points! I agree. I just thought Google recommended colocating their services as close to the data source as possible to help with Cloud Functions ‘cold starts’. At the end of the day, as you rightly said, it doesn’t matter much.