Building data centers and infrastructure

Does OpenAI own its own infrastructure? You seem to have issues scaling up for video, and knowing the amount of infrastructure it takes, I can imagine that scaling up AI now, especially for video, is going to take ever more compute and VRAM.

Do you have contractors? Do you build overseas? Is your cloud infrastructure dispersed all over the world, and with a single provider like AWS, Google Cloud, or Azure? (I guess it's Azure, given Microsoft's stake in OpenAI.)

I'd like to get involved myself. I don't have money, but I can build server infrastructure. Are you planning to develop in the EU, or could I get into the US for development work? I imagine resources will be vastly insufficient for the coming expansions.

Or are Sora and the new multimodal image models actually resource-efficient and low in cost? I know training can abstract away a lot of noise, but how low can you go in data usage and still create models with genuinely complex understanding at low resource cost? How do you know whether a model has been trained efficiently, for the lowest cost and resource usage? How do you detect improperly trained areas in your models, or areas that could be wired together to make them more efficient?

Have you achieved machines that can do inference over a network for optimisation? Is it feasible? I know it's possible, but are the speeds fast enough? Jensen Huang was talking about this at his latest GTC keynote, but I failed to understand why he was hinting at the $1000 cables. Do you really need cables with something like 1 Tb/s of bandwidth? Would 10 or 100 Gbps not be sufficient?
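To make the bandwidth question concrete, here is a rough back-of-envelope sketch of the time it takes to ship one transformer layer's activations between machines at different link speeds. All numbers (hidden size, batch, precision) are illustrative assumptions, not real model specs:

```python
# Back-of-envelope: time to move one layer's activations between GPUs
# at different link speeds. All model numbers are assumed, not real specs.

hidden_dim = 12288       # assumed hidden size of a large model
batch_tokens = 2048      # assumed tokens in flight per step
bytes_per_value = 2      # fp16

activation_bytes = hidden_dim * batch_tokens * bytes_per_value  # ~50 MB

def transfer_time_ms(num_bytes: int, link_gbps: float) -> float:
    """Milliseconds to push num_bytes over a link of link_gbps (Gbit/s)."""
    return num_bytes * 8 / (link_gbps * 1e9) * 1e3

# Compare commodity Ethernet against an NVLink-class interconnect
# (~900 GB/s, i.e. roughly 7200 Gbit/s).
for gbps in (10, 100, 7200):
    print(f"{gbps:>5} Gbps: {transfer_time_ms(activation_bytes, gbps):8.3f} ms per hop")
```

Under these assumptions, a 100 Gbps link spends about 4 ms per layer hop. Repeated over the dozens of layers in a large model, that alone can dominate the per-token latency, which is one plausible reading of why interconnects in the hundreds-of-GB/s range (and the expensive cabling they require) matter, while 10 or 100 Gbps quickly becomes the bottleneck.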