GPT-4 Cost Estimate (UPDATED)

Alright, I have posted this before, but here’s an update. GPT-4 is rumoured to be an MoE model consisting of 8 experts of 222B parameters each, and when generating a token it uses two experts, so 444B parameters are active at a time. GPT-3.5 is a 175B model with an input price of $0.001 per 1K tokens; I choose the $0.001 input price because reading tokens is not any more computationally demanding than outputting them. Doing 0.001 / 175 gets us $0.00000571428, which is the price of running a 1B model for OpenAI. Doing 0.00000571428 * 444 gets us $0.00253714032. There you go: the estimated input and output price of GPT-4 is $0.00253714032 per 1K tokens.
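For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same back-of-envelope estimate. Every figure in it is either a rumour (the 8x222B MoE layout, the 175B size of GPT-3.5) or a published list price (the $0.001 per 1K input tokens for gpt-3.5-turbo-1106), not a confirmed number from OpenAI.

```python
# Back-of-envelope sketch of the estimate above.
# Parameter counts are rumours; only the $0.001 figure is a published list price.

GPT35_PRICE_PER_1K = 0.001   # $ per 1K input tokens (gpt-3.5-turbo-1106 list price)
GPT35_PARAMS_B = 175         # rumoured GPT-3.5 size, in billions of parameters

EXPERT_PARAMS_B = 222        # rumoured size of each GPT-4 expert
ACTIVE_EXPERTS = 2           # rumoured number of experts used per token
active_params_b = ACTIVE_EXPERTS * EXPERT_PARAMS_B   # 444B active per token

# Assume price scales linearly with the number of active parameters.
price_per_billion = GPT35_PRICE_PER_1K / GPT35_PARAMS_B  # ~ $0.00000571428 per 1K tokens per 1B params

gpt4_estimate = price_per_billion * active_params_b      # ~ $0.00253714 per 1K tokens
print(f"Estimated GPT-4 price: ${gpt4_estimate:.11f} per 1K tokens")
```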

Is this the post you are referring to?

Yes, but this is a follow-up; the last one’s calculations were incorrect.

There are many assumptions being made here, but the one that stands out the most is that you’re using the $0.001 per 1K-token price of GPT-3.5-Turbo-1106 to calculate what GPT-4 costs?

At that point you might as well just use $0.03 as they say on the pricing page.

I’m not trying to negate the effort, but I think the logic may be a bit faulty.

It is pretty hard to figure out these calculations when there is almost no information available. All of it is basically just estimation; no one knows for sure except some of the staff members at OpenAI.

Or we could ask Nvidia about how much they’ve made :money_mouth_face:

I think it’s important to consider the difference between the operating and capital costs as well.
Essentially it comes down to the utility bill vs paying full price for next-gen GPUs.
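To make that split concrete, here is a tiny illustrative sketch in the same spirit. Every number in it (GPU price, amortisation period, power draw, electricity rate) is a made-up placeholder for illustration only, not real OpenAI, Nvidia, or datacenter data.

```python
# Illustrative only: every figure below is a made-up placeholder.

GPU_PRICE_USD = 30_000              # assumed purchase price of one accelerator (capital cost)
AMORTIZATION_HOURS = 3 * 365 * 24   # assume the GPU is written off over 3 years of continuous use
POWER_KW = 0.7                      # assumed average draw per GPU, in kilowatts
ELECTRICITY_USD_PER_KWH = 0.10      # assumed utility rate

capital_per_hour = GPU_PRICE_USD / AMORTIZATION_HOURS    # "paying full price for next-gen GPUs"
operating_per_hour = POWER_KW * ELECTRICITY_USD_PER_KWH  # "the utility bill"

print(f"Capital cost (amortised): ${capital_per_hour:.2f}/hour")
print(f"Operating cost (power):   ${operating_per_hour:.2f}/hour")
```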