Add ContextSize to the Model schema in the API

Is it possible to add a property (suggested name: context_size) to the Model schema in the API?
And, maybe, also the prices per token? (suggested names: price_per_request_token, price_per_response_token, price_currency)

It is not good to keep a local list of models just to manually keep track of that information.
This directly affects the validation of the ‘max_tokens’ property in the chat completion schema.
And with the new models the difference is very significant.
Also, keeping the price up to date is important: there have already been changes to it in the past, and knowing exactly what we are being charged is fundamental to how it affects our business and our clients.
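To make the request concrete, here is a sketch of what the extended Model object and the resulting `max_tokens` validation could look like. The `context_size` and `price_*` fields are the names suggested in this post, not real API fields, and the interface shape is only an assumption:

```typescript
// Hypothetical shape of an extended Model object. The context_size and
// price_* fields are this post's suggestions, not actual API fields.
interface ModelInfo {
  id: string;
  object: string;
  owned_by: string;
  context_size: number;              // proposed
  price_per_request_token: number;   // proposed
  price_per_response_token: number;  // proposed
  price_currency: string;            // proposed
}

// With context_size available from the API, validating max_tokens no
// longer needs a hard-coded local table of models.
function validateMaxTokens(
  model: ModelInfo,
  promptTokens: number,
  maxTokens: number
): boolean {
  return promptTokens + maxTokens <= model.context_size;
}

// Example values only; prices here mirror public pricing pages, not the API.
const example: ModelInfo = {
  id: 'gpt-4-1106-preview',
  object: 'model',
  owned_by: 'openai',
  context_size: 128_000,
  price_per_request_token: 0.000_01,
  price_per_response_token: 0.000_03,
  price_currency: 'USD',
};

console.log(validateMaxTokens(example, 100_000, 4_096)); // true
```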

Thanks.

2 Likes

That would be nice. Metadata such as total_context, max_output, price_in, price_out, tunable, token_encoder, and shutdown date would all be beneficial.

The last revision to the models endpoint removed information without announcement.

1 Like
export const models = {
    'gpt-4-1106-preview':       {input: 0.000_01,     output: 0.000_03,   context: 128_000},
    'gpt-4-vision-preview':     {input: 0.000_01,     output: 0.000_03,   context: 128_000},
    'gpt-4-0314':               {input: 0.000_03,     output: 0.000_06,   context:   8_192},
    'gpt-4-32k-0314':           {input: 0.000_06,     output: 0.000_12,   context:  32_768},
    'gpt-4-0613':               {input: 0.000_03,     output: 0.000_06,   context:   8_192},
    'gpt-4-32k-0613':           {input: 0.000_06,     output: 0.000_12,   context:  32_768},
    //'gpt-4':                  {input: 0.000_03,     output: 0.000_06,   context:   8_192},
    'gpt-3.5-turbo-16k':        {input: 0.000_003,    output: 0.000_004,  context:  16_000},
    'gpt-3.5-turbo-16k-0613':   {input: 0.000_003,    output: 0.000_004,  context:  16_000},
    'gpt-3.5-turbo-0301':       {input: 0.000_001_5,  output: 0.000_002,  context:   4_096},
    'gpt-3.5-turbo-0613':       {input: 0.000_001_5,  output: 0.000_002,  context:   4_096},
    //'gpt-3.5-turbo':          {input: 0.000_001_5,  output: 0.000_002,  context:   4_096}
};

here’s one of my lists, no warranty on accuracy :slight_smile:
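For illustration, a per-call cost estimate using a table of this shape (prices are per token, in USD; like the list above, no warranty on accuracy, and `estimateCost` is a made-up helper, not an API call):

```typescript
// A cut-down copy of the table above: per-token prices (USD) and context windows.
const models: Record<string, { input: number; output: number; context: number }> = {
  'gpt-4-1106-preview': { input: 0.000_01, output: 0.000_03, context: 128_000 },
  'gpt-3.5-turbo-0613': { input: 0.000_001_5, output: 0.000_002, context: 4_096 },
};

// Estimated cost of one call, given prompt and completion token counts.
function estimateCost(
  model: string,
  promptTokens: number,
  completionTokens: number
): number {
  const m = models[model];
  return promptTokens * m.input + completionTokens * m.output;
}

// 1,000 prompt tokens + 500 completion tokens on gpt-4-1106-preview ≈ $0.025
console.log(estimateCost('gpt-4-1106-preview', 1_000, 500));
```

The point of the thread, of course, is that this table should come from the API rather than be copied by hand.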

edit: as far as I remember, msft and openai have never changed pricing on existing models. they tend instead to add new products with new pricing.

1 Like

Having this kind of local list is exactly what should be avoided.
We (the developers) are not the owners of that information.
It is solely OpenAI’s privilege to change it as they please, when they please. They can have a policy of avoiding changes, but exposing that information in the model eliminates the information gap and the need for such a policy.
Also, we do not know the future; we can only trust what has been done (and said) so far.

OpenAI must accept that it is no longer just a startup. More businesses depend on its platform and its content every day.

It is time to also take on the professionalism and responsibility owed to consumers.
A good litmus test for whether a piece of information belongs in a public-facing schema is:
“If it has to be announced to the public every time an entity of the schema is created or updated, then it should be part of the public schema.”

And remember that this is not even a breaking change.

I dunno bro, HATEOAS seems like it’s nice in theory but nobody ever uses it in the end.

I’d rather have a simple and stable API.

I am sorry, but I fail to see why adding useful information to an existing endpoint, with no breaking changes, is not simple and stable.

That is the natural evolution of any professional API.

My understanding is that OpenAI’s pricing is almost completely arbitrary. If you host and calculate the cost of your own models, you’ll know what I mean. What I’m saying here is that it may not be a technological issue, it might be an organizational one. But I don’t work for openai, so I can’t say.

I’m not going to sit here and be an openai apologist, but it seems like to me that this company doesn’t have (or maybe isn’t even aiming for) the maturity required to deliver what you’re asking of them.

I don’t disagree that it would be nice to have. I’m just saying I’d personally be happier if they focused on their core product instead of adding bells and whistles they may or may not be able to maintain.

I agree with you that OpenAI doesn’t seem to care about the API or (as you said) doesn’t have the maturity to do so properly.

But I think it is our responsibility, as developers and also as paying customers, to push them to do better.

If we just accept things as is, they will never change.

It is pocket change for them to pay a very good senior architect to design an excellent API. And it is orders of magnitude less complicated to implement than the LLMs and their research.

It may not be as glamorous or make the news headlines but it is part of the job.

One of the things that OpenAI can learn from MS is that if you build a product for devs, you need to listen to them and work with them.

So, bottom line: my role here is to raise the case and ask for the change with reasonable arguments. They may choose not to listen, but I will keep trying, because when they do, we will all benefit from it.

1 Like