Is there an endpoint to programmatically fetch OpenAI model pricing?

Hi everyone,

I’m trying to find a way to programmatically retrieve the list of OpenAI models along with their current pricing using an API endpoint. I’ve checked the official documentation but couldn’t find anything related to pricing being exposed via the API.

Does anyone know if such an endpoint exists, or if there’s any recommended way to access this information automatically?

Thanks in advance for any help!


You’d think, you’d wish, but no. No metadata for you.

Deep in one of the scripts that powers the platform site's model pricing page, there is a data object you can rip out of roughly 2 MB of bundled JavaScript, and that scrape will keep working as long as nothing changes (beyond your OAuth2 and session tokens expiring within a week). Like the other model metadata that powers the Playground, it shows that OpenAI could serve this over an API, but they have continued not to do so.

sample…

```js
class J8{constructor(t){this.data=t}}const K8={name:"Latest models",subsections:[{title:"Text tokens",price_type:"Text tokens",show_batch:!0,show_price_unit:!0,show_snapshots:!0,columns:[{name:"input",label:"Input"},{name:"cached_input",label:"Cached input"},{name:"output",label:"Output"}],items:[{name:"gpt-4.5-preview",current_snapshot:"gpt-4.5-preview-2025-02-27",description:"Our most generally capable model, excelling at creative tasks",values:{main:{input:75,cached_input:37.5,output:150},batch:{input:37.5,output:75}},snapshots:[{name:"gpt-4.5-preview-2025-02-27",values:{main:{input:75,cached_input:37.5,output:150},batch:{input:37.5,output:75}}}]},{name:"gpt-4o",current_snapshot:"gpt-4o-2024-08-06",description:"High-intelligence model for complex tasks",values:{main:{input:2.5,cached_input:1.25,output:10},batch:{input:1.25,output:5}},snapshots:[{name:"gpt-4o-2024-11-20",values:{main:{input:2.5,cached_input:1.25,output:10},batch:{input:1.25,output:5}}},{name:"gpt-4o-2024-08-06",values:{main:{input:2.5,cached_input:1.25,output:10},batch:{input:1.25,output:5}}},{name:"gpt-4o-2024-05-13",values:{main:{input:5,output:15},batch:{input:2.5,output:7.5}}}]},{name:"gpt-4o-audio-preview",current_snapshot:"gpt-4o-audio-preview-2024-12-17",values:{main:{input:2.5,output:10}},snapshots:[{name:"gpt-4o-audio-preview-2024-12-17",values:{main:{input:2.5,output:10}}},{name:"gpt-4o-audio-preview-2024-10-01",values:{main:{input:2.5,output:10}}}]},{name:"gpt-4o-realtime-preview",current_snapshot:"gpt-4o-realtime-preview-2024-12-17",values:{main:{input:5,cached_input:2.5,output:20}},snapshots:[{name:"gpt-4o-realtime-preview-2024-12-17",values:{main:{input:5,cached_input:2.5,output:20}}},{name:"gpt-4o-realtime-preview-2024-10-01",values:{main:{input:5,cached_input:2.5,output:20}}}]},
{name:"gpt-4o-mini",current_snapshot:"gpt-4o-mini-2024-07-18",description:"Fast and affordable model for simple tasks",values:{main:{input:.15,cached_input:.075,output:.6},batch:{input:.075,output:.3}},snapshots:[{name:"gpt-4o-mini-2024-07-18",values:{main:{input:.15,cached_input:.075,output:.6},batch:{input:.075,output:.3}}}]},{name:"gpt-4o-mini-audio-preview",current_snapshot:"gpt-4o-mini-audio-preview-2024-12-17",values:{main:{input:.15,output:.6}},snapshots:[{name:"gpt-4o-mini-audio-preview-2024-12-17",values:{main:{input:.15,output:.6}}}]},{name:"gpt-4o-mini-realtime-preview",current_snapshot:"gpt-4o-mini-realtime-preview-2024-12-17",values:{main:{input:.6,cached_input:.3,output:2.4}},snapshots:[{name:"gpt-4o-mini-realtime-preview-2024-12-17",values:{main:{input:.6,cached_input:.3,output:2.4}}}]},{name:"o1",current_snapshot:"o1-2024-12-17",description:"Advanced reasoning model",values:{main:{input:15,cached_input:7.5,output:60},batch:{input:7.5,output:30}},snapshots:[{name:"o1-2024-12-17",values:{main:{input:15,cached_input:7.5,output:60},batch:{input:7.5,output:30}}},{name:"o1-preview-2024-09-12",values:{main:{input:15,cached_input:7.5,output:60},batch:{input:7.5,output:30}}}]},{name:"o1-pro",current_snapshot:"o1-pro-2025-03-19",description:"Advanced reasoning model",values:{main:{input:150,output:600},batch:{input:75,output:300}},snapshots:[{name:"o1-pro-2025-03-19",values:{main:{input:150,output:600},batch:{input:75,output:300}}}]},{name:"o3-mini",current_snapshot:"o3-mini-2025-01-31",description:"Small reasoning model for math, science and coding",values:{main:{input:1.1,cached_input:.55,output:4.4},batch:{input:.55,output:2.2}},snapshots:[{name:"o3-mini-2025-01-31",values:{main:{input:1.1,cached_input:.55,output:4.4},batch:{input:.55,output:2.2}}}]},
{name:"o1-mini",current_snapshot:"o1-mini-2024-09-12",description:"Small reasoning model for math, science and coding",values:{main:{input:1.1,cached_input:.55,output:4.4},batch:{input:.55,output:2.2}},snapshots:[{name:"o1-mini-2024-09-12",values:{main:{input:1.1,cached_input:.55,output:4.4},batch:{input:.55,output:2.2}}}]},{name:"gpt-4o-mini-search-preview",current_snapshot:"gpt-4o-mini-search-preview-2025-03-11",description:"Specialized model for search",values:{main:{input:.15,output:.6}},snapshots:[{name:"gpt-4o-mini-search-preview-2025-03-11",values:{main:{input:.15,output:.6}}}]},{name:"gpt-4o-search-preview",current_snapshot:"gpt-4o-search-preview-2025-03-11",description:"Specialized model for search",values:{main:{input:2.5,output:10}},snapshots:[{name:"gpt-4o-search-preview-2025-03-11",values:{main:{input:2.5,output:10}}}]},{name:"computer-use-preview",current_snapshot:"computer-use-preview-2025-03-11",description:"Specialized model for computer use",values:{main:{input:3,output:12},batch:{input:1.5,output:6}},snapshots:[{name:"computer-use-preview-2025-03-11",values:{main:{input:3,output:12},batch:{input:1.5,output:6}}}]}]},{title:"Audio tokens",price_type:"Audio tokens",show_batch:!1,show_price_unit:!0,show_snapshots:!0,columns:[{name:"input",label:"Input"},{name:"cached_input",label:"Cached input"},{name:"output",label:"Output"}],items:[{name:"gpt-4o-audio-preview",current_snapshot:"gpt-4o-audio-preview-2024-12-17",description:"Audio model for Chat Completions",values:{main:{input:40,output:80}},snapshots:[{name:"gpt-4o-audio-preview-2024-12-17",values:{main:{input:40,output:80}}},{name:"gpt-4o-audio-preview-2024-10-01",values:{main:{input:100,output:200}}}]},{name:"gpt-4o-mini-audio-preview",current_snapshot:"gpt-4o-mini-audio-preview-2024-12-17",values:{main:{input:10,output:20}},snapshots:[{name:"gpt-4o-mini-audio-preview-2024-12-17",values:{main:{input:10,output:20}}}]},
{name:"gpt-4o-realtime-preview",current_snapshot:"gpt-4o-realtime-preview-2024-12-17",description:"Audio model for Realtime API",values:{main:{input:40,cached_input:2.5,output:80}},snapshots:[{name:"gpt-4o-realtime-preview-2024-12-17",values:{main:{input:40,cached_input:2.5,output:80}}},{name:"gpt-4o-realtime-preview-2024-10-01",values:{main:{input:100,cached_input:20,output:200}}}]},{name:"gpt-4o-mini-realtime-preview",current_snapshot:"gpt-4o-mini-realtime-preview-2024-12-17",values:{main:{input:10,cached_input:.3,output:20}},snapshots:[{name:"gpt-4o-mini-realtime-preview-2024-12-17",values:{main:{input:10,cached_input:.3,output:20}}}]}]}]},X8={name:"Fine tuning",subsections:[{show_batch:!0,show_price_unit:!0,show_snapshots:!1,columns:[{name:"training",label:"Training"},{name:"input",label:"Input"},{name:"cached_input",label:"Cached input"},{name:"output",label:"Output"}],items:[{name:"gpt-4o-2024-08-06",values:{main:{training:25,input:3.75,cached_input:1.875,output:15},batch:{input:1.875,output:7.5}}},{name:"gpt-4o-mini-2024-07-18",values:{main:{training:3,input:.3,cached_input:.15,output:1.2},batch:{input:.15,output:.6}}},{name:"gpt-3.5-turbo",values:{main:{training:8,input:3,output:6},batch:{input:1.5,output:3}}},{name:"davinci-002",values:{main:{training:6,input:12,output:12},batch:{input:6,output:6}}},{name:"babbage-002",values:{main:{training:.4,input:1.6,output:1.6},batch:{input:.8,output:.8}}}]}]},Q8={name:"Built-in tools",subtitle:"The tokens used for built-in tools are billed at the chosen model's per-token rates.\nGB refers to binary gigabytes of storage (also known as gibibyte), where 1GB is 2^30 bytes.",subsections:[{show_batch:!1,show_price_unit:!1,show_snapshots:!1,columns:[{name:"cost",label:"Cost"}],item_type:"Tool",items:[{name:"Code Interpreter",units:{cost:"session"},values:{main:{cost:.03}}},{name:"File Search Storage",units:{cost:"GB/day (1GB free)"},values:{main:{cost:.1}}},{name:"File Search Tool Call (Responses API only*)",units:{cost:"1k calls (*Does not apply on Assistants API)"},values:{main:{cost:2.5}}},
{name:"Web Search",units:{cost:"Web search tool pricing is inclusive of tokens used to synthesize information\n from the web. Pricing depends on model and search context size. See below."},values:{main:{cost:" "}}}]}]},e_={name:"Web search",subtitle:"Web search is a built-in tool with pricing that depends on both the model used and the search context size.",subsections:[{show_batch:!1,show_price_unit:!1,show_snapshots:!1,columns:[{name:"context_size",label:"Search context size",text_align:"left"},{name:"cost",label:"Cost"}],items:[{name:"gpt-4o or gpt-4o-search-preview",units:{cost:"1k calls"},values:{main:{context_size:"low",cost:30}}},{name:"gpt-4o or gpt-4o-search-preview",units:...
```
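If you do go to the trouble of extracting that object, the numbers appear to be USD per 1M tokens (that unit is my assumption; it isn't stated in the data itself). Here's a minimal sketch of what you could do with it once parsed, using rates copied from the sample above, with `PRICES` and `estimate_cost` being my own hypothetical names:

```python
# Hypothetical cost estimator over a pricing table scraped from the
# platform site's bundle. Rates copied from the sample above; the
# "USD per 1M tokens" unit is an assumption, not documented metadata.

PRICES = {
    "gpt-4o":      {"input": 2.5,  "cached_input": 1.25,  "output": 10},
    "gpt-4o-mini": {"input": 0.15, "cached_input": 0.075, "output": 0.6},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int,
                  cached_tokens: int = 0) -> float:
    """Estimate USD cost for one request, billing cached input separately."""
    p = PRICES[model]
    fresh = input_tokens - cached_tokens  # input tokens not served from cache
    return (fresh * p["input"]
            + cached_tokens * p["cached_input"]
            + output_tokens * p["output"]) / 1_000_000

# e.g. 10k input tokens (2k of them cached), 1k output on gpt-4o-mini
print(round(estimate_cost("gpt-4o-mini", 10_000, 1_000, 2_000), 6))  # 0.00195
```

The catch, as noted: minified identifiers like `K8` and `X8` can change on any redeploy, so anything parsing that bundle needs to match structure, not names.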

It would be wonderful if they could add it to the already existing list models API (`GET /v1/models`).
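Agreed, that would be the natural home for it. Today each entry from the list models endpoint carries only minimal metadata, roughly the shape below (the `created` timestamp here is just illustrative), so there is currently no pricing field anywhere in the response:

```python
# Approximate shape of one entry from GET https://api.openai.com/v1/models.
# Field values are illustrative; the point is which fields exist at all.
model_entry = {
    "id": "gpt-4o-mini",
    "object": "model",
    "created": 1721172741,   # illustrative Unix timestamp
    "owned_by": "system",
}

# No per-token rates, no batch rates, nothing pricing-related:
print("pricing" in model_entry)  # False
```

Adding a `pricing` object per model (or per snapshot) would be backward compatible, since clients generally ignore unknown fields.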
