List of models and capabilities, JSON formatted:

{
  "object": "list",
  "data": [
    {
      "object": "model",
      "id": "gpt-4o-audio-preview-2024-12-17",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "audio",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "dall-e-3",
      "supported_methods": [],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "text-embedding-3-large",
      "supported_methods": [
        "embedding"
      ],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "dall-e-2",
      "supported_methods": [],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "o4-mini-2025-04-16",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "streaming",
        "function_calling",
        "developer_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "reasoning_effort",
        "detailed_reasoning_summary",
        "file_content"
      ],
      "max_tokens": 256000
    },
    {
      "object": "model",
      "id": "gpt-4o-audio-preview-2024-10-01",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "audio",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "gpt-4-32k",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 32768
    },
    {
      "object": "model",
      "id": "o4-mini",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "streaming",
        "function_calling",
        "developer_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "reasoning_effort",
        "detailed_reasoning_summary",
        "file_content"
      ],
      "max_tokens": 256000
    },
    {
      "object": "model",
      "id": "gpt-4.1-nano",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4_1"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 32768
    },
    {
      "object": "model",
      "id": "gpt-4.1-nano-2025-04-14",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4_1"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 32768
    },
    {
      "object": "model",
      "id": "gpt-4o-realtime-preview-2024-10-01",
      "supported_methods": [
        "realtime"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-4o-realtime-preview",
      "supported_methods": [
        "realtime"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "babbage-002",
      "supported_methods": [
        "completions"
      ],
      "groups": [],
      "features": [
        "streaming",
        "advanced_config"
      ],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "gpt-4-turbo-preview",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-4-32k-0613",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 32768
    },
    {
      "object": "model",
      "id": "tts-1-hd-1106",
      "supported_methods": [
        "audio.speech"
      ],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "gpt-4-0125-preview",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-4",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 8192
    },
    {
      "object": "model",
      "id": "text-embedding-ada-002",
      "supported_methods": [
        "embedding"
      ],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "o3-2025-04-16",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "streaming_if_verified",
        "function_calling",
        "developer_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "reasoning_effort",
        "detailed_reasoning_summary",
        "file_content"
      ],
      "max_tokens": 256000
    },
    {
      "object": "model",
      "id": "tts-1-hd",
      "supported_methods": [
        "audio.speech"
      ],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "gpt-4o-mini-audio-preview",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "audio",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "gpt-4o-audio-preview",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "audio",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "o1-preview-2024-09-12",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "streaming"
      ],
      "max_tokens": 32768
    },
    {
      "object": "model",
      "id": "o3",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "streaming_if_verified",
        "function_calling",
        "developer_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "reasoning_effort",
        "detailed_reasoning_summary",
        "file_content"
      ],
      "max_tokens": 256000
    },
    {
      "object": "model",
      "id": "gpt-4o-mini-realtime-preview",
      "supported_methods": [
        "realtime"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-4.1-mini",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4_1"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "web_search",
        "file_content"
      ],
      "max_tokens": 32768
    },
    {
      "object": "model",
      "id": "gpt-4o-mini-realtime-preview-2024-12-17",
      "supported_methods": [
        "realtime"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-3.5-turbo-instruct-0914",
      "supported_methods": [
        "completions"
      ],
      "groups": [
        "gpt_3"
      ],
      "features": [
        "streaming",
        "advanced_config"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-4o-mini-search-preview",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "system_message",
        "response_json_schema",
        "web_search"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "gpt-4.1-mini-2025-04-14",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4_1"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "web_search",
        "file_content"
      ],
      "max_tokens": 32768
    },
    {
      "object": "model",
      "id": "tts-1-1106",
      "supported_methods": [
        "audio.speech"
      ],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "chatgpt-4o-latest",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [],
      "features": [
        "streaming",
        "system_message",
        "image_content",
        "advanced_config"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "davinci-002",
      "supported_methods": [
        "completions"
      ],
      "groups": [],
      "features": [
        "streaming",
        "advanced_config"
      ],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "gpt-3.5-turbo-1106",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_3"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-4o-search-preview",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "system_message",
        "response_json_schema",
        "web_search"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "gpt-4-turbo",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-4o-realtime-preview-2024-12-17",
      "supported_methods": [
        "realtime"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-3.5-turbo-instruct",
      "supported_methods": [
        "completions"
      ],
      "groups": [
        "gpt_3"
      ],
      "features": [
        "streaming",
        "advanced_config"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-3.5-turbo",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_3"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-4o-mini-search-preview-2025-03-11",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "system_message",
        "response_json_schema",
        "web_search"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "gpt-4o-2024-11-20",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "web_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "whisper-1",
      "supported_methods": [
        "audio.transcription"
      ],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "gpt-4o-2024-05-13",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "web_search",
        "file_content"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-4-turbo-2024-04-09",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "gpt-3.5-turbo-16k",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_3"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 16385
    },
    {
      "object": "model",
      "id": "o1-preview",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "streaming"
      ],
      "max_tokens": 32768
    },
    {
      "object": "model",
      "id": "gpt-4-0613",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 8192
    },
    {
      "object": "model",
      "id": "computer-use-preview-2025-03-11",
      "supported_methods": [],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "computer-use-preview",
      "supported_methods": [],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "gpt-4.5-preview",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4_5"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "gpt-4.5-preview-2025-02-27",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4_5"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "gpt-4o-search-preview-2025-03-11",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "system_message",
        "response_json_schema",
        "web_search"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "tts-1",
      "supported_methods": [
        "audio.speech"
      ],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "omni-moderation-2024-09-26",
      "supported_methods": [],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "text-embedding-3-small",
      "supported_methods": [
        "embedding"
      ],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "gpt-4o-mini-tts",
      "supported_methods": [
        "audio.speech"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming"
      ],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "gpt-4o",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "web_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "o3-mini",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "streaming",
        "function_calling",
        "developer_message",
        "response_json_object",
        "response_json_schema",
        "reasoning_effort",
        "detailed_reasoning_summary",
        "file_search"
      ],
      "max_tokens": 100000
    },
    {
      "object": "model",
      "id": "o1-2024-12-17",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "function_calling",
        "developer_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "reasoning_effort",
        "detailed_reasoning_summary",
        "file_search",
        "file_content"
      ],
      "max_tokens": 100000
    },
    {
      "object": "model",
      "id": "o3-mini-2025-01-31",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "streaming",
        "function_calling",
        "developer_message",
        "response_json_object",
        "response_json_schema",
        "reasoning_effort",
        "detailed_reasoning_summary",
        "file_search"
      ],
      "max_tokens": 100000
    },
    {
      "object": "model",
      "id": "gpt-4o-mini",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "web_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "o1-pro-2025-03-19",
      "supported_methods": [
        "responses"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "function_calling",
        "developer_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "reasoning_effort",
        "detailed_reasoning_summary"
      ],
      "max_tokens": 100000
    },
    {
      "object": "model",
      "id": "o1",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "function_calling",
        "developer_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "reasoning_effort",
        "detailed_reasoning_summary",
        "file_search",
        "file_content"
      ],
      "max_tokens": 100000
    },
    {
      "object": "model",
      "id": "gpt-4o-2024-08-06",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "web_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "gpt-4.1",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4_1"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "web_search",
        "file_content"
      ],
      "max_tokens": 32768
    },
    {
      "object": "model",
      "id": "gpt-4o-transcribe",
      "supported_methods": [
        "audio.transcription"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming"
      ],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "gpt-4.1-2025-04-14",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4_1"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "web_search",
        "file_content"
      ],
      "max_tokens": 32768
    },
    {
      "object": "model",
      "id": "o1-pro",
      "supported_methods": [
        "responses"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "function_calling",
        "developer_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "reasoning_effort",
        "detailed_reasoning_summary"
      ],
      "max_tokens": 100000
    },
    {
      "object": "model",
      "id": "gpt-4o-mini-2024-07-18",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "response_json_schema",
        "advanced_config",
        "file_search",
        "web_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "gpt-4o-mini-transcribe",
      "supported_methods": [
        "audio.transcription"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming"
      ],
      "max_tokens": 2049
    },
    {
      "object": "model",
      "id": "o1-mini",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "streaming"
      ],
      "max_tokens": 65536
    },
    {
      "object": "model",
      "id": "gpt-4o-mini-audio-preview-2024-12-17",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "gpt_4o"
      ],
      "features": [
        "streaming",
        "audio",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "advanced_config",
        "file_search",
        "file_content"
      ],
      "max_tokens": 16384
    },
    {
      "object": "model",
      "id": "gpt-3.5-turbo-0125",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_3"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "o1-mini-2024-09-12",
      "supported_methods": [
        "chat.completions"
      ],
      "groups": [
        "reasoning"
      ],
      "features": [
        "streaming"
      ],
      "max_tokens": 65536
    },
    {
      "object": "model",
      "id": "gpt-4-1106-preview",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_4"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "image_content",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 4096
    },
    {
      "object": "model",
      "id": "omni-moderation-latest",
      "supported_methods": [],
      "groups": [],
      "features": [],
      "max_tokens": 2049
    }
  ]
}
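In case it's useful, the dump above can be queried with a few lines of Node. `modelsWithFeature` is just an illustrative helper, and `sample` mirrors the shape of the list, trimmed to two entries:

```javascript
// Filter the capability list by feature. `sample` has the same shape as
// the dump above, trimmed to two entries for illustration.
const sample = {
  object: 'list',
  data: [
    { id: 'gpt-4o', features: ['streaming', 'web_search', 'file_search'] },
    { id: 'tts-1', features: [] },
  ],
};

// Return the sorted ids of all models that list a given feature.
function modelsWithFeature(list, feature) {
  return list.data
    .filter((m) => m.features.includes(feature))
    .map((m) => m.id)
    .sort();
}

console.log(modelsWithFeature(sample, 'web_search')); // [ 'gpt-4o' ]
```

Pointing it at the full dump instead of `sample` works the same way.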

You're welcome.


Also, a hint to OpenAI to update the models API.

https://api.openai.com/v1/models

gives us only this:

```json
{
  "object": "list",
  "data": [
    {
      "id": "gpt-4o-audio-preview-2024-12-17",
      "object": "model",
      "created": 1734034239,
      "owned_by": "system"
    },
    ...
```
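For reference, that public endpoint is easy to hit from Node 18+ (global `fetch`), assuming `OPENAI_API_KEY` is set; `extractIds` and `listModels` are made-up helper names:

```javascript
// Pure helper: pull the sorted model ids out of the { object: "list",
// data: [...] } envelope the endpoint returns.
function extractIds(payload) {
  return payload.data.map((m) => m.id).sort();
}

// Query GET https://api.openai.com/v1/models; only id, created and
// owned_by come back, as shown above.
async function listModels() {
  const res = await fetch('https://api.openai.com/v1/models', {
    headers: { authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return extractIds(await res.json());
}
```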

In the meantime you can use this and hope I won't miss a release (which is pretty unlikely, haha):

https://raw.githubusercontent.com/Intelligent-Intern/openai-model-capabilities/refs/heads/main/latest.json


Nice, interesting to see that you found this as well. You are just missing:

    {
      "object": "model",
      "id": "gpt-3.5-turbo-16k-0613",
      "supported_methods": [
        "chat.completions",
        "responses"
      ],
      "groups": [
        "gpt_3"
      ],
      "features": [
        "streaming",
        "function_calling",
        "parallel_tool_calls",
        "system_message",
        "response_json_object",
        "advanced_config"
      ],
      "max_tokens": 2049
    }
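One way to catch such missing entries automatically: diff the id sets from the public `/v1/models` endpoint against a capability dump. `missingIds` is a hypothetical helper, and the two sample lists below are trimmed for illustration:

```javascript
// Return ids present in `reference` but absent from `candidate`, e.g. to
// spot models (like gpt-3.5-turbo-16k-0613) missing from a capability dump.
function missingIds(reference, candidate) {
  const have = new Set(candidate.map((m) => m.id));
  return reference
    .map((m) => m.id)
    .filter((id) => !have.has(id))
    .sort();
}

const publicList = [{ id: 'gpt-4o' }, { id: 'gpt-3.5-turbo-16k-0613' }];
const capabilityList = [{ id: 'gpt-4o' }];
console.log(missingIds(publicList, capabilityList)); // [ 'gpt-3.5-turbo-16k-0613' ]
```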

Also, if you label your code block like,

```json
...
```

You’ll get nice syntax highlighting.

Also, if you want to tell people about the other models endpoint, we could crowd-source this thing and get a comprehensive list of models and their functions.

This is the Node script I am using to collect the data from the internal OpenAI APIs.

```javascript
#!/usr/bin/env node
// openai_internal_apis.js

const fs = require('fs');
const path = require('path');
const puppeteer = require('puppeteer');
const yargs = require('yargs/yargs');

const TOKEN = process.env.OPENAI_API_KEY;
if (!TOKEN) {
  console.error('Error: set OPENAI_API_KEY in your environment.');
  process.exit(1);
}

const argv = yargs(process.argv.slice(2))
  .usage('Usage: $0 <endpoint> [output.json] [options]')
  .demandCommand(1, 'You must provide the API endpoint URL.')
  .option('sort', {
    alias: 's',
    type: 'string',
    description: 'Dot-separated path to sort on, e.g., data.id'
  })
  .help()
  .argv;

const [endpoint, outputPath] = argv._;
const sortArg = argv.sort;

(async () => {
  const browser = await puppeteer.launch({ headless: true, args: ['--no-sandbox'] });
  const page = await browser.newPage();

  await page.setUserAgent(
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 ' +
    '(KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36'
  );

  await page.setExtraHTTPHeaders({
    'accept': '*/*',
    'accept-language': 'en-US,en;q=0.9',
    'authorization': `Bearer ${TOKEN}`,
    'content-type': 'application/json',
    'dnt': '1',
    'origin': 'https://platform.openai.com',
    'priority': 'u=1, i',
    'referer': 'https://platform.openai.com/'
  });

  const response = await page.goto(endpoint, { waitUntil: 'networkidle2' });
  const text = await response.text();

  let payload;
  try {
    payload = JSON.parse(text);
  } catch (err) {
    console.error('❌ Failed to parse JSON:', err);
    console.error('Raw response:', text);
    await browser.close();
    process.exit(1);
  }

  if (sortArg) {
    const parts = sortArg.split('.');
    const arrayKey = parts.shift();
    const keyPath = parts;

    if (Array.isArray(payload[arrayKey])) {
      const getNested = (obj, pathArr) =>
        pathArr.reduce((o, key) => (o && o[key] != null ? o[key] : ''), obj);

      payload[arrayKey].sort((a, b) => {
        const aVal = String(getNested(a, keyPath));
        const bVal = String(getNested(b, keyPath));
        return aVal.localeCompare(bVal);
      });
    } else {
      console.warn(`Warning: payload.${arrayKey} is not an array, skipping sort.`);
    }
  }

  const out = JSON.stringify(payload, null, 2);
  if (outputPath) {
    const dir = path.dirname(outputPath);
    if (dir && dir !== '.' && !fs.existsSync(dir)) fs.mkdirSync(dir, { recursive: true });
    fs.writeFileSync(outputPath, out, 'utf-8');
    console.log(`✅ Output written to ${outputPath}`);
  } else {
    console.log(out);
  }

  await browser.close();
})();
```

It is called like this:

```shell
node openai_internal_apis.js <path to internal models API> models.json --sort=data.id
```

You just need Node.js, npm, puppeteer, and yargs.

If more people post their models pages, we might find occasional variations.
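
As a sketch of what that crowd-sourcing could look like (the helper `mergeModelLists` is my invention, not part of the script above): merge several exported `models.json` payloads by model id and flag any model whose `features` list differs between submissions.

```javascript
// Merge several models.json payloads (shaped like the script's output)
// and report model ids whose "features" arrays disagree between submissions.
function mergeModelLists(payloads) {
  const byId = new Map(); // id -> { entry, featureSets }
  for (const payload of payloads) {
    for (const model of payload.data) {
      // Normalize the feature list so ordering differences don't count.
      const features = [...model.features].sort().join(',');
      const seen = byId.get(model.id);
      if (!seen) {
        byId.set(model.id, { entry: model, featureSets: new Set([features]) });
      } else {
        seen.featureSets.add(features);
      }
    }
  }
  // Models with more than one distinct feature set are the "variations".
  const variations = [...byId.entries()]
    .filter(([, v]) => v.featureSets.size > 1)
    .map(([id]) => id);
  return { models: [...byId.values()].map(v => v.entry), variations };
}
```

Each organization's list would just get fed in as one more payload, and the `variations` array shows where accounts see different capabilities for the same model.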

was just too lazy.. changed it

Could have done that… But isn’t that like a hostile act? :wink:

I remember when people stole some libs I wrote about 20 years ago; they just included them and used my server as a CDN, so I delivered a different version that redirected to my page, haha.

Validating schema (as of March 20, anyway):

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "object": {
      "type": "string",
      "const": "list"
    },
    "data": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "object": {
            "type": "string",
            "const": "model"
          },
          "id": {
            "type": "string"
          },
          "supported_methods": {
            "type": "array",
            "items": {
              "type": "string",
              "enum": [
                "chat.completions",
                "completions",
                "responses",
                "embedding",
                "audio.transcription",
                "audio.speech",
                "realtime"
              ]
            },
            "uniqueItems": true
          },
          "groups": {
            "type": "array",
            "items": {
              "type": "string"
            },
            "uniqueItems": true
          },
          "features": {
            "type": "array",
            "items": {
              "type": "string",
              "enum": [
                "streaming",
                "audio",
                "function_calling",
                "parallel_tool_calls",
                "system_message",
                "advanced_config",
                "file_search",
                "file_content",
                "image_content",
                "response_json_object",
                "response_json_schema",
                "developer_message",
                "reasoning_effort",
                "web_search"
              ]
            },
            "uniqueItems": true
          },
          "max_tokens": {
            "type": "integer",
            "minimum": 0
          }
        },
        "required": [
          "object",
          "id",
          "supported_methods",
          "groups",
          "features",
          "max_tokens"
        ],
        "additionalProperties": false
      }
    }
  },
  "required": [
    "object",
    "data"
  ],
  "additionalProperties": false
}
```
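
For anyone who wants a quick sanity check of a collected `models.json` against that schema without pulling in a full validator like Ajv, a minimal hand-rolled check of the required keys and the `supported_methods` enum could look like this (`checkEntry` is a hypothetical helper, not part of the script above):

```javascript
// Minimal structural check mirroring the schema above: required keys
// present, supported_methods drawn from the known enum, max_tokens >= 0.
const METHODS = new Set([
  'chat.completions', 'completions', 'responses', 'embedding',
  'audio.transcription', 'audio.speech', 'realtime'
]);
const REQUIRED = ['object', 'id', 'supported_methods', 'groups', 'features', 'max_tokens'];

function checkEntry(model) {
  const errors = [];
  for (const key of REQUIRED) {
    if (!(key in model)) errors.push(`missing key: ${key}`);
  }
  if (model.object !== 'model') errors.push('object must be "model"');
  for (const m of model.supported_methods ?? []) {
    if (!METHODS.has(m)) errors.push(`unknown method: ${m}`);
  }
  if (!Number.isInteger(model.max_tokens) || model.max_tokens < 0) {
    errors.push('max_tokens must be a non-negative integer');
  }
  return errors; // empty array means the entry passed
}
```

A real validator would also enforce `uniqueItems`, the `features` enum, and `additionalProperties: false`; this is just a smoke test.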

It is the organization's models list that is returned. Like, your list probably won't have gpt-4-32k-0314 or whatever.

No, I don't think so. But I'm also not sharing the internal API endpoint, in case OpenAI doesn't want people accessing it.

I also think the feature_flags endpoint is interesting.

```json
{
  "onboard_evals_ui_rebase": {
    "enabled": true,
    "version": 2,
    "optional": false,
    "default_value": true,
    "display_name": "Evals UI API Rebase",
    "description": "Uses the latest version of the EvalAPI for dashboard Evals"
  },
  "sdk_required": {
    "enabled": true,
    "version": 1
  },
  "onboard_eval_v2": {
    "enabled": true,
    "version": 1
  },
  "onboard_generate": {
    "enabled": true
  },
  "onboard_generate_prompt_editing": {
    "enabled": true
  },
  "block_user_key_access_settings_enabled": {
    "enabled": true
  },
  "allow_setting_stored_completions_default": {
    "enabled": true
  },
  "allow_spend_limit_on_default_project": {
    "enabled": true
  },
  "enable_project_users_on_default_project": {
    "enabled": true
  },
  "share_api_data_incentives_v1": {
    "enabled": true
  },
  "customer_api_mtls": {
    "enabled": true
  },
  "customer_api_can_use_ip_allowlist": {
    "enabled": false
  },
  "fine_grain_admin_key": {
    "enabled": true
  },
  "platform_groups": {
    "enabled": false
  },
  "onboard_usage_api_beta": {
    "enabled": true,
    "version": 1
  },
  "onboard_finetuneapi_allow_method_select": {
    "enabled": true
  },
  "onboard_docs_new_model_page": {
    "enabled": true,
    "version": 1
  },
  "onboard_responses_reasoning": {
    "enabled": true
  },
  "enable_trusted_organization_flow": {
    "enabled": true
  }
}
```

I'm hoping it will show new features before they are announced and released. For example, I see there's a flag for customer_api_can_use_ip_allowlist, which is false for me. I hadn't heard of this before, but maybe I just missed it?
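
One way to catch that kind of thing early would be to diff snapshots of the feature_flags payload over time (`diffFlags` is a hypothetical helper, not something in the script above):

```javascript
// Compare two feature_flags snapshots (plain objects keyed by flag name)
// and report flags that were added, removed, or changed "enabled" state.
function diffFlags(oldFlags, newFlags) {
  const diff = { added: [], removed: [], toggled: [] };
  for (const name of Object.keys(newFlags)) {
    if (!(name in oldFlags)) diff.added.push(name);
    else if (oldFlags[name].enabled !== newFlags[name].enabled) diff.toggled.push(name);
  }
  for (const name of Object.keys(oldFlags)) {
    if (!(name in newFlags)) diff.removed.push(name);
  }
  return diff;
}
```

Run the collection script on a schedule, keep the dated JSON files, and a new flag name or a false-to-true toggle is exactly the early signal you'd want.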

It seems to be saying there is a feature where developers can ensure their API key is only accepted if the request comes from their IP address. That would be a nice feature to have.

I'm not sure what platform_groups is, though.

It might be a special feature for selected customers.

If I remember correctly, there was previously an issue with IP addresses changing unpredictably.

A feature for not-you-the-average-API-developer (because you are too easily confused a mortal subject, or something): an allowlist of IPs that API calls can be made from, obviously.

Can we enable that somewhere?

You can talk to your Enterprise account manager. :smiley:

Yes, that's what I had written, but why isn't the feature available to everyone, other than the tech-support issue of vibe coders not understanding the difference between a static and a dynamic IP address?

Most probably because that's done manually and Codex can't get it right yet…

I mean, in the end it's in the internal API. It wasn't even published.

OpenAI support confirmed that response_json_object and response_json_schema are not supported with gpt-4.1 so you may need to adjust some of these.

After they let me know, I’m pretty sure support tried to link me to an internal OpenAI document on Notion…

I’ll check it out and add a new version then

A "discontinued" flag would be cool if you still have old models… Maybe discontinuation dates if planned, or even expiry dates?

:smiley: Rue the day

(Maybe need a bot for that)

Hmmm, maybe stuff like that should come from an announcement API.

I mean, in the end, who are they announcing it to? To humans, or to the systems that have to change themselves?

It would appear that the Playground, which receives endpoints and model capabilities in a backend format, disagrees when using gpt-4.1.
