More detailed information about models

Regarding:

List models

get https://api.openai.com/v1/models

Could you please extend this with a friendly name, model characteristics, etc.?

That way extending applications with new models would be way easier.

The question is how to make the data structure future-safe.

Of course, to use new features the code would have to be changed anyway.

2 Likes

Regarding:

List models

GET https://api.openai.com/dashboard/models

Could you please extend your API exploration to this endpoint?

Add a model ID to the path to get just one model instead of 3000+ lines.

That way, extending applications may be possible, if you can anticipate the existence of future feature flags in the “features” array.

The question is how to make the data structure future-safe against new strings that act like boolean gates. You might not have been able to anticipate a new model that added "reasoning_effort_none".

Thus, of course, to use new feature strings that appear in the list of features you receive, the code would have to be changed anyway.
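One defensive pattern (a sketch only; the field names and flag strings below are assumptions, not the real dashboard schema) is to treat the “features” array as an open set of opt-in strings: act on the flags you know, and surface the rest instead of failing on them.

```python
# Sketch: tolerant handling of an open-ended "features" string array.
# KNOWN_FEATURES and the sample payload are illustrative assumptions.
KNOWN_FEATURES = {"reasoning_effort_none", "verbosity", "vision"}

def supported(model_obj: dict) -> dict:
    """Split a model's feature strings into known flags and unknown leftovers."""
    features = set(model_obj.get("features", []))
    return {
        "known": features & KNOWN_FEATURES,    # safe to act on
        "unknown": features - KNOWN_FEATURES,  # log these, don't crash
    }

model = {"id": "gpt-x", "features": ["verbosity", "some_future_flag"]}
caps = supported(model)
# caps["known"] == {"verbosity"}; caps["unknown"] == {"some_future_flag"}
```

Unknown strings still require code changes to actually use, but at least they never break existing parsing.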

Read up here:


In terms of “friendly names”, you can employ only “aliases” without dates. Those aliases are not indicated by the models endpoint, either when they are returned or by where they point. You’d need some object that relates each “alias” (short name) back to the full “snapshot” (the dated version, which is not actually a snapshot in time) that is being run, so that you might present only the alias in a UI.
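Such an object has to be maintained by hand, since the API doesn’t expose the relationship. A minimal sketch (the two mappings below are examples, not an authoritative list):

```python
# Sketch: a hand-maintained map relating a short "alias" to the dated
# snapshot it currently points at, so a UI can show the alias while the
# application pins the snapshot. Entries are illustrative examples.
ALIAS_TO_SNAPSHOT = {
    "gpt-4o": "gpt-4o-2024-08-06",
    "o3": "o3-2025-04-16",
}

def pin(model: str) -> str:
    """Resolve an alias to its snapshot; pass dated names through unchanged."""
    return ALIAS_TO_SNAPSHOT.get(model, model)
```

The pass-through default means already-pinned snapshot names survive the lookup untouched.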

Here’s hours of labor compiling and verifying where OpenAI documentation is lacking, which you get for free today in consumable format. (Function support, strict functions, custom functions, json_object, json_schema, patches tool support, token costs for standard/priority/batch, cached pricing and whether 24-hour caching applies per-model, audio and image costs, deprecation warnings and auto-shutoffs, etc., I will hold out, as actual work that benefits someone usually earns +0 likes in this forum.)

Python code, but it mostly avoids non-JSON constructs besides the comments.

MODEL_CAPABILITIES = {
    # GPT-5.2 Family (special-case sampling behavior at low-effort settings on non-chat)
    "gpt-5.2-2025-12-11":      {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision": "patch", "vision_mult": 1.2, "verbosity": 1, "alias": "gpt-5.2"},
    "gpt-5.2-pro-2025-12-11":  {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision": "patch", "vision_mult": 1.2, "verbosity": 1, "alias": "gpt-5.2-pro"},
    "gpt-5.2-codex":           {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision": "patch", "vision_mult": 1.2, "verbosity": 1},
    "gpt-5.2-chat-latest":     {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 16, "vision": "patch", "vision_mult": 1.2, "verbosity": 1},  # constrained effort


    # GPT-5.1 Family
    "gpt-5.1-2025-11-13":      {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision":  "tile", "tile_tokens": 70, "verbosity": 1, "alias": "gpt-5.1"},
    "gpt-5.1-codex-max":       {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision":  "tile", "vision_mult": 1.0,"tile_tokens": 70},
    "gpt-5.1-codex":           {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision":  "tile", "tile_tokens": 70},
    "gpt-5.1-codex-mini":      {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision": "patch", "vision_mult": 1.2},
    "gpt-5.1-chat-latest":     {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 16, "vision":  "tile", "tile_tokens": 70},

    # GPT-5 Family (verbosity supported on reasoning variants)
    "gpt-5-2025-08-07":        {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision":  "tile", "tile_tokens": 70, "verbosity": 1, "alias": "gpt-5"},
    "gpt-5-mini-2025-08-07":   {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision": "patch", "vision_mult": 1.2, "verbosity": 1, "alias": "gpt-5-mini"},
    "gpt-5-nano-2025-08-07":   {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision": "patch", "vision_mult": 1.5, "verbosity": 1, "alias": "gpt-5-nano"},
    "gpt-5-pro-2025-10-06":    {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision":  "tile", "tile_tokens": 70, "verbosity": 1, "alias": "gpt-5-pro"},
    "gpt-5-codex":             {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision":  "tile", "tile_tokens": 70},
    "gpt-5-chat-latest":       {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 16, "vision":  "tile", "tile_tokens": 70, "sampling": 1},

    # O-Series (no verbosity)
    "o3-pro-2025-06-10":       {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 97.65625, "vision":  "tile", "tile_tokens": 75, "alias": "o3-pro"},
    "o3-2025-04-16":           {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 97.65625, "vision":  "tile", "tile_tokens": 75, "alias": "o3"},
    "o4-mini-2025-04-16":      {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 97.65625, "vision": "patch", "vision_mult": 1.72, "alias": "o4-mini"},
    "o1-pro-2025-03-19":       {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 97.65625, "vision":  "tile", "tile_tokens": 75, "alias": "o1-pro"},
    "o3-mini-2025-01-31":      {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 97.65625, "vision":    None, "alias": "o3-mini"},
    "o1-2024-12-17":           {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 97.65625, "vision":  "tile", "tile_tokens": 75, "alias": "o1"},
    "codex-mini-latest":       {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 97.65625, "vision": "patch", "vision_mult": 1.72},
    # deep-research in model requires internal RAG tool or web_search with context:medium, no user location
    "o3-deep-research-2025-06-26": {"cc": 0, "r": 1, "max_out": 97.65625, "vision": "tile", "tile_tokens": 75, "rag_req": 1, "alias": "o3-deep-research"},
    "o4-mini-deep-research-2025-06-26": {"cc": 0, "r": 1, "max_out": 97.65625, "vision": "patch", "vision_mult": 1.72, "rag_req": 1, "alias": "o4-mini-deep-research"},

    # GPT-4x Family with vision (standard sampling)
    "gpt-4.1-2025-04-14":      {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 32, "vision":  "tile",  "tile_tokens": 85, "sampling": 1, "alias": "gpt-4.1"},
    "gpt-4.1-mini-2025-04-14": {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 32, "vision": "patch", "vision_mult": 1.62, "sampling": 1, "alias": "gpt-4.1-mini"},
    "gpt-4.1-nano-2025-04-14": {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 32, "vision": "patch", "vision_mult": 2.46, "sampling": 1, "alias": "gpt-4.1-nano"},
    "gpt-4o-2024-11-20":       {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 16, "vision":  "tile",  "tile_tokens": 85, "sampling": 1},
    "gpt-4o-2024-08-06":       {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 16, "vision":  "tile",  "tile_tokens": 85, "sampling": 1, "alias": "gpt-4o"},
    "gpt-4o-mini-2024-07-18":  {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 16, "vision":  "tile",  "tile_tokens": 85, "vision_mult": 33.333, "sampling": 1, "alias": "gpt-4o-mini"},
    "gpt-4o-2024-05-13":       {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 4, "vision":  "tile",  "tile_tokens": 85, "sampling": 1},
    "chatgpt-4o-latest":       {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 16, "vision":  "tile",  "tile_tokens": 85, "sampling": 1},
    "gpt-4-turbo-2024-04-09":  {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 4, "vision":  "tile",  "tile_tokens": 85, "sampling": 1, "alias": "gpt-4-turbo"},

    # GPT-4 Turbo / Preview, no vision
    "gpt-4-1106-preview":   {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 4, "sampling": 1},
    "gpt-4-0125-preview":   {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 4, "sampling": 1, "alias": "gpt-4-turbo-preview"},

    # GPT-4 (legacy, no vision)
    "gpt-4-0314":      {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 8, "sampling": 1, "vision": None},
    "gpt-4-0613":      {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 8, "sampling": 1, "vision": None, "alias": "gpt-4"},

    # GPT-3.5 (legacy, no vision)
    "gpt-3.5-turbo-16k":      {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 4, "sampling": 1, "vision": None, "to": "gpt-3.5-turbo-0125"},
    "gpt-3.5-turbo-1106":     {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 4, "sampling": 1, "vision": None},
    "gpt-3.5-turbo-0125":     {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 4, "sampling": 1, "vision": None, "alias": "gpt-3.5-turbo"},

    # Audio models - see also the newer "gpt-audio-2025-08-28" and the mini "-10-06" / "-12-15" models available on cc
    "gpt-4o-audio-preview-2024-12-17":      {"cc": 1, "r": 0, "msg": 4, "plus": 3, "max_out": 16, "vision": "tile", "tile_tokens": 85, "sampling": 1},
    "gpt-4o-audio-preview-2025-06-03":      {"cc": 1, "r": 0, "msg": 4, "plus": 3, "max_out": 16, "vision": "tile", "tile_tokens": 85, "sampling": 1, "alias": "gpt-4o-audio-preview"},
    "gpt-4o-mini-audio-preview":            {"cc": 1, "r": 0, "msg": 4, "plus": 3, "max_out": 16, "vision": "tile", "tile_tokens": 85, "vision_mult": 33.333, "sampling": 1},
    "gpt-4o-mini-audio-preview-2024-12-17": {"cc": 1, "r": 0, "msg": 4, "plus": 3, "max_out": 16, "vision": "tile", "tile_tokens": 85, "vision_mult": 33.333, "sampling": 1},

    # special: "computer-use-preview" takes only screenshot tool return; any CC "search" model takes no images
    "computer-use-preview-2025-03-11":    {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 16, "vision": False, "tile_tokens": 65, "alias": "computer-use-preview"}, # requires truncation:auto

    "gpt-5-search-api-2025-10-14":        {"cc": 1, "r": 0, "msg": 4, "plus": 2, "max_out": 16, "vision": False, "alias": "gpt-5-search-api"},
    "gpt-4o-search-preview-2025-03-11":   {"cc": 1, "r": 0, "msg": 4, "plus": 3, "max_out": 16, "vision": False, "sampling": 1, "alias": "gpt-4o-search-preview"},
    "gpt-4o-mini-search-preview-2025-03-11": {"cc": 1, "r": 0, "msg": 4, "plus": 3, "max_out": 16, "vision": False, "sampling": 1, "alias": "gpt-4o-mini-search-preview"},

    # shutoff chat models, remains for persisting fine-tuning models, history
    "gpt-4.5-preview":                 {"cc": 0, "r": 0, "msg": 4, "plus": 2, "max_out": 4, "vision": "tile", "tile_tokens": 75},  # announced 2025-02-27

    "gpt-4o-audio-preview-2024-10-01": {"cc": 0, "r": 0, "msg": 4, "plus": 3, "max_out": 16, "vision": "tile", "tile_tokens": 85, "sampling": 1},
    "o1-preview-2024-09-12":           {"cc": 0, "r": 0, "msg": 4, "plus": 2, "max_out": 97.65625, "vision": None, "alias": "o1-preview"},
    "o1-mini-2024-09-12":              {"cc": 0, "r": 0, "msg": 4, "plus": 2, "max_out": 97.65625, "vision": None, "alias": "o1-mini"},
    "gpt-4-1106-vision-preview":       {"cc": 0, "r": 0, "msg": 4, "plus": 3, "max_out": 4, "vision": "tile", "tile_tokens": 85, "sampling": 1, "alias": "gpt-4-vision-preview"},
    "gpt-4-32k-0613":                  {"cc": 0, "r": 0, "msg": 4, "plus": 3, "max_out": 8, "sampling": 1, "vision": None, "alias": "gpt-4-32k"},
    "gpt-3.5-turbo-16k-0613":          {"cc": 0, "r": 0, "msg": 4, "plus": 3, "max_out": 4, "sampling": 1, "vision": None},
    "gpt-3.5-turbo-0613":              {"cc": 0, "r": 0, "msg": 4, "plus": 3, "max_out": 4, "sampling": 1, "vision": None},
    "gpt-4-32k-0314":                  {"cc": 0, "r": 0, "msg": 4, "plus": 3, "max_out": 8, "sampling": 1, "vision": None},
    "gpt-3.5-turbo-0301":              {"cc": 0, "r": 0, "msg": 4, "plus": 3, "max_out": 4, "sampling": 1, "vision": None},

    # completions, deprecated
    "gpt-3.5-turbo-instruct": {"c": 1, "sampling": 1, "max_out": 4},
    "davinci-002":            {"c": 1, "sampling": 1, "max_out": 4},
    "babbage-002":            {"c": 1, "sampling": 1, "max_out": 4},
    "text-davinci-003": {"max_out": 4},

    # sneaky models
    "gpt-5.1-mini":                  {"max_out": 125},  # was in spec, presumably 2025-11-13, never delivered

    "gpt-5-pro-alpha-2025-09-15":    {"cc": 0, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision":  "tile", "tile_tokens": 70, "verbosity": 1, "alias": "gpt-5-pro"},
    "gpt-4o-2024-08-06-alpha":       {"cc": 1, "r": 1, "msg": 4, "plus": 3, "max_out": 16, "vision":  "tile",  "tile_tokens": 85, "sampling": 1, "alias": "gpt-4o"},
    "gpt-5-mini-alpha-2025-08-07":   {"cc": 1, "r": 1, "msg": 4, "plus": 2, "max_out": 125, "vision": "patch", "vision_mult": 1.2, "verbosity": 1, "alias": "gpt-5-mini"},
    "gpt-4-turbo-preview-completions": {"c": 1, "sampling": 1, "max_out": 4},
}
# Note: the context window & max_out are useful to gate content and max_output_tokens parameter values, complicated by gpt-5's 272k max input

Or use a different “true” value as the truth for reasoning.effort.

“In terms of “friendly names”, you can employ only “aliases” without dates”

That’s not a friendly name. The friendly (as in user-facing) name for e.g. “gpt-5-mini” would be “GPT-5 mini” as per OpenAI specification. Relying on “magic” would be fraught with errors considering the varying naming syntax of OpenAI models.
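A small curated table avoids that magic entirely (the display names below are illustrative; check OpenAI’s own documentation for the exact marketing names):

```python
# Sketch: a curated override table beats name-parsing "magic" when the
# marketing name doesn't follow the API id's syntax. Entries are examples,
# not an authoritative list.
DISPLAY_NAMES = {
    "gpt-5-mini": "GPT-5 mini",
    "gpt-4o": "GPT-4o",
    "o3-mini": "o3-mini",
}

def display_name(model_id: str) -> str:
    """Prefer the curated name; fall back to the raw id."""
    return DISPLAY_NAMES.get(model_id, model_id)
```

The fallback to the raw id keeps the UI working when a brand-new model appears before the table is updated.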

“Here’s hours of labor compiling and verifying where OpenAI documentation is lacking, that you get for free today in consumable format”

Thanks.

My suggestion was mainly directed at OpenAI, or alternatively at getting hints about moderated repositories that list models and features.

That’s a great idea also.

I put AI to the task a bit more… a near zero-knowledge “pretty name maker” that works for all of OpenAI, Google, Claude… and probably into the future.

"""
Universal AI Model Name Parser - Pure Simplicity
Converts any AI model identifier to a friendly display name.

Philosophy: Remove dates, split on hyphens, capitalize smart. That's it.
"""

import re


class UniversalModelParser:
    """Parse any AI model name using pure simple rules."""
    
    # The ONLY hardcoded strings - for special capitalization
    SPECIAL_CAPS = {
        'gpt': 'GPT',
        'chatgpt': 'ChatGPT',
        'tts': 'TTS',
        'api': 'API',
        'llm': 'LLM',
        'ai': 'AI',
        'ml': 'ML',
        'ocr': 'OCR',
        'nlp': 'NLP',
        'asr': 'ASR',
        'xl': 'XL',
        'xxl': 'XXL',
    }
    
    # Date patterns to remove (lookaheads keep the trailing hyphen, so
    # removing a mid-string date doesn't join the surrounding words)
    DATE_PATTERNS = [
        re.compile(r'-?\d{8}(?=-|$)'),           # 20250929
        re.compile(r'-?\d{4}-\d{2}-\d{2}'),      # 2025-09-29
    ]
    
    # Version codes to remove
    VERSION_CODE_PATTERN = re.compile(r'-[01]\d{3}(?=-|$)')  # -0613, -1106
    
    def __init__(self):
        self.special_caps = self.SPECIAL_CAPS
    
    def _remove_dates(self, name: str) -> str:
        """Remove all date patterns."""
        for pattern in self.DATE_PATTERNS:
            name = pattern.sub('', name)
        return name
    
    def _remove_version_codes(self, name: str) -> str:
        """Remove version codes like -0314, -0613."""
        return self.VERSION_CODE_PATTERN.sub('', name)
    
    def _clean_hyphens(self, name: str) -> str:
        """Clean up multiple/trailing hyphens."""
        name = re.sub(r'-+', '-', name)
        return name.strip('-')
    
    def _combine_claude_version(self, words: list) -> list:
        """
        Claude format: claude-opus-4-6 → [claude, 4.6, opus]
        Combines consecutive numbers into decimal version.
        """
        if not words or words[0] != 'claude':
            return words
        
        result = ['claude']
        numbers = []
        others = []
        
        for word in words[1:]:
            if word.isdigit():
                numbers.append(word)
            else:
                others.append(word)
        
        # Combine first two numbers as version
        if len(numbers) >= 2:
            result.append(f"{numbers[0]}.{numbers[1]}")
            result.extend(numbers[2:])
        elif numbers:
            result.extend(numbers)
        
        result.extend(others)
        return result
    
    def _capitalize_word(self, word: str) -> str:
        """Apply capitalization to a single word."""
        lower = word.lower()
        
        # Special caps dictionary
        if lower in self.special_caps:
            return self.special_caps[lower]
        
        # Numbers with size suffixes: 70b → 70B, 16k → 16K
        if re.match(r'^\d+[kmbtKMBT]$', word):
            return word.upper()
        
        # Preserve '4o' style variants
        if re.match(r'^\d+o$', lower):
            return lower
        
        # Decimal numbers stay as-is
        if re.match(r'^\d+\.\d+$', word):
            return word
        
        # Default: capitalize first letter
        return word.capitalize()
    
    def parse(self, model_name: str) -> str:
        """
        Convert any model name to friendly display name.
        
        Examples:
            gpt-4o-mini-2024-07-18 → GPT 4o Mini
            claude-opus-4-6 → Claude 4.6 Opus
            gemini-2.5-flash-lite → Gemini 2.5 Flash Lite
        """
        if not model_name:
            return model_name
        
        # Normalize to lowercase
        name = model_name.lower()
        
        # Remove dates
        name = self._remove_dates(name)
        
        # Remove version codes
        name = self._remove_version_codes(name)
        
        # Clean hyphens
        name = self._clean_hyphens(name)
        
        # Split on hyphens
        words = [w for w in name.split('-') if w]
        
        if not words:
            return model_name
        
        # Special handling for Claude version format
        if words[0] == 'claude':
            words = self._combine_claude_version(words)
        
        # Capitalize each word
        result = [self._capitalize_word(w) for w in words]
        
        # Join with spaces - NO HYPHENS!
        return ' '.join(result)
    
    def __call__(self, model_name: str) -> str:
        """Allow parser to be called as a function."""
        return self.parse(model_name)


# Singleton instance
parse_model_name = UniversalModelParser()


def get_friendly_name(model_identifier: str) -> str:
    """
    Convert any AI model name to a friendly display name.
    
    Args:
        model_identifier: Model name from any provider
    
    Returns:
        Friendly name with spaces, no hyphens
    
    Examples:
        >>> get_friendly_name('gpt-4o-mini-2024-07-18')
        'GPT 4o Mini'
        >>> get_friendly_name('claude-opus-4-6')
        'Claude 4.6 Opus'
        >>> get_friendly_name('gemini-2.5-flash-preview')
        'Gemini 2.5 Flash Preview'
    """
    return parse_model_name(model_identifier)


if __name__ == "__main__":
    print("Universal AI Model Name Parser - Pure Simplicity")
    print("=" * 80)
    print()
    
    # Test all providers
    print("OpenAI:")
    openai = [
        "gpt-4o-mini-2024-07-18",
        "gpt-4-turbo-2024-04-09", 
        "gpt-3.5-turbo-16k",
        "gpt-5.2-pro-2025-12-11",
        "o3-mini-2025-01-31",
        "chatgpt-4o-latest",
    ]
    for m in openai:
        print(f"  {m:30} → {get_friendly_name(m)}")
    
    print("\nGoogle Gemini:")
    gemini = [
        "gemini-2.5-pro",
        "gemini-2.5-flash-lite",
        "gemini-2.5-flash-preview-tts",
        "gemini-live-2.5-flash-preview",
    ]
    for m in gemini:
        print(f"  {m:30} → {get_friendly_name(m)}")
    
    print("\nAnthropic Claude:")
    claude = [
        "claude-opus-4-6",
        "claude-sonnet-4-5",
        "claude-haiku-4-5",
        "claude-3-7-sonnet-latest",
        "claude-sonnet-4-5-20250929",
        "claude-3-opus-20240229",
    ]
    for m in claude:
        print(f"  {m:30} → {get_friendly_name(m)}")
    
    print("\nOther Providers:")
    others = [
        "llama-3.1-70b-instruct",
        "mistral-large-2407",
        "phi-3-mini-128k",
        "command-r-plus",
    ]
    for m in others:
        print(f"  {m:30} → {get_friendly_name(m)}")
    
    print("\n" + "=" * 80)

Demo output from the code above:

OpenAI:
gpt-4o-mini-2024-07-18 → GPT 4o Mini
gpt-4-turbo-2024-04-09 → GPT 4 Turbo
gpt-3.5-turbo-16k → GPT 3.5 Turbo 16K
gpt-5.2-pro-2025-12-11 → GPT 5.2 Pro
o3-mini-2025-01-31 → O3 Mini
chatgpt-4o-latest → ChatGPT 4o Latest

Google Gemini:
gemini-2.5-pro → Gemini 2.5 Pro
gemini-2.5-flash-lite → Gemini 2.5 Flash Lite
gemini-2.5-flash-preview-tts → Gemini 2.5 Flash Preview TTS
gemini-live-2.5-flash-preview → Gemini Live 2.5 Flash Preview

Anthropic Claude:
claude-opus-4-6 → Claude 4.6 Opus
claude-sonnet-4-5 → Claude 4.5 Sonnet
claude-haiku-4-5 → Claude 4.5 Haiku
claude-3-7-sonnet-latest → Claude 3.7 Sonnet Latest
claude-sonnet-4-5-20250929 → Claude 4.5 Sonnet
claude-3-opus-20240229 → Claude 3 Opus

Other Providers:
llama-3.1-70b-instruct → Llama 3.1 70B Instruct
mistral-large-2407 → Mistral Large 2407
phi-3-mini-128k → Phi 3 Mini 128K
command-r-plus → Command R Plus

I simply did it like this. In other words, a separate friendly name and parameters according to my use cases.

const config_models = {
    // https://platform.openai.com/docs/models
    //"gpt-5.2-pro": {"provider": "openai", "title": "OpenAI - GPT-5.2 Pro", "max_tokens": 128000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "gpt-5.2": {"provider": "openai", "title": "OpenAI - GPT-5.2", "max_tokens": 128000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "gpt-5.1": {"provider": "openai", "title": "OpenAI - GPT-5.1", "max_tokens": 128000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    //"gpt-5-pro": {"provider": "openai", "title": "OpenAI - GPT-5 Pro", "max_tokens": 272000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "gpt-5": {"provider": "openai", "title": "OpenAI - GPT-5", "max_tokens": 128000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "gpt-5-mini": {"provider": "openai", "title": "OpenAI - GPT-5 Mini", "max_tokens": 128000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "gpt-5-nano": {"provider": "openai", "title": "OpenAI - GPT-5 Nano", "max_tokens": 128000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "o4-mini": {"provider": "openai", "title": "OpenAI - O4 Mini", "max_tokens": 100000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "o3": {"provider": "openai", "title": "OpenAI - O3", "max_tokens": 100000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "o3-mini": {"provider": "openai", "title": "OpenAI - O3 Mini", "max_tokens": 100000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "o1": {"provider": "openai", "title": "OpenAI - O1", "max_tokens": 100000, "parameters": false, "instructions": "developer", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "o1-mini": {"provider": "openai", "title": "OpenAI - O1 Mini", "max_tokens": 65536, "parameters": false, "instructions": "system", "effort": true, "verbosity": true, "responses": true, "stream": true},
    "gpt-4.1": {"provider": "openai", "title": "OpenAI - GPT-4.1", "max_tokens": 32768, "parameters": true, "instructions": "system", "effort": false, "verbosity": false, "responses": true, "stream": true},
    "gpt-4.1-mini": {"provider": "openai", "title": "OpenAI - GPT-4.1 Mini", "max_tokens": 32768, "parameters": true, "instructions": "system", "effort": false, "verbosity": false, "responses": true, "stream": true},
    "gpt-4.1-nano": {"provider": "openai", "title": "OpenAI - GPT-4.1 Nano", "max_tokens": 32768, "parameters": true, "instructions": "system", "effort": false, "verbosity": false, "responses": true, "stream": true},
    "gpt-4o": {"provider": "openai", "title": "OpenAI - GPT-4o", "max_tokens": 16384, "parameters": true, "instructions": "system", "effort": false, "verbosity": false, "responses": true, "stream": true},
    "gpt-4o-mini": {"provider": "openai", "title": "OpenAI - GPT-4o Mini", "max_tokens": 16384, "parameters": true, "instructions": "system", "effort": false, "verbosity": false, "responses": true, "stream": true},
    "gpt-4-turbo": {"provider": "openai", "title": "OpenAI - GPT-4 Turbo", "max_tokens": 4096, "parameters": true, "instructions": "system", "effort": false, "verbosity": false, "responses": false, "stream": true},
    "gpt-4": {"provider": "openai", "title": "OpenAI - GPT-4", "max_tokens": 8192, "parameters": true, "instructions": "system", "effort": false, "verbosity": false, "responses": false, "stream": true},
    "gpt-3.5-turbo": {"provider": "openai", "title": "OpenAI - GPT-3.5 Turbo", "max_tokens": 4096, "parameters": true, "instructions": "system", "effort": false, "verbosity": false, "responses": false, "stream": true},
};

The data structure also contains definitions for other providers’ models, which I excluded here.

There are still code-based differences, of course, like different API URL roots for different providers, etc.
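The per-model flags above translate directly into request shaping. Here is a Python rendering of the same idea (the field names mirror the config; the models, values, and the build_request helper itself are illustrative, not a real client API):

```python
# Sketch (Python rendering of the JS config above): shape request kwargs
# from per-model flags. Models and values are illustrative examples.
CONFIG_MODELS = {
    "gpt-5.2": {"parameters": False, "instructions": "developer",
                "effort": True, "max_tokens": 128000},
    "gpt-4.1": {"parameters": True, "instructions": "system",
                "effort": False, "max_tokens": 32768},
}

def build_request(model: str, prompt: str, system: str) -> dict:
    cfg = CONFIG_MODELS[model]
    req = {
        "model": model,
        "max_output_tokens": cfg["max_tokens"],
        # the role name for instructions differs per model family
        "messages": [{"role": cfg["instructions"], "content": system},
                     {"role": "user", "content": prompt}],
    }
    if cfg["parameters"]:  # sampling controls only where supported
        req["temperature"] = 0.7
    if cfg["effort"]:      # reasoning models take an effort knob
        req["reasoning"] = {"effort": "medium"}
    return req
```

Keeping these differences in data rather than branching code is what makes adding a new model a one-line change.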

1 Like