Could you please extend your API exploration to this endpoint?
Add a model ID to the path to get back just one model instead of 3000+ lines:
That way, extending an application may be possible if you can anticipate future feature flags appearing in the “features” array.
The question is how to make the data structure future-proof against new strings that act like boolean gates. You might not have been able to anticipate a new model adding "reasoning_effort_none".
Of course, to actually make use of new feature strings appearing in the list you receive, the code would have to change anyway.
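One defensive pattern for those boolean-gate strings: keep unknown features around instead of dropping or choking on them. A minimal sketch; the `features` field matches the array discussed above, but `KNOWN_FEATURES` and the field names are illustrative assumptions, not the API's actual schema:

```python
# Sketch: treat the "features" array as an open set of boolean gates.
# KNOWN_FEATURES is an illustrative, hand-maintained list - not official.
KNOWN_FEATURES = {"streaming", "function_calling", "json_schema"}

def gate_features(model_info: dict) -> dict:
    """Split a model's feature strings into known gates and unknowns."""
    features = set(model_info.get("features", []))
    return {
        "enabled": features & KNOWN_FEATURES,  # safe to act on today
        "unknown": features - KNOWN_FEATURES,  # e.g. a future "reasoning_effort_none"
    }

model = {"id": "some-model", "features": ["streaming", "reasoning_effort_none"]}
gates = gate_features(model)
print(sorted(gates["unknown"]))
```

Unknown strings can then at least be surfaced in a UI or log, even though acting on a new gate still requires a code change, as noted above.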
Read up here:
In terms of “friendly names”, you can employ only “aliases” without dates, which are indicated neither by the models endpoint when they are returned nor by where they point… You’d need some object that relates each “alias” (short name) back to the full “snapshot” (a dated version, though not truly a snapshot in time) that is actually being run, so you might present only the alias in a UI…
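A minimal sketch of such an object, assuming you maintain the alias → snapshot pairs yourself, since the endpoint does not expose the relationship (the single entry below is an example, not an authoritative list):

```python
# Hand-maintained alias → snapshot table (illustrative entry).
ALIAS_TO_SNAPSHOT = {
    "gpt-4o-mini": "gpt-4o-mini-2024-07-18",
}

def resolve_alias(alias: str) -> str:
    """Return the dated snapshot behind an alias, or the name unchanged."""
    return ALIAS_TO_SNAPSHOT.get(alias, alias)
```

The UI shows the alias; logging and billing reconciliation use whatever `resolve_alias` returns.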
Here’s hours of labor compiling and verifying where OpenAI documentation is lacking, which you get for free today in consumable format. (The support functions, strict functions, custom functions, json_object, json_schema, patches tool support, token costs for standard/priority/batch, cached pricing (per-model, if 24 hr), audio and image costs, deprecation warnings and auto-shutoffs, etc., I will hold back, as actual work that benefits someone usually means +0 likes in this forum):
Python code, but it should contain nothing non-JSON besides the comments.
“In terms of “friendly names”, you can employ only “aliases” without dates”
That’s not a friendly name. The friendly (as in user-facing) name for e.g. “gpt-5-mini” would be “GPT-5 mini”, as per OpenAI’s own naming. Relying on “magic” would be fraught with errors, considering the varying naming syntax of OpenAI models.
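A plain lookup table avoids that magic entirely, at the cost of maintenance. A sketch; the display names below are examples of OpenAI’s user-facing spellings, and anything unlisted falls back to the raw ID rather than a guessed capitalization:

```python
# Explicit model-ID → display-name table; no parsing, no guessing.
DISPLAY_NAMES = {
    "gpt-5-mini": "GPT-5 mini",
    "gpt-4o": "GPT-4o",
}

def display_name(model_id: str) -> str:
    # Unknown IDs are returned unchanged instead of being "prettified" wrong.
    return DISPLAY_NAMES.get(model_id, model_id)
```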
“Here’s hours of labor compiling and verifying where OpenAI documentation is lacking, that you get for free today in consumable format”
Thanks.
My suggestion was mainly directed at OpenAI; alternatively, I was hoping for hints about moderated repositories listing models and features.
I put AI to the task a bit more… a near zero-knowledge “pretty name maker” that works for all of OpenAI, Google, Claude… and probably into the future.
"""
Universal AI Model Name Parser - Pure Simplicity
Converts any AI model identifier to a friendly display name.
Philosophy: Remove dates, split on hyphens, capitalize smart. That's it.
"""
import re
class UniversalModelParser:
    """Parse any AI model name using pure simple rules."""

    # The ONLY hardcoded strings - for special capitalization
    SPECIAL_CAPS = {
        'gpt': 'GPT',
        'chatgpt': 'ChatGPT',
        'tts': 'TTS',
        'api': 'API',
        'llm': 'LLM',
        'ai': 'AI',
        'ml': 'ML',
        'ocr': 'OCR',
        'nlp': 'NLP',
        'asr': 'ASR',
        'xl': 'XL',
        'xxl': 'XXL',
    }

    # Date patterns to remove (lookahead so the following hyphen is kept,
    # otherwise "gpt-4o-20240718-mini" would fuse into "gpt-4omini")
    DATE_PATTERNS = [
        re.compile(r'-?\d{8}(?=-|$)'),        # 20250929
        re.compile(r'-?\d{4}-\d{2}-\d{2}'),   # 2025-09-29
    ]

    # Version codes to remove (lookahead for the same reason)
    VERSION_CODE_PATTERN = re.compile(r'-0\d{3}(?=-|$)')  # -0613, -1106
    def __init__(self):
        self.special_caps = self.SPECIAL_CAPS

    def _remove_dates(self, name: str) -> str:
        """Remove all date patterns."""
        for pattern in self.DATE_PATTERNS:
            name = pattern.sub('', name)
        return name

    def _remove_version_codes(self, name: str) -> str:
        """Remove version codes like -0314, -0613."""
        return self.VERSION_CODE_PATTERN.sub('', name)

    def _clean_hyphens(self, name: str) -> str:
        """Clean up multiple/trailing hyphens."""
        name = re.sub(r'-+', '-', name)
        return name.strip('-')

    def _combine_claude_version(self, words: list) -> list:
        """
        Claude format: claude-opus-4-6 → [claude, 4.6, opus]
        Combines consecutive numbers into a decimal version.
        """
        if not words or words[0] != 'claude':
            return words
        result = ['claude']
        numbers = []
        others = []
        for word in words[1:]:
            if word.isdigit():
                numbers.append(word)
            else:
                others.append(word)
        # Combine first two numbers as version
        if len(numbers) >= 2:
            result.append(f"{numbers[0]}.{numbers[1]}")
            result.extend(numbers[2:])
        elif numbers:
            result.extend(numbers)
        result.extend(others)
        return result

    def _capitalize_word(self, word: str) -> str:
        """Apply capitalization to a single word."""
        lower = word.lower()
        # Special caps dictionary
        if lower in self.special_caps:
            return self.special_caps[lower]
        # Numbers with size suffixes: 70b → 70B, 16k → 16K
        if re.match(r'^\d+[kmbtKMBT]$', word):
            return word.upper()
        # Preserve '4o' style variants
        if re.match(r'^\d+o$', lower):
            return lower
        # Decimal numbers stay as-is
        if re.match(r'^\d+\.\d+$', word):
            return word
        # Default: capitalize first letter
        return word.capitalize()
    def parse(self, model_name: str) -> str:
        """
        Convert any model name to a friendly display name.
        Examples:
            gpt-4o-mini-2024-07-18 → GPT 4o Mini
            claude-opus-4-6 → Claude 4.6 Opus
            gemini-2.5-flash-lite → Gemini 2.5 Flash Lite
        """
        if not model_name:
            return model_name
        # Normalize to lowercase
        name = model_name.lower()
        # Remove dates
        name = self._remove_dates(name)
        # Remove version codes
        name = self._remove_version_codes(name)
        # Clean hyphens
        name = self._clean_hyphens(name)
        # Split on hyphens
        words = [w for w in name.split('-') if w]
        if not words:
            return model_name
        # Special handling for Claude version format
        if words[0] == 'claude':
            words = self._combine_claude_version(words)
        # Capitalize each word
        result = [self._capitalize_word(w) for w in words]
        # Join with spaces - NO HYPHENS!
        return ' '.join(result)

    def __call__(self, model_name: str) -> str:
        """Allow the parser to be called as a function."""
        return self.parse(model_name)
# Singleton instance
parse_model_name = UniversalModelParser()

def get_friendly_name(model_identifier: str) -> str:
    """
    Convert any AI model name to a friendly display name.

    Args:
        model_identifier: Model name from any provider
    Returns:
        Friendly name with spaces, no hyphens
    Examples:
        >>> get_friendly_name('gpt-4o-mini-2024-07-18')
        'GPT 4o Mini'
        >>> get_friendly_name('claude-opus-4-6')
        'Claude 4.6 Opus'
        >>> get_friendly_name('gemini-2.5-flash-preview')
        'Gemini 2.5 Flash Preview'
    """
    return parse_model_name(model_identifier)
if __name__ == "__main__":
    print("Universal AI Model Name Parser - Pure Simplicity")
    print("=" * 80)
    print()
    # Test all providers
    print("OpenAI:")
    openai = [
        "gpt-4o-mini-2024-07-18",
        "gpt-4-turbo-2024-04-09",
        "gpt-3.5-turbo-16k",
        "gpt-5.2-pro-2025-12-11",
        "o3-mini-2025-01-31",
        "chatgpt-4o-latest",
    ]
    for m in openai:
        print(f"  {m:30} → {get_friendly_name(m)}")

    print("\nGoogle Gemini:")
    gemini = [
        "gemini-2.5-pro",
        "gemini-2.5-flash-lite",
        "gemini-2.5-flash-preview-tts",
        "gemini-live-2.5-flash-preview",
    ]
    for m in gemini:
        print(f"  {m:30} → {get_friendly_name(m)}")

    print("\nAnthropic Claude:")
    claude = [
        "claude-opus-4-6",
        "claude-sonnet-4-5",
        "claude-haiku-4-5",
        "claude-3-7-sonnet-latest",
        "claude-sonnet-4-5-20250929",
        "claude-3-opus-20240229",
    ]
    for m in claude:
        print(f"  {m:30} → {get_friendly_name(m)}")

    print("\nOther Providers:")
    others = [
        "llama-3.1-70b-instruct",
        "mistral-large-2407",
        "phi-3-mini-128k",
        "command-r-plus",
    ]
    for m in others:
        print(f"  {m:30} → {get_friendly_name(m)}")
    print("\n" + "=" * 80)
Sample output (excerpt):

Google Gemini:
gemini-2.5-pro → Gemini 2.5 Pro
gemini-2.5-flash-lite → Gemini 2.5 Flash Lite
gemini-2.5-flash-preview-tts → Gemini 2.5 Flash Preview TTS
gemini-live-2.5-flash-preview → Gemini Live 2.5 Flash Preview
Anthropic Claude:
claude-opus-4-6 → Claude 4.6 Opus
claude-sonnet-4-5 → Claude 4.5 Sonnet
claude-haiku-4-5 → Claude 4.5 Haiku
claude-3-7-sonnet-latest → Claude 3.7 Sonnet Latest
claude-sonnet-4-5-20250929 → Claude 4.5 Sonnet
claude-3-opus-20240229 → Claude 3 Opus
Other Providers:
llama-3.1-70b-instruct → Llama 3.1 70B Instruct
mistral-large-2407 → Mistral Large 2407
phi-3-mini-128k → Phi 3 Mini 128K
command-r-plus → Command R Plus