The API reference provides “example code” for many situations, with a model selector and a language selector. That example code has problems, too.
Take for example: o1-preview
- doesn’t support the `developer` message role:

```
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unsupported value: 'messages[0].role' does not support 'developer' with this model.", 'type': 'invalid_request_error', 'param': 'messages[0].role', 'code': 'unsupported_value'}}
```
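A defensive workaround is to downgrade unsupported roles before sending. A minimal sketch, assuming o1-preview accepts only `user` and `assistant` roles (the helper name and role mapping are mine, not from the API reference):

```python
# Hypothetical helper: fold roles o1-preview rejects into plain user messages.
# Assumption: 'developer' (and 'system') are unsupported for this model.
def adapt_messages_for_o1_preview(messages):
    adapted = []
    for m in messages:
        if m.get("role") in ("developer", "system"):
            # Carry the instruction content over as a user message instead.
            adapted.append({"role": "user", "content": m["content"]})
        else:
            adapted.append(dict(m))
    return adapted

msgs = [
    {"role": "developer", "content": "Answer tersely."},
    {"role": "user", "content": "What is 2+2?"},
]
print(adapt_messages_for_o1_preview(msgs)[0]["role"])  # -> user
```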
Take for example: o1
- doesn’t support streaming
```
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unsupported value: 'stream' does not support true with this model. Supported values are: false.", 'type': 'invalid_request_error', 'param': 'stream', 'code': 'unsupported_value'}}
```
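Until the examples are gated per model, callers can guard the parameter themselves. A sketch with an assumed blocklist (the model names listed are illustrative, not an official capability table):

```python
# Assumption: these model names reject stream=True per the 400 error above.
NO_STREAM_MODELS = {"o1"}

def safe_request_kwargs(model, **kwargs):
    """Force stream=False for models known to reject streaming."""
    if kwargs.get("stream") and model in NO_STREAM_MODELS:
        kwargs["stream"] = False
    return kwargs

kw = safe_request_kwargs("o1", stream=True)
print(kw["stream"])  # -> False
```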
Take for example: o1-preview, or chatgpt-4o-latest
- doesn’t support function calling via the `tools` parameter:

```
openai.NotFoundError: Error code: 404 - {'error': {'message': 'tools is not supported in this model. For a list of supported models, refer to https://platform.openai.com/docs/guides/function-calling#models-supporting-function-calling.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
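The same gating idea applies to function calling: strip `tools` (and `tool_choice`) from the request before sending to a model that would 404. A sketch, again with an assumed blocklist drawn only from the models above:

```python
# Assumption: these models return 404 when 'tools' is present, per the error above.
NO_TOOLS_MODELS = {"o1-preview", "chatgpt-4o-latest"}

def strip_unsupported_tools(model, request):
    """Return a copy of the request dict without function-calling
    parameters when the target model does not support them."""
    if model in NO_TOOLS_MODELS:
        return {k: v for k, v in request.items()
                if k not in ("tools", "tool_choice")}
    return dict(request)

req = {"messages": [], "tools": [{"type": "function"}], "tool_choice": "auto"}
print("tools" in strip_unsupported_tools("chatgpt-4o-latest", req))  # -> False
```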
Conclusion
- The API Reference is a minefield of copy-paste code that does not work.
Additional notes
The “image input” example offers no model choice, although we have gpt-4-turbo-2024-04-09, o1, etc. that accept images for vision.
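For reference, here is the shape such an example could take for any vision-capable model. The content-part layout follows the documented `image_url` format for Chat Completions; the model name and URL are placeholders:

```python
# Sketch of a vision request body; only the payload is built, no API call made.
def build_vision_request(model, prompt, image_url):
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                # Documented image-input content part for Chat Completions.
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    }

req = build_vision_request(
    "gpt-4-turbo-2024-04-09",          # example vision-capable model
    "Describe this image.",
    "https://example.com/cat.png",     # placeholder URL
)
print(req["messages"][0]["content"][1]["type"])  # -> image_url
```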
I understand that the models have varied, intermixed supported parameters. Refactoring the examples and their YAML source to account for this intelligently may be an endeavor.
(PS: thanks for finally producing valid structured output in the Playground’s “get code” that doesn’t also result in an error).