Me: What's the latest version of the openai Python API?
o1: As of my knowledge cutoff in October 2023, the latest version of the OpenAI Python library is **0.28.1**....
Let’s all look forward to the upcoming GPT-5 update, which will most probably have real-time web browsing and bug-free code generation, so we finally don’t need to look at code again.
I was hoping to use o1-preview to build a kind of coding environment to make o1’s superior coding capability more accessible to non-programmers; guess I’ll have to wait a little.
I’ve noticed this as well across models. I typically had to copy and paste the text from the OpenAI API website just to ensure the code provided aligned with the latest documentation.
Does anyone have any other suggestion or approaches for ensuring the latest API documentation is referenced when requesting code from 4o or o1?
If using ChatGPT, use custom instructions and paste the documentation there; if using the Playground, create a template that has the docs in the system message; if using the API, include the docs in the system message.
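For the API route, a minimal sketch of that last suggestion (the `DOCS_EXCERPT` content and the `build_messages` helper are illustrative, not something from the thread):

```python
# Pin current SDK documentation into the system message so the model
# writes against the post-1.0 interface instead of its training data.
# DOCS_EXCERPT is a stand-in for whatever docs you paste in.
DOCS_EXCERPT = """\
openai>=1.0 usage:
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(model="gpt-4o", messages=[...])
"""

def build_messages(user_prompt: str) -> list:
    """Return a messages list with the docs pinned as the system message."""
    return [
        {"role": "system",
         "content": "Follow this current SDK documentation exactly:\n" + DOCS_EXCERPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Write a script that calls the chat endpoint.")
# `messages` is what you then pass to client.chat.completions.create(...)
```

The docs ride along on every request, so the model can’t fall back on the deprecated `openai.ChatCompletion` style it memorized.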
It is a bit annoying to always have to remind it that new models exist and that there were in fact changes to the API… A simple fix, but probably low priority to include in the training data? Or maybe (probably) it has to do with the cutoff date? Who knows.
“The API” is simply too broad a surface to fully cover with in-context documentation.
You can provide the new SDK methods, models, and parsing exactly as documentation, and in the code you ask it to refactor (where changing the API is not in scope of the task) you still get them overwritten by new AI models that seem not to care.
I would say the current state results from past attempts to address this exact concern: an AI that would give you code with engine="davinci" has been replaced by one now overtrained on writing openai.ChatCompletions.
The best way to deal with this is to compartmentalize anything API-related into reusable functions or classes that can be kept out of the code base you hand to an OpenAI model to work on.
Don’t let the AI see that "imported_assistants.async.parse_tool_stream()" has anything to do with OpenAI.
By the time you’ve developed your API-using chatbot, you’ve developed your own knowledge to make it obsolete.
I managed to solve this problem by simply asking o1-preview to write the code using the Python library litellm, so the OpenAI API is essentially hidden.
This works for me because I need only simple inferencing. Your mileage may vary.
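For simple inferencing, the litellm route looks roughly like this (litellm’s `completion()` mirrors the OpenAI chat schema; the `infer` helper name is mine):

```python
def infer(prompt: str, model: str = "gpt-4o") -> str:
    """Single-prompt inference routed through litellm."""
    from litellm import completion  # deferred so defining this needs no install
    resp = completion(model=model, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

# Swapping vendors is just a different model string plus that vendor's
# API key in the environment -- the calling code never changes.
```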
This is extremely frustrating for a very specific OpenAI application I want to implement; I’m stuck using a competitor until this is resolved. I figured after over a year it would have been incorporated by now.
Even if the training data only goes up to Oct 2023, you could have thrown in the GitHub v1.0 migration documentation.