In the following conversation, I repeatedly try to get ChatGPT to use the new completion signature, and it just can’t do it, lol!
https://chat.openai.com/share/e/13cffbf1-52f2-40ef-a734-27f77835a0ac
Conversations shared by a Team cannot be seen outside the organization.
To make that screenshot legible, and to remove the abundant personal information it exposes (you can edit your post):
The AI is a poor API programmer: its pretraining covers only obsolete models, deprecated endpoints, and since-replaced library methods. It's best to confine its tasks to ancillary code.
You can paste plenty of examples of correct code before stating your new task, but this version of ChatGPT's GPT-4 has also lost much of its ability to synthesize new solutions from existing knowledge.
The API Reference in the forum sidebar is your go-to source for API programming, although it also doesn’t tell you what to do with response objects.
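Since the reference stops short of showing what to do with the response object, here is a minimal sketch of the part the model keeps getting wrong, the new v1-style extraction. The helper name is mine, not the library's; it accepts either the SDK's attribute-access object or the raw JSON dict, which is handy when you're comparing what the AI writes against what the endpoint actually returns.

```python
# Hypothetical helper: pull the assistant's reply out of a
# Chat Completions response. The openai-python v1.x client
# returns an object navigated by attributes:
#     response.choices[0].message.content
# while the raw JSON (or a plain dict) uses key access.

def first_message_text(response):
    """Return the text of the first choice, accepting either
    an SDK response object or a plain dict of the raw JSON."""
    try:
        return response.choices[0].message.content  # SDK object
    except AttributeError:
        return response["choices"][0]["message"]["content"]  # raw dict

# Exercising it with the raw JSON shape the endpoint returns:
raw = {
    "choices": [
        {"index": 0,
         "message": {"role": "assistant", "content": "Hello!"},
         "finish_reason": "stop"}
    ]
}
print(first_message_text(raw))  # prints Hello!
```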
Thanks! All good advice. I was posting more in hopes of some developer taking note and enhancing the model, or maybe discovering better prompt engineering. Is there an easy way to share the conversation outside the team? The gist of it was that I provided the complete docs for the model and gave it many examples, but it continuously reverted to the training data despite ongoing conversations and requests for suggestions on how to get it not to do that.
My goals with GPT are mostly related to code development, so it's not that I am looking for the answer myself; I am trying to get the model to consistently give it to me.
Thanks again.
A few weeks ago, someone posted their attempt at an OpenAI API programming GPT (since vanished from my “recently used”). From my own experiments with carefully curated API documentation meant to instruct the AI directly (not limited by sitting behind a GPT's knowledge retrieval), I knew exactly where it would fail to produce even a few lines of working response parsing – and it did.
Would you be willing to share your experiments? I would be interested in learning from them.
Here’s 26k tokens of API spec about the Assistants endpoint and files, including all the Python and curl examples you’d see in the API reference. AKA $1.62 of input to gpt-4-32k for a single response.
https://pastecode.io/s/ivwo622a
It also could use some of the basic quickstart overview pages as grounding.
Despite there being no overlap with older methods, the gpt-4-turbo AI still can’t ferret out the proper procedure for completing a successful run of a single user input.
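For context, the procedure the model can't piece together is the Assistants run sequence: create a thread, add the user message, create a run, poll until the run reaches a terminal status, then list the messages. A sketch of that flow is below; the polling helper is generic (it just takes a callable) so it can be exercised without a live client, and the client calls shown in comments follow the openai-python v1.x beta names from the pasted spec.

```python
import time

# Statuses after which a run will not progress further
# (per the Assistants API run object).
TERMINAL = {"completed", "failed", "cancelled", "expired"}

def wait_for_run(retrieve, interval=1.0, timeout=60.0):
    """Poll `retrieve()` (returning a run as a dict) until the
    run reaches a terminal status, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        run = retrieve()
        if run["status"] in TERMINAL:
            return run
        if time.monotonic() >= deadline:
            raise TimeoutError("run did not finish in time")
        time.sleep(interval)

# With a real client the sequence looks like (not executed here):
#   thread = client.beta.threads.create()
#   client.beta.threads.messages.create(
#       thread_id=thread.id, role="user", content="Hi")
#   run = client.beta.threads.runs.create(
#       thread_id=thread.id, assistant_id=assistant.id)
#   run = wait_for_run(lambda: client.beta.threads.runs.retrieve(
#       thread_id=thread.id, run_id=run.id).model_dump())
#   messages = client.beta.threads.messages.list(thread_id=thread.id)
```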
ahh, I was so excited to find this. Did you delete the code because it was no longer helpful, or did you simply find a better way?
You can search for “OpenAI API yaml” to get the reference documentation.
It becomes obsolete with every API change - like today.
It takes significant curation to get it down to task-based sections that instill the newly required knowledge: cutting out the curl, Python, or Node.js variants and other less useful parts, while pulling in the referenced definitions that appear later in the file.
An API reference built on this alone is still poor and full of omissions, so you also need to go after different sections of the “documentation” pages and make them understandable, along with code examples demonstrating usage.
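The language-stripping step above can be sketched as a small recursive pass over the spec once it's loaded into Python dicts. The `"examples"` key name here is illustrative only (the real YAML nests its per-language samples under vendor-extension keys whose exact schema I'm not reproducing), but the shape of the transformation is the same: walk the tree and keep only one language's samples.

```python
# Hypothetical curation pass: given an OpenAPI-style spec as
# nested dicts/lists with per-language code samples, drop every
# sample except the chosen language to shrink the token count.
# The "examples" key is an illustrative stand-in, not the exact
# schema of OpenAI's published YAML.

def keep_language(node, lang="python"):
    """Recursively drop code samples for other languages."""
    if isinstance(node, dict):
        out = {}
        for key, value in node.items():
            if key == "examples" and isinstance(value, dict):
                # Keep only the chosen language's sample.
                out[key] = {k: v for k, v in value.items() if k == lang}
            else:
                out[key] = keep_language(value, lang)
        return out
    if isinstance(node, list):
        return [keep_language(item, lang) for item in node]
    return node  # scalars pass through untouched

# Example: trimming a toy spec fragment.
spec = {"paths": {"/threads": {"examples": {
    "python": "client.beta.threads.create()",
    "curl": "curl https://api.openai.com/v1/threads ...",
    "node.js": "await openai.beta.threads.create();"}}}}
trimmed = keep_language(spec)
```

The original `spec` is left untouched, so you can diff the two to see how much was cut before pasting the result into context.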