API Support for Creating and Customizing/Fine-Tuning GPT Assistants via Chat (Similar to Web UI Functionality)

Hello OpenAI Community,

I’m delving into the potential of programmatically creating and fine-tuning GPT assistants, akin to the functionality available in the OpenAI web UI. My interest lies specifically in achieving this through an API. The web UI offers user-friendly tools for customizing GPT models (via Chat with GPT Builder), and I’m curious if similar capabilities exist for API users.

Is there an API that supports the creation, customization, and fine-tuning of GPT assistants in a manner comparable to the web interface’s ‘GPT Builder’? Detailed insights into API functionalities, access methods, and any relevant documentation or examples would be immensely helpful.

Thank you for your time and expertise.


Hi @hamedt,

If I understand correctly, the Assistant’s Playground itself is a tool built using the Assistants API.

Here’s the documentation on the Assistants API to get you started with creating assistants using code.
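To make the pointer concrete, here is a minimal sketch of creating an assistant in code. It assumes the official `openai` Python SDK (v1.x) with `OPENAI_API_KEY` set in the environment; the assistant name, instructions, and model id are placeholder examples, not anything prescribed by the API.

```python
def build_assistant_params(name: str, instructions: str,
                           model: str = "gpt-4-turbo-preview") -> dict:
    """Collect the parameters for assistants.create in one place.
    The model id is an example; substitute whichever model you have access to."""
    return {
        "name": name,
        "instructions": instructions,
        "model": model,
        "tools": [{"type": "code_interpreter"}],  # optional built-in tool
    }


def create_assistant(params: dict):
    # Lazy import so the helper above works without the SDK installed.
    from openai import OpenAI  # requires `pip install openai`
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    # Roughly the equivalent of clicking "Create" in the Playground.
    return client.beta.assistants.create(**params)


if __name__ == "__main__":
    params = build_assistant_params(
        "Math Tutor",
        "You are a personal math tutor. Answer questions step by step.",
    )
    assistant = create_assistant(params)
    print(assistant.id)
```

The returned assistant object carries an `id` you can reuse across threads and runs, which is what the Playground is doing behind the scenes.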


Hey @sps, I am referring to the following: I cannot find any way to provide feedback to the model via the Assistants API endpoints or the Playground.

In the UI, when you submit feedback, the model gets updated and it handles similar situations better from that point on.

Oh, I see now. I don’t have access to that UI yet, so I can’t say much about it.


I assume that the GPT Builder is just changing the Instructions through the Assistants API. Fine-tuning has a different meaning in this context.

GPT Builder is a “Custom GPT” that knows how to call the Assistants API to update Instructions and other Assistant Metadata. Maybe there is more to it than that, but maybe not.

If you wanted to emulate it, you would make an Assistant that has access to the Assistants API and some Custom Instructions about how to talk to the user and the API.
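Under that assumption, emulating the Builder’s feedback loop amounts to reading an assistant’s current Instructions, folding the user’s feedback in, and writing them back. A minimal sketch, again assuming the `openai` Python SDK (v1.x); the merge strategy (`revise_instructions`) is a naive placeholder of my own, not how the real Builder composes instructions:

```python
def revise_instructions(current: str, feedback: str) -> str:
    """Naive merge: append the feedback as an extra rule.
    The real GPT Builder presumably does something smarter, e.g. asking a
    model to rewrite the instructions wholesale."""
    current = current.rstrip()
    if not current:
        return feedback
    return current + "\n\nAdditional guidance: " + feedback


def apply_feedback(assistant_id: str, feedback: str):
    # Lazy import so revise_instructions stays usable without the SDK.
    from openai import OpenAI  # requires `pip install openai`
    client = OpenAI()
    assistant = client.beta.assistants.retrieve(assistant_id)
    new_instructions = revise_instructions(assistant.instructions or "", feedback)
    # Persist the revised instructions back onto the same assistant.
    return client.beta.assistants.update(assistant_id,
                                         instructions=new_instructions)
```

A "builder" assistant would then simply be one whose own instructions tell it to collect feedback conversationally and call something like `apply_feedback` as a tool.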

Hey @prescod

Thanks for your response. Based on my understanding, the Instructions we can provide are limited to 8,000 characters. And GPT Builder seems to work in a unique way: it feels like fine-tuning, because it remembers the feedback and extra details you provide without making any visible changes to the Instructions or attached files.

I think this is a powerful capability, and I don’t know how to take advantage of it via the API.

Yes, it is possible that they improved the Builder’s behaviour with fine-tuning. But it’s incredibly unlikely, bordering on impossible, that they are fine-tuning the actual models we talk to as our Assistants. Fine-tuning a model takes far longer than the time it takes for those models to update.

It’s also possible that they gave themselves more room for instruction space (in the Builder) than we have. (8k tokens is a fair bit, though; any more might confuse a model.)