We want to switch all of our assistants over to gpt-4o, but it’s just firing blank requests. Is this model not supported yet, or does it require further changes to the Assistants API?
FYI: it works in the playground, just not through API calls
Seems to be working with the Chat Completions API; maybe it’s being rolled out. I hope it has function calling.
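A quick way to check on your end (a rough sketch in Node, run as an ES module; assumes the openai npm package v4.x and OPENAI_API_KEY set in your environment):

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Plain Chat Completions call against gpt-4o (no Assistants API involved)
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Say hello" }],
});
console.log(completion.choices[0].message.content);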
Figured it out, I needed to update all my Assistants to V2
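If it helps, a minimal sketch of pointing an existing assistant at gpt-4o from Node (run as an ES module; assumes a recent openai npm 4.x release, which sends the OpenAI-Beta: assistants=v2 header by default, and a placeholder assistant ID):

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// "asst_XXXXXXXX" is a placeholder -- use your own assistant ID
const updated = await openai.beta.assistants.update("asst_XXXXXXXX", {
  model: "gpt-4o",
});
console.log(updated.id, updated.model); // should now report gpt-4o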
Are you using the Python SDK?
I’m running into a similar issue: although I updated to the newest package, I’m still getting that it’s unsupported and that I’m using v1.
How did you do that? I think the official documentation is vague about this (for Node.js).
Indeed, I can’t select the latest model either. I tried modifying it via the API and via the graphical interface.
Go to Settings -> [Select Project name] -> Limits.
Under Model Usage, select Edit and allow gpt-4o or gpt-4o-2024-05-13.
Go back to the Assistants page. It should now show the latest model in the selection.
@supershaneski Yes! I found it myself already. Thanks for the reply.
I created a new project and I could select GPT-4o there, so I knew something was off…
Thanks.
Did you change anything in the code to enable V2 for assistants, or did you indeed only remove the limits?
Well, I already used gpt-4-turbo with File Search for the assistants, so it was V2.
Hey @RutgerL
Sorry to keep going at this, but I really want the Assistants API to run on gpt-4o to enable voice and vision. Can you maybe share the code you used to enable V2? I installed the new SDK, by the way.
Or can you please explain which parts of my current code I need to modify to enable V2:
router.post("/personal-experience-kearney", async (req, res) => {
  const { userMessage, threadId } = req.body;
  try {
    let currentThreadId = threadId;
    // Create a new thread if the client did not pass one we are tracking
    if (!currentThreadId || !threadResponses[currentThreadId]) {
      const threadResponse = await openai.beta.threads.create();
      currentThreadId = threadResponse.id;
      threadResponses[currentThreadId] = { events: [], clients: [] };
    }
    sendEventToAllClients(currentThreadId, { event: "messageStart", data: {} });
    await openai.beta.threads.messages.create(currentThreadId, {
      role: "user",
      content: userMessage,
    });
    // Stream the run and forward each streaming event to the connected clients
    const stream = openai.beta.threads.runs
      .createAndStream(currentThreadId, {
        assistant_id: assistantIdToUsePersonalKearney, // Make sure to replace this with the correct assistant ID for Kearney
      })
      .on("textCreated", (text) => {
        sendEventToAllClients(currentThreadId, { event: "textCreated", data: text });
      })
      .on("textDelta", (textDelta) => {
        sendEventToAllClients(currentThreadId, { event: "textDelta", data: textDelta });
        console.log("textDelta event Kearney:", textDelta);
      })
      .on("toolCallCreated", (toolCall) => {
        sendEventToAllClients(currentThreadId, { event: "toolCallCreated", data: toolCall });
      })
      .on("toolCallDelta", (toolCallDelta) => {
        sendEventToAllClients(currentThreadId, { event: "toolCallDelta", data: toolCallDelta });
      })
      .on("end", () => {
        sendEventToAllClients(currentThreadId, { event: "end", data: null });
      });
    res.status(200).json({ threadId: currentThreadId });
  } catch (error) {
    console.error("Error handling /personal-experience-kearney:", error);
    res.status(500).send("Internal server error");
  }
});
@yvoderooij I used this link to configure the assistant.
Then connect to it with the API (run the thread with the assistant).
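If you prefer to do it from code, the main change on the Node side should just be the client setup: make sure the SDK sends the v2 Assistants header (newer openai npm releases do this by default). A rough sketch, assuming the openai npm package v4.x:

import OpenAI from "openai";

// defaultHeaders forces the v2 Assistants header on older 4.x releases;
// recent releases already send it by default.
const openai = new OpenAI({
  defaultHeaders: { "OpenAI-Beta": "assistants=v2" },
});

Your existing createAndStream code should then be able to stay as it is, as long as the assistant it runs is configured for gpt-4o.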
Did you delete the old project? Because I keep getting the error “no assistant_id found”. The only thing I changed is creating a new project, creating a new assistant, and using that ID.
No I didn’t.
First create the assistant using the link I sent in the post above, and configure it for gpt-4o.
Are you sure your API key has rights to your project?
Yes, I have exactly the same setup as you, but only the assistants from my old/default project work, not the ones from the new project I created…
I will keep researching, thanks for the help thus far!
Hello colleagues,
Could you please advise on what types of messages (text, photo, audio, video, documents) GPT-4o can process?
Do I need to separate message formats in my pipeline and send them using different methods?
I can’t find detailed documentation.
I have found a workaround using the Python SDK (1.30.1): pass the Assistants API version v2 explicitly.
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id="asst_XXXXXXXX",
    extra_headers={"OpenAI-Beta": "assistants=v2"},
)
It seems that the default headers set here are ignored:
client = OpenAI(
    default_headers={"OpenAI-Beta": "assistants=v2"},
    organization="org-XXX",
    api_key="sk-XXX",
)
Pretty plainly: if you are sending an assistants=v1 header, you will be denied creation or modification of an assistant that uses model gpt-4o:
{
  "error": {
    "message": "The requested model 'gpt-4o' cannot be used with the Assistants API in v1. Follow the migration guide to upgrade to v2: https://platform.openai.com/docs/assistants/migration.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "unsupported_model"
  }
}
The latest Python SDK should not have a problem, as its default for assistants was switched to v2. If setting the model fails and you confirm you are running the latest version in your script, that would be a bug.
import openai
print(openai.__version__)
1.29.0
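For the Node SDK there is a similar escape hatch if you can’t upgrade yet: most methods accept request options as a final argument, so you can force the header on a single call, roughly the equivalent of the Python extra_headers workaround above. A sketch (run as an ES module, with a placeholder assistant ID, not something I have exhaustively tested):

import OpenAI from "openai";

const openai = new OpenAI();

// Per-request header override; "asst_XXXXXXXX" is a placeholder ID
const assistant = await openai.beta.assistants.update(
  "asst_XXXXXXXX",
  { model: "gpt-4o" },
  { headers: { "OpenAI-Beta": "assistants=v2" } },
);
console.log(assistant.model);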
I haven’t found any of the GPT-4 models in the allow/block list! Not sure how you guys found it, and I also have a Plus account.