Christina Huang from OpenAI guides you through Agent Builder—a new visual tool to create agentic workflows. Drag and drop nodes, connect tools, and publish your workflows with ChatKit and the Agents SDK. https://platform.openai.com/agent-builder
Getting errors when trying to run Preview with any model from GPT-4o onward.
Mind sharing which nodes are failing? Are you able to run those same prompts inside the Prompts view?
Fix your MCP connectors first!
How do you make the Agent ask the user for more info with options to pick from in order to continue the flow?
It's anything to do with the MCP nodes.
I am also getting errors from MCP nodes. I'm specifically testing remote MCPs that require token authorization. I'm getting the generic error toast (I don't see any more information in the traces/evals view).
For such MCP failures, I’m noticing that:
- tool LISTING works fine
- tool CALLING does not work (Agent node → MCP node directly)
I'm also seeing that unauthenticated remote MCPs will fail if a Transform node is in front (so Agent node → Transform → open MCP node). Same thing: tool listing works, but tool calling fails.
All testing is done with the Preview mode chat.
Same issue here. The MCP server works perfectly fine in the OpenAI Dashboard → Chat area. The MCP server can be added to the workflow area and its tools show up, but there's an instant generic error when trying to use the server.
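One way to narrow down the failures reported above is to exercise the same remote MCP server directly through the Responses API, outside of Agent Builder. The sketch below just builds the request payload; the server URL, label, and token are placeholders, and the actual API call is left commented out so you can plug in a real client. If the same tool call also fails here, the problem is likely the server or its auth; if it succeeds, the issue is in the workflow runner.

```python
def build_mcp_request(server_url: str, token: str, user_message: str) -> dict:
    """Build a Responses API payload with a token-authorized remote MCP tool attached."""
    return {
        "model": "gpt-4.1",
        "tools": [
            {
                "type": "mcp",
                "server_label": "my_remote_mcp",  # hypothetical label
                "server_url": server_url,
                "headers": {"Authorization": f"Bearer {token}"},
                # skip approval round-trips while debugging
                "require_approval": "never",
            }
        ],
        "input": user_message,
    }


payload = build_mcp_request("https://example.com/mcp", "TEST_TOKEN", "List my items")
# from openai import OpenAI
# client = OpenAI()
# response = client.responses.create(**payload)  # uncomment with a real API key
```

Comparing the server logs for this direct call against the Agent Builder run should show whether the tool-call request ever leaves the workflow at all.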
Is there an API for Connector Registry? How do I get access to it?
Hi and welcome to the community!
From the Agent Kit announcement
This feature is currently in beta rollout to some API, ChatGPT Enterprise, and Edu customers with a Global Admin Console (where Global Owners can manage domains, SSO, and multiple API orgs). The Global Admin Console is a prerequisite to enabling the Connector Registry.
The connector registry then helps to
govern and maintain data across multiple workspaces and organizations. The Connector Registry consolidates data sources into a single admin panel across ChatGPT and the API.
You can't, which is a fatal flaw. It makes the whole platform a non-starter for real-world applications.
Sorry, but this is really bad. There is no way for the classifier to ask follow-up questions to the user to determine the classification if it isn't apparent from the first user statement. You don't support feedback loops in Agent Builder, i.e. you can't loop back into the classifier agent if the classification is still undetermined. Putting the classifier inside a While loop just leads to an infinite loop of LLM calls without allowing the user to add further input. Limiting your "agents" to one shot, i.e. a single user input, basically kills this entire platform; it's not usable for any real-world application. Agents should be able to keep interacting with a user until their specific task is complete before the flow continues in the workflow.
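The feedback loop described above (classify, and if undetermined, pause for more user input rather than spinning on LLM calls) can be sketched in plain Python. Here `classify` and `ask_user` are hypothetical stand-ins for an LLM classifier call and a user-input step; the point is that the loop re-enters the classifier only after fresh input arrives.

```python
from typing import Callable, Optional


def classify_with_followups(
    classify: Callable[[str], Optional[str]],  # stand-in for an LLM classifier; None = undetermined
    ask_user: Callable[[str], str],            # stand-in for a pause-for-user-input step
    first_message: str,
    max_turns: int = 3,
) -> Optional[str]:
    """Loop back into the classifier with new user input until it resolves."""
    transcript = first_message
    for _ in range(max_turns):
        label = classify(transcript)
        if label is not None:
            return label  # classification determined; flow can continue
        # Pause for more input instead of looping on LLM calls alone.
        transcript += "\n" + ask_user("Could you tell me more about your request?")
    return None  # still undetermined after max_turns; escalate or fall back
```

The `max_turns` cap is what keeps this from becoming the infinite loop the post describes: the loop always either resolves, gathers new input, or gives up.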
Thought I was going crazy. I've been looking at it for a couple of hours, really confused that they shipped this when the Agent node is useless. Agents should have edge conditions; working from a single message is useless.
Can I ask if the model selection will support gpt-realtime in the future? It seems it’s not available at the moment.
Please make vector store id assignable from a state variable like {{state.vector_store_id}}
After some more time playing with it, I am starting to get it. We can set edge conditions; it just takes a few steps to set and check state.
I have an agent setup of Agent → If → Transform → MCP, but it does not get past the MCP step; it fails on the tool call. I see it call the MCP's list function when building the agent (I see the logs on my server), but when running the agent in Preview mode it never calls the MCP. I have a specific tool selected in the MCP picker and an auth token set up. The MCP works otherwise.
I get this when it tries to run the tool, and I never see my server get hit:
MCP
We experienced an error while running the workflow. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID 79950158-4b5c-4070-9c0c-64f568****** in your message.)
Encountered a bug where a structured output property has a name, but when using the variable selector it changes to "output_text", and renaming doesn't fix it. I had to remake the flow to fix it.
It's also really annoying how only the previous node's output is available unless you specifically set state.
I'm trying to figure out if it's possible to build a multi-stage workflow. For example: collect a user's name and email, then ask them for their message, then show them a widget of an email draft, and then, upon confirmation, send the email. Has anyone had any luck with something like this? From my experimentation so far, it appears to only support one-shot messages where the input runs through to completion, without any follow-ups or pauses, which prevents the agent from working with the user.
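Outside of Agent Builder, the multi-stage flow described above (collect contact → collect message → confirm draft → send) can be sketched as an explicit state machine, where each stage returns a prompt and then pauses for the next user turn. Everything here is a hypothetical illustration; `EmailFlow` is not part of any OpenAI SDK, and the actual send step is left as a comment.

```python
from dataclasses import dataclass, field


@dataclass
class EmailFlow:
    """Multi-turn flow: each call to step() consumes one user input and advances one stage."""
    stage: str = "collect_contact"
    data: dict = field(default_factory=dict)

    def step(self, user_input: str) -> str:
        if self.stage == "collect_contact":
            self.data["contact"] = user_input
            self.stage = "collect_message"
            return "What would you like the email to say?"
        if self.stage == "collect_message":
            self.data["message"] = user_input
            self.stage = "confirm"
            return (
                f"Draft for {self.data['contact']}: {self.data['message']!r}. "
                "Send it? (yes/no)"
            )
        if self.stage == "confirm":
            if user_input.strip().lower() == "yes":
                self.stage = "done"
                return "Email sent."  # a real send_email(...) call would go here
            self.stage = "collect_message"  # loop back for a revised message
            return "Okay, what should the email say instead?"
        return "Flow complete."
```

Because the stage lives in `self.stage` rather than in a single prompt, the flow can pause indefinitely between turns, which is exactly the "follow-ups or pauses" behavior the one-shot model doesn't allow.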
