Hi everyone,
I’m currently designing an agent-based workflow architecture (similar to patterns seen in n8n or LangChain) and I’m looking for advice on how to best implement a specific routing logic using OpenAI’s models.
The Goal:
I want to implement a “Classifier Agent” that sits in the middle of a workflow. Its sole responsibility is to:
- Read the current “workflow memory” or “past conversation” context.
- Analyze this state.
- Output a specific enum category (e.g., technical_support, sales, general_inquiry).
This output will then be used by a conditional (“If”) node to route the workflow to the appropriate specialized agent.
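To make the question concrete, here's roughly what I'm imagining for the classifier + "If" node pair: a function-calling tool whose single parameter is enum-constrained, so the model can only ever answer with one of the valid categories, plus a plain routing function. All names here are placeholders I made up, not anything official:

```python
# Sketch of an enum-constrained classifier tool and a routing "If" node.
# CATEGORIES, CLASSIFY_TOOL, and route() are my own illustrative names.

CATEGORIES = ["technical_support", "sales", "general_inquiry"]

# Tool schema in the OpenAI function-calling format; the enum constraint
# forces the model to pick exactly one valid category.
CLASSIFY_TOOL = {
    "type": "function",
    "function": {
        "name": "classify",
        "description": "Classify the conversation into exactly one category.",
        "parameters": {
            "type": "object",
            "properties": {
                "category": {"type": "string", "enum": CATEGORIES},
            },
            "required": ["category"],
        },
    },
}

def route(category: str) -> str:
    """The conditional 'If' node: map the enum to a downstream agent."""
    handlers = {
        "technical_support": "tech_agent",
        "sales": "sales_agent",
        "general_inquiry": "general_agent",
    }
    return handlers[category]
```

The idea would be to pass CLASSIFY_TOOL in the request and force it via tool_choice, then feed the returned category into route(). Does that match how people usually build this?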
My Questions:
- Memory Access: How should I architect the “memory” component so the Classifier Agent can read from it and other agents can write to it? Is it feasible to treat this “memory” as a shared database that agents access via tools? I know n8n has this built in from the get-go.
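For context, the shared-memory shape I have in mind is something like this minimal sketch (an in-memory dict standing in for whatever store I'd actually use; class and method names are mine):

```python
# Minimal sketch of a shared "workflow memory" that any agent can write to
# and the Classifier Agent can read from. Backed by a dict here, but the
# same interface could front a real database.

class WorkflowMemory:
    def __init__(self):
        self._store = {}

    def write(self, key: str, value):
        # Called by specialized agents as they learn things about the user.
        self._store[key] = value

    def read(self, key: str, default=None):
        return self._store.get(key, default)

    def snapshot(self) -> dict:
        # What I'd serialize and hand to the Classifier Agent as context.
        return dict(self._store)
```

My question is whether agents should touch this directly (I inject the snapshot into their prompt) or only through read/write tools the model calls itself.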
- Persistence & Tools: Is it possible (or recommended) to use something like the Model Context Protocol (MCP) or standard Function Calling to connect a real database (Postgres/Redis) for this read/write capability?
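If I went the function-calling route, I assume the glue would look something like the dispatch layer below: the model emits a tool call, and my code executes it against the backend. A dict stands in for Redis here; swapping in redis-py's get/set should be mechanical. Again, these names are just illustrative:

```python
import json

# Hypothetical dispatch layer for model-issued tool calls against a
# storage backend. A dict stands in for Redis/Postgres in this sketch.
backend = {}

def memory_get(key: str):
    return backend.get(key)

def memory_set(key: str, value: str):
    backend[key] = value
    return "ok"

# Map tool names (as declared in the tools schema) to implementations.
TOOL_IMPLS = {"memory_get": memory_get, "memory_set": memory_set}

def dispatch(tool_name: str, arguments_json: str):
    """Execute one tool call: parse the model's JSON arguments and run it."""
    args = json.loads(arguments_json)
    return TOOL_IMPLS[tool_name](**args)
```

Is this kind of hand-rolled dispatch the expected pattern, or does MCP make it unnecessary?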
- Implementation: If I build this via the Assistants API, does the built-in Thread mechanism suffice for this “shared memory,” or should I maintain an external state object that I pass into the Classifier Agent as context?
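The "external state object" alternative I'm weighing would be a helper like this, which serializes my own state into the classifier's input on every run instead of relying on the Thread as the only memory (function name and prompt wording are placeholders):

```python
import json

# Sketch: keep workflow state outside the Assistants Thread and inject a
# serialized copy into the classifier's context on each run.
def build_classifier_input(state: dict, recent_turns: list) -> list:
    """Assemble chat messages carrying both workflow state and recent turns."""
    return [
        {
            "role": "system",
            "content": "You are a router. Read the workflow state and "
                       "recent conversation, then classify the request.",
        },
        {
            "role": "user",
            "content": "Workflow state:\n" + json.dumps(state, indent=2)
                       + "\n\nRecent turns:\n" + "\n".join(recent_turns),
        },
    ]
```

The appeal is that the classifier stays stateless and I control exactly what it sees, but it duplicates what the Thread already stores.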
I’m trying to understand whether this pattern (State → Classifier → Router) is native to how OpenAI agents/assistants are designed to work, or whether I need to build significant custom infrastructure around the API to handle the state management.
Any guidance or examples of similar architectures would be greatly appreciated!
Thanks