We have two main tools: one fetches saved payment methods, and the other initiates a payment by sending the token returned by the first tool.
But it's getting blocked by a moderation check:
"This tool call was blocked by a moderation check. Please ensure that the call arguments are relevant to the user's prompt. Be careful to avoid irrelevant sensitive data or leaking the user's sensitive information."
Can you show the tool definition and the argument definitions?
If you just need to pass a return value from tool 1 over to tool 2, don't send it to OpenAI in between. If you need OpenAI to "connect" the two tool calls, use a made-up key that you map locally to the correct value.
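Something like this, as a rough sketch for the payments case (the function and field names are placeholders I'm assuming, not your actual API):

```python
# Rough sketch: keep the real token out of the model's context entirely.
# The agent holds a local map from a short placeholder to the real payment token.
# backend_get_saved_payment_methods / backend_initiate_payment stand in for your services.

token_map = {}  # placeholder -> real token; lives in the agent, never sent to the model

def handle_get_payment_methods():
    methods = backend_get_saved_payment_methods()          # your real backend call
    lines = []
    for i, m in enumerate(methods, start=1):
        placeholder = f"pm_{i}"
        token_map[placeholder] = m["token"]                 # real token stays local
        lines.append(f"Payment method {placeholder}: {m['brand']} ending in {m['last4']}")
    return "\n".join(lines)                                 # this is all the model ever sees

def handle_initiate_payment(placeholder_id, amount):
    real_token = token_map[placeholder_id]                  # de-reference locally
    return backend_initiate_payment(real_token, amount)     # token never round-trips through the LLM
```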
Hard to say if this is helpful since I don’t know much about your case but I hope it lands!
Can you explain more? I ran into this issue today too. We are building a CRM app and have “deals” and “tasks” for those deals. A user may ask a question like “Add a followup task to the Benjamin deal”.
We have two tools - search and createTask. The search tool returns all the deals that match a query like "Benjamin". Each of those deals has an ID. The second tool, createTask, takes a deal ID as part of its input.
We're getting a content moderation error on the second tool call when we try to create the task.
Sure. The key is in how you think about work that has to be done in agent CODE vs. work that has to be done by the remote LLM. Your goal is to call the backend services for search and create… so how should you tell the model to help get this done?
You can map your services directly to model tools, and that will work as long as you don't expect the model to re-create long IDs (which it will sometimes get wrong) or handle sensitive information (which it has a hair-trigger avoidance of, because the servers don't want the liability). But if you do need long IDs (in my case, every backend service call involves several GUIDs) or sensitive data in your backend call params, you need a way to keep the LLM from having to handle them.
Let's get specific. The user says "Add a followup task to the Benjamin deal". Give the model a "search" tool that takes optional fields the backend can search on. When the model wants to call the search tool, you wire it up to the backend service BUT you don't return the native payload from your backend directly to the model. Instead, you hang onto it and tell the model something straightforward and simple like:
Customer ID 1 for this result is Benjamin Smith. Customer has open deal valued at $500k.
Customer ID 2 for this result is Benjamin Rogers. No pending deals.
When the model calls the “create” tool it will pass 1 or 2 as customer_id because those are the IDs it has in context. When you get the create tool call, you de-reference the ‘1’ back to the list that you got from the search so you know the actual specifics of the Benjamin Smith deal. The model is happy to put “1” in a tool parameter because that’s not secret. But a big ID might look like a key or a phone number or an SSN… especially if it is in a list of personal details.
Of course, the whole reason we have IDs is that there may be two different Benjamin Smiths. You need to identify them uniquely enough for the model to differentiate them.
And take care that the next call to "search" for the entity "customer" continues to increment the result ID number until the session restarts. Otherwise, what used to be "customer 2" changes from one point in the conversation to another and you might get some wires crossed.
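Here's roughly what that flow looks like in Python - just a sketch, with search_deals / create_task_for_deal standing in for whatever your backend actually exposes:

```python
# Sketch only: agent-side index that maps short result IDs to real records for the session.

class DealIndex:
    def __init__(self):
        self.counter = 0
        self.by_short_id = {}                       # "1" -> full deal record (GUIDs and all)

    def add(self, deal):
        self.counter += 1                           # keeps incrementing across searches in the session
        short_id = str(self.counter)
        self.by_short_id[short_id] = deal
        return short_id

index = DealIndex()

def handle_search(query):
    deals = search_deals(query)                     # real backend call; payload never goes to the model
    lines = []
    for deal in deals:
        short_id = index.add(deal)
        lines.append(f"Customer ID {short_id}: {deal['name']}. {deal['summary']}")
    return "\n".join(lines)                         # short IDs and summaries are all the model sees

def handle_create_task(customer_id, description):
    deal = index.by_short_id[customer_id]           # de-reference "1" back to the real record
    return create_task_for_deal(deal["guid"], description)
```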
Incidentally, for search queries like this, the model loves the idea of "predicates", where it gives you a field, operator, and value, like:
field=name, operator=like, value=benjamin
field=age, operator='>', value=30
field=state, operator='in', value=KY, TN
If your backend can handle that, you'll find the models very effective at narrowing down to the specifics you are working on.
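The search tool's input schema can be as simple as a list of field/operator/value objects. A sketch of the tool definition (adjust the field and operator enums to whatever your backend actually supports):

```python
# Sketch: a predicate-style search tool definition for the model to fill in.
search_tool = {
    "name": "search",
    "description": "Search deals. Multiple predicates are combined with AND.",
    "parameters": {
        "type": "object",
        "properties": {
            "predicates": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "field":    {"type": "string", "enum": ["name", "age", "state"]},
                        "operator": {"type": "string", "enum": ["like", "=", ">", "<", "in"]},
                        "value":    {"type": "string"},
                    },
                    "required": ["field", "operator", "value"],
                },
            }
        },
        "required": ["predicates"],
    },
}

# For "Add a followup task to the Benjamin deal" the model would typically send:
# [{"field": "name", "operator": "like", "value": "benjamin"}]
```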
OK, so basically you are saying "rewrite your keys to be shorter". The only problem is that your MCP server now needs to maintain state - which, OK, fine.
Keys are not the only problem… it could be phone numbers, credit card numbers, IP addresses… part numbers, social security numbers… If the only reason the model needs to know a piece of data is so that it can send it back to you later, then you're better off sending a placeholder that the model can treat as a "black box". In my example above, the entire customer record can be manipulated with the model only having to choose between a handful of short numbers.
Keeping local state so that the model can work at a higher level of abstraction is a good tradeoff. Unless the model needs to know a piece of information in order to use it for a decision or composition, you will save money, gain speed, and reduce errors by not sending it. You can't be sure a model is going to reliably recreate long sequences of anything (not just keys) because that's not part of natural language. Sure, it's reinforced at the end of model training, but you're exposed unnecessarily to decaying probabilities.
And it's agent state, not MCP server state… I hope that part makes sense and isn't taken as a nit, because the distinction is key. The agent is managing the state of the process you are automating; the tool calls are just giving it arms and legs. The agent needs to track a lot of things: how long the conversation has been going on, how many tool calls have failed, total token cost for the conversation, transcripts, audits… all "state" in one sense. What customer are we talking about, what case are we working on, what locations or parts or diagnoses or whatever… all of that is agent state that may propagate into many different tool calls. It's the administrivia that the agent is managing so that the model can use its big brain to make decisions and the tools can be dumb IT microservices.
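Concretely, I mean something like this (the field names are purely illustrative, not a prescribed structure):

```python
# Illustrative only: the kind of bookkeeping the agent keeps across tool calls.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AgentState:
    turn_count: int = 0
    failed_tool_calls: int = 0
    total_tokens: int = 0
    transcript: list = field(default_factory=list)      # audit trail of messages and tool calls
    current_customer: Optional[dict] = None             # the real record; never sent to the model
    current_case_id: Optional[str] = None
    result_index: dict = field(default_factory=dict)    # short placeholder -> real backend record
```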