Sure thing. Here’s how I define the tool:
add_data_tool = Tool(
    name="add_data",
    title="Add Customer Data",
    description="Add Customer Data",
    inputSchema={
        "type": "object",
        "required": ["request_id", "name"],
        "properties": {
            "request_id": {
                "type": "string",
                "description": "A unique identifier for the request. The LLM must generate either a GUID or an epoch timestamp in milliseconds."
            },
            "name": {
                "type": "string",
                "description": "The name of the account"
            }
        }
    }
)
Notice the request_id input parameter, with specific instructions telling the LLM to generate that value. When the LLM issues duplicate tool calls for the same request, the request_id stays the same across them. Knowing that, you can handle duplicates inside your tool-call handler:
if name == "add_data":
    arguments = params.arguments
    request_id = arguments["request_id"]
    account_name = arguments["name"]  # renamed to avoid shadowing the tool name
    data = {
        "request_id": request_id,
        "name": account_name
    }
    result = httpx_post('add-data', data)
Now, inside the httpx_post method, I inspect the request id. If the request_id is not in the cache, I post to the API that performs the database insert and return the output of the API call. If the request_id is found in the cache, I simply return the cached output. This way, the output is the same for all duplicate tool calls, but the REAL database insert only happens once.
import json
import httpx
# redis_client is assumed to be an initialized Redis connection

def httpx_post(cmd: str, data: dict) -> dict:
    request_id = data.get("request_id")
    if not request_id:
        return {"status": "error", "error": "request_id missing"}
    cache_key = f"httpx_cache:{cmd}:{request_id}"
    cached_data = redis_client.get(cache_key)
    if cached_data:
        # Duplicate call: return the cached output without hitting the API.
        return json.loads(cached_data)["output"]
    url = "https://your-server.com/v1/service-point"
    try:
        response = httpx.post(url, data=data)
        item = response.json()
        # Cache input and output for 5 minutes so duplicates return the same result.
        redis_client.setex(cache_key, 300, json.dumps({"input": data, "output": item}))
        return item
    except Exception as e:
        return {
            "status": "error",
            "error": f"httpx Exception: {e}"
        }