Hi everyone,
I’m building a bot that chats with users through the OpenAI Responses API.
When a user attaches a photo, the bot should forward that image to our internal MCP micro-service for analysis (OCR, object detection, etc.). I’m orchestrating this flow with openai.responses.create and a tool definition for mcp.process_image.
Current architecture
- Client → S3 – the browser uploads the image and receives a presigned GET URL.
- Gateway → OpenAI – the gateway calls
openai.responses.create({
  model: "gpt-4o-mini",
  tools: [ mcp.process_image, … ],
  input: [
    {
      role: "user",
      content: [
        { type: "input_text", text: "User prompt…" },
        { type: "input_image", image_url: "<my-presigned-S3-URL>" }
      ]
    }
  ]
});
- Assistant → MCP – if the model decides to run the tool, the platform forwards the call to mcp.process_image.
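For reference, this is roughly how I register the MCP tool. The server label and URL below are placeholders for our internal service, not real endpoints:

```typescript
// Hypothetical MCP tool registration for the Responses API.
// server_label and server_url are placeholders for our internal service.
const mcpTool = {
  type: "mcp",
  server_label: "internal-image-service",          // placeholder label
  server_url: "https://mcp.internal.example/sse",  // placeholder URL
  allowed_tools: ["process_image"],                // only expose the image tool
  require_approval: "never",                       // internal service, skip approvals
};

console.log(JSON.stringify(mcpTool, null, 2));
```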
Problems
- Own storage path: inside the tool call I never receive my original S3 URL; responses.create rewrites it to something like oaiusercontent.com/…, so the MCP can’t fetch from our bucket.
- Base-64 fallback: if I embed the image as data:image/png;base64,…, the MCP still gets an OpenAI blob URL — but the SAS token inside that link expired on 2023-12-06 13:31 UTC, so Azure replies with AuthenticationFailed.
- files.create experiment: uploading the image first with openai.files.create doesn’t help either; the ID that reaches the MCP sometimes differs, and even when it matches, GET /v1/files/{id}/content returns 404.
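To make the files.create experiment concrete: the upload itself needs network access, so it is shown only in comments here, and the helper just builds the request body that references the uploaded file by ID. The file path, prompt, and file ID are placeholders:

```typescript
// Sketch of the files.create variant (paths, prompt, and IDs are placeholders).
// The upload step would look like:
//   const file = await openai.files.create({
//     file: fs.createReadStream("photo.png"),
//     purpose: "vision",
//   });
// This helper only builds the Responses API body that references that file.
function buildFileRequest(fileId: string): any {
  return {
    model: "gpt-4o-mini",
    input: [
      {
        role: "user",
        content: [
          { type: "input_text", text: "Please analyse the attached photo." },
          { type: "input_image", file_id: fileId }, // reference by file ID, not URL
        ],
      },
    ],
  };
}

const body = buildFileRequest("file-abc123"); // placeholder file ID
console.log(JSON.stringify(body, null, 2));
```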
Question
How can I
- force the Assistant to deliver my original presigned S3 URL to the tool untouched,
- get a blob URL whose SAS token is still valid when the MCP receives it, or
- reliably use files.create so the tool can download the image?
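For what it’s worth, the workaround I’m sketching for the first option is to stop relying on image forwarding entirely: put the presigned URL in the prompt text and give the model a plain function tool with an explicit image_url argument, so it copies the original URL into the tool call. The tool name and schema below are my own invention, not an official API:

```typescript
// Hypothetical workaround: let the model echo the original presigned URL
// into an explicit function argument instead of attaching the image.
// Tool name, schema, and URLs are my own placeholders.
function buildWorkaroundRequest(presignedUrl: string): any {
  return {
    model: "gpt-4o-mini",
    tools: [
      {
        type: "function",
        name: "process_image",
        description: "Run OCR / object detection on an image at a URL.",
        parameters: {
          type: "object",
          properties: {
            image_url: {
              type: "string",
              description: "Publicly fetchable image URL",
            },
          },
          required: ["image_url"],
        },
      },
    ],
    input: [
      {
        role: "user",
        content: [
          {
            type: "input_text",
            // The URL travels as plain text, so nothing rewrites it.
            text: `Analyse the image at this URL: ${presignedUrl}`,
          },
        ],
      },
    ],
  };
}

const req = buildWorkaroundRequest(
  "https://example-bucket.s3.amazonaws.com/photo.png" // placeholder presigned URL
);
console.log(JSON.stringify(req, null, 2));
```

The obvious cost is that the model never sees the pixels itself — the MCP has to fetch and analyse the image server-side — so I’m not sure this is the intended pattern.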
Any best-practice examples or code snippets would be hugely appreciated. Thanks!
