How to get OpenAI to return images as Markdown

We’ve got an AI chatbot built on OpenAI’s models, with context we extract using VSS and RAG. I want the chatbot to display relevant images as Markdown, but almost regardless of what I do, OpenAI returns only text, and it’s very difficult to get it to return the image Markdown even though I explicitly send it that information.

Advice …?
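
For context, this is roughly the shape of the call we’re making — simplified, and the model name, field names, and system prompt are illustrative rather than our exact code:

```python
from openai import OpenAI

client = OpenAI()

# Chunks retrieved from our vector store; each chunk carries the image URLs
# that belong to it. (Field names here are illustrative.)
retrieved_chunks = [
    {
        "text": "The Alpine hiking boot is waterproof and weighs 480 g.",
        "image_urls": ["https://example.com/images/alpine-boot.jpg"],
    },
]

# Flatten the chunks, including their image URLs, into the context we send.
context = "\n\n".join(
    f"{c['text']}\nImages: {', '.join(c['image_urls'])}" for c in retrieved_chunks
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the context below. When the context lists image "
                "URLs that are relevant, embed them in your answer as Markdown "
                "images, e.g. ![description](url).\n\nContext:\n" + context
            ),
        },
        {"role": "user", "content": "Show me the Alpine hiking boot."},
    ],
)

print(response.choices[0].message.content)
```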

Yeah, if you query the text API, you’ll get text back. The image API is separate.

Are you wanting the API to generate image + text? If so, you’ll likely need to make multiple calls.
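
For example, if you do need a generated picture plus a text answer, the two calls could look roughly like this — a sketch only, with placeholder model names and prompts:

```python
from openai import OpenAI

client = OpenAI()

# Call 1: get the text answer from the chat completions API.
chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Describe a cozy mountain cabin."}],
)
answer = chat.choices[0].message.content

# Call 2: generate a matching image with the separate image API.
image = client.images.generate(
    model="dall-e-3",
    prompt="A cozy mountain cabin at dusk, warm light in the windows",
    n=1,
    size="1024x1024",
)
image_url = image.data[0].url

# Stitch the two results together as Markdown yourself.
print(f"{answer}\n\n![Cozy mountain cabin]({image_url})")
```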

What are you trying to do exactly?

How are you sending the images you want to be displayed back to the API?

Not sure how you are implementing your RAG. Let’s assume you are using function calling; here is one possible way:
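
A rough sketch of what I mean — the function name, schema, and the hard-coded image result below are made up for the example; you would wire in your own VSS lookup:

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical retrieval function backed by your vector store.
def search_images(query: str) -> list[dict]:
    return [
        {"title": "Alpine hiking boot", "url": "https://example.com/images/alpine-boot.jpg"},
    ]

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_images",
            "description": "Search the image library and return matching image URLs.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
]

messages = [
    {
        "role": "system",
        "content": (
            "When you receive image results from the search_images tool, "
            "embed each one in your reply as a Markdown image: ![title](url)."
        ),
    },
    {"role": "user", "content": "Show me the Alpine hiking boot."},
]

# First call: assumes the model decides to call the tool.
first = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
tool_call = first.choices[0].message.tool_calls[0]
results = search_images(**json.loads(tool_call.function.arguments))

# Second call: return the tool output so the model can format it as Markdown.
messages.append(first.choices[0].message)
messages.append(
    {"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(results)}
)
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)
```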

Here’s the raw output in markdown:
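
Something along these lines — the URL is a placeholder; your retrieval supplies the real one:

```markdown
Here is the Alpine hiking boot you asked about:

![Alpine hiking boot](https://example.com/images/alpine-boot.jpg)
```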