Can Assistants generate images via DALL-E?

I’m trying out the Assistants API:

I was able to use the API to create an assistant with Code Interpreter enabled, and I asked it to create a visualization based on a CSV file that I passed. This worked, and it generated a .png with some charts.

I created another assistant that also has code interpreter, and I prompted it to “Create a cartoon about a cat and a mouse” and it returned a message saying “as an AI model, I don’t have the ability to generate images…”

However, on another call, it DID generate an image when I posed a similar question, but the style of the image was extremely simple (attached).

Should it be able to use DALL-E here? Is there any way to make it consistent?

Hello. At the moment, the Assistants API does not generate images itself: the assistant can only produce text (including image prompts) according to your instructions. OpenAI is working on integrating DALL-E into Assistants, but for now you have to wire DALL-E in manually.

The chart .png you got from your first assistant came from Code Interpreter running plotting code, and the "extremely simple" cartoon you occasionally get is most likely Code Interpreter drawing shapes programmatically the same way, not DALL-E. That is why the results are inconsistent.

If you want a cartoon about a cat and a mouse, or any other story, put those requirements in the assistant's instructions so it writes the story text first. Then take that text and call the DALL-E (Images) endpoint yourself at the appropriate point to generate the images for your cartoon.
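As a rough illustration, here is a sketch of that two-step flow with the `openai` v1.x Python SDK. The helper name `build_image_prompt`, the polling loop, and the message-parsing details are my assumptions, not an official recipe; check your SDK version's Assistants (beta) method names before relying on them.

```python
import time


def build_image_prompt(story_text: str, style: str = "cartoon") -> str:
    """Hypothetical helper: turn the assistant's story text into a DALL-E prompt."""
    return f"A {style} illustration of the following scene: {story_text.strip()}"


def story_to_image_url(assistant_id: str, request: str) -> str:
    """Ask an assistant for a story, then manually pass the text to DALL-E.

    Assumes the openai v1.x SDK and an OPENAI_API_KEY in the environment.
    """
    from openai import OpenAI  # imported here so the pure helper above has no SDK dependency

    client = OpenAI()

    # Step 1: get story text from the assistant (Assistants API, beta namespace).
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=request
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant_id
    )
    while run.status not in ("completed", "failed", "cancelled", "expired"):
        time.sleep(1)  # simple polling; production code should add a timeout
        run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

    messages = client.beta.threads.messages.list(thread_id=thread.id)
    story = messages.data[0].content[0].text.value  # newest message first

    # Step 2: call DALL-E yourself with the generated text.
    image = client.images.generate(
        model="dall-e-3",
        prompt=build_image_prompt(story),
        n=1,
        size="1024x1024",
    )
    return image.data[0].url
```

For a multi-panel cartoon you would repeat step 2 once per scene of the story, each time with a prompt built from that scene's text.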