I’ve currently got a project in which ChatGPT generates some output for me, and that output is then fed directly into the OpenAI image generation endpoint.
However, the issue I’m running into is that ChatGPT will sometimes generate text that the image generation endpoint rejects as “harmful” due to the content of the message.
As I understand it, the only way to avoid this right now is to run the text through the “moderations” endpoint first, and if it’s flagged as harmful, have ChatGPT adjust it before retrying. Ideally, though, it would be nice if image generation accepted an extra bool parameter telling OpenAI to return an image anyway, with the endpoint given liberty to strip out the harmful parts.
I’m also wondering whether I could ask ChatGPT to avoid “harmful” words in the first place, but I’m unsure it knows which words the image generator would flag. Open to other suggestions!