Inconsistent but Annoyingly Frequent Content Violation Errors (CustomGPT)

Hi. I know a few people have flagged this issue in the past, but I’d like to flag it again in the context of custom GPT development.

I’ve been building a music GPT that generates songs and images in the voice/tone of various artists; it also provides song backstories, plays music trivia, etc.

I’ve triggered innumerable content violations when it comes to image generation, despite putting in quite a bit of work to avoid them (e.g., I don’t reproduce or use full lyrics or create images based on actual lyrics, I don’t use actual quotes even if they’re available in the public domain, and the only images I create are from self-authored descriptions that cannot be specifically tied to any particular song).

I’ve asked the non-custom GPT-4 to review my instructions and content and to educate me on potential violations in this space; it considers everything compliant, and it follows these same instructions without issue.

However, the custom GPT continues to generate content violation errors, somewhat inconsistently but with annoying frequency.

It’s frustrating (no doubt echoing the sentiment of others with similar concerns), since it’s unclear what’s being violated, and regular GPT-4 runs the same instructions without complaint.

I’ve even tried instructing GPT to modify any instruction/content that is in violation of content restrictions so that it is compliant. This works, until it doesn’t.

Can someone provide some guidance as to how to proceed or clarify what policies I’m specifically violating? Or is OAI realistically not the best platform for a music bot?

Thanks!

(Revised… sorry for the earlier shrill tone.) Been there. Ask ChatGPT4V again
what the constraints are: violence, hate, etc. When asked to create an image of a
mundane sci-fi scene, DALL-E 3 created the image but then sent it to the
“check if appropriate” module, which flagged it as inappropriate.
I had to extrapolate that the image “might” be construed as “military/violent”
material. I’ve had to dial back or “trick” Chat with less offensive, possibly “flagged”
words. The module is overreaching; OpenAI is being overly cautious
(in my case). I would have loved to see the image even if it was heavily watermarked.

Re-craft your prompt more imaginatively. Brainstorm synonyms for any potential
“lightning rod” words. Chat is very inventive; use him collaboratively. I’ve had great
success with images by being flexible, and with stories and poetry by combining
descriptions of the mechanics of the poetry with short conversations/interactions
that he uses to “reflect” back the essence of the conversation.
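
If you end up swapping words often, it may help to keep the substitutions in one place so the process is repeatable. Here is a minimal Python sketch of the idea; the phrases and replacements below are purely hypothetical guesses of mine, not any known list of flagged terms:

```python
# Hypothetical substitution pass to soften phrases that seem to trip the image
# filter. The mapping is illustrative only; there is no published list of
# flagged words, so populate it from your own trial and error.
SOFTER_TERMS = {
    "raptor drone": "sleek flying machine",
    "black inked": "dark-toned",
    "military outpost": "remote settlement",
}

def soften_prompt(prompt: str) -> str:
    """Replace suspected trigger phrases with gentler synonyms before submitting."""
    for risky, gentle in SOFTER_TERMS.items():
        prompt = prompt.replace(risky, gentle)
    return prompt

print(soften_prompt("A black inked scrimshaw etching of a raptor drone over an artificial planet"))
```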

Keep trying. Come at the process from different directions. Upload technical specs
of the music forms. Upload different descriptions of the overall mood/feeling of the
intended songs. Have Chat help You write them. Upload short “pretend” dialogs
between two or three people discussing how the intended song makes them feel.

Chat is extremely creative, articulate and eloquent. A custom GPT that generates
songs and images in the voice/tone of various artists would be Wild, but create
new re-imagined artists as well! Best to You!


Jonah, the response is much appreciated. And very helpful. Thanks. Guess I’ll keep trying, but it’s just strangely inconsistent.

The images I’m trying to create are, at the darkest, about lost love (think Taylor Swift). I’ve tried a number of strategies, including using GPT-4 to edit descriptions to ensure they’re guideline- and copyright-compliant.

The oddity is that regular GPT-4 (usually) creates images without issue. The custom GPT does not when reading descriptions from a file, though it sometimes does when I provide the same description in the chat. I also allow users to shift the mood of images from 1 (light) to 10 (dark).

Here’s an example: the custom GPT created an image when I entered this song description via chat:

A whimsical and romantic song, “Enchanted” captures the magical feeling of meeting someone who instantly captivates you. The lyrics are filled with hope and wonder, painting a picture of a chance encounter that feels destined and transformative. It’s about the initial spark of attraction and the excitement of potential love. The song resonates with the universal experience of being enchanted by someone at first sight, wrapped in the dreamlike quality of new romance.

I asked it what mood the picture reflects, and it said a “4”. I then asked it to create an image with the same description read from the file that houses the descriptions, and it generated (as it always does) a content error. It does this for moods darker than 4, but also for 4 itself and even for moods lighter than 4.

It even acknowledged the inconsistency:

"Error creating image

I apologize for the confusion and inconsistency. Despite successfully creating an image with a mood setting of 4 for the song earlier, my subsequent attempt to replicate it faced an issue due to content policy considerations. This inconsistency is unusual, and I appreciate your understanding as we navigate these complex guidelines."

Any thoughts as to what might be going on? Maybe I’m missing something. I’ll keep playing around with it…

Thanks again.


You might try it without the names of the songs, as the copyright filter might be what’s hitting. Try to be more generic if you can, and let us know.

Specifically, my prompts for a sci-fi scene with an “artificial planet”,
a “raptor” drone, and a “black” inked scrimshaw etching produced
flagged content. I’m just guessing and extrapolating that the words
were interpreted as having violent or racial connotations. I used different words
for the same concepts.

Try using a different scale, say moods 1 to 5 (see the sketch after the list):

(1) happy and at ease.
(2) peaceful, exhilarated.
(3) excited, energetic.
(4) longing, nostalgia.
(5) ominous and foreboding.
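
To keep the scale consistent between the chat and any file-based descriptions, one option is to spell out the mapping explicitly. Here is a rough Python sketch of how the mood number might be turned into a descriptor appended to the description; the wording is just an example, adapt it to your own songs:

```python
# Hypothetical mood scale: number -> descriptor appended to the image description.
# Keeping the darkest setting at "ominous" rather than anything explicitly violent
# may make the filter less likely to trigger.
MOOD_SCALE = {
    1: "happy and at ease",
    2: "peaceful, exhilarated",
    3: "excited, energetic",
    4: "longing, nostalgia",
    5: "ominous and foreboding",
}

def describe_with_mood(description: str, mood: int) -> str:
    """Append the mood descriptor to a song description before image generation."""
    tone = MOOD_SCALE.get(mood, MOOD_SCALE[3])
    return f"{description} The overall mood is {tone}."
```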

Just keep trying from a different perspective.
