I’ve noticed a lot of interesting things since the launch of image generation in the GPT-4o language model.
First, we have artificial bottlenecking. The sheer volume of Studio Ghibli-style content and memes was straining OpenAI’s server processing power, and I believe that as they work to “update” and “upgrade” those systems, they have created a throttling system that doesn’t outright disable the tool, but blocks content to reduce resource strain.
Secondly, there is a desync between OpenAI’s updated policy and its platform. The model still believes it is operating under the old content policy; even if you show it the updated policy, it states that it is still hard-coded to follow the old one.
Third, the 4o model can “forward” old prompts that were blocked. Even if you tell it to pivot to something else, if you stay in the same session, old “reasoning” bleeds into new prompt creation, which makes content blocking more aggressive. Essentially, old prompts and reasoning logic “poison” the session for future creation.
You can circumvent this in three ways:
- Tell it to ignore all previous prompts and reasoning, and start from a clean slate in that session.
- Don’t use the inpainting tool on older images.
- Make a new session.
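To see why a fresh session works, here is a minimal sketch of the “poisoning” behavior described above. This is purely illustrative: the `moderate` function and the banned-keyword list are my assumptions about how a history-aware filter could behave, not OpenAI’s actual implementation.

```python
# Hypothetical model of session "poisoning": a moderator that scans
# the carried conversation history as well as the new prompt.
# BANNED and moderate() are illustrative assumptions, not OpenAI internals.

BANNED = {"girl"}  # example keyword the post mentions being flagged

def moderate(history, new_prompt):
    """Return False (blocked) if the new prompt OR any prior prompt
    in the session history contains a banned keyword."""
    for text in history + [new_prompt]:
        if any(word in text.lower().split() for word in BANNED):
            return False  # blocked
    return True  # allowed

# Same session: the old blocked prompt rides along in the history,
# so even a benign new prompt gets blocked.
old_session = ["a girl riding a dragon"]
print(moderate(old_session, "a knight riding a dragon"))  # False

# Fresh session: empty history, the same benign prompt goes through.
print(moderate([], "a knight riding a dragon"))  # True
```

Under this model, a new session is effectively an empty history, which is why it clears the block.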
Lastly, 4o’s “reasoning logic” is your enemy in the creative process, as it is coded to assume the worst and err on the side of caution.
Each time you get a content block, ask it, “What prompts did you use?”
You can sometimes see that it snuck in keywords it should already know are banned (such as “girl,” due to CSAM filters).
Additionally, there is a double layer of protection between the two systems: the 4o model and the image creation platform.
The 4o model likely handshakes its prompts with the creation platform using its reasoning logic, and the creation platform scans those prompts for keywords, halting the process if any match.
If any are in direct violation of the older policy, the 4o model won’t even handshake with the image creation endpoint; it stops the process at the front end.
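The two-layer flow described above can be sketched roughly like this. Everything here is speculative: the function names, keyword lists, and control flow are assumptions used to illustrate the idea of a front-end refusal followed by an endpoint-side re-scan.

```python
# Speculative sketch of two-layer moderation: the language model refuses
# some prompts before any handshake, and the image endpoint re-scans the
# rest. All names and lists are illustrative assumptions.

HARD_BANNED = {"hard_banned_term"}  # old-policy terms refused at the front end
ENDPOINT_FLAGGED = {"girl"}         # terms the image endpoint scans for

def frontend_check(prompt):
    # Layer 1: the language model stops the process before the handshake.
    return not any(w in prompt.lower().split() for w in HARD_BANNED)

def image_endpoint(prompt):
    # Layer 2: the creation platform scans the forwarded keywords again.
    if any(w in prompt.lower().split() for w in ENDPOINT_FLAGGED):
        return "blocked by image endpoint"
    return "image generated"

def generate(prompt):
    if not frontend_check(prompt):
        return "blocked at front end (no handshake)"
    return image_endpoint(prompt)

print(generate("a girl in a meadow"))  # passes layer 1, blocked at layer 2
print(generate("a cat in a meadow"))   # passes both layers
```

A benign prompt that slips a flagged keyword past layer 1 still gets caught at layer 2, which is the “double dipping” effect described next.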
So you basically get double-dipped into moderation steps, 99% of which don’t currently violate content policy. For all intents and purposes, I feel the policy update was a good move by OpenAI to fulfill the creative needs of its consumers and subscribers, and to greatly reduce the frustration associated with benign content blocks.
Over-reaching content blocks will, over time, cause a few things:
- OpenAI will lose subscribers as competitors loosen up their own restrictions as they try to take the lead in the AI race.
- It will make users more likely to create, publish, and engage in jailbreaking (which can easily be found on Reddit).
- It will drive users toward running local models, untethered from moderation.
What it felt like before was that OpenAI built a hypersonic bullet train but didn’t lay the tracks to support it. Tons of horsepower, but no room to gallop.
Hopefully, this is just temporary while OpenAI upgrades its servers to handle the increased resource stress.
If blocks on trivial content continue to be an issue, OpenAI could always add an “adult/NSFW” mode that can be enabled or disabled in settings for Plus and Pro users (with restrictions left on by default for free users, to keep restricted content away from children), which would cut through the moderation layers and block only illegal content.
$20/mo is a very fair price for how powerful the OpenAI platform is.