To clarify, I’m literally providing non-sensitive code files and having them rejected.
If I include ~400 lines of code or more, I get this error. If I trim that same prompt and retry, it will almost certainly still fail, even if I cut it down to ~2 lines.
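For anyone who wants to reproduce this, here’s a rough sketch of the kind of request that gets rejected for me, assuming the openai Python SDK. The model name, file path, and prompt wording are just placeholders, and the exact exception/message may differ from what others see.

```python
# Rough repro sketch (openai Python SDK >= 1.0); model name and file path are placeholders.
from openai import OpenAI, BadRequestError

client = OpenAI()

with open("my_module.py") as f:  # ~400 lines of ordinary, non-sensitive code
    source = f.read()

try:
    resp = client.chat.completions.create(
        model="o1-mini",  # placeholder; I see the same behavior on other o1-series models
        messages=[{"role": "user", "content": f"Review this file:\n\n{source}"}],
    )
    print(resp.choices[0].message.content)
except BadRequestError as e:
    # In my case the request comes back as a 400-level rejection of the prompt;
    # trimming the file and retrying usually fails the same way.
    print("Rejected:", e)
```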
I posted this elsewhere, but this is going to be an ongoing battle for OpenAI. The core issue is that no matter how well trained these safety models are, they will always be perceived as being too aggressive.
Think about it: if they let a prompt through that they shouldn’t have, you can’t detect that and nobody is going to report it, so you can’t easily improve their safety. But if they block a prompt they shouldn’t have, everyone sees that and will report it. What you end up with is a system where 99% of the reports are about things you blocked that you shouldn’t have, and less than 1% are about things that would actually improve model safety.
I just don’t see how this system ever works as intended.
I have a pretty simple prompt that checks the internal consistency of another very complex prompt, which I of course need to pass to the model as well.
I believe this very complex prompt is what triggers the error message, possibly because it goes against the guidance to “Limit additional context in retrieval-augmented generation (RAG): When providing additional context or documents, include only the most relevant information to prevent the model from overcomplicating its response.”
Since I want to check its internal consistency, I need to pass the very complex prompt as is.
Do you see any other solution besides moving to model R1?
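For context, this is roughly the shape of what I’m doing, assuming the openai Python SDK; the checker instructions, tag names, and model name are placeholders, not my actual prompt.

```python
# Rough sketch of the consistency check; checker wording and model name are placeholders.
from openai import OpenAI

client = OpenAI()

CHECKER_PROMPT = (
    "You will be given another prompt between <prompt> tags. "
    "List any internal contradictions or inconsistent instructions it contains."
)

def check_consistency(complex_prompt: str, model: str = "o1-mini") -> str:
    # The complex prompt has to go in verbatim; trimming it would defeat
    # the purpose of the consistency check.
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": f"{CHECKER_PROMPT}\n\n<prompt>\n{complex_prompt}\n</prompt>",
            },
        ],
    )
    return resp.choices[0].message.content
```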
I’m running into this problem with o1-mini, too. My impression is that one important factor might be the presence of an “assistant” message in the conversation. I can share a playground preset, if that would be helpful.
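In case it helps anyone test that hunch, here’s a rough sketch of the workaround I’ve been experimenting with. It’s purely a guess, and the function, message contents, and model name are placeholders: instead of replaying the history with “assistant” turns, I fold earlier assistant output into the new user message.

```python
# Experimental workaround sketch (pure guess): avoid "assistant" messages by
# folding earlier assistant output into the user message.
from openai import OpenAI

client = OpenAI()

def ask_without_assistant_turns(history: list[dict], new_question: str,
                                model: str = "o1-mini") -> str:
    # history is the usual [{"role": ..., "content": ...}, ...] transcript.
    prior_answers = "\n\n".join(
        m["content"] for m in history if m["role"] == "assistant"
    )
    merged = (
        f"Earlier answers, for context:\n{prior_answers}\n\n"
        f"New question:\n{new_question}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": merged}],
    )
    return resp.choices[0].message.content
```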