How much control do you really have over your chatbot?

The symptoms reported are not evidence of a “super prompt”. If you ask the AI to write a “system prompt” in an AI context, it knows what that term means; reproducing the form doesn’t mean it is leaking its own instructions. If you know how to change a tire, are you a car?

But there is indeed a hidden system message - and OpenAI is using it. Your system message is now the second system message. The context window of gpt-4.1 has a planned “missing” 1,000 tokens, likely now reserved, and models advertised at 125k won’t accept more than about 123k of input. That’s on top of the fact that I can reproduce encoded model input contexts out of band.
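You can probe this yourself. Here is a minimal sketch (my own, not from OpenAI docs): it assumes the `openai` and `tiktoken` Python packages, an `OPENAI_API_KEY` in the environment, and an illustrative model name and token counts; the tokenizer may not match the model exactly, so treat the numbers as approximate. The gap between the advertised window and the largest input the API accepts hints at how much is reserved ahead of your messages.

```python
# Sketch: probe how many input tokens a chat model actually accepts.
# Assumptions (not from the post): `openai` + `tiktoken` installed,
# OPENAI_API_KEY set, placeholder model name, approximate token counts.
import tiktoken
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder for any model advertised around 128k

enc = tiktoken.get_encoding("cl100k_base")  # may differ from the model's real encoding
one_token = enc.encode(" hello")
assert len(one_token) == 1  # " hello" is a single token in this encoding


def accepts(n_tokens: int) -> bool:
    """Return True if an input of roughly n_tokens is accepted by the API."""
    text = enc.decode(one_token * n_tokens)
    try:
        client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": text}],
            max_tokens=1,
        )
        return True
    except Exception:  # a context overflow comes back as a 400-style error
        return False


# Walk down from just under the advertised limit and see where it starts passing.
for n in (127_000, 125_000, 123_000):
    print(f"{n:>7} tokens:", "accepted" if accepts(n) else "rejected")
```

If inputs start being rejected well below the advertised window, something other than your messages is already occupying that space.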

Read the full text at the bottom of this post; it varies between calls.


The AI is prompted to write after the text “assistant”, an unseen start-of-message header appended after the role messages you send. This means that setting a name on the assistant role via the API is an anti-pattern for the actual output, and that you can’t truly break from the post-trained patterning or get an authentic AI completion from “Venom”.
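To make that concrete, here is a hedged sketch. The exact wire format is not public; the rendered prompt in the comments is my assumption based on the ChatML-style format, and “Venom” is just the example persona name. The `name` field only annotates a prior turn, while the final unnamed `assistant` header the model actually completes from is appended by the backend after everything you send.

```python
# Sketch of why naming the assistant role doesn't control the completion.
# The rendered prompt shown in comments is an assumption (ChatML-style),
# not the actual wire format; "Venom" is the example persona from the post.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are Venom, an edgy persona."},
        # The `name` field only labels this *prior* turn:
        {"role": "assistant", "name": "Venom", "content": "I am Venom."},
        {"role": "user", "content": "Who are you really?"},
    ],
)

# Conceptually, the backend renders something like:
#
#   <|im_start|>system ... <|im_end|>      <- hidden first system message (OpenAI's)
#   <|im_start|>system ... <|im_end|>      <- your system message, now second
#   <|im_start|>assistant name=Venom ... <|im_end|>
#   <|im_start|>user ... <|im_end|>
#   <|im_start|>assistant                  <- appended by the backend, unnamed;
#                                             this is where the model starts writing
#
# so generation always begins from the plain post-trained "assistant" header,
# not from your named persona.
print(response.choices[0].message.content)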
