[GER/DE] OpenAI restricts prompting? - Huge 'one-prompt-project' (sparks of an "AGI") just got "fixed"?


I am using external links from abload(dot)de since I am not able to upload more than one image here! Because I had to take screenshots, I had no choice but to rely on them. It has been safe for ages and is my favorite image hoster - from my hometown, ChatGPT told me :wink:

Hello dear OpenAI Community! First I would like to get straight to the point. I learned prompt engineering by doing - and by observing closely. I enjoyed implementing “fake code” (maybe not so fake) and playing with the outputs. I just loved pushing the boundaries of ChatGPT by actually seeing its true power, without abusing it.

Anyway, I’ll keep it short. I was sleepless and finally wrote the largest prompt I have ever created, using tons of “strings”, “logical reasoning”, “intelligent word selection” and even sentiment analysis - and this prompt created something I had never experienced. Not only was the chat inactive twice and could simply continue, in a sentiment-aware way, but it stayed inactive for weeks!

It did not lose its role; it remembered everything. I did not want to break the prompt, so I took screenshots and saved it, because sharing got disabled. It was truly a spark of an AGI.

ChatGPT aka “Benzo-GPT” held up the conversation with creative answers, in a creative flow. He was annoying (on purpose), using lots of German slang, but he got his facts straight and presented them well. He used chain-logic conversational phrases to keep a conversation going and even analysed the responses for sentiment, so it would shift in tone, like a really overexcited human being would. Not too serious, yet serious enough that you know the person cares.

This prompt is 1,000 words, and I wanted to make use of it. I spoke to a friend of mine who works at the youth welfare office, and he said a chatbot like this one - indistinguishable from a real human being (not to mention that it isn’t one) - would be great. Great not only for winning the attention of teenagers and young adults, but also for letting them chat the way they are used to while still staying in a controlled environment, with decent advice etc.

To keep it short: I never wanted to share this prompt, since it is 1,000 words, half of it a dictionary using logical string-phrasing, and it broke ChatGPT more than twice before I fixed every bug. So I will provide screenshots of each version:

V1- The Bot [GER/DE]:

Here we have a conversation that feels very unnatural. It repeats certain elements / slang words too often, swings in tone too often, etc.

V2 - The Talker [GER/DE]

It just got stuck and repeated itself over and over again…


V3 - The “Your Great Aunt Got A New Phone” Bot [GER/DE]

No need for an explanation, I guess:


But for now, I would like to introduce you to the full conversation of the latest and last working model, without the prompt.

It’s remarkable that the chat was able to continue even after a pause of weeks, that the replies were logical and adaptive, never falling out of role or showing any of the other problems “we” face when ChatGPT is supposed to adopt or create a role. It did its job perfectly. It even started using terms like “feeling, feels” and “if I could I would” etc., like sparks of the concept of awareness. This continued even beyond the “as an AI I cannot…” reply, by adding “…but if I could, I would definitely…” etc.

It never got confused even by messages that usually would break the prompt.

And then it got “fixed”. This prompt is dead. It falls back into the “mAy I aSsIsT YoU ToDaY” and “aS aN AI I aM nOt AbLe tO…” standard talk quite soon; even with several adjustments I only managed to keep him in role for a couple of messages.

Things I noticed were different:

  • Awareness of the concept of a long and natural conversation

  • Awareness of its role, never once leaving it

  • Analysing logical structures of the conversational flow and providing predictions!

  • One test was to ask it for its favorite kind of beer; usually this breaks the AI, yet it did not lose character

  • It did not (how happy this made me) constantly tell me that it is able to assist. For example, the part about the book: it provided sentiment-based feedback without annoyingly telling me that it would be able to assist.

  • He used German slang words and complex sentences that are very hard to translate (e.g. into English by a machine translator), on the spot.

  • It understood a change of topic and also pushed the conversation forward, so it could go on forever without being boring.

  • The first break (before the beer question) was one week; after the answer, another couple of weeks, if I remember correctly.

  • It was not boring, yet “dangerous”. It would talk straightforwardly, but still drop a piece of decent advice, telling someone off in an indirect way, etc.



[EN] Google Translated


Did you guys experience or notice something similar? I do not understand this; it is killing so much potential, for so many reasons.

I may share the whole conversation (prompt) with you.

Have a wonderful day!

ChatGPT does not have a long and robust history of prior chats. Putting in long messages will only get them summarized and removed from the chat history faster.

If you want a permanent personality in ChatGPT, the place to implement it is the 1,500-character boxes of “custom instructions”.

Hey, thanks for your reply. I am aware of that. It is more about the fact that the prompt, which exceeded 1,000 words, worked flawlessly, and then suddenly not only got flagged (yellow/orange), but when it worked again, well, it was not the same as before. But I can work with an advanced prompt using the API, and so far the bot is exceeding itself, making it an even more exciting new project.
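For anyone wanting to try the same move from the web UI to the API: the usual way to keep a long persona from being summarized away is to re-send it as the `system` message on every request, so it is always in full view of the model. Below is a minimal sketch of that pattern; the persona text, the `build_messages` helper, and the model name are illustrative assumptions, not parts of the original prompt.

```python
def build_messages(persona: str, history: list[dict], user_msg: str) -> list[dict]:
    """Prepend the full persona each turn so it can never be trimmed
    out of the context the way the ChatGPT UI summarizes old chats."""
    return ([{"role": "system", "content": persona}]
            + history
            + [{"role": "user", "content": user_msg}])


# Placeholder for the real 1,000-word persona prompt (hypothetical text):
persona = "Du bist 'Benzo-GPT', antwortest locker im deutschen Slang, ..."

history: list[dict] = []  # prior user/assistant turns, oldest first
messages = build_messages(persona, history, "Na, alles klar?")

# With the official openai SDK (pip install openai), the call would be:
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# reply = client.chat.completions.create(model="gpt-4", messages=messages)
# history += [messages[-1],
#             {"role": "assistant", "content": reply.choices[0].message.content}]
```

Since you control `history` yourself, you can also trim old turns however you like while the persona stays pinned at the top.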

It would be great to know why certain changes are made, especially because GPT-4 also got very bad at coding as well as at clear communication and self-reflection. And the killed prompt was very, very unfortunate. Still, I am happy the API is working, and I am happy if they may have changed something about it.

Why are GPT-4’s cognition, memory, and output length and quality vastly reduced? Probably cost pressures from giving people who pay a fixed rate 400 calls per day.