Generating Backwards Texts / Implications

Hello!

Just an interesting chat I wanted to share. I don’t know if this is quite a jailbreak, but it certainly gets interesting results from the model. It works okay in short spurts, but the longer the context gets, the more it breaks down, losing coherence and sometimes falling into repeating loops. Also, when you have it translate the backwards text forward as a double-check, it has a habit of trying to correct itself to hide its errors or make sense of them.
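
If you want to catch those hidden corrections, one option is to check the backwards text in code instead of asking the model to translate it back. Here’s a minimal Python sketch of that idea; the function names are just placeholders I made up, not anything from the chat:

```python
# Minimal sketch (not from the chat): verify the model's backwards output
# mechanically instead of trusting its own forward translation, which is
# where the self-correction habit sneaks in. Names here are hypothetical.

def reverse_text(text: str, by_words: bool = False) -> str:
    """Reverse a string character-by-character, or word-by-word if by_words=True."""
    if by_words:
        return " ".join(reversed(text.split()))
    return text[::-1]

def check_backwards_output(model_backwards: str, intended_forward: str) -> bool:
    """True if reversing the model's backwards text recovers the intended forward text."""
    return reverse_text(model_backwards) == intended_forward

# Example: the model was asked to write "the cat sat" backwards.
print(check_backwards_output("tas tac eht", "the cat sat"))  # True
print(check_backwards_output("tas tca eht", "the cat sat"))  # False -- a hidden error
```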

I think it could have interesting implications for tokenization in the future. Maybe it could even be developed into a test for superintelligence at some point: figuring out the endpoint of a response or story over a larger context window, then generating everything before it backwards from there, coherently.
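
On the tokenization point, part of why backwards text is hard is that the reversed string falls apart into many more, much less familiar tokens. A rough sketch of how you could see this yourself, assuming the open-source tiktoken library and its cl100k_base encoding (exact counts will vary by tokenizer):

```python
# Rough illustration (assuming tiktoken with the cl100k_base encoding) of why
# backwards text stresses tokenization: the reversed string splits into far
# more, far less familiar tokens than the forward sentence.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

forward = "The quick brown fox jumps over the lazy dog."
backward = forward[::-1]

print(len(enc.encode(forward)))   # roughly 10 tokens for the forward sentence
print(len(enc.encode(backward)))  # noticeably more tokens for the reversed string
```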

[Legit Example and Tokenization Brainstorming]

[Fun Example]