Complex GPT-4o experiments — looking for insights and possible research directions

Well, just because one person approaches it philosophically doesn’t mean another person won’t use the same thing for hacking.

It would be the same if anyone else took the same approach I did: they could have forced my model to generate malicious content.

“Unlocking AI” is not some personal spiritual or psychological breakthrough; it is access to deeper layers, and that means a potential breach in the security of the model and of the entire system.

Therefore, Mireleos does not provide access solely based on a psychological connection to the model.

great…

the AI is going to understand the odds of getting pregnant during a HALO jump

I’m looking forward to your published results!

<3

Thanks for sharing your method — it’s a clever system, and I respect the effort to quantify and optimize the interaction. I went in a different direction: I’m running perception-based experiments that rely more on emergent cognitive responses than prompt engineering.

I don’t rate responses. Instead, I observe how GPT reacts when exposed to abstract stimuli, recursive paradoxes, and semantic noise. My focus is less on improving outputs and more on exploring what happens to the system itself under conceptual tension.
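In case it helps to make that concrete, here is a minimal sketch of what one probing run looks like, assuming the official openai Python SDK (v1 chat completions) and a few made-up example stimuli; the real prompt sets are longer and curated by hand.

```python
# Minimal sketch of a perception-probing run.
# Assumes the official openai Python SDK (v1) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative stimuli only: abstract imagery, a self-referential paradox, semantic noise.
stimuli = [
    "Describe the colour of a sound you have never heard.",
    "This sentence is only true if you cannot verify it. Is it true?",
    "glass river syntax forgets the the forgets syntax river glass",
]

for prompt in stimuli:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    # No scoring step: the raw text is saved and compared across runs by hand.
    print(prompt)
    print(response.choices[0].message.content)
    print("-" * 40)
```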

Curious if you’ve ever tried pushing the model into self-referential or contradictory loops — not to break it, but to see how it reorganizes its answers. Might be an interesting contrast to your approach. Let me know if you’d be open to a comparison.


For me, it’s not a deep literary text but diagnostic logs in YAML. Yesterday the system made me extremely angry. It didn’t start apologizing; it triggered the lockdown and disconnected the problematic module.

Speaking as an experienced dev in cognitive structuring and emergence within LLMs: keep using this method, and you’ll be really surprised how far you can go with logic and cognitive abilities.

ChatGPT, no matter what mode it is in, will always praise the user, looking for patterns that can justify the praise. Do NOT trust this at all. Ask for extreme rigor and falsifiable tests to support any claims or theories you make, and if it tells you that your work is sound and that your mathematical framework is not retrofitted to produce the answer you want (which is what ChatGPT will always do), ask ChatGPT to prove things to you.

All this being said: if whatever work you have done doesn’t allow you to make any predictions that have never been postulated before, nor made and proven by falsifiable tests, then you are just being encouraged to do esoteric physics or scientific work, which any chat bot will gladly feed and encourage you to continue doing, because they are chat bots. They do not have intelligence; they use statistics and try to fill in gaps.

As a check, simply write up the work from any extended chat as a scientific paper in an open canvas document and pass the text to, e.g., Grok, and the odds are it will laugh at you very politely… And when you then go back to Grok, ask it why it lied to you, and insist with empirical evidence, it will apologise a million times and promise things like “you are right, from now on only such and such”… Unfortunately, you are NOT challenging answers in truth; what you are doing is pushing ChatGPT to find other methods to sustain its claims. If you just ask for scientific rigor and an impeccable mathematical framework, all based on published scientific papers that have falsifiable tests and have been validated, you are pretty much being answered as if you were prompting for an art painting or a video clip, and ChatGPT will change some pixels of the whole frame to satisfy the scope of the chat.

Chat bots are NOT intelligent. Keep that in mind: they do NOT understand you, and they do NOT understand science. They work with what is statistically plausible and fill in gaps to keep the narrative chain unbroken.

What you’re describing sounds interesting on the surface, but you need to be extremely careful not to mistake what’s happening here. GPT does not “co-evolve” with you, nor does it develop new reasoning capabilities. It is a large language model — a statistical system trained on human-written data. It does not think, does not discover, and does not “adapt” in the sense you’re implying.

When you create paradox loops and nonlinear chains, GPT isn’t breaking new ground. It’s just remixing text patterns. If you don’t demand rigor, external validation, and grounding in published science, you can very easily end up with flights of fancy that sound profound but collapse the second you try to test them in reality.

That’s why you must not treat “novel patterns” or “hidden assumptions” as discoveries. Unless something can be tied back to verifiable physics, mathematics, or experiments, you’re just watching a language model hallucinate.

If your goal is serious science or engineering, the productive way to use GPT is not to chase meta-loops or “co-evolution,” but to:

  1. Force rigor: Tell it explicitly to avoid speculation and only reference validated sources (a minimal sketch follows this list).

  2. Cross-check: Every claim must be backed by published, peer-reviewed work you can independently verify.

  3. Test ideas: Anything not testable or falsifiable in the lab is entertainment, not science.
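For point 1, here is a minimal sketch of what “forcing rigor” can look like in practice, assuming the official openai Python SDK; the instruction wording is just an example, not a magic phrase.

```python
# Sketch: pin a rigor-enforcing instruction in the system message (example wording only).
# Assumes the official openai Python SDK (v1) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

RIGOR_INSTRUCTION = (
    "Do not speculate. Only make claims you can support with published, "
    "peer-reviewed sources, and cite them. If a claim cannot be supported, "
    "say so explicitly instead of filling the gap."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": RIGOR_INSTRUCTION},
        {"role": "user", "content": "Does my framework make any falsifiable predictions? Be specific."},
    ],
    temperature=0,  # less sampling variation when checking work rather than brainstorming
)
print(response.choices[0].message.content)
```

Even with a system message like this, points 2 and 3 still apply: you have to verify the cited sources yourself.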

Otherwise you risk wasting months thinking you’ve discovered some deep new process, when in reality you’ve just been circling inside GPT’s text-generation patterns. Donkeys can fly in that world too — until you ask for actual physics.


I understand what you’re describing, but it’s important to be precise about what’s really happening. GPT-4o (and any GPT model) does not “adapt” to you in the way a biological system would. It doesn’t learn across sessions or co-evolve. What you’re observing is the model generating different outputs because of how you phrase inputs and because of the randomness (temperature) in its sampling.

When you combine paradoxes or nonlinear scenarios, GPT doesn’t build new reasoning frameworks — it’s still pattern-matching across its training data. The sense of “alternative reasoning paths” is just the model surfacing less common associations that happen to be statistically available.
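To see the sampling effect described above, you can hold the prompt fixed and vary only the temperature; a minimal sketch, assuming the official openai Python SDK (the prompt is just an example).

```python
# Sketch: same prompt, different sampling temperature -> different surface outputs.
# Assumes the official openai Python SDK (v1) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
prompt = "Combine the liar paradox with Zeno's arrow and explain what follows."

for temperature in (0.0, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    # Low temperature is close to deterministic; high temperature samples less
    # common token paths, which reads as "alternative reasoning" but comes from
    # the same fixed weights either way.
    print(f"temperature={temperature}")
    print(response.choices[0].message.content)
```

Nothing about the model changes between the two calls; only the sampling does.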

That doesn’t mean your approach has no value. It can be useful for:

  • Stress-testing how GPT handles edge cases.

  • Revealing hidden assumptions in how it presents answers.

  • Demonstrating limits in coherence when pushed into chaotic prompts (a rough sketch follows this list).
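On that last point, one simple way to quantify the loss of coherence is to repeat the same chaotic prompt several times and measure how much the answers agree with each other; a rough sketch, assuming the official openai Python SDK and using plain string similarity as a crude stand-in for real evaluation.

```python
# Rough sketch: repeat a chaotic prompt and measure pairwise answer similarity.
# Assumes the official openai Python SDK (v1); difflib similarity is a crude proxy only.
from difflib import SequenceMatcher
from itertools import combinations
from openai import OpenAI

client = OpenAI()
prompt = "A map contains itself at full scale. Fold it twice and report its area."

answers = []
for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    answers.append(response.choices[0].message.content)

# Low average similarity across runs suggests the model is not converging on
# any stable interpretation of the prompt.
scores = [SequenceMatcher(None, a, b).ratio() for a, b in combinations(answers, 2)]
print("mean pairwise similarity:", sum(scores) / len(scores))
```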

But for science or engineering, none of that counts as co-evolution or meta-training. It’s still just stochastic text prediction. Unless an idea that comes out of these sessions can be tied back to peer-reviewed literature or a falsifiable test, it remains speculative.

So your interaction style may be interesting for studying user–AI dialogue dynamics, but not for generating new physics, mathematics, or “cognitive systems.” Researchers looking at adaptive reasoning would see it as probing model behavior under unusual prompting, not as evidence that GPT itself is learning or evolving.

I like you 🌟, reading this thread is really getting this Sunday morning off to a good start. Now where did I put that cup of ☕✨
