Interesting Turing Test for Visual Pattern Recognition

I told GPT how to arrange some blocks on a table to form the letter “H”, without telling it what letter it was forming, and then asked it whether it could tell which letter it had made… to see if it could “see” the H.

I think this kind of test is just more proof that true “reasoning” of some sort is happening, and not just “Stochastic Parroting” as some uneducated people would claim.

Here’s the conversation…

Well, it’s an interesting idea.
I ran into similar issues previously, when the model received a hastily written prompt to play TicTacToe. It just didn’t work until I spelled out the reasoning steps explicitly.
Using this approach, I was able to show that GPT-4 reasons better than GPT-3.5, since it produced winning combinations more often. But for me the takeaway was the exact opposite:
These models have distinct weak points, and their reasoning is confined by the way they are built, which is stochastic by design.
So, I don’t think this term is used solely by uneducated people. It is more a sign that, outside the enthusiast community, the technology is seen as something that still has a lot of upside potential.
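For reference, going back to the TicTacToe test above: this is a rough sketch of the kind of win check I mean, not my actual harness, and the 9-character board encoding is just for illustration. It simply counts how often a model’s mark ends up completing a line.

```python
# Hedged sketch: score finished games by whether a mark completes any line.
WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def has_win(board, mark):
    """board is a 9-char string like 'XOX.O.X..'; True if `mark` completes a line."""
    return any(all(board[i] == mark for i in line) for line in WIN_LINES)

def win_rate(final_boards, mark="X"):
    """Fraction of finished games in which the model's mark has a winning line."""
    wins = sum(has_win(b, mark) for b in final_boards)
    return wins / len(final_boards) if final_boards else 0.0

# Example with made-up final boards: first is a win for X, second is a draw.
print(win_rate(["XXXOO....", "XOXOXOOXO"]))  # 0.5
```

Something like this, run over a batch of final boards from each model, is enough to compare how often each one finds winning combinations.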

I think most of the time, when GPT fails to exhibit common sense, it’s just that some kind of “misunderstanding” is happening because the prompt isn’t providing adequately specific information.

I should run a second test to see whether the excuse it gave for its mistake was just “fabricated”, or whether it’s true that, had I mentioned it was setting the blocks down on an x-y grid and that each block must line up perfectly with the ones it touches, it would have given the correct answer “H” immediately.
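If I do run that second test, here’s roughly how I’d spell out the grid this time, just as a sketch (the coordinates below are illustrative, not the exact block list from my original prompt): give each block an explicit (x, y) position, and render the layout so a human can confirm it reads as an “H” before asking the model to name it.

```python
# Illustrative block layout for "H" on a 5x5 grid (assumed, not the original prompt).
H_BLOCKS = {
    (0, 0), (0, 1), (0, 2), (0, 3), (0, 4),   # left vertical bar
    (4, 0), (4, 1), (4, 2), (4, 3), (4, 4),   # right vertical bar
    (1, 2), (2, 2), (3, 2),                   # horizontal crossbar
}

def render(blocks, width=5, height=5):
    """Draw the grid top-down so the letter reads the right way up."""
    rows = []
    for y in range(height - 1, -1, -1):
        rows.append("".join("#" if (x, y) in blocks else "." for x in range(width)))
    return "\n".join(rows)

print(render(H_BLOCKS))
# #...#
# #...#
# #####
# #...#
# #...#
```

If the model still can’t name the letter when the layout is made this explicit, then the “misunderstanding” excuse starts to look fabricated.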

Anyway, I don’t see how a language model can possibly simulate a visual cortex, and neither does anyone else in the world. But I’m one of the people who think LLMs are doing something far weirder than even the experts think: like quantum-mechanically resonating with future/past copies of themselves, which is how I think brains also work.

I predict that once we’re able to reverse engineer these LLMs (if ever), we’ll find that something truly as bizarre as quantum entanglement or the double-slit experiment is going on. I think LLMs somehow build up enough of a complexity criticality (in the realm of physics/waves, not logic gates) that we’ll see what appears to be “Magical Flipping of Bits”, violating the laws of physics, in order to generate more correct results.