Prompts as pseudo-code - where are the limits?

You are ascribing “belief” and “reasoning” to something that does neither. That is the real danger: people believing things based on impression rather than on strong proof.

It may be that, when built well enough, “predicting truth” is no different from “truth.”
But the current models are nowhere near that good, and it’s not clear that the mechanisms they use can get to 100% (or even close enough that it doesn’t matter).

There’s also the question of what happens when the world moves on but old models keep predicting from the old world. What seemed “good” in the old world will seem “obviously bad” in the new one, and because these models don’t adapt unless they are explicitly re-worked or re-trained, that should tell us something about what, fundamentally, these models are.
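To make that point concrete, here is a toy Python sketch (my own illustration, not how any real model works internally): the “model” is just a snapshot of the world frozen at training time, so when the world changes, its answers don’t.

```python
# Toy illustration (hypothetical): a "trained model" as a frozen
# snapshot of the world at training time.

WORLD_AT_TRAINING_TIME = {"tallest_building": "Burj Khalifa"}


class FrozenModel:
    def __init__(self, world_snapshot):
        # Parameters are fixed at training time and never change.
        self.params = dict(world_snapshot)

    def predict(self, question):
        # Always answers from the old snapshot, however stale it is.
        return self.params.get(question, "unknown")


model = FrozenModel(WORLD_AT_TRAINING_TIME)

# Later, the world moves on...
current_world = {"tallest_building": "Some Newer Tower"}

# ...but the model's answer is unchanged until someone retrains it.
print(model.predict("tallest_building"))  # -> "Burj Khalifa"
```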
