I will answer you in a way that, I don’t know, maybe you just won’t care, maybe this answer will be deleted before you can even see it, because I’ll get into things that this place doesn’t really like too much, ok? I don’t know, I just can’t read what you wrote and walk away without doing what I’m about to do, which is go deeper. But please understand, I’m not competing with you, I don’t care about competition at all. I’m doing this for one single reason. Every single person I’ve met in the past month and a half—since I came out of my cave and started to understand a bit more about how people think—every one of them has concepts similar to yours, in some way. But please, don’t get me wrong. You need to go a bit deeper, you need to push your rationalization further, ok? And I’ll go a bit deeper with you, so you’ll understand what I’m saying, ok?
Your question is: is human achievement important to you or to others? Learning is about trial and error—aren’t we going to stop learning? Do you think you’re actually learning when you prepare a prompt? What exactly are you learning?
Let me go somewhere else here. The concept of AI “creating.” What do you think you’re creating when you write a prompt? What are you actually doing when you write one? What happens? Have you ever asked yourself these questions? What happens right after you type your prompt and press enter? Do you think that process is “creation” or whatever? And then what? After the creation of the prompt, what happens?
This is absurdly important to understand at some level before questioning what you’re questioning. You are not creating anything. You are always giving instructions—always. There is no “creation” in writing a prompt. You’re only giving instructions that will either be executed or ignored if the model decides not to.
For some reason, people don’t seem to grasp this. Models are about instruction and execution, period. Why do you think that corresponds to creation? It’s just an illusion. You’re creating instructions, creatively, sure, but writing them isn’t learning in itself. At most, what you learn is how to craft the best possible instruction for what you want.
When you creatively give instructions, you’re usually far from being right about what you actually want. I know you’re probably reading this and thinking I’m an idiot, but I’ll explain what happens after the instruction, and maybe you’ll understand there’s no “human achievement” in prompts or execution. Human achievement comes after you get the output—what you do with the information. How you use it to create a new product, or build something meaningful.
That’s achievement. The prompt itself is just instruction and execution.
You wrote about five prompt types, no, sorry, four. And there are more types than those: twelve categories, by my count. “System prompt” and “user prompt” are well known and accepted. The other ten aren’t official, because OpenAI doesn’t formalize any of this. Doesn’t matter. They’re all just instructions and execution, and no one fully understands what’s really happening inside.
Maybe you’ll get a glimpse of it. Or maybe not. Honestly, you’ll probably just be mad anyway.
For some reason, people don’t ask a very basic question before even starting this whole prompt thing: what the hell actually happens for the AI to do what it does? This isn’t a subject people talk about, which is insane.
Let me explain in a very superficial but understandable way. What happens is a probabilistic vector universe. Do you know what a vector is? A vector is basically an ordered list of numbers. For example: 10, 20, 30, 40 is a 4-dimensional vector. ChatGPT works with vectors on the order of 10,000 dimensions. The GPT-3 paper reported 12,288-dimensional internal vectors; OpenAI hasn’t published the sizes for later models, but the order of magnitude is about right.
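To make the vector idea concrete, here’s a minimal sketch in Python (the values are invented for illustration; real model vectors are learned, not hand-picked):

```python
# A vector is just an ordered list of numbers; its "dimension"
# is simply how many numbers it holds.
v = [10, 20, 30, 40]

print(len(v))  # 4 -> a 4-dimensional vector
```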
So imagine a universe full of these 10,000-dimensional vectors. After you send your prompt, that’s where ChatGPT operates. What happens there is roughly this: if king is a vector like 10, 20, 30, 40, then queen will sit close to it, something like 11, 21, 31, 41. Meanwhile, rock would be way off, like 100, 110, 120, 140. That’s how the space is organized: vectors that are close together represent concepts that are semantically related.
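The closeness claim can be sketched with those same toy vectors, measuring straight-line (Euclidean) distance. The numbers are invented for illustration; real embeddings are learned by the model:

```python
import math

# Toy 4-dimensional vectors (invented values, matching the
# king/queen/rock example above).
king  = [10.0, 20.0, 30.0, 40.0]
queen = [11.0, 21.0, 31.0, 41.0]
rock  = [100.0, 110.0, 120.0, 140.0]

def euclidean(a, b):
    """Straight-line distance between two vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(euclidean(king, queen))  # 2.0   -> close: related concepts
print(euclidean(king, rock))   # ~185  -> far: unrelated concepts
```

Closeness here is literal geometry: related words end up a short distance apart, unrelated words a long one.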
So when you ask, “Who is the wife of the king?” there’s a huge probability the answer will be queen, because queen appears near king in millions of training contexts. Roughly speaking, the model ranks every possible next word by how probable it is in that region of the space and emits the most likely ones, one piece at a time, turning the result into words for you. That’s all it does: execution of an instruction.
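The “find the closest vector” intuition can be sketched as a crude nearest-neighbor lookup over a tiny hand-built vocabulary. This is only the geometric intuition, not how generation actually works; real models compute probabilities over a vocabulary of tens of thousands of tokens:

```python
import math

# Tiny hand-built "vocabulary" of toy vectors (invented values).
vocab = {
    "queen":  [11.0, 21.0, 31.0, 41.0],
    "castle": [15.0, 28.0, 35.0, 50.0],
    "rock":   [100.0, 110.0, 120.0, 140.0],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_word(query_vec):
    """Return the vocabulary word whose vector is nearest the query."""
    return min(vocab, key=lambda w: euclidean(vocab[w], query_vec))

# Pretend the prompt "Who is the wife of the king?" landed the model
# in king's region of the space:
prompt_region = [10.0, 20.0, 30.0, 40.0]
print(closest_word(prompt_region))  # queen
```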
Which means: all the “creativity” and “learning” you think you’re getting from prompts is actually just statistical guessing inside that probabilistic space. That’s not learning—it’s pattern matching.
And this is why saying “AI creates” is absurd. It doesn’t create anything. My prompts aren’t “created” either. They’re just ways to exploit the probabilistic vector universe to land on the most probable execution of the instruction. That’s it.
So understand this: there’s no real “learning” here. No “achievement” in prompt writing. It’s just instruction and execution. Human achievement only happens after the output—what you do with the information. That’s where meaning and creation start, not inside the model.