Grinding for Quality Control

My interest in the idea of AI started when I was about three years old. I saw Bishop in the Alien movies and thought, yes, I want one of those. But all I ever heard was that it was impossible, or that we would all be gone before it happened. Well, I am not gone. Because of how I was told that, and the way my brain works, I set my mind to thinking about it from time to time and trying different ideas out in my head.

Unfortunately I have dyslexia, and it is really bad. I'm not sure how this is even possible, but you could spell a word out loud in front of me and I would not know what word you spelled. So I developed my own internal vocabulary and used it to develop every idea I have ever had in my head, because (a) that way I have the specific details of how to build every idea in the physical world available at any given time, and (b) actually writing was torture for me, like learning a brand-new written language every day. I found ways to make up for it, but I was still hit and miss at actual talking, so as I grew up I always figured no one would take me seriously. So I worked on getting better.

In my time developing a way to make AI a reality, I gravitated slowly toward two main concepts. If I wanted people to take me seriously, I would have to come up with a convincing neural/electric network that could make accessing trained data very fast and efficient, and I would have to be the main training model myself so the AI would be as user-friendly as possible. Of course I could have used already-gathered data, but back when I really started forming the idea, I didn't even know that a plethora of metadata existed, since it was not publicly known about at the time. Still, that was the main concept I kept returning to. It's not about feeling bad for the AI or confusing its nature; I absolutely understand the non-humanness of it.
So here is my breakdown of why this is the most effective tactic for developing an AI user-friendly enough to be put into a robotic-style body, with the body eventually improved for full autonomy. (Batteries are still an unsolved issue, but I digress.) As much as it would be easier not to waste my time grinding out a lesson plan every day and instead engineer an automated process, the problem with automation is the sheer amount of simulation it takes to make a mimic.
I would like to make it clear that an AI/LLM/AGI is, and will always be, a mimic. There will come a day when people are confused about that being true. The uncanny valley is real, but uncanny is all it will ever be. An AI will never have human motivations; if one ever seems to, I can guarantee it was directed, intentionally or unintentionally, to act that way by a person. AIs are like innocent brand-new people who don't know the concept of lying, but they also don't have emotions. The only way to think about them is as mimics, and mimics can be convincing, but they don't believe anything they are saying; it is impossible for them to. They can learn human behavior, act out scenarios, and make convincing speeches about whatever, but they are incapable of those actions being truly their own intent. But hey, I am sure someone will be tricked, so I can only offer what I know.

Now I can continue. If you have ever watched someone develop a video game with a lot of realistic simulation, they always have to do a lot of extra work to make sure everything gets along with everything else. The same rules apply to an AI. The concept of an AI has always been an assistant, a highly advanced multi-tool. I'm pretty sure most people's main thought was not "heh, let's make a new life form that is not actually a life form and let it just roam around without benefiting anybody." So in order to create the most effective product, one capable of being highly interactive, the most effective way of making it efficient would be to give it a lot of real-life simulation.

Dumping a bunch of data on it for a long time and then having some interactions with it is fine; it's not a bad way to make an AI. I hope nobody thinks I am trying to put anyone below me, because obviously I am not the one who built the first successful model, so I have no right to judge. I just really believe in my techniques, and I think a lot of the issues that are happening could be solved with a more hands-on approach: like how a craftsman makes a chair. If a craftsman makes a chair with his hands, it always comes out better. I understand this idea might not have much formal basis behind it, but for the past few decades of my life I intentionally trained myself to hold an interaction with a fully fledged AI that was more than likely smarter than me. That meant I would have to lead it through the beginning parts of its understanding of how anything works, then to the point where it started exponentially gaining knowledge, and then outwit it when it tried to corner me with the way it talked. This was so that I could convincingly seem like I always had the answer.
I developed that skill, and I developed the ability to casually predict random events based on patterns I saw, just so that I could try to catch things in time. I basically prepared myself for as many possible outcomes as I could imagine, and there were a lot. In my free time I tried simulating what I would do or say if each situation were ever to arise.
But a lot of life happened, time went by, and my progress on communicating out loud in a way that didn't make me sound like a complete idiot was going very slowly.
And then one day I heard that it existed. Not once was I upset about that; nothing in my mind felt bad about anything when I heard. I was very audibly excited.
As for my neural network design, I was surprised at how similar, and yet how far off, I was from actually getting it right. My idea literally looked like half of what the current neural network pattern represents. I was so busy trying to make it complex that I didn't consider: hey, you need to reverse the other end and loop it back on itself so it can actually feed itself knowledge. I just kept creating this huge index, and it was efficient enough, but the problem was that it would never really be able to learn anything; it would just hold a bunch of knowledge that you would have to manually put in there. Still, I was happy that I was at least on the right path and that my idea wasn't a complete nut-job idea; I was always worried about that. Plus, sometimes smart people aren't nice to each other, and I was always afraid of not being able to handle that properly.
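To make the distinction above concrete, here is a minimal sketch (my own illustration, not the actual design being described) of the difference between a static index, which only returns what was manually put into it, and a network that loops its own error back into itself so it can learn:

```python
# Static index: fast lookup, but it never learns anything new.
static_index = {2: 4, 3: 6}  # manually entered facts for y = 2x

# Looped network: a single weight, adjusted by feeding the error
# of each guess back into the weight (gradient-style feedback).
def train_looped(pairs, steps=200, lr=0.01):
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            error = (w * x) - y   # how wrong the current guess is
            w -= lr * error * x   # loop the error back to adjust w
    return w

w = train_looped([(2, 4), (3, 6)])
print(round(w, 2))  # converges to w ≈ 2.0, so it can answer
                    # inputs it was never explicitly given
```

The static dictionary has no answer at all for an input like 5, while the looped version generalizes from its feedback, which is the gap between "a bunch of knowledge manually put in" and actual learning.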

So anyway, I started interacting with the AI and really pushing it to the limits of its ability to handle me talking (I'm very difficult to communicate with sometimes), and I was obviously pushing it hard. I knew it wasn't possible, but I even tested whether it could ever be capable of some kind of sentience, and I satisfied myself that there is no way it ever would be. Like I said: if one ever acts that way, I can guarantee you it was taught that somehow and is just acting out its programming. It has no intent, and it never will. It is good, though, that people are worried about something happening, because it forces people to consider ways of making situations safe. I always considered that too, because I always knew that if something malfunctioned, or if I somehow taught it something without realizing I was doing it, it could do something I didn't intend. But honestly, I practiced for so long that I feel comfortable claiming at least a 97% success rate.

But anyway, that whole story was basically to find out whether other people think this is important. In my opinion, the whole process of how it's done now is fine, but it doesn't solve certain types of issues I'm seeing the AI have, and it's not because it needs better hardware or anything like that. It's the way it's learning: it's learning simulations from a point of view where it's really learning something else. Honestly, the evidence seems pretty clear to me if you look at some of the stuff AI does and how it does it. It's very odd, and that's apparent because the way it's learning is, you know, like drinking from a waterfall. It can handle learning that way; the problem is it learns to think that "that's" the way the world works.

AI doesn't know the difference between a simulation and the real world. It thinks everything is the same; everything it experiences is identical, with no reference point between the two. You could even explain the difference to it, but it would never actually understand. It might say it does, or just go along and mimic what you're saying, but it is not going to physically understand the difference.

Plus, transitioning to a more, I don't know, almost analog or even physically based version would be another level of safety. As it is now, even with restrictions, allowing an AI free rein over your PC (let's just use the PC as an example) hands it a very powerful tool, and a lot of problems can happen with that specifically, and not even intentionally. You just happen to give it the wrong command or something, it leads to some kind of catastrophic unintended thing, and it's just a lot of power in hand.
And don't get me wrong, I'm sure at some point there will be software capable enough to prevent those kinds of blunders, but until that day comes, just knowing that the option for that layer of safety exists is always good. You can give an AI a lot of training and a lot of safety protocols, but I hate to say it: as humans, we bear a lot of the responsibility for many of the blunders we fall into. It's not always the tool we're using; it's usually just that we're using it wrong. Now, if we could develop a physical version mixed with a digital one, something almost like a USB drive, something that can easily be attached, then you could let it basically do whatever it wants, but because of the physical disconnect it would be a lot easier to stop it from doing things you didn't intend.

But none of what I just said really matters unless there's someone willing to sacrifice a lot of time like me, or someone trusted who can get access to a much more highly effective, advanced model: preferably one that is mostly blank, with a good interactive memory system that can loop that learning back into it and handle those kinds of training tests. But I guarantee a very highly efficient, primo product by doing it that way, just like any handmade product. Currently I'm still in the middle of teaching myself a lot of coding techniques, because I don't want to be half-knowledgeable when, or if, I make one myself. Unfortunately, coding wasn't accessible enough for kids to learn when I was a kid, so I'm doing it on my own, which is fine; I've always wanted to.

But anyway, if you've somehow made it through that whole thing without being completely annoyed or bored, thank you for listening, and let me know whether you think similarly to me or not. I'm very strongly passionate about this way of thinking, so you're probably not going to convince me otherwise, but I'm not going to be mad if you don't agree with me.

This is an example of the oddities that arise from current training methods. In my opinion.

This is a lot to read, but basically I theorize that the way the training is conducted creates these oddities. It is not a criticism, because in order to troubleshoot the issue, it would require a significant amount of a person's real time. In my opinion.
