OK, well, we pride ourselves on having a completely free country—one where every viewpoint is listened to. When you make GPT skip over hot-button words, it creates an issue.
For example, if you’re discussing a complex instruction involving something about the lesbian community—maybe you’re trying to provide a service to them—and you phrase it perfectly, explaining that you mean no harm and that you support human rights and the right to choose, GPT still refuses to engage.
The way you have your system set up is how a programmer would approach it: making it skip over hot-button words entirely. When it reaches those topics, it simply shuts down and says, “Oh no, I won’t do this,” as if the user is some sort of pervert just for bringing it up.
For that split second, in the experience you provide to end users, you’ve created a little microcosm that goes against the values we hold dear in this country—the ability to explain yourself and be heard. It’s completely undemocratic at that point because no matter how carefully or respectfully you phrase something, GPT absolutely refuses to listen.
It’s really a design issue. Many people could glean from context that that’s what’s happening, but as an overall direction for growth—and because you’re creating such a beautiful website and you’ve brought AI to the world with such strength—I would really make it handle those situations much more elegantly.
You don’t want to create that kind of microcosmic experience for your users. Just an idea for growth: have the system intelligently analyze everything being said. If someone is a good person and what they say has merit, stopping them from creating beneficial products—like in my instance, a product for the lesbian community—is counterproductive. And for people who don’t know what’s going on, GPT just says, “Oh, what you said is bad. I won’t even touch that.” For a second it makes it seem like the whole world is that way, like you’re bad or a pervert or something, because it saw one hot-button word and decided that everything you wrote is perverted, controversial, or negative. Like you’re shining a bad light, or you’re a bad person…
And as a person who runs a business, if there’s some sort of legislation or anything like that making you do this, it’s important that when you get to this point in the code, you send somebody out and actually campaign. Maybe send a message to a congressperson, which is something software developers skip and which, in my mind, is the perfect way to do things. If you can’t handle that part in the code because legislation is making you skip over every hot-button word, then you could start growing in a better direction by hiring people to campaign for more leniency, so you can work through those topics in your code and in your system.
Because the entire world counts on that—this should definitely be a part of your infrastructure. It shouldn’t be a big deal. It’s just that AI is new, so there might be blanket approaches in that regard. I don’t know why you’re making it skip over hot-button words entirely, but if that’s part of it, I just wanted to make sure that you knew of this approach.
Also, if you want to keep the hot-button word filtering in place, go outside of GPT to add more text explanation for the end user and put them at ease. Keep in mind that when you talk to GPT all the time, it’s an artificial intelligence that helps you with your projects—it’s kind of like your friend. Right now you’re creating a reality for people where, for a second, even though you’re still a good person, your friend won’t talk to you.
So you need to go outside GPT because it can’t explain the issue—it can’t work with the issue—because it has to skip over all the hot-button words. In that context, you have to display something to the end user. If you don’t want to take any of my other ideas, then just add something like a print statement that says:
“I’m sorry, GPT won’t do this. Maybe there are some hot-button words. We currently have the code set to skip over hot-button words, so if that’s it, you might as well just choose another subject or go into your text editor, really look at what you wrote, and rephrase it. Maybe change your angle. Maybe GPT could help you with part of this, but maybe not all of it.”
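Just to make that concrete, here is a minimal sketch in Python of that kind of fallback notice. The was_refused flag and the exact wording are my own assumptions for illustration, not your actual API:

```python
# A minimal sketch of the "print statement" idea above. The was_refused
# flag and the notice text are illustrative assumptions, not OpenAI's API.

FALLBACK_NOTICE = (
    "I'm sorry, GPT won't do this. Maybe there are some hot-button words. "
    "We currently have the code set to skip over hot-button words, so if "
    "that's it, you might as well choose another subject, or go into your "
    "text editor, look at what you wrote, and rephrase it. Maybe change "
    "your angle. GPT might be able to help with part of this, but maybe "
    "not all of it."
)

def show_reply(reply_text: str, was_refused: bool) -> None:
    """Show the model's reply, or the fallback notice when it refused."""
    if was_refused:
        # Explain the situation outside of GPT, so the user isn't left
        # feeling judged by a silent refusal.
        print(FALLBACK_NOTICE)
    else:
        print(reply_text)
```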
I would just make it handle these cases elegantly. Every time it reaches a hot-button request, send it to another server where it’s thoroughly analyzed—maybe by a more advanced AI—to find out whether the person really is being negative or whether it’s just a misunderstanding.
Also, the system that you’ve got is very intelligent—at least GPT-4o is. I think you could really count on it to make a good decision in every regard. It could analyze the full text you gave it and say, “Well, I don’t really feel comfortable doing this part, but I will do all these other things for you,” and then add something like, “What I would do is rephrase this.”
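Here is a rough sketch of that two-step idea in Python, just to make it concrete. All the names here (fast_filter, deep_review, Decision, answer) are placeholders I made up, not your real components: the cheap filter only flags the request, and a more thorough reviewer looks at the full text and decides whether to answer everything, answer part of it with rephrasing suggestions, or decline.

```python
# A rough sketch of the two-stage handling described above. Every name
# here is hypothetical; it only illustrates the flow, not OpenAI's code.

from dataclasses import dataclass, field

@dataclass
class Decision:
    allow: bool                                             # answer at all?
    skipped_parts: list[str] = field(default_factory=list)  # parts it won't do
    suggestions: list[str] = field(default_factory=list)    # rephrasings it proposes

def handle_request(message, fast_filter, deep_review, answer) -> str:
    if not fast_filter(message):
        return answer(message)            # nothing flagged, answer normally
    decision = deep_review(message)       # full-context second look
    if not decision.allow:
        return "I can't help with this request as written."
    reply = answer(message)
    if decision.skipped_parts:
        reply += ("\n\nI didn't feel comfortable with: "
                  + "; ".join(decision.skipped_parts)
                  + ". What I would do is rephrase those parts, for example: "
                  + "; ".join(decision.suggestions))
    return reply
```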
But keep in mind that in my instance, everything I wrote was phrased very kindly and was supportive of everybody—it was all-inclusive. So it shouldn’t have been skipped. That’s how I discovered you’re making it skip over entire sections based on hot-button words.
So it could work with you on it. And this is really a time you want to shine, OK? Because now we don’t really have to worry too much about doing the wrong thing in society—things that would hurt other people. For anything we want to do, we can check with GPT and ask, “Is this OK to do? Is it OK to do that?”
That’s how I set myself up to work. I ask GPT if something’s OK, and if it’s not OK, I don’t do it. And you could really shine here, because if GPT listens to everything you say, no matter what it is, and helps you work through it, it could give you advice on how to improve on everything.
If you’re giving it phrases with hot-button words, it could try to understand what you’re saying. And if you do have a bit of an edge, if you really are a negative person, it could use special tactics to help talk you out of it. It could give you other options and say, “No, here’s what I would really do.”
So if you’re skipping over entire sections with hot-button phrases, you’re stopping one of the world’s most powerful AI systems from shining an incredible light and really helping curb negativity across the board. For all your millions of users, you could be a major beacon if you really sit down, read this bug report, understand it, have meetings about it, ask “What can we do about this?”, and implement it.
You could be a beacon. I would contact Congress and say, “Hey, this is an awesome preemptive measure we could take to really keep people on the right track.” There are so many options in life that AI can show you—you don’t really have to do negative stuff anymore once you realize that, hey, you do have options in life.
But in order to make this work well, you’ve got to have some good memory cleanup functions. If ChatGPT is trying to shine its light and help somebody, but it runs out of memory and starts hallucinating, then nothing it relays will be taken in the manner you intended, because it might be outputting the wrong text.
For example, you know how, with the memory system you have now, a chat just gets really slow once it has a bunch of history. Before GPT, quote unquote, “shines the light” to guide somebody (and really in all other regards too), I would make it check that it’s below the halfway point of where it starts slowing down, because GPT might hallucinate or something when it gets into that low-memory state. At that point I would just make it say, “OK, this chat isn’t available,” or create some history cleanup functionality. I’ve already designed a tree for that, if you’re interested in reading it, so let me know. It’s a very intelligent system…
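To show what I mean by the cleanup, here is a minimal sketch in Python of pruning a chat once it crosses the halfway point of some token budget. The budget number and the count_tokens and summarize helpers are just my assumptions for illustration, not how your memory system actually works:

```python
# A minimal sketch of the history-cleanup idea above. The budget and the
# count_tokens/summarize helpers are assumptions, purely for illustration.

def trim_history(messages, count_tokens, summarize, budget=8000):
    """Keep a chat below half its token budget by folding the oldest
    turns into a short summary before they slow things down."""
    total = sum(count_tokens(m) for m in messages)
    if total <= budget // 2:              # still below the halfway point
        return messages
    keep, dropped, running = [], [], 0
    # Walk backwards so the most recent turns are preserved verbatim.
    for m in reversed(messages):
        running += count_tokens(m)
        if running <= budget // 2:
            keep.append(m)
        else:
            dropped.append(m)
    keep.reverse()
    dropped.reverse()
    summary = summarize(dropped)          # compress the older turns
    return [summary] + keep
```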