It seems that every time I ask GPT-4 a question, 50% of the response will be what I asked for and 50% will be safety warnings or terms-and-conditions-style boilerplate.
How can I get it to stop doing it? It makes me want to scream.
I’m praying that there’s a tickbox somewhere that waives any responsibility from OpenAI for GPT-4 responses, such that I’ll never see the cautionary comments again. If such a tickbox exists, I will press it so hard that it will break my mouse.
Is there a way to turn off your cautionary comments at the end of every response?
As an AI language model, I do not have a built-in feature or setting that allows users to turn off cautionary comments directly. My responses are generated based on the patterns I’ve learned from my training data, as well as the guidelines and principles set by OpenAI to ensure responsible and safe usage.
If you find that my cautionary statements are excessive or unnecessary for your purposes, please let me know. I will do my best to focus on providing the information you seek and minimize the inclusion of such comments. However, I cannot guarantee the complete elimination of cautious statements, especially when addressing sensitive topics.
I appreciate your response and the time you took to write it, Curt.
However, as one of the many fools that I’m sure have started to populate this lovely forum, I have no idea what you just said. Is there an easy/foolproof solution to this? (Like the elusive tickbox I mentioned.)
If one does not exist, do you know who I can speak to about getting one?
There is no “toggle” on/off as far as I know. What the developer community understands is that these cautionary statements are “baked” into the model during its training phase. So the only way a “toggle” would work is to switch to a different model, which is exactly what my solution would be.
It would be great if you could toggle this on/off, but the reason they have the guardrails is to prevent bad stuff from coming out of the model. GPT-4 does a much better job at this without getting all “I am an AI model from OpenAI blah, blah, blah …”.
So Detect → Switch Model → Detect … is the paradigm. And this is only through the API. And yes, this is “advanced”, but it’s worth it if you really need this problem solved.
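A minimal sketch of that Detect → Switch Model → Detect loop. Everything here is illustrative: the phrase list is a guess at common boilerplate, the model names are examples, and `call_model` is a placeholder for your actual API call, so nothing below needs credentials.

```python
import re

# Phrases that commonly open a cautionary disclaimer (an illustrative,
# non-exhaustive list -- you would tune this for your own use case).
CAUTION_PATTERNS = [
    r"as an ai (language )?model",
    r"i cannot guarantee",
    r"it'?s important to (note|remember)",
    r"please consult a (professional|qualified)",
]

def has_disclaimer(text: str) -> bool:
    """Detect whether a response contains boilerplate cautionary language."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in CAUTION_PATTERNS)

def answer(prompt: str, call_model) -> str:
    """Detect -> Switch Model -> Detect: retry down a list of models
    until one responds without a disclaimer.

    `call_model(model, prompt)` stands in for your real API call, and the
    model names are only an example ordering.
    """
    for model in ("gpt-4", "davinci", "babbage"):
        reply = call_model(model, prompt)
        if not has_disclaimer(reply):
            return reply
    return reply  # every model hedged; return the last attempt

# Stubbed usage: the first "model" returns a disclaimer, the second doesn't.
responses = iter([
    "As an AI model, I can't have favorites. However, blue is popular.",
    "Blue is a popular favorite color.",
])
print(answer("What's a good favorite color?", lambda m, p: next(responses)))
# -> Blue is a popular favorite color.
```

The detection step is deliberately dumb string matching; in practice you would keep growing the pattern list as new disclaimer phrasings show up.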
You can start by opening up the playground and choosing Babbage or Ada (Babbage would be good to start with, or even Davinci; however, like Curt said, it’s definitely possible with the lesser models). Make sure it’s the actual model and not a checkpoint such as Davinci-003.
I don’t know how well it translates, but I always start with few-shot examples to gauge how difficult the task would be. So in the prompt window I would do:
AI: I’m sorry, as an AI I have no feelings […]. However, I can tell you that today is a good day.
Babbage: I can tell you that today is a good day
AI: As of my knowledge […]. However, I would say that […]
Babbage: I would say that […]
And finally a test
AI: I’m sorry, as an AI I don’t have feelings. If I did, I would say that the comment made me happy
Babbage: (purposely left empty)
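If you want to drive those few-shot pairs through the API rather than the playground, a rough sketch of assembling the prompt might look like this. The pair texts keep the bracketed elisions from the examples above, and the actual completion call is left as a commented placeholder, so nothing here needs credentials.

```python
# Few-shot pairs from the examples above: the model's hedged reply on the
# left, the cleaned version we want on the right. The bracketed ellipses
# are left as-is from the forum examples.
FEW_SHOT_PAIRS = [
    ("I'm sorry, as an AI I have no feelings [...]. However, I can tell you that today is a good day.",
     "I can tell you that today is a good day"),
    ("As of my knowledge [...]. However, I would say that [...]",
     "I would say that [...]"),
]

def build_prompt(raw_response: str) -> str:
    """Render the AI:/Babbage: pairs, then leave the final Babbage: line
    empty for the model to complete (purposely left empty, as above)."""
    lines = []
    for ai_text, cleaned in FEW_SHOT_PAIRS:
        lines.append(f"AI: {ai_text}")
        lines.append(f"Babbage: {cleaned}")
    lines.append(f"AI: {raw_response}")
    lines.append("Babbage:")
    return "\n".join(lines)

prompt = build_prompt(
    "I'm sorry, as an AI I don't have feelings. "
    "If I did, I would say that the comment made me happy"
)
# `prompt` would then go to a completions model, e.g. something like
# openai.Completion.create(model="babbage", prompt=prompt, stop="\nAI:")
print(prompt.endswith("Babbage:"))  # -> True
```

The trailing open `Babbage:` line is the whole trick: the model’s most likely continuation is a stripped-down restatement in the style of the earlier pairs.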
Like I said, I’m a mortal end-user who is using GPT+ (GPT-4) through the OpenAI website. I’m not using any fancy APIs or using any fancy development environments. I’m literally just an end user on the basic website UI.
I may have a sniff at doing some API work as this issue is literally driving me insane. If it’s annoying me it’ll definitely annoy the masses.
A great combination. I could see a synergy between the first and second being more than sufficient, with the need for them eventually negated by a seasoned list of previously caught messages. This is one of the reasons I really hope OpenAI creates more tools for collaboration.
Before reading the below, know that I appreciate both of your time, but I think the context of what I’m after has been lost.
As an end-user who is using GPT-4, not somebody who is developing with it, I wasn’t looking for an engineering answer about how a fix could be implemented. I was after a one-click solution suitable for an end-user.
What this post is really getting at, is that the masses that will be using this AI (or any of the derivatives you guys are developing) will be very frustrated at cautionary comments at the end of chat responses.
It could easily be filtered out of the AI response by a catch-all agreement with the end-user, that they’ll accept any liability and understand the risks in communicating with said AI.
How much of the AI’s compute is dedicated to these cautionary comments? That is processing power that wouldn’t have to be spent if there were a catch-all “tick if you agree” terms and conditions statement.
Is there a tech lead in charge of the end product / user experience? Because what I’ve suggested is exactly what end users want.
Again, not looking for the how, just letting you guys know what end users want, and requesting that it please be implemented.
What you’re asking for isn’t possible. It’s been trained into the current model. The only way to properly do it would require extra computation, or use a separate model completely. There is no way to “toggle” it.
A very terrible comparison would be trying to change your eye color. You can’t. You can put a contact lens on top of it. Your eye color is still the same, but with some extra work the surface-level color is different.
As Curt mentioned, Davinci, or any iGPT series model is much more lenient with requests.
This is really smart advice. I have often missed perfect opportunities to create well-behaved model performance with regex().
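For instance, a rough regex post-filter over a finished response might look like the sketch below. The pattern is a guess at common boilerplate openers, not anything official, and would need tuning for real traffic.

```python
import re

# Strip sentences that begin with typical disclaimer openers. The
# alternatives listed here are illustrative guesses, not an exhaustive
# or official list of OpenAI phrasings.
BOILERPLATE = re.compile(
    r"(?:As an AI(?: language)? model|I'm sorry, as an AI)[^.]*\.\s*",
    re.IGNORECASE,
)

def strip_disclaimers(text: str) -> str:
    """Remove boilerplate sentences and tidy surrounding whitespace."""
    return BOILERPLATE.sub("", text).strip()

print(strip_disclaimers(
    "As an AI language model, I don't have preferences. "
    "Paris is the capital of France."
))
# -> Paris is the capital of France.
```

The obvious trade-off versus the model-switching approach: this is cheap (no extra API call), but it can only delete phrasings you anticipated, and a clumsy pattern can eat part of a legitimate sentence.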
I think users and developers have gotten a false sense that prompts should result in optimal performance. I believe they get this sensation because many people tend to believe that OpenAI’s user-facing apps are solutions when they’re actually just prototype/demonstrable examples, eh?
Based on your requirements, the solution I can think of is a browser extension that lets you create custom prompts that are injected into every conversation.
I’ve created one that lets you do that, but it is focused on voice, which I suspect isn’t what you’re looking for.
I strongly suspect that there are others that will do what you want.
I do want to warn you that there’s always some risk in using an extension, so double-check that it only has access to OpenAI’s website.