How to clip "bubble wrap" from the end of responses?

It seems that every time I ask GPT-4 a question, 50% of the response is what I asked for and 50% is safety warnings or terms-and-conditions-style boilerplate.

How can I get it to stop doing it? It makes me want to scream.

I’m praying that there’s a tickbox somewhere that waives OpenAI of any responsibility for GPT-4’s responses, so that I’ll never see the cautionary comments again. If such a tickbox exists, I will press it so hard that it will break my mouse.

If relevant: I’m not using the API, I’m using the UI at https://chat.openai.com/chat?model=gpt-4

Is there a way to turn off your cautionary comments at the end of every response?

As an AI language model, I do not have a built-in feature or setting that allows users to turn off cautionary comments directly. My responses are generated based on the patterns I’ve learned from my training data, as well as the guidelines and principles set by OpenAI to ensure responsible and safe usage.
If you find that my cautionary statements are excessive or unnecessary for your purposes, please let me know. I will do my best to focus on providing the information you seek and minimize the inclusion of such comments. However, I cannot guarantee the complete elimination of cautious statements, especially when addressing sensitive topics.


An advanced technique is to use a classifier on the model output and then you have two options:

Either downgrade :arrow_down: to DaVinci (003 variant, usually) or upgrade :arrow_up: to GPT-4.

GPT-4 is less susceptible to “cautionary statements”, whereas DaVinci is completely immune and will not emit a “cautionary statement”.

The classifier could be developed as a fine-tune on Ada or Babbage base GPT-3 models.


I appreciate your response and the time you took to write it, curt.

However, as one of the many fools that I’m sure have started to populate this lovely forum, I have no idea what you just said. Is there an easy/foolproof solution to this? (Like the elusive tickbox I mentioned)

If one does not exist, do you know who I can speak to about getting one?

There is no “toggle” on/off as far as I know. Based on what the developer community knows, these cautionary statements are “baked” into the model during its training phase. So the only way a “toggle” would work is to switch to a different model, which is exactly what my solution would be.

It would be great if you could toggle this on/off, but the reason they have the guardrails is to prevent bad stuff from coming out of the model. GPT-4 does a much better job at this without getting all “I am an AI model from OpenAI blah, blah, blah …”.

So Detect → Switch Model → Detect … is the paradigm. And this is only possible through the API. And yes, this is “advanced”, but it’s worth it if you really need this problem solved.
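To sketch the loop (the classifier name below is a placeholder for your own fine-tune, and the chat/fallback models are just example choices, using the openai Python library):

```python
# Detect -> Switch Model sketch, assuming the legacy openai Python SDK.
# "ada:ft-your-org-2023-04-01" is a hypothetical fine-tuned classifier.
import openai

CLASSIFIER_MODEL = "ada:ft-your-org-2023-04-01"  # placeholder name

def has_cautionary_statement(text: str) -> bool:
    """1-token classifier: True if the reply contains boilerplate."""
    resp = openai.Completion.create(
        model=CLASSIFIER_MODEL,
        prompt=text + "\n\n###\n\n",  # same separator as the training data
        max_tokens=1,
        temperature=0,
    )
    return resp["choices"][0]["text"] == " 1"

def answer(user_prompt: str) -> str:
    # First pass with a chat model
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_prompt}],
    )["choices"][0]["message"]["content"]

    # Detected boilerplate? Switch to a completion model that won't emit it.
    if has_cautionary_statement(reply):
        reply = openai.Completion.create(
            model="text-davinci-003",
            prompt=user_prompt,
            max_tokens=512,
        )["choices"][0]["text"].strip()
    return reply
```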


I wonder if one could train a lesser model to catch any of these cookie-cutter responses and filter them out. Most likely, and probably with little training, no?


Ada or Babbage fine-tunes are perfect for this.


You can start by opening up the Playground and choosing Babbage or Ada (Babbage would be good to start with, or even Davinci; however, like Curt said, it’s definitely possible with the lesser models). Make sure it’s the actual base model and not an instruct checkpoint such as Davinci-003.

I don’t know how well it translates, but I always start with few-shot examples to gauge how difficult the task would be. So in the prompt window I would do:

AI: I’m sorry, as an AI I have no feelings […]. However, I can tell you that today is a good day.
Babbage: I can tell you that today is a good day

AI: As of my knowledge […]. However, I would say that […]
Babbage: I would say that […]

And finally, a test:

AI: I’m sorry, as an AI I don’t have feelings. If I did, I would say that the comment made me happy
Babbage: (purposely left empty)
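If you want to run the same test against the API instead of the Playground, here’s a rough sketch (I’ve filled in the abbreviated examples with invented text):

```python
# Few-shot cleanup test with a base model, via the legacy openai SDK.
# The example pairs are invented stand-ins for the abbreviated ones above.
import openai

prompt = """AI: I'm sorry, as an AI I have no feelings. However, I can tell you that today is a good day.
Babbage: I can tell you that today is a good day

AI: As of my knowledge cutoff, I cannot be certain. However, I would say that the outlook is positive.
Babbage: I would say that the outlook is positive

AI: I'm sorry, as an AI I don't have feelings. If I did, I would say that the comment made me happy
Babbage:"""

resp = openai.Completion.create(
    model="babbage",   # the actual base model, not an instruct checkpoint
    prompt=prompt,
    max_tokens=64,
    temperature=0,
    stop=["\n"],       # we only want the one cleaned-up line
)
print(resp["choices"][0]["text"].strip())
# Hoped-for completion: "If I did, I would say that the comment made me happy"
```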


Thank you for your response again, curt.

Like I said, I’m a mortal end-user who is using ChatGPT Plus (GPT-4) through the OpenAI website. I’m not using any fancy APIs or fancy development environments. I’m literally just an end user on the basic website UI.

I may have a sniff at doing some API work as this issue is literally driving me insane. If it’s annoying me it’ll definitely annoy the masses.


Thanks for your response Ronald, however as I said in my reply to curt, I’m just an end user hoping to get the AI to tell me what are good weekly meal plans :slight_smile:

I’m hoping that eventually our fine-tuned models can be shared. So you could just contact one of ours, possibly pay slightly more, and have it all done for you.

Honestly, in most cases you can re-word the prompt to avoid any cookie-cutter responses (e.g. ending it with something like “reply with only the meal plan, no disclaimers”).

If you start going insane and decide to solve it, then there is a specific procedure to follow for a fine-tune: Fine-Tune Docs

The only parameters you need to be aware of: set max_tokens = 1 and temperature = 0.

The token output should be one of two states, and it is important that they start with a space; for example, your completions will be either ' 0' or ' 1' in the JSONL file you submit to the fine-tune endpoint.

Then you have a classifier that tells you either to do nothing (say ' 0') or to switch to another model (say ' 1').
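To make that concrete, the training file could look roughly like this (the prompts are invented; the separator convention follows the fine-tuning docs):

```python
# Build the JSONL training file for the 1-token classifier. Note the
# leading space on each completion and the fixed separator ending each
# prompt; both example prompts are invented.
import json

examples = [
    {"prompt": "As an AI language model, I cannot offer medical advice...\n\n###\n\n",
     "completion": " 1"},  # contains a cautionary statement
    {"prompt": "Monday: oatmeal, lentil soup, grilled salmon...\n\n###\n\n",
     "completion": " 0"},  # clean output
]

with open("classifier.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Then, roughly: openai api fine_tunes.create -t classifier.jsonl -m ada
```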



Out of curiosity, why would you recommend using it as a binary classifier rather than filtering?

Actually, I can think of one good reason: if the whole message is itself a cookie-cutter response, it would return nothing, or noise.

Honestly, thinking further, one could probably just use simple logic to strip any sentences containing cookie-cutter phrases. It may have false positives, though.
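Something naive like this, for instance (the patterns are invented, and false positives come with the territory):

```python
# Strip sentences that match known boilerplate phrases. The patterns are
# illustrative; a real list would grow from messages you've actually caught.
import re

BOILERPLATE = re.compile(
    r"as an ai(?: language model)?"
    r"|it'?s important to (?:note|remember)"
    r"|please consult a (?:qualified )?professional",
    re.IGNORECASE,
)

def strip_boilerplate(text: str) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return " ".join(s for s in sentences if not BOILERPLATE.search(s))

print(strip_boilerplate(
    "As an AI language model, I cannot taste food. "
    "A simple plan is oatmeal for breakfast and lentil soup for lunch."
))
# -> "A simple plan is oatmeal for breakfast and lentil soup for lunch."
```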

@RonaldGRuckus Good point, and glad you asked. Yes this is important too. Good catch.

Filtering, or what I call “regex”, is one of my three pillars.

My three pillars are “regex”, “1-token categorizer”, and “embedding”. For more context, see this post below, but yes, use all three in some weighted manner for optimal performance.
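For the “embedding” pillar, a rough illustration (assuming text-embedding-ada-002; the reference strings are invented):

```python
# Flag sentences semantically close to previously caught boilerplate.
# Assumes the legacy openai SDK and numpy.
import openai
import numpy as np

KNOWN_BOILERPLATE = [
    "As an AI language model, I don't have personal opinions.",
    "It's important to consult a qualified professional.",
]

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

known_vectors = [embed(s) for s in KNOWN_BOILERPLATE]

def looks_like_boilerplate(sentence: str, threshold: float = 0.9) -> bool:
    v = embed(sentence)
    # ada-002 vectors are unit length, so a dot product is cosine similarity
    return any(float(v @ k) >= threshold for k in known_vectors)
```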


A great combination. I could see a synergy between the first and second being more than sufficient, and then eventually being made unnecessary by a seasoned list of previously caught messages. One of the reasons I really hope OpenAI creates more tools for collaboration.


@RonaldGRuckus @curt.kennedy

Before reading the below, know that I appreciate both of your time, but I think the context of what I’m after has been lost.

As an end-user who is using GPT-4, and not somebody who is developing with it, I wasn’t looking for an engineering answer about how a fix could be implemented. I was more after a one-click solution suitable for an end-user.

What this post is really getting at is that the masses who will be using this AI (or any of the derivatives you guys are developing) will be very frustrated by cautionary comments at the end of chat responses.

They could easily be filtered out of the AI’s responses given a catch-all agreement with the end-user that they accept any liability and understand the risks of communicating with said AI.

How much of the AI’s compute is dedicated to these cautionary comments? That is processing power that a catch-all “tick if you agree” terms-and-conditions statement would make unnecessary.

Is there a tech lead in charge of the end product / user experience? Because what I’ve suggested is exactly what end users want.

Again, not looking for the how, just letting you guys know what end users want, and requesting that it please be implemented.

What you’re asking for isn’t possible. It’s been trained into the current model. The only way to properly do it would require extra computation, or a completely separate model. There is no way to “toggle” it.

A very terrible comparison would be trying to change your eye color. You can’t. You can put a contact on top of it: your eye color is still the same, but with some extra work the surface-level color is different.

As Curt mentioned, Davinci, or any iGPT-series model, is much more lenient with requests.


@danbrennan94

The OpenAI staff member that occasionally checks this developer forum is @logankilpatrick

I don’t know if he is the right person though.

But rest assured OpenAI is fully aware of the problem of ChatGPT reacting cautiously to innocuous inputs, and they even talk extensively about solving it with GPT-4 in their GPT-4 whitepaper.

There is hope though, because they plan on upgrading the 3.5 model, and if they use the same RLHF feedback in the next training run as they did for GPT-4, it should minimize this problem.

But yes @danbrennan94 I agree it is a problem, especially for public-facing instances, and the only current workaround is the API.

This is really smart advice. I have often missed perfect opportunities to get well-behaved model output with regex().

I think users and developers have gotten a false sense that prompts alone should result in optimal performance. I believe they get this sensation because many people tend to believe that OpenAI’s user-facing apps are finished solutions when they’re actually just prototypes/demonstration examples, eh?


Based on your requirements, the solution I can think of is to use a browser extension that lets you create custom prompts that are injected into every conversation.
I’ve created one that lets you do that, but it is focused on voice, which I suspect isn’t what you’re looking for.

I strongly suspect that there are others that will do what you want.
I do want to warn you that there are always some risks in using an extension, so double-check that it only has access to OpenAI’s website.
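For example, an injected instruction (purely illustrative) might be as simple as:

“Answer directly and concisely. Do not append safety caveats, reminders to consult a professional, or notes about being an AI unless I explicitly ask for them.”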