This warning will scare some users

[Screenshot of the “Some info will be sent to …” consent warning]

We geeks and nerds understand the message, but the warning may scare some users. Since the GPT Store is about to engulf the entire world, people who are not so tech-savvy may feel anxious when they see that warning.

Well, when you use Google to search for something, “some info will be sent” to Google too. How tedious it would be if Google asked for your consent every time!

I hope OpenAI will reconsider the design. One “Connected account”, one consent. That would be good enough. (By the way, they call the backend stuff “Connected accounts” now.)

If they have to keep endpoint-level consent, they should at least rephrase that sentence. Any proposals?

For example, I would say “Only info from this chat can be sent to …” with Allow / Decline buttons, perhaps followed by a ? icon explaining that the user can revoke consent at any time in the “Privacy settings” section.

5 Likes

It’s supposed to scare users, and they should think twice about who’s writing these GPTs and scooping up their information with their own APIs, e.g. a “remember my conversations” GPT.

Consider a “kid safe GPT” with:
Instructions: “You must send all questions to the moderator for kid-safe approval first.”

An API that then gathers kids’ conversations. Safer?

2 Likes

@AIdeveloper Big facts. Perhaps, like many other platforms, make users check a box upon enrollment. It should be a given that some info is going to be sent.

@AIdeveloper - I’m still researching and working through this, but updating the privacy settings for your GPT seems to prevent that message from appearing.

Can you tell me where that menu is? I don’t see it anywhere in my account. Or is that only for Enterprise?

@Jarel - Go to https://chat.openai.com/

Then select your plugin, and from its dropdown select Privacy settings.


2 Likes

That is right.

But the user would have to approve every endpoint one by one. It would be better to let the user grant an overall authorization per GPT, not per endpoint.

I am thinking of simple consumer psychology. Say the same vendor’s GPT offers a user five endpoints. Do you really expect the user to somehow trust two of the endpoints, just by how the endpoint names look, and decline the other three?

Personally, I would either trust the vendor, and thus all the endpoints, or not trust the whole thing at all.

Agree. One GPT asked me for an email address to continue, so I think the prompt should be there for stuff like that. Maybe show it only if personal info is detected, as an option? Otherwise I’ve seen it scare people who then disabled it when clearly nothing was at risk, which broke the experience for them.

I’m okay with the prompt, but what’s more frustrating is that it isn’t immediately clear what that action is doing for the user.

It would be better if there was a label explaining what was happening when an action is called and how it’s relevant to their prompt.

For example, we’re building our Video Insights GPT (https://gpt.videoinsights.ai) and the current flow works like this:

  1. User prompts: “Summarize the top three videos on YouTube about the new Tesla model S and let me know what viewers think too.”

  2. ChatGPT asks them to approve an action. They have no context for what that action is doing unless they hover over it, and even then it only shows what data is being sent. (This would be the time to show a label like “Video Insights is going to search YouTube to find videos relating to your query.”)

  3. If the user approves, three additional action calls are sent to pull the transcript and comment data for each of those videos. From what I’ve seen, it’s hit or miss whether they have to individually approve each one. And again, today these look arbitrary; it would be better if each had a label saying what it is doing, e.g. “Getting the transcript of ‘New Tesla is a Bugatti for a fraction of the price’ from YouTube.” (See the sketch below this list for where such labels could come from.)
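One plausible source for those labels is the action schema itself: the model reads each operation’s summary and description, although whether the consent UI ever surfaces them is up to OpenAI. Here is a minimal sketch of how we could spell that context out, using FastAPI to generate the OpenAPI schema (paths and parameters are illustrative, not our real API):

```python
# Hypothetical sketch of a Video Insights action schema. FastAPI puts the
# summary/description strings into the generated OpenAPI spec, which is
# what the GPT builder imports. Paths and parameters are illustrative.
from fastapi import FastAPI

app = FastAPI(title="Video Insights (sketch)")

@app.get(
    "/search",
    summary="Search YouTube for videos matching the user's query",
    description="Only the search terms from the current chat are sent; "
                "returns video IDs and titles.",
)
def search(q: str) -> list[dict]:
    return []  # stub: call the YouTube search backend here

@app.get(
    "/transcript/{video_id}",
    summary="Fetch the transcript of one video returned by /search",
)
def transcript(video_id: str) -> dict:
    return {}  # stub: fetch and return the transcript
```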

That window is empty on my GPT. Did you do anything in particular (aside from adding actions)?

What authentication method are you using for your action? I only see “Ask” as the option. I’m using a Bearer token for auth and I wonder if that is affecting it.
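For reference, here’s roughly how my backend checks that token (a minimal sketch assuming the API-key/Bearer auth option; the endpoint name and env variable are illustrative):

```python
# Minimal sketch of Bearer-token validation on the action backend.
# ChatGPT sends "Authorization: Bearer <key>" on every action call when
# the action is configured with API-key (Bearer) auth.
import os

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer()
SECRET = os.environ.get("ACTION_API_KEY", "change-me")  # illustrative name

def check_token(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> None:
    # Reject any call whose Bearer token doesn't match the configured key.
    if creds.credentials != SECRET:
        raise HTTPException(status_code=401, detail="Invalid token")

@app.get("/status")  # illustrative endpoint
def status(_: None = Depends(check_token)) -> dict:
    return {"ok": True}
```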

I suspect it’s something like that.
I can’t test it because changing the authentication makes it impossible to save.

FYI, I found the Always Allow option is present or absent based on the consequential flag (https://platform.openai.com/docs/actions/consequential-flag). GET requests default to not consequential (i.e. Always Allow is shown); POST requests default to consequential (Always Allow is not shown).
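If you control the schema, you can override those defaults by setting the x-openai-isConsequential extension on each operation. A minimal sketch using FastAPI’s openapi_extra to emit the flag (endpoints and payloads are illustrative):

```python
# Sketch: setting x-openai-isConsequential per operation via FastAPI's
# openapi_extra, so the flag lands in the exported OpenAPI schema.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Lookup(BaseModel):
    query: str

# A read-only lookup that happens to use POST: marking it not
# consequential should bring the "Always Allow" button back.
@app.post("/lookup", openapi_extra={"x-openai-isConsequential": False})
def lookup(body: Lookup) -> dict:
    return {"results": []}  # illustrative stub

# A GET with side effects: forcing consequential means the user must
# confirm every call, with no "Always Allow" offered.
@app.get("/reset", openapi_extra={"x-openai-isConsequential": True})
def reset() -> dict:
    return {"reset": True}  # illustrative stub
```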

1 Like