I have an idea for end-user authn between ChatGPT and your plug-in/app.
The idea is simple.
You can presently log in to ChatGPT with Google or Microsoft. My app offers the same methods, too.
- When registering your plugin, OpenAI issues you a client secret.
- When ChatGPT calls your API, it sends the name of the provider, e.g. Google, and the Google-issued user identifier of the person using ChatGPT.
- Plus a signature of this data, computed with your client secret, so your API can recompute and verify it.
This is a kind of OAuth pass through.
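In code, the scheme above might look like this minimal sketch. HMAC-SHA256 is my assumption for the "signature"; the names (`sign_payload`, `verify_payload`) and the example values are hypothetical:

```python
import hmac
import hashlib

def sign_payload(client_secret: str, provider: str, user_id: str) -> str:
    """Sign the provider name + user ID with the shared client secret (HMAC-SHA256)."""
    message = f"{provider}:{user_id}".encode()
    return hmac.new(client_secret.encode(), message, hashlib.sha256).hexdigest()

def verify_payload(client_secret: str, provider: str, user_id: str, signature: str) -> bool:
    """Recompute the signature on the API side and compare in constant time."""
    expected = sign_payload(client_secret, provider, user_id)
    return hmac.compare_digest(expected, signature)

# OpenAI's side would sign before calling the plugin API...
sig = sign_payload("my-client-secret", "google", "1234567890")
# ...and the plugin API recomputes and checks:
assert verify_payload("my-client-secret", "google", "1234567890", sig)
```

Constant-time comparison (`hmac.compare_digest`) matters here so the API doesn't leak how much of a forged signature matched.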
The downside would be a lack of choice in allowing ChatGPT access to the owner’s resources. AFAIK, developers can skip the “allow this app” screen in OAuth anyway.
In this case there’s no opportunity to present an “allow” UI, unless the API presents a challenge that can be relayed in the chat as text. Otherwise, I strongly feel OpenAI, Google, Meta, Slack, and Microsoft will need to propose a visual, interactive card standard for rich UI within chat.
I was thinking more about this. The problem would be that my API would be putting a lot of (too much) trust in OpenAI not to use this method as a back-door into my customers’ data.
For example, OpenAI employees with access to my secret key and the Google user ID could form HTTP requests that they know my API will validate and serve.
Just calling this out for consideration.
I’m not sure I follow your logic here, how and why would an OpenAI employee have your secret key? Would they also be working with Google? Help me to understand what your concern is.
My idea is for the Google Account unique user ID to be passed through when OpenAI calls my API, so my API can look up the ID and locate my app’s user.
This would allow people logged in to ChatGPT to use chat to work on their data held in my app.
But my API would need to validate that this is an authentic call from OpenAI. This can be done by signing the ID.
When I register my API with OpenAI, a secret key could be generated for my app, and I’d then store it in my app.
Then OpenAI can use this key when signing the ID. My API can perform the same signing math on the ID with the key and check that the signatures match. Then I’d have high confidence that this is a genuine call from OpenAI.
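As a sketch of the API-side check just described, assuming HMAC-SHA256 as the "signing math" (the names `handle_call` and `USERS_BY_GOOGLE_ID`, and all values, are hypothetical):

```python
import hmac
import hashlib

# Hypothetical user store: Google Account ID -> app account
USERS_BY_GOOGLE_ID = {"10769150350006150715": "zipwire-account-42"}

# Shared secret issued at plugin registration, held on my API servers
SHARED_SECRET = b"key-issued-at-registration"

def handle_call(google_id: str, signature: str) -> str:
    """Validate the signature, then resolve the caller's app account."""
    expected = hmac.new(SHARED_SECRET, google_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("signature mismatch: not a genuine call from OpenAI")
    return USERS_BY_GOOGLE_ID[google_id]

# OpenAI's servers would compute the same HMAC with the shared key:
sig = hmac.new(SHARED_SECRET, b"10769150350006150715", hashlib.sha256).hexdigest()
assert handle_call("10769150350006150715", sig) == "zipwire-account-42"
```

Note this is exactly where the trust concern bites: anyone holding `SHARED_SECRET` and a Google ID can produce a signature this check will accept.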
All good so far, but I’d have to trust that OpenAI store those keys securely and employees cannot see them. Else, anyone with that key (and a Google Account ID) could create HTTP requests to my API and access my customer data.
So my idea might not be a good one.
Right, ok, but you are now storing your secret key in your application, so I download your APK, install it on a virtual phone emulator, do a memory scan, and pull your key out. I don’t need to be an OpenAI employee to do that.
No, the conversation is between OpenAI and my application’s HTTP API. The shared secret would be stored on OpenAI’s servers and my API servers.
When I say application, I mean my web application, nothing to do with a phone app.
For clarity, I described the idea to ChatGPT and it wrote back this, which is clear:
Okay, I understand the flow better now:
- User logs into ChatGPT using their Google account
- ChatGPT gets the user’s Google ID from the login
- When making API calls to Zipwire, ChatGPT passes the Google ID
- Zipwire looks up the Google ID and matches it to a Zipwire account
- To authenticate the API call is from ChatGPT, the Google ID is signed using a shared secret key
- Zipwire validates the signature matches the key it shares with ChatGPT
This allows ChatGPT to make authorized API calls on behalf of a user, leveraging the Google ID for account linking between ChatGPT and Zipwire.
The main trust still lies with ChatGPT properly securing the secret key and not abusing it outside the intended use case. But the impact is lower since the key would only allow access to a specific user’s account data based on their Google ID, not full API access.
Let me know if I have the flow right now! Overall this seems like a reasonable approach to safely allow ChatGPT to integrate with and access user data from your application.
Ahh, ok. Well then, yes, it relies on you trusting that OpenAI stores data securely. You could email email@example.com and ask the question.