🍄 Safeguarding Your ChatGPT Plugins: Best Practices for Security

I’ve been enjoying getting to know the OpenAI plug-in system. In the short time I’ve spent with it (about five days), a lot of folks have reached out about how to build and secure their plugins. Here are a few things I’ve learned during my plug-in development.

As more developers start to explore the OpenAI plugin system, it’s essential to understand the security aspects of building and maintaining plugins.

Plugins are built using two main components: the manifest file (ai-plugin.json) stored in the /.well-known/ directory and the OpenAPI specification (specification.yaml) hosted on the same domain. Since both files are publicly accessible, it’s crucial to implement proper security measures to protect sensitive information and maintain the integrity of your plugins.
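
To make that concrete, here is a minimal sketch of a plugin backend serving those two public files. The framework (Flask) and the assumption that both files sit next to the app are my own choices, not anything the plugin system prescribes:

# Minimal sketch: serve the two public plugin files (assumes Flask and that
# ai-plugin.json / specification.yaml live alongside the app).
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/.well-known/ai-plugin.json")
def plugin_manifest():
    # Public by design -- never put secrets in this file.
    return send_file("ai-plugin.json", mimetype="application/json")

@app.route("/specification.yaml")
def openapi_spec():
    # Also public: describes the endpoints the model may call.
    return send_file("specification.yaml", mimetype="text/yaml")

if __name__ == "__main__":
    app.run(port=5002)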

In this post, I’m going to share some of my best practices for securing ChatGPT plugins, focusing on avoiding common pitfalls and maintaining a secure authentication process. It’s easy to get this wrong.

The first and most important rule: Don’t expose sensitive information in public files!

Placing sensitive information, such as service authentication tokens, in the manifest or specification files can lead to unauthorized access to your API. Remember that these files can be discovered by anyone who knows where to look. To mitigate this risk, never include sensitive data in publicly accessible files.

Use secure authentication methods:

Instead of including authentication tokens in the manifest or specification files, consider implementing one of the following secure authentication methods:

Service-level authentication: Provide a client secret during the plugin installation flow, allowing traffic only from OpenAI plugins. In this case, the secret is not exposed in the manifest file. You can also prompt the user for unique attributes such as an email address, though for the most part that isn’t practical with the current plug-in system, since the onboarding flow is limited to OAuth. I wouldn’t suggest this approach for apps that require strong security. (A rough server-side check for the service-level secret is sketched after these two options.)

OAuth: Implement an OAuth flow, where the user authorizes ChatGPT to access the API on their behalf. This also keeps sensitive information out of the manifest file.
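
With the service-level option, OpenAI sends the secret you supplied at install time as a bearer token on each request, so the server-side check is essentially a header comparison. A minimal sketch, assuming Flask and a SERVICE_TOKEN environment variable (both my own assumptions):

# Minimal sketch of a service-level auth check (assumes Flask and that the
# expected secret is provided via the SERVICE_TOKEN environment variable).
import hmac
import os
from flask import Flask, abort, request

app = Flask(__name__)
SERVICE_TOKEN = os.environ["SERVICE_TOKEN"]  # never hard-code or publish this

@app.before_request
def require_service_token():
    # OpenAI sends the install-time secret as "Authorization: Bearer <token>".
    auth = request.headers.get("Authorization", "")
    token = auth.removeprefix("Bearer ").strip()
    # Constant-time comparison to avoid leaking information via timing.
    if not hmac.compare_digest(token, SERVICE_TOKEN):
        abort(401)

@app.route("/todos")
def list_todos():
    return {"todos": []}  # hypothetical protected endpoint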

Leveraging OAuth for Secure Onboarding and Data Calls in ChatGPT Plugins

OAuth is a widely-used protocol for secure and delegated access to APIs. By incorporating OAuth into your ChatGPT plugins, you can ensure a more secure onboarding process for users and protect your API from unauthorized access.

Understanding OAuth:

OAuth (Open Authorization) is a standard for allowing users to grant applications limited access to their account on another service, without sharing their credentials. Instead, OAuth uses access tokens that represent a user’s authorization to access specific resources.

Implementing OAuth in ChatGPT plugins:

To integrate OAuth into your ChatGPT plugin, you’ll need to follow these general steps:

Register your plugin: Create a new application on the OAuth provider’s platform (e.g., Google, Facebook, or your custom OAuth provider) to obtain a client ID and secret.

Update the plugin manifest: In the ai-plugin.json file, set the auth field to the OAuth configuration, including the client_url, scope, authorization_url, and other required fields. Since this is a multi-tenant application, the per-user authentication tokens will likely be stored in a database somewhere, with only the configuration info kept in the public manifest (resolving a user from a stored token at request time is sketched after the note below).

Make sure not to include sensitive information like the client secret in the manifest file.


"auth": {
  "type": "oauth",
  "client_url": "https://my_server.com/authorize",
  "scope": "",
  "authorization_url": "https://my_server.com/token",
  "authorization_content_type": "application/json",
  "verification_tokens": {
    "openai": "abc123456"
  }
}

Note:

Including a verification_tokens field in the ai-plugin.json manifest file with the openai key might seem like a security risk, but it’s important to note that this token is not the same as sensitive credentials like API keys or OAuth client secrets. The verification_tokens field is intended to verify the identity of the plugin creator with OpenAI and is not used for direct API authentication.
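
Tying this back to the multi-tenant point above: once the OAuth flow completes, ChatGPT includes the user’s access token in the Authorization header of each API call, and the backend looks up which user it belongs to. A minimal sketch, using a hypothetical SQLite tokens table as the stand-in database:

# Minimal sketch of resolving the user behind the bearer token on each call
# (assumes Flask and a hypothetical SQLite "tokens" table mapping
# access_token -> user_id).
import sqlite3
from flask import Flask, abort, g, request

app = Flask(__name__)

def user_for_token(token: str):
    conn = sqlite3.connect("plugin.db")
    try:
        row = conn.execute(
            "SELECT user_id FROM tokens WHERE access_token = ?", (token,)
        ).fetchone()
        return row[0] if row else None
    finally:
        conn.close()

@app.before_request
def attach_user():
    auth = request.headers.get("Authorization", "")
    token = auth.removeprefix("Bearer ").strip()
    user_id = user_for_token(token)
    if user_id is None:
        abort(401)       # unknown or expired token
    g.user_id = user_id  # handlers read the current tenant from g

@app.route("/me")
def whoami():
    return {"user_id": g.user_id}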

A few more considerations:

Redirect users for authorization: When a user interacts with your plugin, they’ll be redirected to the OAuth provider’s authorization URL to grant access to the requested resources.

Obtain access tokens: After the user grants access, your plugin will receive an authorization code. Use this code to request an access token from the OAuth provider’s token endpoint.

Make API calls using access tokens: Use the access token to make secure API calls on behalf of the user. Access tokens have a limited lifespan, so you may need to implement token refreshing. A sketch of the exchange, refresh, and authenticated call follows below.
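
To make those last two steps concrete, here is a rough sketch of a standard OAuth 2.0 authorization-code exchange, refresh, and authenticated API call using the requests library. The URLs, client credentials, and protected endpoint are placeholders (the token URL mirrors the manifest example above), and JSON bodies are used to match the authorization_content_type shown there:

# Rough sketch of a standard OAuth 2.0 authorization-code exchange and refresh.
# All URLs and credentials below are placeholders.
import requests

TOKEN_URL = "https://my_server.com/token"   # matches the manifest example above
CLIENT_ID = "your-client-id"                # issued when you registered the plugin
CLIENT_SECRET = "kept-server-side-only"     # never placed in ai-plugin.json

def exchange_code(code: str, redirect_uri: str) -> dict:
    """Trade the one-time authorization code for access/refresh tokens."""
    resp = requests.post(TOKEN_URL, json={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()  # typically access_token, refresh_token, expires_in

def refresh(refresh_token: str) -> dict:
    """Get a fresh access token once the old one expires."""
    resp = requests.post(TOKEN_URL, json={
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()

def call_api(access_token: str) -> dict:
    """Use the access token to call the protected API on the user's behalf."""
    resp = requests.get(
        "https://my_server.com/api/todos",  # hypothetical protected endpoint
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()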

Advantages of using OAuth in ChatGPT plugins:

a. Enhanced security: OAuth allows users to grant limited access to their accounts without sharing their credentials, reducing the risk of unauthorized access.

b. Delegated access: Users can grant and revoke access to specific resources, providing more control over their data.

c. Seamless user experience: Users can authenticate with familiar OAuth providers, streamlining the onboarding process and increasing trust in your plugin.

Would love to hear how others are handling security and authentication.

You can follow this and other topics at my subreddit r/aipromptprogramming

https://www.reddit.com/r/aipromptprogramming/


What about using .env variables to store related credentials, provided that file is not publicly viewable but is still accessible to the plugin running on the server?

If you are deploying on cloud services they generally have really good support for storing “secrets”.

On AWS - AWS Secrets Manager
On GCP - Secret Manager (Google Cloud)
On Azure - Azure Key Vault (https://azure.microsoft.com/en-gb/products/key-vault/)

Other cloud providers will have similar functionality. The nice thing about these is that there are no environment variables that can accidentally be exposed by a fat finger commit. The secrets are retrieved at runtime and access control is determined by your infrastructure permissions.
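
For instance, on AWS the runtime lookup can be a few lines with boto3; the secret name below is made up for illustration:

# Minimal sketch of pulling a secret at runtime with boto3 (the secret name
# "chatgpt-plugin/oauth-client-secret" is a made-up example).
import json
import boto3

def get_oauth_client_secret() -> str:
    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId="chatgpt-plugin/oauth-client-secret")
    # Secrets Manager returns a string or binary payload; a JSON string is
    # common when one secret holds several related values.
    secret = json.loads(resp["SecretString"])
    return secret["client_secret"]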


Thank you. Could someone explain how to implement OAuth on top of my ChatGPT plugin?

I have a bizarre situation where my OAuth setup appears to fail when OpenAI attempts to verify my plugin, even though it works as expected when I test it myself. Ultimately, this produces a condition that blocks publication of my plugin :frowning:

Error Screenshot: