Feature Request: Encrypted Prompt Handling with Server-Side Decryption


Overview:
We request a feature that allows developers to securely send encrypted prompts to OpenAI’s API. The OpenAI servers would decrypt these prompts using a pre-generated key stored within OpenAI’s infrastructure, process the decrypted prompt, and return the response to the application. This feature would significantly enhance intellectual property (IP) protection for developers who need to keep proprietary prompts confidential without exposing sensitive business logic in their client applications.


Use Case:

Many developers are integrating OpenAI’s models into their applications and embedding proprietary logic in the form of structured prompts. However, when prompts are embedded in client-side code (mobile, desktop, or web apps), they are vulnerable to reverse engineering, which can expose proprietary prompt logic and algorithms.

This feature would allow developers to:

  1. Encrypt prompts on the client side before sending them to OpenAI.
  2. Store a decryption key securely within OpenAI’s infrastructure to decrypt these prompts server-side.
  3. Rely on OpenAI to process the decrypted prompt with its models and return a response as usual.

By keeping the decryption process server-side, this feature would allow developers to keep sensitive prompts hidden from end users and client applications, preserving the proprietary nature of their logic while still using OpenAI’s models for inference.


How It Works:

  1. Client-Side Encryption:

    • Developers would encrypt their prompts using a secure encryption method (e.g., AES) and send the encrypted data to OpenAI’s API.
    • Example encrypted prompt (base64-encoded ciphertext): "gZ7Rhw1Qd2==" (see the client-side sketch after this list).
  2. Server-Side Decryption:

    • The OpenAI server would use a pre-generated decryption key, securely stored on the server side, to decrypt the prompt before processing.
    • The decrypted prompt would be processed by the model (e.g., GPT-4).
    • The decryption keys should be managed using a secure key management service (e.g., AWS KMS, Azure Key Vault) to ensure high security and avoid unauthorized access.
  3. Return Processed Response:

    • After processing the decrypted prompt, OpenAI’s API would return the usual response to the client, following the existing API response structure.
    • The response could remain unencrypted unless requested otherwise.
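To make step 1 (and the request shape implied by step 3) concrete, here is a minimal Python sketch assuming AES-256-GCM via the `cryptography` package. The `encrypted_prompt` and `key_id` parameters, the key ID value, and the endpoint are hypothetical; nothing like this exists in the current OpenAI API.

```python
# Client-side sketch of the proposal: encrypt the prompt with AES-256-GCM and
# build a (hypothetical) request payload. The "encrypted_prompt" and "key_id"
# fields are assumptions illustrating this feature request, not real API fields.
import os
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The symmetric key that, under this proposal, would also be registered with
# OpenAI's key management infrastructure ahead of time.
key = AESGCM.generate_key(bit_length=256)

plaintext = b"You are AcmeBot. Apply the proprietary scoring rules: ..."
nonce = os.urandom(12)  # fresh 96-bit nonce for every message
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

# Send nonce + ciphertext as a single base64 blob (like the example above).
encrypted_prompt = base64.b64encode(nonce + ciphertext).decode()

payload = {
    "model": "gpt-4",
    "key_id": "acme-prod-key-01",         # hypothetical: tells OpenAI which stored key to use
    "encrypted_prompt": encrypted_prompt,  # hypothetical parameter from this feature request
}
# The client would then POST `payload` to a (hypothetical) encrypted-prompt endpoint.
```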

Benefits:

  1. Enhanced IP Protection: Developers can safely send sensitive prompts to OpenAI without risking reverse engineering or exposure of proprietary algorithms within client applications.

  2. Secure Communication: With decryption handled server-side, the proprietary prompt is transmitted only as ciphertext, adding a layer of protection on top of TLS and keeping the plaintext out of the client application entirely.

  3. Developer Flexibility: This feature would provide developers with the flexibility to securely integrate OpenAI’s API into their apps without the need to run a separate backend server to handle proprietary prompts.


Considerations:

  1. Encryption Standards: OpenAI should specify the supported encryption standards (e.g., AES-256) and provide secure key management services for handling decryption keys.

  2. API Changes: OpenAI’s API would need to accept an additional parameter for encrypted prompts and securely retrieve the associated decryption key for each developer (see the server-side sketch after this list).

  3. Performance: There may be slight performance overhead due to the encryption/decryption process, so optimization would be necessary to ensure minimal impact on response times.
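For the server side of considerations 1 and 2, a matching sketch of how the decryption half could look. A plain dict stands in for a real key management service such as AWS KMS or Azure Key Vault; this illustrates the proposal only and is not how OpenAI’s infrastructure actually works.

```python
# Server-side counterpart to the client sketch above: look up the developer's
# AES-256 key by key_id and decrypt before inference. The dict is a stand-in
# for a real key management service (AWS KMS, Azure Key Vault, ...).
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY_STORE = {
    "acme-prod-key-01": AESGCM.generate_key(bit_length=256),  # registered out of band
}

def decrypt_prompt(key_id: str, encrypted_prompt: str) -> str:
    key = KEY_STORE[key_id]                   # in practice: a KMS / key-vault call
    blob = base64.b64decode(encrypted_prompt)
    nonce, ciphertext = blob[:12], blob[12:]  # matches the nonce+ciphertext framing above
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

# The decrypted string would then be passed to the model exactly like a normal prompt.
```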


Conclusion:
This feature would provide developers with the ability to securely use OpenAI’s API without compromising on IP protection, creating a safer and more secure environment for proprietary prompt handling in client-facing applications.

2 Likes

Want to note here that an end-to-end encrypted client is available through Azure’s OpenAI client:

https://learn.microsoft.com/en-us/azure/ai-services/openai/overview

I definitely sympathize… put mildly, navigating Microsoft’s tentacles can be a bit of a bother sometimes. I would definitely be interested in seeing OAI launch a custom-encryption endpoint like the one described above.
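For reference, basic usage of an Azure OpenAI deployment through the official `openai` Python package looks roughly like this. The resource endpoint, deployment name, and API version are placeholders; the end-to-end encryption point above is the previous poster’s claim, and this sketch only shows the client setup, with transport security being ordinary TLS plus whatever key management the Azure resource is configured with.

```python
# Minimal Azure OpenAI call via the official `openai` package.
# Endpoint, deployment name, and api_version are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

resp = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # the Azure deployment name, not the raw model id
    messages=[{"role": "user", "content": "Hello from an Azure-hosted deployment"}],
)
print(resp.choices[0].message.content)
```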

Want to note here: the suggestion is that a client application would ship an encrypted system prompt to hide its contents from disassembly - but the client would then be making direct requests to the OpenAI API.

If you are doing that, you have already failed at security 101, and will have your API keys stolen.
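The usual fix is to put both the prompt and the API key behind a backend you control, so the client never talks to OpenAI directly. A minimal sketch, assuming FastAPI and the official `openai` package, with the route, model name, and prompt as placeholders:

```python
# Conventional alternative to shipping prompts (and keys) in the client:
# the app calls this backend, and only the backend calls OpenAI.
import os
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # never shipped to end users

SYSTEM_PROMPT = "Proprietary system prompt, stored only on this server."

class ChatRequest(BaseModel):
    user_message: str

@app.post("/chat")
def chat(req: ChatRequest):
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": req.user_message},
        ],
    )
    return {"reply": resp.choices[0].message.content}
```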

1 Like