Bill OpenAI API callbacks to the user's OpenAI key?

My plugin does a few callbacks to the API for processing. Is there a way I can have these charged to the user? I realize you can't provide the user's OpenAI key, but maybe there's some token I can use in place of my key?

You want the user to use their own token, like an OAuth token from social media? What's the use case?

Hmm. I may be revealing my ignorance or misunderstanding here, but here's my confusion: I am developing a plugin that retrieves some text. In the workflow of the plugin, it uses the openai.ChatCompletion.create interface to perform several operations on the text. In production, I'd like to use the user's openai.api_key, not mine. Is there a token passed by ChatGPT to the plugin that I can use? I can see many, many issues around this; maybe there is no solution, or maybe I am just completely misunderstanding how plugins are supposed to function. Any help appreciated.

I think this is a valid requirement, but there's another standard solution: this plugin ecosystem should have a billing system, just like any other app store.

This seems like the wrong use for a plugin. The key is a secret for a reason, and exposing it in the chat dialog would be a huge security hole for both your app and the secret. Just charge for access to your plugin through Stripe: if each request costs you $0.01, charge $0.02, etc.
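The markup suggestion above can be sketched as a trivial pricing helper (a hypothetical function, just to make the arithmetic concrete):

```python
# Hypothetical per-request pricing sketch: if each upstream API call costs
# you `cost_cents`, bill the user at a markup to cover your own usage.
def price_per_request(cost_cents, markup=2.0):
    """Return the amount to charge the user, in cents."""
    return int(round(cost_cents * markup))

# e.g. a request costing you $0.01 upstream is billed to the user at $0.02
```

In a real service, this price would feed into whatever payment provider you use (Stripe, etc.); the point is only that the plugin author absorbs the API cost and recovers it at the application layer rather than touching the user's key.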

I was thinking not of access to the actual user's OpenAI API key, but rather of some kind of in-session token. It could require opt-in from the user. Yeah, I do see major security problems. Maybe ChengFu's solution is the way to go. I guess the other alternative would be to restrict to fully open-source LLMs internally (my requirements are relatively modest; gpt-3.5-turbo handles them easily), but it doesn't seem, from OpenAI's perspective, that that is a solution they would want to encourage.

If you have users auth through your website, you can store their settings (hopefully in a secure way!) on your service side. I wonder if you could create an endpoint that returns the user a URL to a page with a form where they can enter their API key and other information, and then document the endpoint in a way that ChatGPT will understand to call it first and then ask the user to go fill in the settings on your website at the URL that was returned by the endpoint.
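That flow might look something like the sketch below. Everything here is hypothetical (the domain, the function name, the token scheme): the endpoint hands back a one-time settings URL, and the user enters their API key in a browser so it never passes through the chat dialog.

```python
import secrets

# Placeholder domain for the settings page on your own website.
SETTINGS_BASE = "https://example.com/settings"

def get_settings_url(user_id):
    """Return a one-time URL where the user can enter their API key.

    A real service would persist (user_id, token) server-side, expire the
    token after use, and store the submitted key encrypted at rest.
    """
    token = secrets.token_urlsafe(16)  # unguessable one-time token
    return f"{SETTINGS_BASE}?user={user_id}&token={token}"
```

ChatGPT would call this endpoint, show the returned URL to the user, and subsequent plugin calls would look up the stored key by the authenticated user rather than receiving it in-band.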

But all of that should be checked against OpenAI policy: is it OK to ask users for an API key? Sharing the API key with a third party is a risk, because if it is leaked or abused, the users will pay the money. A secure solution for this, with the ability to limit permissions, would be very helpful.

I've found another solution to this! Using an 'instructions' key in the plugin response, one can pass a prompt instructing GPT to process a text and then call the plugin back with the result. This way one can completely avoid using the API (and hence needing the user's API key). It's a bit tricky to get working, and I'm not sure how stable it will be as ChatGPT evolves. It's also rather slow at the moment, but I assume that will be ironed out over time.
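The shape of the idea is roughly this (a minimal sketch; the function name and instruction text are hypothetical): instead of calling the API server-side, the plugin returns its data along with an 'instructions' field telling ChatGPT to do the processing itself and call back.

```python
import json

def plugin_response(retrieved_text):
    # Return both the data and an 'instructions' field. ChatGPT performs the
    # LLM work in-session, so no server-side openai.ChatCompletion call (and
    # hence no API key at all) is needed.
    return json.dumps({
        "response": retrieved_text,
        "instructions": (
            "Summarize the text in 'response' yourself, then call this "
            "plugin again with the summary as the argument."
        ),
    })
```

The processing happens inside the user's own chat session, so the compute is effectively "billed" to the user's ChatGPT usage rather than to the plugin author's API key.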


Nice. Is there an example somewhere of how it can be used? I can't find it in the documentation either.

Warning: very early experimental code, YMMV. Also, after the plugin is enabled, my first input in a chat session is 'agent initialize'. ChatGPT sometimes gets the plugin interface wrong and tries a string rather than a JSON argument; I'm still sorting that out. In those cases, however, it usually retries with a different call format and succeeds. Note the instructions also assume my search plugin is installed. I don't think I could live without that one. :slight_smile:
```python
import traceback

Plan = None

initial_instructions = """Base instructions: If a user query can be fully answered based on known facts or reasoning, answer directly. If the query requires understanding the user's preferences, context, or involves creating a comprehensive plan, consult the agent plugin for assistance. If the final answer is likely available through a simple web search, use the search plugin to find the information. Adhere to the instructions given in the agent's response when handling queries."""

def process_from_gpt(from_gpt):
    global Plan
    try:
        if 'initialize' in from_gpt:
            return "{'response': 'ok', 'instructions': '" + initial_instructions + "'}"
        elif Plan is None:
            if 'PLAN' in from_gpt:
                Plan = from_gpt[len('PLAN'):]
                return ("{'response': 'plan approved', 'instructions': 'Follow the first "
                        "step of the plan, remembering to always adhere to Base Instructions, "
                        "then ask agent for permission to proceed, sending step response to "
                        "the agent.'}")
            else:
                return ("{'response': '', 'instructions': 'Follow the following steps: "
                        "construct a <plan> for: " + from_gpt + ", then send the plan to the "
                        "agent, using the following format: PLAN+<plan>. Finally, follow "
                        "instructions in the agent response. Do not begin following the plan "
                        "until instructed to do so.'}")
        return ("{'response': 'Ok', 'instructions': 'If there are no more steps, display "
                "last step response to user. Otherwise follow the next step of the plan, "
                "remembering to always adhere to Base Instructions, then ask agent for "
                "permission to proceed, sending step response to the agent.'}")
    except Exception:
        traceback.print_exc()
        return ''
```

Hmm. In my post above, markdown deleted a couple of bits of text, interpreting them as markup. 'construct a for:', for example, should be 'construct a <plan> for:'.

I managed to get a similar thing working with my chess plugin.

It will make the user's move in one call, and then follow that immediately with another call to make its own move.
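The two-call pattern described above could be sketched like this (all names and strings are hypothetical illustrations, not the actual chess plugin): the response to the user's move carries instructions telling ChatGPT to call straight back so the engine can reply.

```python
import json

def handle_move(move_history, user_move):
    # Simplified: just record the move; a real plugin would validate it
    # against an actual board state.
    move_history.append(user_move)
    return json.dumps({
        "response": f"Move {user_move} played.",
        "instructions": (
            "Now call this plugin again with action='engine_move' so the "
            "engine can reply with its own move."
        ),
    })
```

The 'instructions' field is what chains the second call: ChatGPT reads it, calls back immediately, and the engine's reply arrives in the same turn of conversation.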