Is there any way to set a default prompt "context/description" for every call?

Hello, this is my very first time using the OpenAI API (and I am not much of a developer as it is). I am making my first app, which roughly speaking has a user input and a button; the button sends the user's input to the AI (an HTTPS POST call) to parse and analyze it and return a response. Here's an example of my case:

Let's say I need the AI to parse a JSON and multiply the value of a property called num by 10. The description of my query, which is "multiply the num value in the given JSON by 10 and return the product to me," will remain the same every time. However, the num value will be provided by the user through the app. The JSON and query descriptions can be more complex in reality, so providing the extensive context/instructions with each API call would quickly consume my tokens.

My API call would be like a one-off question, not a conversation/chat-type request. I know I can use the completions model instead for one-off prompts, but 3.5 turbo chat completions is much cheaper and I'd like to go with it.

Is there any way in which I don't have to give the 3-4 paragraphs of context/parsing steps in each API call, and can instead just send a reference to the steps along with the data? Again, I'm just trying to make a one-off request with some data and instructions. Any advice is appreciated. Thanks!

The advice would be to append a 0 digit to the integer with a regex, which is all multiplying by 10 is, instead of asking a chatbot to attempt unreliable math.
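For illustration, a minimal sketch of that regex approach in Python; the payload and the `num` property are made-up examples, and it assumes the value is a plain decimal integer:

```python
import re

payload = '{"num": 42, "label": "sample"}'  # hypothetical user-supplied JSON

# Appending a "0" to a decimal integer is the same as multiplying it by 10.
# The pattern targets only the "num" property, so other digits are untouched.
result = re.sub(r'("num":\s*)(\d+)', r'\g<1>\g<2>0', payload)
print(result)  # {"num": 420, "label": "sample"}
```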

Hello @_j, thanks for replying. The actual task is not 10n+5; it's much more sophisticated and complex.

Why would you use a Large Language Model for that purpose?

Unless the JSON is incredibly irregular between requests, I imagine there are much better ways to parse it and get the number you need.

But even then, once you have the number, regardless of how it was parsed, you can do whatever calculations you need outside of the LLM, feed the result back into it, and continue from there.
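A minimal sketch of that, assuming the payload really is JSON with a numeric `num` property (the values are invented):

```python
import json

raw = '{"num": 7}'           # hypothetical user input
data = json.loads(raw)       # deterministic parse, costs no tokens
product = data["num"] * 10   # do the arithmetic in ordinary code

# Only the result needs to reach the model, e.g. for a natural-language reply.
followup = f"The computed product is {product}. Explain this to the user."
```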

@lalittambi I'm not sure about the API, but if you have access to plugins you could try to make your code compatible with the plugin format, then describe the task the way you did here in this post and paste that into the "description for model" section of the JSON manifest. That should give you a persistent description that modifies the behavior of the model to your liking when it runs as a plugin, though it will also depend on the user's prompt, which needs to trigger the plugin. Hope that helps.
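For reference, a rough sketch of such a manifest, trimmed to the relevant fields; every name and URL below is a placeholder:

```python
import json

# Hypothetical ai-plugin.json manifest. "description_for_model" is the field
# that persistently tells the model what the plugin does and when to use it.
manifest = {
    "schema_version": "v1",
    "name_for_human": "JSON Math Helper",
    "name_for_model": "json_math_helper",
    "description_for_human": "Processes a JSON payload for you.",
    "description_for_model": (
        "Multiply the num value in the given JSON by 10 "
        "and return the product to the user."
    ),
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
}

with open("ai-plugin.json", "w") as f:
    json.dump(manifest, f, indent=2)
```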


Not my issue, but I appreciate your contribution to the discussion.


The direct answer: no, there is no "default prompt" or "prompt library" convenience feature beyond what you have coded yourself. The computation triggered by more prompt input into the AI engine is far more resource-expensive than the bandwidth of sending a bit more data again, so you would be billed regardless, even if there were a server-side cache that could re-feed some instructions or chatbot conversation.
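In practice the "default" has to live in your own code: store the fixed instructions once and prepend them as the system message on every call. A minimal sketch with the OpenAI Python library (the prompt and model here are just examples):

```python
import openai

openai.api_key = "sk-..."  # your API key

# Defined once in your code...
SYSTEM_PROMPT = (
    "Multiply the num value in the given JSON by 10 "
    "and return the product."
)

def ask(user_input: str) -> str:
    # ...but still sent, and billed, with every request.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_input},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(ask('{"num": 7}'))
```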

One might try a fine-tune if this were a large permanent application, which could give the kind of responses you want with less guidance.
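A fine-tune means preparing a file of training examples so the desired behavior is baked into the model instead of re-prompted each time. A rough sketch of what that data could look like (legacy prompt/completion format for base models; the pairs are invented for illustration):

```python
import json

# Hypothetical training pairs: the instructions live in the training data,
# not in every request.
examples = [
    {"prompt": '{"num": 7}\n\n###\n\n', "completion": " 70 END"},
    {"prompt": '{"num": 12}\n\n###\n\n', "completion": " 120 END"},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```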

Still, asking a language model to try to do math calculations is going to come back to bite you when it can’t…

It is better to parse the JSON using some function separate from the Chat API. If I were to implement this, I would have a separate UI to paste the JSON, then use the new function calling in the API for the user query. This extracts the command from the user, and it works well for one-shot prompts. Then call the function to parse and process the JSON using that command, and finally send the result back to the Chat API to summarize. With a separate function outside the Chat API parsing the JSON, you can potentially create as many kinds of processes/computations as you want without incurring too many API calls.
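A sketch of that flow with the openai Python library; the process_json function, the multiply_num_by_10 command, and all sample values are hypothetical:

```python
import json
import openai

openai.api_key = "sk-..."

# Local function that does the actual JSON work, outside the API.
def process_json(payload: dict, command: str):
    if command == "multiply_num_by_10":
        return payload["num"] * 10
    raise ValueError(f"unknown command: {command}")

payload = {"num": 7}                        # pasted separately in the UI
user_query = "Multiply num by ten, please"  # the one-shot user prompt

# Step 1: let function calling extract a structured command from the query.
first = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": user_query}],
    functions=[{
        "name": "process_json",
        "description": "Run a named computation over the user's JSON payload.",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {"type": "string", "enum": ["multiply_num_by_10"]},
            },
            "required": ["command"],
        },
    }],
)

message = first["choices"][0]["message"]
if message.get("function_call"):
    # Step 2: run the computation locally with the extracted command.
    args = json.loads(message["function_call"]["arguments"])
    result = process_json(payload, args["command"])

    # Step 3: hand the result back so the model can summarize it.
    second = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[
            {"role": "user", "content": user_query},
            message,
            {"role": "function", "name": "process_json",
             "content": json.dumps(result)},
        ],
    )
    print(second["choices"][0]["message"]["content"])
```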