I have made a small example of few-shot classification using the OpenAI GUI (playground) and it seems to work reasonably well. I would now like to convert this into API code, but I am struggling to find a minimal example of how to do this.
On the GUI, I do the following:
Description: classify titles into categories
Input: Manchester United seals win over Manchester City
Output: {"category":"football"}
Input: Australia win Ashes away from home
Output: {"category":"cricket"}
Input: Lewis Hamilton fastest in qualifying
Output: {"category":"formula 1"}
Input: Bayern Munich goalkeeper injured
Output: {"category":"football"}
Input: Stuart Broad retires from English cricket
Output: {"category":"cricket"}
Input: Verstappen clashes with Bottas in 2023 F1 series
Output: {"category":"formula 1"}
Input: England to tour West Indies for 5 nations cricket cup
OpenAI output:
Output: {"category":"cricket"}
I would like to convert this to an API call, but I have not been able to find a minimal example for such few-shot prompts.
You would make an API call, setting the system and user messages as follows:
import openai

# Placeholders from the original prompt; fill these in for your own task:
user_task_description = "classify titles into categories"
user_output_template = '{"category":"football"}'
user_query_input = "England to tour West Indies for 5 nations cricket cup"

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": (
            "You will take task instructions from the ###instruction:``### section "
            "and use those to process data in the ###input:``### section "
            "using the ###output:``### section as a template")},
        {"role": "user", "content": f"###instruction: '{user_task_description}'###"},
        {"role": "user", "content": f"###output: '{user_output_template}'###"},
        {"role": "user", "content": f"###input: '{user_query_input}'###"},
    ],
)
print(completion["choices"][0]["message"]["content"])
Now you have a minimal, generalised definition of what you wanted to do; use this as a jumping-off point for a more complete implementation.
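(As an aside, a more literal few-shot translation of the playground prompt is also possible. This is only a minimal sketch, and the system text is illustrative rather than taken from the thread: each worked example becomes a user/assistant message pair, with the new headline asked last.)

import openai

# Assumes openai.api_key is already set, e.g. via the OPENAI_API_KEY environment variable.
# Each worked example is a user message (the headline) followed by an assistant
# message (the expected JSON); the new headline to classify goes last.
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": 'Classify titles into categories. Reply only with JSON like {"category": "..."}.'},
        {"role": "user", "content": "Manchester United seals win over Manchester City"},
        {"role": "assistant", "content": '{"category":"football"}'},
        {"role": "user", "content": "Australia win Ashes away from home"},
        {"role": "assistant", "content": '{"category":"cricket"}'},
        {"role": "user", "content": "Lewis Hamilton fastest in qualifying"},
        {"role": "assistant", "content": '{"category":"formula 1"}'},
        {"role": "user", "content": "England to tour West Indies for 5 nations cricket cup"},
    ],
    temperature=0,
)
print(completion["choices"][0]["message"]["content"])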
Both are valid solutions; the latter is a formalised method for doing what I did with user roles and plain text. It all depends on your level of familiarity with the function-calling system. I tend to get a feel for what a function will do and what I need to send before I build it.
I'd do my method first, and if I get a stable, useful result, I may then go on to build a function around it. I usually find function building takes a little more effort, but results in a more reliably consistent level of output for a given task.
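For reference, here is a hedged sketch of the function-calling route with the legacy API. The function name report_category and its schema are made up for illustration; the point is that forcing a function call returns the category as structured JSON arguments rather than free text.

import openai

# Illustrative only: force the model to "call" a classification function so the
# category comes back in the function_call arguments as JSON.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You classify news headlines into sport categories."},
        {"role": "user", "content": "Stuart Broad retires from English cricket"},
    ],
    functions=[
        {
            "name": "report_category",  # hypothetical function name
            "description": "Report the category of a news headline.",
            "parameters": {
                "type": "object",
                "properties": {
                    "category": {"type": "string", "description": "e.g. football, cricket, formula 1"}
                },
                "required": ["category"],
            },
        }
    ],
    function_call={"name": "report_category"},  # always call this function
)
print(response["choices"][0]["message"]["function_call"]["arguments"])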
This is a case where you do not need to use "few-shot training" for the classification task. A simple gpt-3.5-turbo system instruction allows the AI to answer each of your "shots" accurately, in a conversation, without ever having seen them before.
The only refinement would be if you need to select from a set list of categories, instead of letting the AI choose any (a sketch of that variation appears after the code below).
Within the OpenAI playground, as seen in the image, you can experiment and then press "view code" to get an idea of the API call parameters needed (in Python code).
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
# or put your API key here in quotes
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "system",
"content": "// Role\nYou are an automated text classifier, taking news headlines as input, and reporting the category of article as python dictionary entry.\n\nUse special category \"unclear\" if the topic or sport cannot be determined.\n\n// Example\nuser\nBucs officially name Tom Brady successor: 'Time to Bake'\nassistant\n{\"category\":\"American football\"}"
},
{
"role": "user",
"content": "Manchester united seals win over Manchester city"
}
],
temperature=0.1,
max_tokens=20
)
print(response["choices"][0]["message"]["content"])
This is for running a single query, not a chatbot.
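If you need the set-list refinement mentioned earlier, one hedged option (the category names here are illustrative) is to enumerate the allowed values in the system message and pass that string as the system role content in the call above:

# Illustrative variation: constrain the model to a fixed list of categories.
allowed_categories = ["football", "cricket", "formula 1", "American football", "unclear"]
system_message = (
    "You are an automated text classifier, taking news headlines as input, "
    'and reporting the category of the article as a Python dictionary entry, e.g. {"category": "cricket"}. '
    "The category must be one of: " + ", ".join(allowed_categories) + ". "
    'Use "unclear" if the topic or sport cannot be determined.'
)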
Here is a link to a conversational chatbot I wrote for the Python console. You can replace the system message with the one in the code above, for a continued "headline asking" session.
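As a rough illustration (this is not the linked chatbot, just a minimal sketch), a continued "headline asking" session could be a simple console loop that re-sends the same system message with each new headline:

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Reuse the classifier system message from the example above (shortened here).
system_message = (
    "You are an automated text classifier, taking news headlines as input, "
    "and reporting the category of the article as a Python dictionary entry. "
    'Use special category "unclear" if the topic or sport cannot be determined.'
)

# Minimal console loop: one classification per headline, no history kept.
while True:
    headline = input("Headline (blank to quit): ").strip()
    if not headline:
        break
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": headline},
        ],
        temperature=0.1,
        max_tokens=20,
    )
    print(response["choices"][0]["message"]["content"])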