ENV: Python 3.11 :pycharm
OS: Windows 10 home
GPT: gpt-3.5-turbo-16k-0613
Hello forum,
I’m concerned that there is some broken functionality in the function-calling system. However, I’m not ruling out the possibility that my issues are user error.
I’ve written a bot (Le0sGh0st on X) that writes stories and generates images. I’ve noticed that GPT tends to generate different but similar content when the prompts are similar: it’ll write 5 stories in a row about 5 different main characters, but they’ll all be named “Lilly.”
In an effort to generate content with more variety, I’ve decided to have GPT generate a list of character names to use in stories, and each time a story is written, store the names used in a database.
This way, when choosing names, I can load the last 5 or 10 character names into a variable and append them to the GPT query for character names: f"hey, i need you to generate 10 character names, for a {chosen_genre} story. please do not use any names found on the following list: {Last_5_names_used}"
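The prompt-building step described above can be sketched as follows; `build_name_prompt` and its arguments are illustrative names, not the bot's actual code:

```python
def build_name_prompt(chosen_genre, last_names_used):
    """Build the character-name query, appending the avoid-list
    loaded from the database (names here are just examples)."""
    names = ", ".join(last_names_used)
    return (
        f"hey, i need you to generate 10 character names, for a "
        f"{chosen_genre} story. please do not use any names found "
        f"on the following list: {names}"
    )
```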
Now, when I do this raw, without function calling, it hits the nail on the head every time: I get 10 fresh names. However, when I put this through the function-calling system, I get weird results.
Sometimes it’ll give me weird stuff, like the name of the genre chosen earlier in the process. When I remove references to the genre, the response more often than not comes back either:
A: empty, or
B: containing nothing but the list of names I’ve instructed it not to use. Almost defiant.
Now, empty function-call requests are detrimental and almost always result in an error. I hate to use error handling here and retry on failure, because that can get costly.
It fails often, returning malformed function-call arguments or JSON responses that are slightly off, and it does so pretty consistently (I’ve let it run in a loop until successful and racked up nearly a dollar in gpt-3.5-turbo-0613 requests).
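Rather than looping until success, a bounded retry with local validation of the parsed arguments caps the cost. A minimal sketch, assuming the API call is wrapped in a `make_request` callable (the wrapper and its names are mine, not from the bot):

```python
import json
import time

def call_with_retries(make_request, max_attempts=3, delay=1.0):
    """Retry a function-calling request a bounded number of times,
    validating the parsed arguments before accepting the result."""
    for attempt in range(max_attempts):
        response = make_request()
        try:
            args = json.loads(
                response["choices"][0]["message"]["function_call"]["arguments"]
            )
        except (KeyError, TypeError, json.JSONDecodeError):
            time.sleep(delay * (attempt + 1))  # simple linear backoff
            continue
        # Reject empty or missing name lists instead of erroring later
        if isinstance(args, dict) and args.get("char_list"):
            return args
        time.sleep(delay * (attempt + 1))
    raise RuntimeError("no valid function-call arguments after retries")
```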
Now, by contrast:
I’ve written a virtual assistant that takes advantage of the function-calling system. In that context the function calls work great, because the responses from the GPT API are generally one or two words.
For instance, when you speak to the assistant, your input is sent to GPT with a query about its context: “question”, “command”, “task”, “cmd_line”, etc. So if you say “open paint,” the script sends that to GPT, asks whether it’s a question, a command/task request, etc., and GPT responds with “cmd_line” in the function call.
While it occasionally miscategorizes the context/intent of user input, it’s at about a 99% success rate, and when it errs, it errs on the category; the JSON response is always properly formatted.
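One plausible reason the classifier is so reliable is that its answer space is tiny, and that space can be made part of the schema contract with a JSON Schema enum. This is a reconstruction, not the actual assistant code; the category values come from the post, the rest is assumed:

```python
# Hypothetical classifier schema: the enum constrains the model to
# one of the four category strings mentioned in the post.
classify_fn = {
    "name": "classify_input",
    "description": "Classifies the user's spoken input.",
    "parameters": {
        "type": "object",
        "properties": {
            "category": {
                "type": "string",
                "enum": ["question", "command", "task", "cmd_line"],
                "description": "the category of the user's input",
            }
        },
        "required": ["category"],
    },
}
```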
Am I asking too much for GPT to parse out 10 character names into a JSON response and accurately apply the logic I’ve instructed it to?
Has anyone else had any issues with this?
example usage:
Function Call Definitions:
functions = [
    {
        "name": "get_genre",
        "description": "Generates a genre of fiction for the next story",
        "parameters": {
            "type": "object",
            "properties": {
                "genre": {
                    "type": "string",
                    "description": "the genre of fiction"
                }
            },
            "required": ["genre"]
        }
    },
    {
        "name": "gen_char_names",
        "description": "Generates a list of 10 character names for the story.",
        "parameters": {
            "type": "object",
            "properties": {
                "char_list": {
                    "type": "string",
                    "description": "this is a list of 10 character names."
                }
            },
            "required": ["char_list"]
        }
    },
    {
        "name": "get_title",
        "description": "Generates a title for the story.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {
                    "type": "string",
                    "description": "this is the title of the story based on the plot outline provided"
                }
            },
            "required": ["title"]
        }
    },
    {
        "name": "get_author",
        "description": "Generates an author for the story.",
        "parameters": {
            "type": "object",
            "properties": {
                "choice": {
                    "type": "string",
                    "description": "this is the author whose voice will inspire the story based on the plot outline provided"
                }
            },
            "required": ["choice"]
        }
    },
]
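One likely culprit: `char_list` is declared as a string, so the model has to pack ten names into one free-form value. JSON Schema, which the function-calling API accepts, supports array types; a variant along these lines (a sketch I haven't tested against the bot) may coax out a clean JSON list:

```python
# Alternative schema for gen_char_names: declare char_list as an
# array of strings with a fixed length, so the model returns a
# JSON list rather than one free-form string.
gen_char_names_fn = {
    "name": "gen_char_names",
    "description": "Generates a list of 10 character names for the story.",
    "parameters": {
        "type": "object",
        "properties": {
            "char_list": {
                "type": "array",
                "items": {"type": "string"},
                "minItems": 10,
                "maxItems": 10,
                "description": "a list of exactly 10 character names",
            }
        },
        "required": ["char_list"],
    },
}
```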
Actual API Request:
import json
import openai

def gen_char_names(genre, prompt):
    last_characters = get_last_five_("Character_List")
    print(f"cleaned up characters to avoid: \n {last_characters}")
    while True:  # Keep trying until successful
        try:
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo-16k-0613",
                messages=[
                    {
                        "role": "system",
                        "content": "you are a helpful assistant. Follow the prompt directions exactly."
                    },
                    {
                        "role": "user",
                        "content": f"develop a list of 10 character names that are different from the following list of names: '{last_characters}' do not send back an empty response. do not use any of the following names: {last_characters} "  # Use this prompt as inspiration: {prompt}..
                    }
                ],
                functions=functions,
                function_call={
                    "name": functions[1]["name"]
                },
                temperature=0.9,
                max_tokens=100,
                top_p=1,
                frequency_penalty=1,
                presence_penalty=1,
                n=1,
            )
            # Parse the function-call arguments JSON string into a dictionary
            response_json = json.loads(response["choices"][0]["message"]["function_call"]["arguments"], strict=False)
            # Extract the character names from the parsed arguments.
            # Note: message["content"] is None when a function call is
            # forced, so the names must come from the parsed arguments.
            character_list = response_json["char_list"]
            # Print the character list
            print(f"List of Characters: \n {character_list}")
            return character_list
        except (KeyError, json.JSONDecodeError) as e:
            print(f"bad function-call response, retrying: {e}")
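Even with a retry loop, it can be cheaper to accept a partially bad response and filter it locally than to re-query until the model complies. A small sketch of such post-validation (the function name and sample names are mine):

```python
def filter_disallowed(names, avoid):
    """Drop any returned names that appear on the avoid-list,
    comparing case-insensitively, so a 'defiant' response can be
    salvaged instead of retried."""
    avoid_lower = {n.lower() for n in avoid}
    return [n for n in names if n.lower() not in avoid_lower]
```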