Inconsistent function calls using gpt-3.5-turbo-1106

We’re having problems with function-calling consistency. We started with gpt-3.5-turbo-0316 but switched to gpt-3.5-turbo-1106, which meant updating the function definitions and some prompts because it behaves differently. Even small changes in the prompt affect how functions are called. Do you think using a fine-tuned model would help? Or is there another way to make it more consistent?
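Before reaching for fine-tuning, it may be worth pinning down the request parameters that reduce variance. This is a sketch under assumptions (the `get_weather` tool schema is made up for illustration, not from your app): `temperature=0` lowers sampling variance, `seed` gives best-effort reproducibility on the 1106 models, and `tool_choice` forces a specific function instead of letting the model decide.

```python
# Hypothetical tool schema for illustration only; substitute your own functions.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

request_kwargs = {
    "model": "gpt-3.5-turbo-1106",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,
    # Force this specific function rather than letting the model choose.
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
    # Reduce sampling variance; seed is best-effort reproducibility on 1106.
    "temperature": 0,
    "seed": 42,
}

# You would then pass these to the client, e.g.:
# response = client.chat.completions.create(**request_kwargs)
```

Even with these settings the model is not fully deterministic, but in my experience it narrows the spread of argument formats considerably.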

Well, I have no direct idea.
I can only tell you that I use gpt-3.5-turbo-1106 in a lot of projects.

I don’t know if this will help you…

import json
import sys

from openai import OpenAI

# ANSI color codes used by the status prints below
BLUE = "\033[94m"
GREEN = "\033[92m"
ENDC = "\033[0m"


def write_file(file_path, content):
    try:
        with open(file_path, 'w') as file:
            file.write(content)
    except Exception as e:
        print(f"Error writing to file: {e}")
        sys.exit(1)


def get_chat_response(combined_prompt):
    client = OpenAI()

    messages = [
        {"role": "system", "content": "You are an automated file writing assistant. Your task is to generate complete file content or responses based on the provided prompts and reference files. Enclose the generated file content between the delimiters ###START_OF_FILE\n and \n###END_OF_FILE. If you need to ask follow-up questions for clarification, enclose them between ###START_RESPONSE\n and \n###END_RESPONSE. do not use backticks."},
        {"role": "user", "content": combined_prompt}
    ]

    # Print the payload before sending it to the API
    print(f"{BLUE}Sending payload to OpenAI API:{ENDC}")
    print(json.dumps(messages, indent=2))

    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo-1106",
            messages=messages,
        )
        # Get the assistant's reply from the response
        assistant_reply = response.choices[0].message.content
    except Exception as e:
        print(f"OpenAI API error: {e}")
        sys.exit(1)

    print(f"{GREEN}Received response from OpenAI API:{ENDC}")
    print(assistant_reply)  # Print the response received from the API

    return assistant_reply


def process_response(response, output_file):
    # Split the response into lines
    lines = response.split('\n')
    # (My original snippet was cut off here; this is a minimal completion that
    # collects whatever sits between the ###START_OF_FILE / ###END_OF_FILE
    # delimiters and writes it out with write_file.)
    file_lines = []
    in_file = False
    for line in lines:
        if line.strip() == "###START_OF_FILE":
            in_file = True
        elif line.strip() == "###END_OF_FILE":
            in_file = False
        elif in_file:
            file_lines.append(line)
    if file_lines:
        write_file(output_file, '\n'.join(file_lines))

And if it doesn’t, I’m sorry.

I think I might wait until OpenAI comes up with their own solution for function calling in the new preview models. The new models are no longer listed with the plan to become the default “gpt-3.5-turbo” on December 11, and there are serious problems with function calling for languages that use non-ASCII (Unicode) character sets.
