GPT-4 randomly garbling function call names

I sporadically get an `invalid_request_error` for chat completion requests that include a single function definition. I always construct the request the same way, and it works for thousands of requests, but every once in a while I get this error, or permutations of it. GPT-4 seems to be garbling the function name by appending extra tokens after it, which causes the server-side regex not to match.

'performEdit.rnnassistant' does not match '^[a-zA-Z0-9_-]{1,64}' - 'messages.2.function_call.name'
'performEdit]interface' does not match '^[a-zA-Z0-9_-]{1,64}' - 'messages.2.function_call.name'

The name of my function is just performEdit


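One possible client-side stopgap (just a sketch; `sanitize_function_call` and `KNOWN_FUNCTIONS` are made-up names, not part of any SDK) is to normalise the returned name before the assistant message is echoed back into `messages`:

```python
import re

# Pattern the API enforces on messages[*].function_call.name
FUNCTION_NAME_RE = re.compile(r"^[a-zA-Z0-9_-]{1,64}$")
KNOWN_FUNCTIONS = {"performEdit"}  # the functions actually defined in the request

def sanitize_function_call(message: dict) -> dict:
    """Clamp a garbled function_call.name (e.g. 'performEdit.rnnassistant')
    back to a known function name before the message is sent again."""
    call = message.get("function_call")
    if not call:
        return message
    name = call.get("name", "")
    if FUNCTION_NAME_RE.match(name) and name in KNOWN_FUNCTIONS:
        return message
    # Fall back to the longest known name the garbled string starts with, if any.
    for known in sorted(KNOWN_FUNCTIONS, key=len, reverse=True):
        if name.startswith(known):
            call["name"] = known
            break
    return message
```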

PS: Here's an example of what my API calls look like:

POST https://api.openai.com/v1/chat/completions

{
  "model" : "gpt-4-0613",
  "messages" : [
    {
      "content" : "You are an experienced native German proof reader. You will correct texts preserving markdown formatting, links and images. Prefer terse responses and never create any new texts.",
      "role" : "system"
    },
    {
      "content" : "Redigiere den folgenden Satz. Wenn Änderungen nötig sind, dann führe sie aus. Ansonsten melde `OK`\n\n```\nWie rasch du dich wieder sportlich betätigst, hängt auch ein wenig davon ab, ob du einen Babysitter\/eine Oma hast, die in der Zwischenzeit auf dein Kind aufpassen kann.\n```",
      "role" : "user",
      "name" : "Writer"
    }
  ],
  "functions" : [
    {
      "name" : "performEdit",
      "description" : "Performs an edit by replacing the original markdown text with the correction. Avoid double quotes in string parameters.",
      "parameters" : {
        "type" : "object",
        "properties" : {
          "correction" : {
            "type" : "string",
            "description" : "The corrected markdown text"
          },
          "reason" : {
            "type" : "string",
            "description" : "The editor's notes \/ explanation of why the change was necessary. Please use German."
          }
        },
        "required" : [

        ]
      }
    }
  ]
}

Might I suggest one more parameter?

"temperature": "0.0"

Thanks, but I didn’t change the temperature because I fear that the edits would be less “creative”.

Also, the temperature should have no effect on how GPT encodes function calls. This is clearly a case that calls for more training that rewards correct function calling, which is what OpenAI did to give us this functionality in the first place. At the very least there should be a server-side guard rail that simply redoes the request when these errors occur.
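Until something like that exists server-side, a client-side version of the guard rail is straightforward to sketch (assumes the `requests` package; `complete_with_guard` is a made-up helper, and the error check is deliberately narrow):

```python
import re
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
NAME_OK = re.compile(r"^[a-zA-Z0-9_-]{1,64}$")

def complete_with_guard(payload: dict, api_key: str, max_attempts: int = 3) -> dict:
    """Re-run the completion whenever the model returns a malformed function name."""
    headers = {"Authorization": f"Bearer {api_key}"}
    for _ in range(max_attempts):
        resp = requests.post(API_URL, json=payload, headers=headers, timeout=60)
        resp.raise_for_status()
        data = resp.json()
        call = data["choices"][0]["message"].get("function_call")
        if call is None or NAME_OK.match(call["name"]):
            return data
    raise RuntimeError("Model kept returning a malformed function_call.name")
```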

@oliver.drobnik Did you try adding the `function_call: {name: "performEdit"}` param to force the model to call your function?
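For reference, assuming the JSON body from the original post has been loaded into a Python dict called `payload` (name made up), forcing the call is a one-liner:

```python
# function_call forces the model to emit a performEdit call instead of free text.
payload["function_call"] = {"name": "performEdit"}
```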

Temperature injects noise into the token selection process, whether the softmax is producing tokens for a function call or for a normal completion. That gives you a chance of a less-than-ideal selection.
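As a toy illustration of that effect (the logits and tokens below are invented, not taken from the real model):

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Pick a token; temperature > 0 flattens the softmax so that
    low-probability tokens occasionally win the draw."""
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: always the top token
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Invented next-token candidates while the model is writing a function name:
logits = {"performEdit": 9.0, "performEdit.rnn": 5.5, "performEdit]": 5.0}
print(sample_with_temperature(logits, 0.0))  # deterministic
print(sample_with_temperature(logits, 1.0))  # occasionally picks a garbled variant
```

At temperature 0 the draw is deterministic, which is the effect the suggestion above is after.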

That’s fine if it gives you “kitty” instead of “kitten”, but not when it gives you a different name for your function.

It also might have been fine-tuned on calling functions, but not on 1,000 examples with your particular variable names.
