Chat API: how are different text prompts prioritized/weighed in function calling?

Take a typical OpenAI Chat API function call:

Function:

{
  "name": "write-blog-post",
  "description": "Write a blog post on the given topic.",
  "parameters": {
    "type": "object",
    "properties": {
      "text": {
        "description": "text for the blog post",
        "type": "string"
      }
    },
    "required": [
      "text"
    ],
    "additionalProperties": false
  },
  "strict": false
}

Messages:

[
  {
    "role": "system",
    "content": "Write a blog post on a given topic, return as JSON format."
  },
  {
    "role": "user",
    "content": "Please write a blog post on the topic: “The future of AI”."
  }
]

You essentially have 4 text inputs:

  1. Function description
  2. Function parameter description
  3. System message
  4. User message

Where should the main effort of my prompt writing be? Are all 4 equally important? Can I focus on just one of them?
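For reference, here is roughly how those four texts end up in a single request (a minimal sketch with the `openai` Python SDK; the model name and client setup are placeholders, not part of the example above):

```python
# Minimal sketch showing where each of the four text inputs lands in a request.
# Model name and client configuration are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "write-blog-post",
            "description": "Write a blog post on the given topic.",  # 1. function description
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "text for the blog post",  # 2. parameter description
                    }
                },
                "required": ["text"],
                "additionalProperties": False,
            },
        },
    }
]

messages = [
    # 3. system message
    {"role": "system", "content": "Write a blog post on a given topic, return as JSON format."},
    # 4. user message
    {"role": "user", "content": "Please write a blog post on the topic: “The future of AI”."},
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)
print(response.choices[0].message)
```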

Here are some tips; more appear later in the linked post, written within the past month.

The function specification is an automatic part of the system message and is in a special format the AI has been trained on.

Given how little attention the latest gpt-4o models pay to context, especially as a conversation grows, techniques such as injecting a post-prompt reminder about the available functions after the user input, or even restating the output schema expected for a structured response, become workarounds for poor quality.
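As an illustration of that workaround, here is a minimal sketch (Python SDK; the reminder wording and model name are my own placeholders) that appends a reminder message after the user's turn before sending the request:

```python
# Sketch of injecting a post-prompt reminder after the user's message.
# Reminder wording and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "write-blog-post",
        "description": "Write a blog post on the given topic.",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"type": "string", "description": "text for the blog post"}
            },
            "required": ["text"],
        },
    },
}]

messages = [
    {"role": "system", "content": "Write a blog post on a given topic, return as JSON format."},
    {"role": "user", "content": "Please write a blog post on the topic: “The future of AI”."},
    # Post-prompt reminder injected after the user turn, restating the
    # function's purpose (and, if needed, the expected output schema).
    {"role": "system", "content": "Reminder: use the write-blog-post function; "
                                  "put the entire post in its 'text' string parameter."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(response.choices[0].message.tool_calls)
```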
