Function calling doesn't adhere to the context provided

Hi! I seem to be getting inconsistent results with function calling. I’m trying to extract the skills listed on a person’s resume.

I’m using gpt-3.5-turbo


You are a candidate applying to a job. I will ask you questions about your resume and you will answer them truthfully.
Only answer the questions based on what is on your resume. Do not add any additional information.


What programming languages, programming frameworks, libraries, technologies, software, etc. do you have experience with?
Only include those that you have used professionally and not in school or personal or freelance projects.

Here is your resume: 

When I use a function call like this:

skills_schema = {
    "type": "object",
    "properties": {
        "skills": {
            "type": "array",
            "items": {
                "type": "string",
                "description": "name of the technology"
            }
        }
    },
    "required": ["skills"]
}

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": system_prompt},
              {"role": "user", "content": skills_prompt}],
    functions=[{"name": "extract_skills",
                "description": "Save skills from the resume",
                "parameters": skills_schema}],
    function_call={"name": "extract_skills"},
)

This outputs:

  skills: [
    'programming languages',
    'programming frameworks',
    ...
  ]
Just calling it directly without functions returns the correct output (since this person is not a programmer, they don’t have anything listed): “Based on my resume, the programming languages, programming frameworks, libraries, technologies, software, etc. that I have experience with are not mentioned.”


  1. How do I force it to only look at the resume context?
  2. Is it better to give the context at the beginning or end of the prompt? Or in a different message?

The prompt you have written (we assume that’s just the system prompt), where you have the AI pretend to be an interviewee, is different from the headline or this statement. Your prompt then also asks the AI a confusing question, even though the system area is for instructions to the AI, not the chat of a user/assistant.

Then you have a function you are trying to make mandatory that needs some input, but you give it type “array” and a description that is nested where it will never be seen. Why and how to use the function is left very open to interpretation, especially since function calling is trained in general as a way for the AI to retrieve more information or trigger an action, not as the sole output mechanism.

You should rethink, and not lie to the AI about its job. Example:

system: You are the backend data processor for our resume submission system. You extract many applicant skill keywords from a resume, especially focusing on gathering all pertinent programming languages. The user input is a candidate’s resume or CV. The only permitted AI output is a python list of strings (which is enclosed in square brackets), where each string is one of the extracted skills or technologies.
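If the model follows that contract, its reply is a Python list literal that can be parsed directly with `ast.literal_eval`. A minimal sketch, where the `reply` string is a hypothetical model output, not something returned by an actual call:

```python
import ast

# Hypothetical reply from the model under the system prompt above
# (an assumption for illustration):
reply = "['Excel', 'Salesforce', 'SQL']"

skills = ast.literal_eval(reply)        # safely parse the list literal
assert all(isinstance(s, str) for s in skills)
print(skills)
```

`ast.literal_eval` only accepts Python literals, so a reply that drifts from the instructed format raises an error instead of executing arbitrary code, which makes the failure easy to detect and retry.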
