Help Me Understand How to Call 3rd Party APIs

I’m having trouble understanding the function-calling side of the Assistants API. I’ve built a private web interface, similar to ChatGPT, that uses the Assistants API and should interact with the Google Analytics API. I want to ask questions about my Google Analytics data in natural language, like the number of visitors last month, and have the AI display the answer regardless of how I phrase the question. It should extract the key information from my query and make the appropriate API call.

I’ve seen others on YouTube build their application’s entire logic and function calls inside their own code, without using the schema you can define in the OpenAI Assistants playground. Do I really need that schema to achieve my goal? For example, if I want to instruct the Assistant how to handle an input like ‘How many visitors last month?’, are those instructions written inside my application or inside the Assistant window on OpenAI?

Also, I’m unsure how to enable assistants to call third-party APIs. Should this be done within my own application, or through OpenAI’s schemas? I see different approaches being used: custom GPTs and OpenAI assistants seem able to call third-party APIs when a schema is attached. This has left me quite puzzled about where the magic is actually happening.

I would love it if someone could walk me through what the workflow would look like across the frontend and backend, and where the interpretation happens: inside my application, or inside the OpenAI assistant it is connected to through the API? For example:

  • Scenario: A user types in a question, for instance, “How many visitors did I have last month?”
  • Process: My application’s front-end receives this query and sends it to the back-end for interpretation by the AI assistant.

Please continue the workflow for me so I can understand it better. Thank you so much!

You could add to the GPT’s instructions that it should send back a specific response when the user asks how many visitors they had last month. This specific response could be something like {‘-visitors-last-month-’}, and your application would be programmed so that if the response is exactly that, it instead calls a function that queries Google Analytics for the number and then feeds it back to you.
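A minimal sketch of that idea, assuming the openai Python client and a hypothetical get_visitors_last_month() helper that you would implement against the Google Analytics Data API:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def get_visitors_last_month() -> int:
    # Hypothetical helper: wrap the Google Analytics Data API here.
    raise NotImplementedError

def answer(user_message: str) -> str:
    # Instruct the model to emit a fixed marker string for this one question.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "If the user asks how many visitors they had last "
                           "month, reply with exactly: -visitors-last-month-",
            },
            {"role": "user", "content": user_message},
        ],
    )
    reply = response.choices[0].message.content or ""
    if reply.strip() == "-visitors-last-month-":
        # The marker came back, so call Google Analytics instead of
        # showing the model's text to the user.
        return str(get_visitors_last_month())
    return reply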

Are these instructions written inside my application or on OpenAI?

That’s what confuses me the most. What is the difference between writing the instructions and schemas for GPT inside my application versus adding the instructions and schema in the OpenAI Assistants playground?

Oh, I can understand your confusion. Is there a reason you are using assistants to do this? It seems that for your use case it would be cheaper and easier to just use a base gpt-3.5-turbo model.

It doesn’t matter what model I use; I just need to understand the difference between creating the AI assistant in the OpenAI playground and integrating it into my own application.

Also, the schema structures: should I define them in the OpenAI environment or within my application? What’s the difference?

For example, if I want the AI to analyze a natural language query sent from my application’s frontend and dynamically return a JSON structure to my backend logic, so the backend can parse it and make the relevant API call, do I need to define any schema in the OpenAI Assistants playground, or inside my app? Do you see where the confusion comes from?

The function (or the differently-formatted tool specification) that you send to the API is a schema the AI can understand.

functions = [
    {
        "name": "data_demonstration",
        "description": "This is the main function description",
        "parameters": {
            "type": "object",
            "properties": {
                "string_1": {"type": "string", "description": ""},
                "number_2": {"type": "number", "description": ""},
                "boolean_3": {"type": "boolean", "description": ""},
                "empty_4": {"type": "null", "description": "This is a description of the empty_4 null property"},
                "string_5_enum": {"type": "string", "enum": ["Happy", "Sad"]},
            },
            "required": ["string_1", "number_2", "boolean_3", "empty_4", "string_5_enum"],
        },
    }
]
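For context, a hedged sketch of how a list like this might be passed on a Chat Completions request (with the Assistants API, the equivalent definition goes into the assistant’s tools as a "function" tool; the newer tools/tool_choice parameters of Chat Completions are equivalent):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello with the demo function."}],
    functions=functions,  # the schema list defined above
)

print(response.choices[0].message)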

In response to user input that requires the function to be employed, the AI will emit JSON that (mostly) complies with the specification of the internal API you provided:

{
  ...
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "data_demonstration",
          "arguments": "{\"string_1\": \"Hello\", \"number_2\": 42, \"boolean_3\": true, \"empty_4\": null, \"string_5_enum\": \"Happy\"}"
        }
      },
      "finish_reason": "function_call"
    }
  ], ...

That might not look at all like what the external API wants, and you likely want that layer of abstraction so that users can’t extract the exact external tools (and you’ll also need private authentication methods).

So you’ll need a translator from the AI-emitted language (produced from a function definition you wrote as simply as possible, so the AI understands the purpose of each parameter) into the external API call (or an internal one like “trigonometry_calculator”). You then also need to process and clean the external service’s return value into language for the AI that has the most effective impact.
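As a rough illustration of that translator layer, here is a sketch that assumes a hypothetical get_visitor_count function schema (standing in for data_demonstration above) and a query_google_analytics() wrapper you would write around the Google Analytics Data API:

import json

def query_google_analytics(metric: str, start_date: str, end_date: str) -> dict:
    # Hypothetical wrapper around the Google Analytics Data API; keep the
    # credentials and property ID here, out of the model's reach.
    raise NotImplementedError

def handle_function_call(message) -> dict:
    # Translate the AI-emitted arguments into the real external API call,
    # then package the result as a message the model can read back.
    call = message.function_call
    args = json.loads(call.arguments)
    if call.name == "get_visitor_count":
        result = query_google_analytics(
            metric="activeUsers",
            start_date=args["start_date"],
            end_date=args["end_date"],
        )
    else:
        result = {"error": f"Unknown function: {call.name}"}
    return {"role": "function", "name": call.name, "content": json.dumps(result)}

You would append that returned message to the conversation and call the model again, so it can phrase the Google Analytics number back to the user in natural language.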