I’m having trouble understanding the function-calling features of the Assistants API. I’ve built a private web interface, similar to ChatGPT, that uses the Assistants API to interact with the Google Analytics API. I want to be able to ask questions about my Google Analytics data in natural language, like the number of visitors last month, and have the assistant answer regardless of how I phrase the question, by extracting the key semantic information from the query and making the appropriate API calls.
I’ve seen people on YouTube building their application’s entire logic and function calls internally, without using the function schema provided in the OpenAI Assistants playground. Do I really need this schema to achieve my goal? For example, if I want to instruct the assistant what to do whenever it receives input like “How many visitors last month?”, are those instructions written inside my application or inside the Assistant configuration on the OpenAI platform?
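To make my question concrete, here is roughly how I understand the “schema” part, sketched with the Python SDK. The function name `get_ga_visitors`, its parameters, and the instructions text are just my own guesses, not anything official:

```python
# My rough understanding: the "schema" is a function tool definition the
# assistant is given, either pasted in the playground or sent when creating
# the assistant via the API. All names below are placeholders I made up.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assistant = client.beta.assistants.create(
    name="Analytics Assistant",
    model="gpt-4o",
    instructions=(
        "You answer questions about the user's Google Analytics data. "
        "When a question needs data, call get_ga_visitors with the right date range."
    ),
    tools=[{
        "type": "function",
        "function": {
            "name": "get_ga_visitors",
            "description": "Return the number of visitors between two dates.",
            "parameters": {
                "type": "object",
                "properties": {
                    "start_date": {"type": "string", "description": "YYYY-MM-DD"},
                    "end_date": {"type": "string", "description": "YYYY-MM-DD"},
                },
                "required": ["start_date", "end_date"],
            },
        },
    }],
)
```

Is my understanding correct that the instructions and the schema live here, on the assistant, while the actual body of `get_ga_visitors` lives somewhere in my backend?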
I’m also unsure how to enable the assistant to call third-party APIs. Should that happen within my own application, or through OpenAI’s schemas? I see different approaches being used, and both custom GPTs and OpenAI assistants seem able to call third-party APIs when a schema is provided, which has left me quite puzzled about where the magic is actually happening.
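Part of my confusion is whose code actually talks to Google Analytics. My current guess is that it has to be my own backend function, something like the sketch below. The property ID, the `activeUsers` metric, and the client usage are assumptions based on my reading of the GA4 Data API docs, so please correct me if this is wrong:

```python
# My guess: the assistant never contacts Google itself; my backend owns this call.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import DateRange, Metric, RunReportRequest

def get_ga_visitors(start_date: str, end_date: str) -> dict:
    client = BetaAnalyticsDataClient()  # uses GOOGLE_APPLICATION_CREDENTIALS
    request = RunReportRequest(
        property="properties/123456789",  # placeholder GA4 property ID
        metrics=[Metric(name="activeUsers")],
        date_ranges=[DateRange(start_date=start_date, end_date=end_date)],
    )
    response = client.run_report(request)
    visitors = int(response.rows[0].metric_values[0].value) if response.rows else 0
    return {"start_date": start_date, "end_date": end_date, "visitors": visitors}
```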
I would love it if someone could walk me through what the workflow looks like across the frontend and backend, and where the interpretation happens: inside my application, or inside the OpenAI assistant, since it’s connected through the API? For example:
- Scenario: A user types in a question, for instance, “How many visitors did I have last month?”
- Process: This query is received by my application’s frontend and sent to the backend for interpretation by the AI assistant (my guess at the next steps is sketched below).
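From that point I tried to sketch what I think my backend does next, reusing the placeholder `get_ga_visitors` from above. All of this is my own guess, and it’s exactly the part I’d like confirmed or corrected:

```python
import json
import time

from openai import OpenAI

from my_analytics import get_ga_visitors  # the GA helper I sketched above (my own module)

client = OpenAI()

def answer_question(assistant_id: str, question: str) -> str:
    # 1. Put the user's question on a thread and start a run with my assistant.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(thread_id=thread.id, role="user", content=question)
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant_id)

    # 2. Poll the run. My understanding: if the assistant decides it needs data,
    #    the run pauses with status "requires_action" and hands my backend the
    #    tool call(s) to execute.
    while run.status not in ("completed", "failed", "cancelled", "expired"):
        if run.status == "requires_action":
            tool_outputs = []
            for call in run.required_action.submit_tool_outputs.tool_calls:
                args = json.loads(call.function.arguments)
                if call.function.name == "get_ga_visitors":
                    result = get_ga_visitors(**args)  # my code calls Google Analytics
                else:
                    result = {"error": f"unknown function {call.function.name}"}
                tool_outputs.append({"tool_call_id": call.id, "output": json.dumps(result)})
            # 3. Send the raw numbers back so the assistant can phrase the answer.
            run = client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs
            )
        else:
            time.sleep(1)
            run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

    # 4. Read the assistant's final message and return it to my frontend.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value
```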
Please continue or correct the workflow for me so I can understand it better. Thank you so much!