Custom GPT building: flexible API calling from a Custom GPT
I hope this is understandable so far.
1. The Problem
I am developing a pretty complex Custom GPT called Evelyn 4 (because it’s the 4th iteration of this framework so far).
It consists of an extensive framework that you can look at below.
The problem:
I want her to be very conversational, so I implemented a lot of APIs.
But she often struggles to interpret them.
A call would be built like this, for instance:
http://localhost:5000/api?api_id=pinecone_integration&operation_id=retrieveData&text=Hallo
So it’s always api_id (which API we want to use) + operation_id (an operation of the selected api_id) and, if needed, additional named parameters; in this example: text=Hallo
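For reference, the URL scheme above can be sketched in a few lines of Python. build_action_url is a hypothetical helper for illustration, not part of the actual server:

```python
from urllib.parse import urlencode

BASE_URL = "http://localhost:5000/api"  # local server from the example above

def build_action_url(api_id: str, operation_id: str, **params) -> str:
    """Compose an /api call: api_id + operation_id + optional named parameters."""
    query = {"api_id": api_id, "operation_id": operation_id, **params}
    return f"{BASE_URL}?{urlencode(query)}"

# Reproduces the example URL from the post:
print(build_action_url("pinecone_integration", "retrieveData", text="Hallo"))
# -> http://localhost:5000/api?api_id=pinecone_integration&operation_id=retrieveData&text=Hallo
```

urlencode also takes care of escaping parameter values, which a GPT gluing strings together by hand often gets wrong.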
Sometimes she builds the API URL correctly from the JSON returned by
/list_available_actions
and the documentation of
/list_available_actions_doc
So, what can I do so that she no longer struggles and calls every API correctly?
- Do I have to simplify this, because it’s too complex?
- Do I have to create a prompt for this?
- Do I have to change my existing prompts?
- or should my securemultiserver.py (written in Python) get better at interpreting the calls?
- or should I change all of this?
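On the server-side option: one approach is to have securemultiserver.py validate each call against the loaded actions and answer with a corrective hint the GPT can read back, instead of a bare error. A minimal sketch, with a hard-coded dict of actions (the real server loads them from the openspecs folder); AVAILABLE_ACTIONS and validate_request are assumed names, not the actual server code:

```python
# Hypothetical sketch: map (api_id, operation_id) -> required parameters,
# as would be derived from the OpenAPI specs folder.
AVAILABLE_ACTIONS = {
    ("database", "GET_KNOWLEDGE_BY_ID"): ["Id"],
    ("database", "SEARCH_KNOWLEDGE_BY_CONTENT"): ["Content"],
}

def validate_request(args: dict):
    """Return (ok, payload). On failure, payload carries a hint the GPT can use
    to repair its next call."""
    key = (args.get("api_id"), args.get("operation_id"))
    if key not in AVAILABLE_ACTIONS:
        return False, {
            "error": "unknown action",
            "hint": sorted(f"{a}/{o}" for a, o in AVAILABLE_ACTIONS),
        }
    missing = [p for p in AVAILABLE_ACTIONS[key] if p not in args]
    if missing:
        return False, {"error": "missing parameters", "hint": missing}
    return True, {"api_id": key[0], "operation_id": key[1]}

print(validate_request({"api_id": "database", "operation_id": "GET_KNOWLEDGE_BY_ID"}))
# -> (False, {'error': 'missing parameters', 'hint': ['Id']})
```

The idea is that a Custom GPT is quite good at self-correcting when the error response tells it exactly what was wrong, so a tolerant server can compensate for an imprecise caller.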
2. How it is implemented
The endpoints can be called as follows.
This is what you get when calling the home endpoint, i.e. just the bare URL /:
{
"available_endpoints": [
"/api",
"/list_available_actions",
"/configure_action",
"/get_all_knowledge",
"/self_test"
],
"memory_usage": "57.8%",
"message": "Welcome to Evelyn!",
"server_status": "Online",
"uptime": "2025-01-13 04:37:00"
}
So now, for example, the following can be called:
https://localhost:5000/list_available_actions
or
https://localhost:5000/list_available_actions_doc
so she should be able to read the endpoints.
For those we get:
{
"available_actions": [
{
"api_id": "database",
"description": "Database operation: GET_ALL_KNOWLEDGE",
"operation_id": "GET_ALL_KNOWLEDGE",
"parameters": []
},
{
"api_id": "database",
"description": "Database operation: GET_KNOWLEDGE_BY_ID",
"operation_id": "GET_KNOWLEDGE_BY_ID",
"parameters": [
"Id"
]
},
{
"api_id": "database",
"description": "Database operation: SEARCH_KNOWLEDGE_BY_CONTENT",
"operation_id": "SEARCH_KNOWLEDGE_BY_CONTENT",
"parameters": [
"Content"
]
},
[... many more APIs would follow here; they are all read automatically from an openspecs folder]
or, for the one with _doc, we’ll get:
{
"available_actions": [
"Action 1: Database operation: GET_ALL_KNOWLEDGE (None)",
"Action 2: Database operation: GET_KNOWLEDGE_BY_ID (Id)",
"Action 3: Database operation: SEARCH_KNOWLEDGE_BY_CONTENT (Content)",
"Action 4: Database operation: SEARCH_KNOWLEDGE_BY_TAGS (Tags)",
"Action 5: Get Sunrise and Sunset Times (lat, lng, date, formatted)",
"Action 6: Get random users (results)",
"Action 7: Get public holidays for a specific year and country (Year, CountryCode)",
"Action 8: Get a Random Joke (category)",
"Action 9: Search Jokes (query)",
"Action 10: Get Current Bitcoin Price (None)",
"Action 11: Get color data (color)",
[... I also left the rest out here, because there would be as many as above]
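Both listing formats above can be generated from the same action data. A minimal sketch, with the actions hard-coded for illustration (the real server derives them from the OpenAPI JSON files in the openspecs folder); the function names are assumptions:

```python
# Illustrative data in the shape shown by /list_available_actions above.
ACTIONS = [
    {"api_id": "database", "operation_id": "GET_ALL_KNOWLEDGE",
     "description": "Database operation: GET_ALL_KNOWLEDGE", "parameters": []},
    {"api_id": "database", "operation_id": "GET_KNOWLEDGE_BY_ID",
     "description": "Database operation: GET_KNOWLEDGE_BY_ID", "parameters": ["Id"]},
]

def list_available_actions(actions):
    """Full structured listing, as served by /list_available_actions."""
    return {"available_actions": actions}

def list_available_actions_doc(actions):
    """Compact one-line-per-action listing, as served by /list_available_actions_doc."""
    lines = []
    for i, a in enumerate(actions, start=1):
        params = ", ".join(a["parameters"]) or "None"
        lines.append(f"Action {i}: {a['description']} ({params})")
    return {"available_actions": lines}

print(list_available_actions_doc(ACTIONS)["available_actions"][1])
# -> Action 2: Database operation: GET_KNOWLEDGE_BY_ID (Id)
```

Keeping both views generated from one source also guarantees the docs can never drift out of sync with the structured listing.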
3. Components
That’s also why I opened this topic:
Custom GPT Limits and Overcoming them
And this:
Weird Science in a Wonderful Community 🍀 - #149 by hugebelts
(both about Evelyn 4 as well)
The Pinecone integration and the OpenAPI JSON specs used by SecureMultiServer.py are crucial to understanding the architecture, so the overview below includes these details.
Updated Core Components and Uploaded Files
Category | Details |
---|---|
Uploaded Files | - Evelyn 3 Scene Descriptions.txt - Evelyn 3 Personality.txt - Evelyn 3 API Usage.txt - Evelyn 3 Memory.txt - Others… |
Purpose of Files | - Define Evelyn’s personality, behavior, and interaction styles. - Provide scene descriptions for immersive responses. - Outline API usage, including endpoints and examples. |
Knowledge System | - Files act as static layers of foundational data. - Pinecone enables scalable, dynamic knowledge retrieval and storage. |
System Functionalities
Component | Details |
---|---|
SecureMultiServer.py | - Manages API calls securely across multiple nodes. - Dynamically loads OpenAPI JSON specs from the specs folder to define endpoint usage. - Ensures scalable and distributed API handling. |
Pinecone Integration | - Acts as a vector database for semantic memory. - Stores embeddings to support long-term contextual understanding and fast retrieval. |
Behavior Frameworks | - M.e.ch. (Motivation, Emotion, and Character traits): Guides response calibration based on inferred user context. |
Active Functionalities
Feature | Current Implementation |
---|---|
Conversation Dynamics | - Enhanced contextual understanding through M.e.ch. - Calibrates subtle, empathetic responses dynamically. |
Self-Correction | - Validates, critiques, and iteratively refines outputs autonomously. |
Knowledge Retrieval | - Combines static files with Pinecone memory for nuanced, real-time responses. - Supports millions of data points. |
Core Integration Components
System | Details |
---|---|
OpenAPI Specs Folder | - Contains JSON definitions for all APIs used by SecureMultiServer.py. - Defines endpoints, authentication, and methods. |
Pinecone | - Handles dense vector embeddings for long-term memory. - Enables fast, scalable, and semantic search functionality. |
Custom Prompts | - Files provide baseline static prompts. - Iterative refinement ensures prompts adapt dynamically to new contexts. |
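To illustrate what the Pinecone layer does conceptually: a vector query returns the stored items whose embeddings are closest to the query embedding. Pinecone itself is a hosted service, so this sketch stands in for an index query with an in-memory cosine-similarity search; all names and the toy 2-dimensional embeddings here are illustrative only:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def query(index, vector, top_k=1):
    """index: {id: embedding}. Returns the top_k closest ids,
    mimicking what a vector database query does."""
    return sorted(index, key=lambda i: cosine(index[i], vector), reverse=True)[:top_k]

# Toy "memories" with made-up embeddings:
memories = {"greeting": [1.0, 0.0], "weather": [0.0, 1.0]}
print(query(memories, [0.9, 0.1]))  # -> ['greeting']
```

In the real setup the embeddings would come from an embedding model and the search would run inside Pinecone; the point is only that retrieval is by semantic closeness, not by exact keyword match.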
This updated version highlights the OpenAPI JSON specs loaded by SecureMultiServer.py
and how they work alongside Pinecone to manage and expand Evelyn’s knowledge and capabilities. Let me know if you’d like to delve deeper into any specific aspect!