On June 13, 2023, OpenAI unveiled the function calling feature for GPT-3.5 and GPT-4. Function calling allows developers to more reliably get structured data back from the model: developers describe the functions available for responding to a user’s message, and the model selects the most appropriate function along with its arguments.
Without this feature, the model’s responses could be inconsistent, posing challenges for developers. With function calling, responses become more reliable and less error-prone for both developers and end users.
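For context, here’s a minimal sketch of the workflow using the OpenAI Python SDK as it looked at launch (the pre-1.0 ChatCompletion interface). The get_current_weather function and its schema are purely illustrative:

```python
import json
import openai  # pre-1.0 SDK; reads OPENAI_API_KEY from the environment

# An illustrative function the model may choose to call.
def get_current_weather(location: str, unit: str = "celsius") -> str:
    return json.dumps({"location": location, "temperature": 22, "unit": unit})

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name, e.g. Toronto"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Toronto?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call a function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the chosen function name and structured arguments.
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)
```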
Problems at Scale
Scaling this feature out to a large number of functions brings its own set of challenges.
Token Consumption
Each function introduced adds more tokens to the prompt. This not only slows response times but also increases costs, making large-scale function calling an expensive endeavour.
Code Organization
Having a plethora of functions sprinkled throughout your codebase not only makes it harder to manage and locate specific functions, but also increases the chances of redundancy.
Testing Complexity
As the number of functions grows, the model’s decision-making becomes more variable, given the many functions it can choose from. Each function must not only operate correctly on its own but also work in concert with others. Without thorough testing, the model might call the wrong function or misinterpret prompts, compromising the accuracy of outputs.
Addressing these challenges requires a thoughtful approach to scaling.
The Solution: SageAI
SageAI is a framework designed specifically for scaling the model’s function calling capability. Let’s dive into how SageAI addresses the challenges previously mentioned:
Token Consumption
SageAI uses an in-memory vector database to store function embeddings, which act as mathematical fingerprints for your functions. Instead of passing all functions to the model, SageAI computes the cosine similarity between the embedding of the user’s message and each stored function embedding. By selecting only the most relevant functions (typically the top 5), it significantly reduces the number of tokens, improving response times and cost efficiency.
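Conceptually, the retrieval step boils down to something like the sketch below. This is not SageAI’s actual implementation; the helper names are illustrative:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: how closely two embedding vectors point in the
    # same direction, independent of their magnitudes.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k_functions(query_embedding: np.ndarray,
                    function_embeddings: dict[str, np.ndarray],
                    k: int = 5) -> list[str]:
    # Rank every stored function embedding against the query embedding
    # and keep only the k most similar function names.
    scored = sorted(
        function_embeddings.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]
```

Only the returned handful of functions is then passed to the model, rather than the entire catalogue.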
Code Organization
A clean codebase is essential for developer efficiency. SageAI organizes functions in a folder-based structure, encouraging categorization of your functions as you write them out. Each function is stored in a folder, with its input and output types clearly defined using Pydantic models. Establishing a systematic approach to code management is imperative for scaling.
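As a rough illustration of that structure, a single function module might look something like the following. The folder layout, file name, and model names here are assumptions for the sake of the example, not SageAI’s prescribed API:

```python
# functions/weather/get_current_weather.py
# Hypothetical layout: each category gets a folder, each function its own file.
from pydantic import BaseModel, Field

class GetWeatherInput(BaseModel):
    location: str = Field(..., description="City name, e.g. Toronto")
    unit: str = Field("celsius", description="Temperature unit")

class GetWeatherOutput(BaseModel):
    temperature: float
    unit: str

def get_current_weather(params: GetWeatherInput) -> GetWeatherOutput:
    # Placeholder logic; a real implementation would call a weather API.
    return GetWeatherOutput(temperature=22.0, unit=params.unit)
```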
Testing Complexity
As any developer knows, testing is paramount. SageAI equips developers with a straightforward testing tool that incorporates unit and integration tests. An optional JSON file containing test cases can accompany each function. SageAI uses these test cases to run unit tests, verifying that the function’s logic is implemented correctly, and integration tests, verifying that the function and argument descriptions are clear enough for the model to choose and call it correctly.
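As a purely hypothetical example of what such a file could contain (the file name and field names are assumptions, not SageAI’s actual schema):

```json
{
  "_comment": "Hypothetical schema for functions/weather/get_current_weather.test.json; field names are illustrative",
  "tests": [
    {
      "message": "What's the weather like in Toronto in fahrenheit?",
      "expected_function": "get_current_weather",
      "expected_arguments": {"location": "Toronto", "unit": "fahrenheit"}
    }
  ]
}
```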
“What if I have X use case?”
SageAI is designed with flexibility in mind. You can bring your own vector database and embeddings implementation, or plug in your own test suite.
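For instance, a custom embeddings provider or vector store might conform to interfaces along these lines. This is a hypothetical sketch of the kind of extension points involved, not SageAI’s actual API:

```python
from typing import Protocol

class EmbeddingsProvider(Protocol):
    # Hypothetical interface: swap in any model that turns text into a vector.
    def embed(self, text: str) -> list[float]:
        ...

class VectorStore(Protocol):
    # Hypothetical interface: swap in any store that can index and query vectors.
    def add(self, name: str, embedding: list[float]) -> None:
        ...

    def query(self, embedding: list[float], top_k: int) -> list[str]:
        ...
```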
Wrapping Up
Using SageAI doesn’t just mean more efficient function calling; it’s also a shift towards a more organized, scalable, and developer-friendly approach to integrating function calling into your codebase. Developers can focus on crafting effective functions without being bogged down by the challenges of organizing and testing them.
SageAI is open source and ready to use. Search GitHub for 0xnenlabs/SageAI.