Sharing my new library, AlphaWave: a highly opinionated library for talking to Large Language Models, like GPT. Calling it state-of-the-art could be a stretch, but I know a lot about talking to these models, I design SDKs for a living, and this is the most capable SDK I'm either aware of or could design at this point.
The library is published to NPM here, and there's a Node.js sample you can run here. The sample is basically a terminal version of ChatGPT, but the point is less the sample itself and more what this simple library is capable of. Here's the quick feature breakdown:
- Supports calling OpenAI and Azure OpenAI hosted models out of the box, but a simple plugin model lets you extend AlphaWave to support any LLM.
- Promptrix integration means that all prompts are universal and work with either the Chat Completion or Text Completion APIs.
- Automatic history management. AlphaWave manages a prompt's conversation history; all you have to do is tell it where to store it. It uses an in-memory store by default, but a simple plugin interface (provided by Promptrix) lets you store short-term memory, like conversation history, anywhere.
- State-of-the-art response repair logic. AlphaWave lets you provide an optional "response validator" plugin, which it uses to validate every response returned from an LLM. Should a response fail validation, AlphaWave will automatically try to get the model to correct its mistake.
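To give a feel for what a response validator might look like, here's a hedged sketch. The exact interface AlphaWave expects is documented in the repo; the shape below (a `validate()` method returning a `valid` flag plus optional `feedback`) is my simplified approximation, not the library's actual types.

```typescript
// Hypothetical, simplified validator shape -- see the AlphaWave repo
// for the library's actual validator interface.
interface Validation {
    valid: boolean;
    feedback?: string;   // hint sent back to the model on failure
    value?: unknown;     // parsed value when validation succeeds
}

// Example: require the model to reply with a JSON object
// containing a string "answer" field.
class JsonAnswerValidator {
    validate(responseText: string): Validation {
        try {
            const obj = JSON.parse(responseText);
            if (typeof obj?.answer === 'string') {
                return { valid: true, value: obj };
            }
            return { valid: false, feedback: 'Reply with a JSON object containing a string "answer" field.' };
        } catch {
            return { valid: false, feedback: 'Your reply was not valid JSON. Reply with JSON only.' };
        }
    }
}
```

The useful pattern here is that a failed validation carries actionable feedback, which is exactly what the repair logic described below feeds back to the model.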
It's the response repair logic that's the key bit, and I'd encourage you to go read the writeup on the repo for more details on what I do to get the model to correct its output. In a nutshell: I first detect that the model made a mistake via validation, then I isolate the error by forking the conversation history, and finally I attempt to get the model to correct its mistake via "feedback".
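The detect/isolate/correct steps above can be sketched as a small loop. This is a minimal illustration under stated assumptions, not AlphaWave's actual implementation: the model function and history are stand-ins, and names like `completeWithRepair` and `maxRepairs` are mine.

```typescript
// Minimal sketch of a validate -> fork -> feedback repair loop.
// ModelFn stands in for a real LLM call; the history is a plain array.
type ModelFn = (history: string[]) => string;

interface Validator {
    validate(response: string): { valid: boolean; feedback?: string };
}

function completeWithRepair(
    model: ModelFn,
    validator: Validator,
    history: string[],
    maxRepairs = 3
): { status: string; response: string } {
    let response = model(history);
    let check = validator.validate(response);
    if (check.valid) {
        history.push(response);              // commit to the real history
        return { status: 'success', response };
    }

    // Isolate the error: fork the history so failed attempts and
    // feedback messages never pollute the main conversation.
    const fork = [...history, response];
    for (let i = 0; i < maxRepairs; i++) {
        fork.push(`FEEDBACK: ${check.feedback ?? 'Invalid response.'}`);
        response = model(fork);
        check = validator.validate(response);
        if (check.valid) {
            history.push(response);          // only the good answer survives
            return { status: 'success', response };
        }
        fork.push(response);
    }
    return { status: 'invalid_response', response };
}
```

The design point is the fork: retries and feedback happen on a throwaway copy, so the conversation the user sees only ever contains validated answers.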
For those who just want to see code, here's the setup logic for defining a simple ChatGPT-style wave:
import { OpenAIClient, AlphaWave } from "alphawave";
import { Prompt, SystemMessage, ConversationHistory, UserMessage, Message } from "promptrix";

// Create an OpenAI or AzureOpenAI client
const client = new OpenAIClient({
    apiKey: process.env.OpenAIKey!
});

// Create a wave
const wave = new AlphaWave({
    client,
    prompt: new Prompt([
        new SystemMessage('You are an AI assistant that is friendly, kind, and helpful', 50),
        new ConversationHistory('history', 1.0),
        new UserMessage('{{$input}}', 450)
    ]),
    prompt_options: {
        completion_type: 'chat',
        model: 'gpt-3.5-turbo',
        temperature: 0.9,
        max_input_tokens: 2000,
        max_tokens: 1000,
    }
});
Then all you have to do is call a single completePrompt() method with the user's input and process the wave's output:
// Route user's message to wave
const result = await wave.completePrompt(input);
switch (result.status) {
    case 'success':
        console.log((result.response as Message).content);
        break;
    default:
        if (result.response) {
            console.log(`${result.status}: ${result.response}`);
        } else {
            console.log(`A result status of '${result.status}' was returned.`);
        }
        break;
}
Here’s a screenshot of the sample output:
I'm about to start coding an AutoGPT-style agent framework for AlphaWave that actually works. Expect that to drop tomorrow.