AlphaWave: Added function support and better LangChain interop

I just published version 0.6.0 of the JS version of AlphaWave, which adds support for OpenAI functions and improves the interop between AlphaWave.js and LangChain.js. You can now use any LangChain-based model or embedding in AlphaWave and vice versa. If you’d like details on how to use AlphaWave with LangChain.js, please start a discussion over on my GitHub page. The star of this update is function support, so I’ll drill into that a bit deeper.
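First, though, a quick sketch of what the LangChain side of that interop can look like. This is just the adapter pattern in miniature; SimplePromptModel and LangChainModelAdapter are hypothetical names for illustration, not AlphaWave’s actual interop API, so start a discussion if you want the real details:

import { ChatOpenAI } from 'langchain/chat_models/openai';
import { HumanChatMessage } from 'langchain/schema';

// Hypothetical, simplified stand-in for a prompt-completion model interface
interface SimplePromptModel {
    completePrompt(input: string): Promise<string>;
}

// Wrap a LangChain chat model so it can be plugged in anywhere a
// prompt-completion model is expected
class LangChainModelAdapter implements SimplePromptModel {
    constructor(private readonly model: ChatOpenAI) {}

    async completePrompt(input: string): Promise<string> {
        const response = await this.model.call([new HumanChatMessage(input)]);
        return response.text;
    }
}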

You can now call OpenAI functions with added reliability thanks to AlphaWave’s validation and response-repair logic. A new FunctionResponseValidator class prevents the model from calling hallucinated functions, returning malformed JSON arguments, or forgetting to include required arguments. All of this happens automatically, without you having to do anything special.
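To give a feel for what that validation covers, here’s a minimal sketch of the three checks. It’s illustrative only, not AlphaWave’s actual implementation, and validateFunctionCall plus both interfaces are hypothetical names:

interface FunctionCall {
    name?: string;
    arguments?: string;
}

interface FunctionDefinition {
    name: string;
    parameters: { required?: string[] };
}

// Illustrative only: the three checks a function call has to pass
function validateFunctionCall(call: FunctionCall, functions: FunctionDefinition[]): string[] {
    // 1. Hallucinated function: the name must match a function we defined
    const fn = functions.find(f => f.name === call.name);
    if (!fn) {
        return [`unknown function "${call.name}"`];
    }

    // 2. Malformed arguments: the arguments string must parse to a JSON object
    let args: Record<string, unknown>;
    try {
        const parsed = JSON.parse(call.arguments ?? '{}');
        if (typeof parsed !== 'object' || parsed === null) {
            return ['arguments must be a JSON object'];
        }
        args = parsed;
    } catch {
        return ['arguments are not valid JSON'];
    }

    // 3. Missing arguments: every required property must be present
    const errors: string[] = [];
    for (const name of fn.parameters.required ?? []) {
        if (!(name in args)) {
            errors.push(`missing required argument "${name}"`);
        }
    }
    return errors;
}

When a check like one of these fails, the repair logic feeds the error back to the model and asks for a corrected response, which is the repair loop the updates below walk through.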

I don’t have an example of AlphaWave performing a repair yet, as my current sample is based on OpenAI’s weather sample, which they’ve clearly tuned the model to handle well. I’ll dig through some of the failure cases being reported here and do a follow-up post showing a repair in action. It shouldn’t be too hard to come up with an example that triggers a repair.

To start with, let’s show the weather scenario, which works fairly well. Here’s a sample run:

The light gray text is the model triggering a function call… Code-wise, you first define your model with its function definitions:

// Import the AlphaWave and Promptrix classes used throughout this sample
import * as readline from 'readline';
import { AlphaWave, OpenAIModel } from 'alphawave';
import { Prompt, ConversationHistory, UserMessage, Message } from 'promptrix';

// Create an instance of a model
const model = new OpenAIModel({
    apiKey: process.env.OpenAIKey!,
    completion_type: 'chat',
    model: 'gpt-3.5-turbo',
    temperature: 0.9,
    max_input_tokens: 2000,
    max_tokens: 1000,
    functions: [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
              "type": "object",
              "properties": {
                "location": {
                  "type": "string",
                  "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {
                  "type": "string",
                  "enum": ["celsius", "fahrenheit"]
                }
              },
              "required": ["location"]
            }
        }
    ]
});

Next you create a wave with that model and a prompt:

// Create a wave
const wave = new AlphaWave({
    model,
    prompt: new Prompt([
        new ConversationHistory('history'),
        new UserMessage('{{$input}}', 200)
    ]),
    logRepairs: true
});

Then you wire everything up to your UI such that you can call wave.completePrompt():

// Create a readline interface object with the standard input and output streams
const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout
});

// Define main chat loop
async function chat(botMessage: string) {
    async function completePrompt(input: string) {
        // Route users message to wave
        const result = await wave.completePrompt<string>(input);
        switch (result.status) {
            case 'success':
                const message = result.message as Message<string>;
                if (message.function_call) {
                    // Call function and add result to history
                    console.log(`\x1b[2m[${message.function_call.name}(${message.function_call.arguments})]\x1b[0m`);
                    const args = JSON.parse(message.function_call.arguments!);
                    const unit = args.unit ?? 'fahrenheit';
                    wave.addFunctionResultToHistory(message.function_call.name!, { "temperature": unit == 'fahrenheit' ? 76 : 24, "unit": unit, "description": "Sunny", "location": args.location });

                    // Call back in with the function result
                    await completePrompt('');
                } else {
                    // Call chat to display response and wait for user input
                    await chat(message.content!);
                }
                break;
            default:
                if (result.message) {
                    console.log(`${result.status}: ${result.message}`);
                } else {
                    console.log(`A result status of '${result.status}' was returned.`);
                }

                // Close the readline interface and exit the process
                rl.close();
                process.exit();
                break;
        }
    }

    // Show the bots message
    console.log(`\x1b[32m${botMessage}\x1b[0m`);

    // Prompt the user for input
    rl.question('User: ', async (input: string) => {
        // Check if the user wants to exit the chat
        if (input.toLowerCase() === 'exit') {
            // Close the readline interface and exit the process
            rl.close();
            process.exit();
        } else {
            // Complete the prompt using the user's input
            await completePrompt(input);
        }
    });
}

// Start chat session
chat(`Hello, how can I help you?`);

The bulk of that code is just boilerplate for interacting with the terminal. The part we care about is here:

        const result = await wave.completePrompt<string>(input);
        switch (result.status) {
            case 'success':
                const message = result.message as Message<string>;
                if (message.function_call) {
                    // Call function and add result to history
                    console.log(`\x1b[2m[${message.function_call.name}(${message.function_call.arguments})]\x1b[0m`);
                    const args = JSON.parse(message.function_call.arguments!);
                    const unit = args.unit ?? 'fahrenheit';
                    wave.addFunctionResultToHistory(message.function_call.name!, { "temperature": unit == 'fahrenheit' ? 76 : 24, "unit": unit, "description": "Sunny", "location": args.location });

                    // Call back in with the function result
                    await completePrompt('');
                } else {
                    // Call chat to display response and wait for user input
                    await chat(message.content!);
                }
                break;

You call completePrompt() and you’re guaranteed that if a function_call is returned with the message, it’s well formed, valid, and matches the schema you passed the model. Just parse the args, call the function, and pass the results back to the model using wave.addFunctionResultToHistory(). Call back into the model with an empty string for the input and keep on trucking…

An example repair coming soon…


OK, so it didn’t take much effort to get even their weather example to return a response that needed to be repaired. I just told the model not to include a required parameter 🙂

I probably need to tweak my feedback, as I would have hoped it would still call the function, just with an empty location field. I’ll work on that…

UPDATE 1
I tweaked the feedback for when the model fails to return any arguments with a function call, and now it properly repairs its call. Interestingly, it’s pulling a default for the missing location field from the property description.

UPDATE 2
I have it fixing missing or invalid function arguments now as well. It’s hallucinating the location, but I pretty much gave it no choice, so ignore that. The important bit is that the feedback is getting the model to repair its bad response:

UPDATE 3
And here’s an example of AlphaWave repairing a hallucinated function:

I’ve published a new 0.6.1 version of the package with these fixes. AlphaWave should reliably repair any invalid function calls now.
