@stevenic - which of these will run without the agent code?

This sample requires no agent code; it's basically a simple ChatGPT-style example. I still owe you an example of base AlphaWave with a validator that walks the model back from hallucinations, but I can tell you that it works. It's best seen in the agent demos, because agents make mistakes all the time, but the core isolation and feedback mechanism AlphaWave employs works.

I was demoing an agent to someone today, and GPT-3.5 first failed to return the JSON thought structure the agent system needs, so AlphaWave gave it feedback. Normally the model returns the correct structure on the next call, but this time it returned a partial JSON object. AlphaWave gave it feedback about the fields it was missing, and on the next call it returned a complete and valid object. And that was GPT-3.5, which is the weakest of these models.
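For anyone curious what that feedback loop looks like mechanically, here is a minimal stdlib-only sketch with a stubbed model. The required field names and feedback wording are assumptions for illustration, not AlphaWave's actual API:

```python
import json

# Fields the agent's "thought" structure is assumed to require (hypothetical names)
REQUIRED_FIELDS = ["thoughts", "command"]

def validate(response_text):
    """Return (ok, feedback); feedback names what was wrong, mirroring
    the repair message the wave sends back to the model."""
    try:
        obj = json.loads(response_text)
    except json.JSONDecodeError:
        return False, "Your response was not valid JSON. Return a JSON object."
    missing = [f for f in REQUIRED_FIELDS if f not in obj]
    if missing:
        return False, f"Your JSON was missing these fields: {', '.join(missing)}. Return the complete object."
    return True, None

def complete_with_repair(model, prompt, max_repairs=2):
    """Call the model; on validation failure, feed the error back and retry."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_repairs + 1):
        reply = model(messages)
        ok, feedback = validate(reply)
        if ok:
            return json.loads(reply)
        # Isolation + feedback: tell the model exactly what was wrong
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": feedback})
    raise RuntimeError("model failed to produce a valid thought structure")

# Stub model: fails once with a partial object, then corrects itself,
# like the GPT-3.5 exchange described above
replies = iter(['{"thoughts": {"plan": "..."}}',
                '{"thoughts": {"plan": "..."}, "command": {"name": "ask"}}'])
print(complete_with_repair(lambda msgs: next(replies), "plan something"))
```

The real library wires the validator and repair prompts into the wave for you; this just shows the shape of the mechanism.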

This stuff sounds like snake oil but I promise it works to improve the overall reliability of these models to return valid structured results.

Ah. I already have that one running, and it's in the tests dir on GitHub. Will post now.

Well, deep apologies. I'm a reasonably experienced developer, but inexperienced in packaging and PyPI.

do this to get alphawave 0.1.1:

pip3 install --upgrade alphawave

Now save the code below as alphachat.py, then run it:

import os
from pathlib import Path
import readline  # on Windows, install and use pyreadline3 instead
from alphawave.AlphaWave import AlphaWave
from alphawave.OpenAIClient import OpenAIClient
import promptrix
from promptrix.Prompt import Prompt
from promptrix.SystemMessage import SystemMessage
from promptrix.ConversationHistory import ConversationHistory
from promptrix.UserMessage import UserMessage
import asyncio

# Optionally read OPENAI_API_KEY from a .env file (requires python-dotenv):
# from dotenv import load_dotenv
# load_dotenv(dotenv_path=Path('..') / '.env')

# Create an OpenAI or AzureOpenAI client
client = OpenAIClient(apiKey=os.getenv("OPENAI_API_KEY"))

# Create a wave
wave = AlphaWave(
    client=client,
    prompt=Prompt([
        SystemMessage('You are an AI assistant that is friendly, kind, and helpful', 50),
        ConversationHistory('history', 1.0),
        UserMessage('{{$input}}', 450)
    ]),
    prompt_options={
        'completion_type': 'chat',
        'model': 'gpt-3.5-turbo',
        'temperature': 0.9,
        'max_input_tokens': 2000,
        'max_tokens': 1000,
    }
)

# Define main chat loop
async def chat(bot_message=None):
    # Show the bot's message
    if bot_message:
        print(f"\033[32m{bot_message}\033[0m")

    # Prompt the user for input
    user_input = input('User: ')
    # Check if the user wants to exit the chat
    if user_input.lower() == 'exit':
        # Exit the process
        exit()
    else:
        # Route the user's message to the wave
        result = await wave.completePrompt(user_input)
        if result['status'] == 'success':
            # Show the reply and loop
            await chat(result['message']['content'])
        else:
            if result['message']:
                print(f"{result['status']}: {result['message']}")
            else:
                print(f"A result status of '{result['status']}' was returned.")
            # Exit the process
            exit()

# Start chat session
asyncio.run(chat("Hello, how can I help you?"))

I had a lot of issues with dependencies; pyreadline3 is the Windows alternative for readline, and I had to install pyee too, I think.

But I'm still getting a different error at the moment: “KeyError: 'Could not automatically map gpt-4 to a tokeniser. Please use tiktoken.get_encoding to explicitly get the tokeniser you expect.'”

Ah, good to know. I run it with 3.5 for testing. It doesn't really matter; it's just using that to get a rough token count. I don't think I can get a fix out tonight, but maybe; otherwise early tomorrow. With luck I'll have the core agent code in as well by then…
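If anyone hits this before the fix lands, the workaround is to fall back to a known encoding instead of letting the model-name lookup raise. A stdlib-only sketch of the fallback pattern (the mapping table here is an assumption for illustration, not tiktoken's real internals):

```python
# Partial model -> encoding table, as tiktoken-style lookups use (entries assumed)
MODEL_TO_ENCODING = {"gpt-3.5-turbo": "cl100k_base"}

def encoding_name_for(model, default="cl100k_base"):
    # Fall back to a default instead of raising KeyError for unmapped models;
    # acceptable here because the encoding is only used for rough token counts.
    return MODEL_TO_ENCODING.get(model, default)

print(encoding_name_for("gpt-4"))  # falls back rather than raising
```

With tiktoken itself, the equivalent is wrapping `tiktoken.encoding_for_model(model)` in a `try/except KeyError` and calling `tiktoken.get_encoding("cl100k_base")` in the handler.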

pyee - ah, thanks, will add it to the dependencies in pyproject.toml on PyPI. Sorry.

ps - will be offline for the night shortly


To clarify, I didn't change the code to use gpt-4; it's still on gpt-3.5-turbo.

Why the error mentions gpt-4, I'm not sure. Good morning.


Basic port of promptrix AND alphawave, including some example Agents, now on pypi.

thanks to @stevenic (original promptrix and alphawave developer in typescript) for all his support and encouragement.

There are probably still a couple of bugs in the port; don't blame him, DM me.


Great job!
Updated to the new version, but it seems I'll have to test it in the morning.


@stevenic This is amazing! Thank you so much for sharing.

I was wondering how the agent handles token limits? I've used smol-ai/developer, which is based on BabyAGI, but OpenAI's 8k token limit caps the complexity of the output. I've used Prompt.md, which definitely helps, but even still I can't get the length of response needed to get the answers I want. Thoughts?


I'm only using gpt-3.5-turbo currently, so I don't even have an 8k token limit, and I'm having success, but it probably depends on the task. I have access to gpt-4, but all of our customers (Microsoft's customers) want to use turbo to save cost, so my current focus is making turbo work as well as I possibly can.

What’s your specific scenario?


It's live: alphawave on PyPI.
That should install both alphawave and promptrix, including version 0.2.2 of alphawave Agents. Make sure you get 0.2.2.
It runs the Macbeth test script! (from GitHub: alphawave-py/tests/macbeth)

@stevenic is planning an overhaul of agents, so that part might change in subsequent releases.

There aren't a lot of examples of how to write prompts or validation schemas yet; DM me (Python) or @stevenic (JS), or just discuss here.

I will be posting examples on promptrix-py and alphawave-py tonight.


Were you ever able to resolve this?
A new version is live that runs the complete Macbeth test, a pretty thorough exercise of all the functionality.

I have a number of examples, and if you have Copilot installed it will pretty much write a JSON Schema for you to validate the JSON response you get back.

Basic schema validation is built in now!
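To give a feel for what that buys you, here is a stdlib-only sketch of the kind of check a JSON Schema validator performs on a model response. The schema and field names are made up for illustration; the real validator accepts a proper JSON Schema:

```python
import json

# A tiny JSON-Schema-style description of the response we expect (hypothetical)
schema = {
    "type": "object",
    "required": ["answer", "confidence"],
    "properties": {"answer": {"type": "string"}, "confidence": {"type": "number"}},
}

TYPES = {"object": dict, "string": str, "number": (int, float)}

def check(response_text, schema):
    """Return a list of problems; an empty list means the response validates."""
    try:
        obj = json.loads(response_text)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    if not isinstance(obj, TYPES[schema["type"]]):
        return [f"expected a JSON {schema['type']}"]
    problems = []
    for field in schema.get("required", []):
        if field not in obj:
            problems.append(f"missing required field '{field}'")
    for field, spec in schema.get("properties", {}).items():
        if field in obj and not isinstance(obj[field], TYPES[spec["type"]]):
            problems.append(f"field '{field}' should be a {spec['type']}")
    return problems

print(check('{"answer": "42", "confidence": 0.9}', schema))  # []
print(check('{"answer": 42}', schema))
```

In AlphaWave, problems like these become the feedback fed back to the model for a repair attempt rather than just an error.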


Now OpenAI can do schema validation internally!
gpt-3.5-turbo-0613 and gpt-4-0613

But that is only a small part of alphawave's capabilities.

This actually complements AlphaWave quite nicely. I'll add support for functions over the weekend. You'll just pass AlphaWave a FunctionValidator with the schema for the function call you expect.
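A rough sketch of what such a FunctionValidator could check, assuming the OpenAI function-calling response shape (a `function_call` with a `name` and JSON-encoded `arguments`). The function and argument names below are illustrative, not the final API:

```python
import json

def validate_function_call(message, expected_name, required_args):
    """Check that a chat message contains the function call we asked for,
    with parseable arguments carrying the required keys."""
    call = message.get("function_call")
    if call is None:
        return False, "expected a function_call in the response"
    if call.get("name") != expected_name:
        return False, f"expected a call to '{expected_name}', got '{call.get('name')}'"
    try:
        args = json.loads(call.get("arguments", ""))
    except json.JSONDecodeError:
        return False, "function_call.arguments was not valid JSON"
    missing = [a for a in required_args if a not in args]
    if missing:
        return False, f"arguments missing: {', '.join(missing)}"
    return True, args

# Hypothetical model response for a get_weather function
msg = {"function_call": {"name": "get_weather", "arguments": '{"city": "Seattle"}'}}
print(validate_function_call(msg, "get_weather", ["city"]))
```

On failure, the returned message would become repair feedback, just like AlphaWave's other validators.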

I can actually make them work for older models like Davinci as well…


This makes alphawave all the more important for open-source models, to keep them competitive. I'll make sure that when I port your update, the OSClient still does the right thing.


btw, how are we going to support functions? Incredibly cool. Plugins for the API at last!

Looks like, from their example, they want to use it for planning (otherwise you wouldn't need the descriptions). The thing is, they're missing any sort of chain-of-thought fields, so I'm not sure how good it's going to be at planning across multiple turns.

I suspect Alpha Waves Agents will outperform it at planning tasks. I’ll also be curious to see if it still triggers repairs. I know they’ve tuned the model for this task but I bet it still hallucinates at times so I bet you’ll still benefit from having AlphaWave do a secondary validation pass. I have bunch of hallucination test cases I can throw at it but definitely a step in the right direction for them.