It's review time for us at Microsoft, so I'm working on a sample agent that can help you write a performance review, and I thought I'd share a quick screenshot. This is from about two turns in, on the "Core Priorities" section of my review. This is gpt-3.5-turbo and I've turned off all the feedback reporting, but there are a couple of points in the task where the model tends to hallucinate:

Here’s the entire code for that agent:

As you can see, the agent made it all the way through the task and finished with a call to completeTask.

This will be structured as a hierarchy of agents that break the task down into smaller and smaller subtasks:

Now to verify that agents can call other agents :slight_smile:
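
To illustrate the hierarchy idea, here's a minimal sketch of agents delegating to sub-agents. This is only a toy illustration of the concept, not the AlphaWave Agents API; the Agent class and the run/subagents names below are hypothetical.

# Toy sketch of a hierarchy of agents -- NOT the AlphaWave Agents API.
# Class and method names (Agent, run, subagents) are hypothetical.
class Agent:
    def __init__(self, name, subagents=None):
        self.name = name
        self.subagents = subagents or []  # child agents that handle smaller subtasks

    def run(self, task):
        # Let each child handle its slice of the task, then combine the results.
        sub_results = [child.run(task) for child in self.subagents]
        print(f"{self.name} finished '{task}' with {len(sub_results)} sub-results")
        return {"agent": self.name, "task": task, "sub_results": sub_results}

# A parent "review writer" that calls child agents for each section of the review
review_writer = Agent("review_writer", [
    Agent("core_priorities_writer"),
    Agent("impact_writer"),
])
review_writer.run("write performance review")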

Alphawave 0.1.0 and Promptrix 0.1.0 are now available on PyPI!

pip3 install alphawave

should install both, or if you only want promptrix: pip3 install promptrix

Alphawave 0.1.0 only includes core alphawave, no Agents. Look for those shortly.

Also, only the OpenAI client is supported for now. The Azure client is there, but it's a raw GPT-4 translation with known errors. If you need it and can't fix it yourself, let me know.

An OS LLM client with support for the FastChat server API and a simple JSON/text socket, as well as user/assistant template support using the FastChat conversation.py class, is coming very soon.
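
In case it helps picture what a "simple JSON socket" client might look like, here's a rough sketch under my own assumptions (the host, port, and prompt/response field names are all made up; the real OSClient protocol may differ):

# Hypothetical sketch of a JSON-over-socket exchange -- not the actual OSClient.
# The host, port, and message fields below are assumptions.
import json
import socket

def send_prompt(prompt, host="localhost", port=8080):
    # Open a plain TCP connection, send the prompt as one JSON object,
    # then read the server's JSON reply until the connection closes.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps({"prompt": prompt}).encode("utf-8"))
        sock.shutdown(socket.SHUT_WR)
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    return json.loads(data.decode("utf-8"))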

Good work rate, I'll install the Python version and test it.

I have it installed, but how do I use it?

Any README?

@bruce.dambrosio do you want to maybe try to port the base AlphaWave sample from JS to Python?

@jethro.adeniran the readme on the JS side of things may help get you started. I know Bruce was going to try to work up a Jupyter notebook, but this is still a bit of a work in progress:

Yeah, sorry. It is rather inscrutable. That's high on the list.
@stevenic - I'll take a look tonight. I really want to get that OSClient out; I think AlphaWave is really needed there.

@stevenic - which of these will run without the agent code?

This sample requires no agent code… It's basically a simple ChatGPT-style example. I still owe you an example of base AlphaWave with a validator that walks the model back from hallucinations, but I can tell you that it works. It's best seen in the agent demos, because they make mistakes all the time, but the core isolation & feedback mechanism AlphaWave employs works.

I was demoing an agent to someone today and GPT-3.5 first failed to return the JSON thought structure the agent system needs, so AlphaWave gave it feedback. Normally it returns the correct structure on the next call, but this time the model returned a partial JSON object. AlphaWave gave it feedback about the fields it was missing, and on the next call it returned a complete and valid object. And that was GPT-3.5, which is the worst of these models.

This stuff sounds like snake oil but I promise it works to improve the overall reliability of these models to return valid structured results.
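
For anyone wondering what that feedback loop looks like in spirit, here's a minimal sketch of the repair pattern described above. It is not AlphaWave's implementation; complete_prompt is a hypothetical callable and the feedback wording is made up.

# Minimal sketch of a validate-and-feedback loop -- not AlphaWave's code.
# complete_prompt is assumed to be a callable that sends a prompt to the model
# and returns its raw text response.
import json

def get_valid_json(complete_prompt, user_input, required_keys, max_repairs=3):
    feedback = None
    for _ in range(max_repairs):
        prompt = user_input if feedback is None else f"{user_input}\n{feedback}"
        response = complete_prompt(prompt)
        try:
            obj = json.loads(response)
        except json.JSONDecodeError:
            feedback = "Your last response was not valid JSON. Return only a JSON object."
            continue
        missing = [key for key in required_keys if key not in obj]
        if not missing:
            return obj  # complete and valid -- done
        # Tell the model exactly which fields it left out and try again.
        feedback = f"Your last response was missing these fields: {', '.join(missing)}. Return the complete JSON object."
    return None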

Ah. I already have that one running, and it's in the tests dir on GitHub. Will post now.

Well, deep apologies. I'm a reasonably experienced developer, but inexperienced in packaging and PyPI.

Do this to get alphawave 0.1.1:

pip3 install --upgrade alphawave

Now save the below to alphachat.py, then run it:

import os
from pathlib import Path
import readline
from alphawave.AlphaWave import AlphaWave
from alphawave.OpenAIClient import OpenAIClient
import promptrix
from promptrix.Prompt import Prompt
from promptrix.SystemMessage import SystemMessage
from promptrix.ConversationHistory import ConversationHistory
from promptrix.UserMessage import UserMessage
import asyncio

# Optionally read the OPENAI_API_KEY from a .env file (requires python-dotenv)
env_path = Path('..') / '.env'
# from dotenv import load_dotenv
# load_dotenv(dotenv_path=env_path)

# Create an OpenAI or AzureOpenAI client
client = OpenAIClient(apiKey=os.getenv("OPENAI_API_KEY"))

# Create a wave
wave = AlphaWave(
    client=client,
    prompt=Prompt([
        SystemMessage('You are an AI assistant that is friendly, kind, and helpful', 50),
        ConversationHistory('history', 1.0),
        UserMessage('{{$input}}', 450)
    ]),
    prompt_options={
        'completion_type': 'chat',
        'model': 'gpt-3.5-turbo',
        'temperature': 0.9,
        'max_input_tokens': 2000,
        'max_tokens': 1000,
    }
)

# Define main chat loop
async def chat(bot_message=None):
    # Show the bot's message
    if bot_message:
        print(f"\033[32m{bot_message}\033[0m")

    # Prompt the user for input
    user_input = input('User: ')
    # Check if the user wants to exit the chat
    if user_input.lower() == 'exit':
        # Exit the process
        exit()
    else:
        # Route the user's message to the wave
        result = await wave.completePrompt(user_input)
        if result['status'] == 'success':
            print(result)
            await chat(result['message']['content'])
        else:
            if result['response']:
                print(f"{result['status']}: {result['message']}")
            else:
                print(f"A result status of '{result['status']}' was returned.")
            # Exit the process
            exit()

# Start chat session
asyncio.run(chat("Hello, how can I help you?"))

I had a lot of issues with dependencies: pyreadline3 is the Windows alternative for readline, and I had to install pyee too.

But I'm still getting different errors at the moment: "KeyError: 'Could not automatically map gpt-4 to a tokeniser. Please use tiktoken.get_encoding to explicitly get the tokeniser you expect.'"

Ah, good to know. I run it with 3.5 for testing. It doesn't really matter; it's just using that to get a rough token count. I don't think I can get a fix out tonight, but maybe; otherwise early tomorrow. With luck I'll have the core agent code in as well by then…
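
For what it's worth, since the count only needs to be rough, one workaround (assuming the package is using tiktoken under the hood) is to fall back to an explicit encoding when the installed tiktoken doesn't recognize the model name:

# Workaround sketch for the tokeniser-mapping KeyError, assuming tiktoken is
# what's doing the counting and an approximate count is good enough.
import tiktoken

def rough_token_count(text, model="gpt-3.5-turbo"):
    try:
        enc = tiktoken.encoding_for_model(model)    # works when tiktoken knows the model name
    except KeyError:
        enc = tiktoken.get_encoding("cl100k_base")  # explicit encoding used by the chat models
    return len(enc.encode(text))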

pyee - ah, thanks, I'll add it to the dependencies in the PyPI pyproject.toml. Sorry.

PS - I'll be offline for the night shortly.

To clarify, I didn't change the code to use GPT-4; it remains on gpt-3.5-turbo.

Why the error mentions gpt-4, I'm not sure. Gm

Basic ports of promptrix AND alphawave, including some example Agents, are now on PyPI.

Thanks to @stevenic (the original promptrix and alphawave developer, in TypeScript) for all his support and encouragement.

There are probably still a couple of bugs in the port; don't blame him, DM me.

Great job!
Updated to the new version, but it seems I'll have to test it in the morning.

@stevenic This is amazing! Thank you so much for sharing.

I was wondering how the agent handles token limits? I've used "smol-ai/developer", which is based on BabyAGI, but the 8k token limit on OpenAI's models caps the complexity of the output. I've used Prompt.md, which definitely helps, but even still I can't get the length of response needed to get the answers I want. Thoughts?

I'm only using gpt-3.5-turbo currently, so I don't even have an 8k token limit, and I'm having success, but it probably depends on the task. I have access to gpt-4, but all of our customers (Microsoft's customers) want to use turbo to save cost, so my current focus is making turbo work as well as I possibly can.

What’s your specific scenario?
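
For reference, in the earlier alphachat.py sample the length knobs live in prompt_options: max_input_tokens bounds the rendered prompt and max_tokens bounds the reply, as far as I can tell. A hedged variation for a longer-output scenario might look like this (the 16k model name and the exact semantics of these options are my assumptions, not something from the docs):

# Hedged variation of the prompt_options from the alphachat.py sample above.
prompt_options = {
    'completion_type': 'chat',
    'model': 'gpt-3.5-turbo-16k',  # assumption: a larger-context variant, if you have access
    'temperature': 0.7,
    'max_input_tokens': 4000,      # budget for the rendered prompt sections
    'max_tokens': 2000,            # budget left for the model's reply
}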

It's live on PyPI: pip3 install alphawave
That should install both alphawave and promptrix, including version 0.2.2 of the alphawave Agents. Make sure you get 0.2.2.
It runs the Macbeth test script! (from GitHub: alphawave-py/tests/macbeth)

@stevenic is planning an overhaul of agents, so that part might change in subsequent releases.

There aren't a lot of examples of how to write prompts or validation schemas yet; DM me (Python) or @stevenic (JS), or just discuss here.

I will be posting examples on promptrix-py and alphawave-py tonight.

Were you ever able to resolve this?
A new version is live that runs the complete Macbeth test, a pretty thorough exercise of all the functionality.

I have a number of examples, and if you have Copilot installed it will pretty much write a JSON Schema for you to validate the JSON response you get back.
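
As a purely illustrative example (the field names here are made up, not a required AlphaWave structure), a schema of that kind might look like:

# Illustrative JSON Schema, expressed as a Python dict, for validating a
# structured model response. Field names are hypothetical.
response_schema = {
    "type": "object",
    "properties": {
        "thoughts": {"type": "string"},
        "command": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "input": {"type": "object"}
            },
            "required": ["name"]
        }
    },
    "required": ["thoughts", "command"]
}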