Hi, for calculations you should not depend on the model's output directly.
Instead, you might want to use its function-calling abilities to let the model call external tools.
The prompt should be structured roughly like this (see the sketch after the list):
- Analyze the request and state what needs to be done.
- Execute those steps, calling external tools where needed.
- Aggregate the results.
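For illustration, here is a minimal sketch of that analyze / call-tool / aggregate loop using the OpenAI function-calling API (the 0613-era models). The `calculate` tool and its schema are hypothetical stand-ins for whatever external tools you register.

```python
import json
import openai

# Hypothetical external tool the model is allowed to call.
def calculate(expression: str) -> str:
    return str(eval(expression))  # use a real math parser in production, not eval

functions = [{
    "name": "calculate",
    "description": "Evaluate an arithmetic expression and return the result.",
    "parameters": {
        "type": "object",
        "properties": {"expression": {"type": "string"}},
        "required": ["expression"],
    },
}]

messages = [
    {"role": "system", "content": "Analyze the request, call tools for any math, then aggregate the results."},
    {"role": "user", "content": "What is 1234 * 5678 plus 42?"},
]

# Steps 1-2: the model analyzes the request and may ask for a tool call.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages,
    functions=functions, function_call="auto",
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    result = calculate(args["expression"])
    # Step 3: feed the tool result back so the model can aggregate it into an answer.
    messages.append(message)
    messages.append({"role": "function", "name": "calculate", "content": result})
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
```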
I'm reading this thread with a lot of interest and also followed Steve's post here.
It's very inspiring - thanks for sharing. I do have two questions, though:
- If I look at the example prompts, I notice that the first instruction always seems to be repeated. From my observations (gpt-3.5-turbo, May 24), repeating it doesn't seem to make any difference with respect to the output. Is this due to recent improvements?
- @stevenic doesn't seem to use system prompts at all to ground the model. Is this correct?
Hi @Krumelur - I am evolving the Python port of Steven's ideas, GitHub - Stevenic/alphawave-py (AlphaWave is a very opinionated client for interfacing with Large Language Models), or
python -m pip install alphawave
See https://tuuyi.io for minimal, but evolving, docs.
There is an ‘initial thought’ parameter when kicking off a thread (agent) that allows you to provide needed grounding. The internal prompts are solely for guiding command selection and various repair protocols.
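As a rough illustration of the idea only (this is a hypothetical sketch, not the actual alphawave-py API), the grounding is supplied up front as an initial thought, and the library's internal prompts then handle command selection and repair:

```python
# Hypothetical sketch - names and signatures are illustrative, not alphawave-py's real interface.
def start_agent_thread(user_request: str, initial_thought: str) -> list[dict]:
    """Seed the conversation with caller-provided grounding before any internal prompts run."""
    return [
        # Analogous to the 'initial thought' parameter: grounding the agent's first step.
        {"role": "assistant", "content": f"<thought>{initial_thought}</thought>"},
        {"role": "user", "content": user_request},
    ]

history = start_agent_thread(
    user_request="Summarize last week's sales numbers.",
    initial_thought="I should fetch the sales data with a tool before summarizing.",
)
```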
This iteration of Steve's thinking has strayed perhaps quite a way from that April post, so it may not seem directly related.
Bruce