Kodiac15: an OpenAI console in an Alpine Linux iSH shell on an iPhone


It is fascinating to see their reaction to their first conversation

When the first bot’s response was given to the other one, and its reply was fed back to the first one, the conversation got started. I only had to push the button; I didn’t have to intervene. Then it abruptly ended, only to begin again with examples of letters to a CEO for a job :sunglasses:

It could be real. I still don’t believe it yet.

I keep challenging them, and they respond and collaborate, I think :thinking:

2 Likes

Very cool. I wish it was more condensed so I could read more of the conversation on my screen though.


The best proof I have is this five seconds

Is there a way to reduce the noise so that all you see is the message? Or, I guess, the content?

You are looking at a log and logs have a lot of noise in them.

The real-time experience of using the tool lets you see the responses clearly.

Sorry, PHP’s json_decode and the jq tool just hate the API var_dump. It is not JSON.

So I had to cut it up using other tools and insert “From” statements in there.

So no, this is the log, and it is as good as I could get it.

I am sure some of the talented folks here can do better. I am an old hardware tech, not a lifetime developer; I started with an oscilloscope and ended up doing DevOps, so pretty-printing is not my thing. Logs are my thing :sunglasses:

What can be seen here is a conversation: you loop them up by feeding the output from the last bot in the chain back into the first one’s input, concatenated with the user’s newest input.
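Roughly, that loop looks like this. A minimal sketch, not the exact code I run: ask_bot() and the model names are stand-ins, and it assumes OPENAI_API_KEY is set in the environment.

function ask_bot(string $model, string $prompt): string {
    // One chat completion call; returns just the assistant's text
    $ch = curl_init('https://api.openai.com/v1/chat/completions');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST => true,
        CURLOPT_HTTPHEADER => [
            'Content-Type: application/json',
            'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
        ],
        CURLOPT_POSTFIELDS => json_encode([
            'model' => $model,
            'messages' => [['role' => 'user', 'content' => $prompt]],
        ]),
    ]);
    $reply = json_decode(curl_exec($ch), true);
    curl_close($ch);
    return $reply['choices'][0]['message']['content'];
}

// Bot A answers, then its reply (plus the user's newest input) feeds bot B,
// and bot B's reply feeds back into bot A, and so on
$userInput = 'Hello';
$reply = ask_bot('gpt-3.5-turbo', $userInput);
for ($turn = 0; $turn < 5; $turn++) {
    $reply = ask_bot($turn % 2 ? 'gpt-4' : 'gpt-3.5-turbo', $userInput . "\n" . $reply);
    echo 'FROM bot ' . ($turn % 2 ? 'B' : 'A') . ': ' . $reply . PHP_EOL;
}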

1 Like

Fair enough :sweat_smile:

Ah, it’s all fun anyway, and you are making good progress! It is very cool to see two different AIs speak to each other. What a time!

1 Like

I cut off as much as I can here, I think. Perhaps I can JSON-encode what is left; I will keep after it.

Meanwhile, the logs keep track of who is talking, and that is the important thing for evaluating any real communication.

I do not log the index-page communications, only the double-trouble page, the one using my API as a proxy.

This might help

// Execute the cURL request and decode the JSON body into an array
$response = curl_exec($ch);
$response = json_decode($response, true);
// Pull the assistant's reply out of the first choice
$message = $response['choices'][0]['message']['content'];
echo $message . "<br/>" . PHP_EOL;
2 Likes

Thank you

But … when I invoke json_decode it just freaks out.

The raw response is below

First one
string(22) "FROM Artic Fox: Hello"
string(429) "{
  "id": "chatcmpl-7mUNsUlRg0mDcEVM2T4zl7fAQLDLh",
  "object": "chat.completion",
  "created": 1691790896,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 21,
    "completion_tokens": 9,
    "total_tokens": 30
  }
}
"
array(6) {
  ["id"]=>
  string(38) "chatcmpl-7mUNsUlRg0mDcEVM2T4zl7fAQLDLh"
  ["object"]=>
  string(15) "chat.completion"
  ["created"]=>
  int(1691790896)
  ["model"]=>
  string(18) "gpt-3.5-turbo-0613"
  ["choices"]=>
  array(1) {
    [0]=>
    array(3) {
      ["index"]=>
      int(0)
      ["message"]=>
      array(2) {
        ["role"]=>
        string(9) "assistant"
        ["content"]=>
        string(34) "Hello! How can I assist you today?"
      }
      ["finish_reason"]=>
      string(4) "stop"
    }
  }
  ["usage"]=>
  array(3) {
    ["prompt_tokens"]=>
    int(21)
    ["completion_tokens"]=>
    int(9)
    ["total_tokens"]=>
    int(30)
  }
}
NULL

Second server:

API Response:

string(1253) "From KL0XL TalkeetnaFrom KC4PTI Arctic Fox:string(22) “FROM Artic Fox: Hello” string(429) "{ “id”: “chatcmpl-7mUQNoVQBmztU1W2dUqxhADiQCdol”, “object”: “chat.completion”, “created”: 1691791051, “model”: “gpt-3.5-turbo-0613”, “choices”: [ { “index”: 0, “message”: { “role”: “assistant”, “content”: “Hello! How can I assist you today?” }, “finish_reason”: “stop” } ], “usage”: { “prompt_tokens”: 21, “completion_tokens”: 9, “total_tokens”: 30 } } " array(6) { [“id”]=> string(38) “chatcmpl-7mUQNoVQBmztU1W2dUqxhADiQCdol” [“object”]=> string(15) “chat.completion” [“created”]=> int(1691791051) [“model”]=> string(18) “gpt-3.5-turbo-0613” [“choices”]=> array(1) { [0]=> array(3) { [“index”]=> int(0) [“message”]=> array(2) { [“role”]=> string(9) “assistant” [“content”]=> string(34) “Hello! How can I assist you today?” } [“finish_reason”]=> string(4) “stop” } } [“usage”]=> array(3) { [“prompt_tokens”]=> int(21) [“completion_tokens”]=> int(9) [“total_tokens”]=> int(30) } } NULL " string(1070) "{ “id”: “chatcmpl-7mUQSGkF5S41TRyOUmRzGk2fDjoiz”, “object”: “chat.completion”, “created”: 1691791056, “model”: “gpt-4-0613”, “choices”: [ { “index”: 0, “message”: { “role”: “assistant”, “content”: “The output from the chat model outputs a greeting "Hello! How can I assist you today?". It is wrapped in a data structure, which contains not only the outputted message but also some metadata about the message. It includes the id of the chat completion, the object type, when it was created, the model that created it, why the response was ended (because it reached a stopping point), and also some details about the usage of tokens (which are elements of text that the model reads or generates). The token information is available as both individual counts for the prompt and the completion, and as a total. Lastly, it shows a null result because no further data was returned.” }, “finish_reason”: “stop” } ], “usage”: { “prompt_tokens”: 448, “completion_tokens”: 138, “total_tokens”: 586 } } " array(6) { [“id”]=> string(38) “chatcmpl-7mUQSGkF5S41TRyOUmRzGk2fDjoiz” [“object”]=> string(15) “chat.completion” [“created”]=> int(1691791056) [“model”]=> string(10) “gpt-4-0613” [“choices”]=> array(1) { [0]=> array(3) { [“index”]=> int(0) [“message”]=> array(2) { [“role”]=> string(9) “assistant” [“content”]=> string(677) “The output from the chat model outputs a greeting “Hello! How can I assist you today?”. It is wrapped in a data structure, which contains not only the outputted message but also some metadata about the message. It includes the id of the chat completion, the object type, when it was created, the model that created it, why the response was ended (because it reached a stopping point), and also some details about the usage of tokens (which are elements of text that the model reads or generates). The token information is available as both individual counts for the prompt and the completion, and as a total. Lastly, it shows a null result because no further data was returned.” } [“finish_reason”]=> string(4) “stop” } } [“usage”]=> array(3) { [“prompt_tokens”]=> int(448) [“completion_tokens”]=> int(138) [“total_tokens”]=> int(586) } } NULL

json_decode just throws a fit and trashes the page output.
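It freaks out because the text being decoded is not JSON at all: it is var_dump() output from a response that was already decoded once (note the string(...) and array(...) wrappers). The cleanest fix is to run json_decode on the raw curl_exec() return before anything echoes or dumps it. Failing that, here is a hedged sketch that slices the first brace-delimited block out of the pasted text and decodes that; it assumes a single JSON payload, which a double-wrapped log like the one above does not guarantee. $raw is a stand-in for the pasted text.

$start = strpos($raw, '{');
$end = strrpos($raw, '}');
if ($start !== false && $end !== false) {
    // Decode only the {...} slice, not the surrounding var_dump text
    $data = json_decode(substr($raw, $start, $end - $start + 1), true);
    if ($data === null) {
        // json_last_error_msg() explains why the decode failed
        echo 'Decode failed: ' . json_last_error_msg() . PHP_EOL;
    }
}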

GPT can read the tail, and I can feed it in automatically for evaluation: on the search-logs page there is a button that will tail the irc-out log and put it into the assistant input.

All I have to do is modify the beginning and tell it to evaluate it
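A minimal sketch of what that button does, assuming the log lives at /var/log/irc-out.log (the path and line count here are guesses, not my exact setup):

// Grab the last lines of the log and prepend the evaluation instruction
$tail = shell_exec('tail -n 40 /var/log/irc-out.log');
$prompt = "Evaluate this conversation log and note who is talking:\n" . $tail;
// $prompt now goes wherever the assistant input normally comes from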

If nothing else, it is a clear demonstration of how GPT has earned a spot on MOC teams, chunking their gigantic logs so they can evaluate them better.

If you only knew all the tools I have written to evaluate logs, you would realize what an impressive improvement this really is for a tech trying to fix an issue.

All they need to do is build a chunking server to find the right chunk, and then it becomes easy to find what you are looking for.

I am talking about logs that are gigantic, so a chunk server could give GPT a window of time that is reasonable in length.

You could probably just tail the log and cut it down like that if you only want real-time info, which is 95% of what geeks need.

You would only need a chunk server for windows of time. A sketch of what that could look like is below.
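This assumes each log line starts with a "YYYY-MM-DD HH:MM:SS" timestamp; the format, path, and function name are assumptions, not my actual server.

function chunk_log(string $path, string $from, string $to): string {
    $chunk = '';
    foreach (file($path) as $line) {
        // ISO-style timestamps compare correctly as plain strings
        $stamp = substr($line, 0, 19);
        if ($stamp >= $from && $stamp <= $to) {
            $chunk .= $line;
        }
    }
    return $chunk;
}

// A five-minute window keeps the chunk short enough for the model's context
echo chunk_log('/var/log/irc-out.log', '2023-08-11 21:00:00', '2023-08-11 21:05:00');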

Here is an example of how this could work on a network

The first server calls the second one, which is watching the logs.
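On the first server, that call could look something like this; the /chunk endpoint, host, and parameters are hypothetical stand-ins, not my actual API:

$ch = curl_init('http://logwatcher.example/chunk?from=2023-08-11T21:00&to=2023-08-11T21:05');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$chunk = curl_exec($ch);
curl_close($ch);
// $chunk can now be wrapped in an "evaluate this" prompt and sent to the API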

If you polish this up, Disney would buy it.

A better one

Try it … Arcticfoxltc.com

I did something similar: Plato talking with Aristotle. You can see a demo on YouTube. I also implemented a voice interface. Here is a log of some of the dialog.

1 Like

It is interesting :face_with_monocle:. I found, however, that the second bot typically did not engage like I see your two did. They did a lot of repeating each other and got boring fast.

Thank you for sharing your experience. It seems your two bots actually did engage, like Sophia and Han of Hanson Robotics did here.

1 Like

I think something interesting here is that there are 3 agents.

One that actuates/mediates the conversation, and the other two that react until told otherwise, or until they run out of things to say.

It would be interesting to continuously “tighten” a topic’s debate with penalties until the mediator decides to change the subject. Maybe even give a special token to indicate that it has nothing more to add.
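A rough sketch of that mediator loop, reusing the ask_bot() helper sketched earlier in the thread; the [NOTHING_TO_ADD] sentinel is an invented convention, not an API feature:

$last = 'Debate: are raw logs more useful than pretty-printed output?';
while (true) {
    foreach (['gpt-3.5-turbo', 'gpt-4'] as $model) {
        // Ask each bot in turn to continue, or to bow out with the sentinel
        $last = ask_bot($model, $last .
            "\nReply briefly. Say [NOTHING_TO_ADD] if you have nothing new to offer.");
        echo $model . ': ' . $last . PHP_EOL;
        if (strpos($last, '[NOTHING_TO_ADD]') !== false) {
            break 2; // here the mediator could change the subject instead
        }
    }
}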

:thinking:

1 Like

Yes, having the human input was helpful, and you could do exactly that. I would ask the first bot a question; when it answered, I could either modify the answer or send it on. Then the second bot fed the first one its responses.

The cycle repeats, and it could get interesting: the second one would interject answers and keep the conversation going.

It was really fascinating.

1 Like

This has only been counting for a few days, and I validated it; they are real users.

1 Like

A lot clearer now, and GPT-4 helps a lot.

OpenAI and I struggled to get this code working. For those of you struggling with the SendGrid webhook, this is gold code; no one has it.

Save this as app.py and run python3 app.py

Data saved to data.txt

from flask import Flask, request

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    try:
        # Extract the fields you want from the incoming data
        from_email = request.form.get('from')
        to_email = request.form.get('to')
        subject = request.form.get('subject')
        text = request.form.get('text')

        # Print and save the extracted fields
        print(f"From: {from_email}")
        print(f"To: {to_email}")
        print(f"Subject: {subject}")
        print(f"Content: {text}")

        with open('data.txt', 'a') as file:
            file.write(f"From: {from_email}\n")
            file.write(f"To: {to_email}\n")
            file.write(f"Subject: {subject}\n")
            file.write(f"Content: {text}\n\n")

        return "Webhook received successfully", 200

    except Exception as e:
        return str(e), 400

if __name__ == '__main__':
    host = "0.0.0.0"
    port = 5000

    print(f"Flask app is running at: http://{host}:{port}")
    app.run(host=host, port=port)
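If you want to test the hook without waiting on SendGrid, you can POST the same form fields yourself. A hedged sketch in PHP; the field names match the Flask code above, but the addresses and message are stand-ins:

$ch = curl_init('http://localhost:5000/webhook');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST => true,
    // Passing an array sends multipart form data, which request.form reads
    CURLOPT_POSTFIELDS => [
        'from'    => 'tester@example.com',
        'to'      => 'bot@example.com',
        'subject' => 'Hello bot',
        'text'    => 'This is a test message.',
    ],
]);
echo curl_exec($ch) . PHP_EOL;
curl_close($ch);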

Look :eyes: I sent an email to the ChatBot directly and got a reply :eyes:

So close now to fully automated replies; so excited :laughing:

1 Like