I’ve been so busy with code maintenance for my Discourse Chatbot adaptor for OpenAI that I’ve not had time to experiment with this, but I would love to hear some opinions: are the ChatGPT/gpt-3.5-turbo models good at dealing with user prompts that come from different users?
We tend to use ChatGPT one-on-one with the LLM, but in a forum we usually have multiple users in the same thread/topic.
I’ve had some significant success with my Summarization plugin in adding usernames to posts and Davinci seems to reliably attribute statements and information to the right users.
But will this work effectively with chat, and should it? And how best to attribute authorship? Just prepend `username:` in front of each post prompt?
Is there an option to change the role to define different users?
You cannot define users at the session layer using the chat completion method. There is an optional `user` param, but it is only used by OpenAI staff for abuse monitoring.
All sessions and dialog management happens on the client side.
HTH
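To make the “all dialog management happens on the client side” point concrete, here is a minimal sketch of keeping the conversation state yourself and resending the full history on every request. The `ChatSession` helper is hypothetical (not part of the OpenAI SDK), and the commented-out call assumes the legacy `openai.ChatCompletion.create` interface used elsewhere in this thread:

```python
# Minimal client-side session store: the API is stateless, so the client
# must resend the full message history with every request.
# `ChatSession` is a hypothetical helper, not part of the OpenAI SDK.

class ChatSession:
    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, content):
        self.messages.append({"role": "user", "content": content})

    def add_assistant(self, content):
        self.messages.append({"role": "assistant", "content": content})

    def payload(self, model="gpt-3.5-turbo", user_id=None):
        # The optional `user` field is only for abuse monitoring,
        # not for dialog or identity management.
        body = {"model": model, "messages": self.messages}
        if user_id:
            body["user"] = user_id
        return body

session = ChatSession("You are a helpful assistant.")
session.add_user("Who won the world series in 2020?")
session.add_assistant("The Los Angeles Dodgers won the World Series in 2020.")
session.add_user("Where was it played?")
# response = openai.ChatCompletion.create(**session.payload(user_id="forum-user-42"))
```

The key design point is that the model never “remembers” anything: every turn, the client sends the entire history it wants the model to see.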
I was thinking about the `role` value; I suppose this will be rejected or ignored?
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "gary", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "steven", "content": "Where was it played?"}
    ]
)
Or any idea what benefit or chaos might ensue if I prepend every message with a username?
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "gary: Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "steven: Where was it played?"}
    ]
)
Looking forward to a turbo playground.
It will generate an error, like so:
{message=>'error' is not one of ['system', 'assistant', 'user'] - 'messages.0.role', type=>invalid_request_error, param=>nil, code=>nil}
I have it working in a group chat. It’s simple: just insert the username into the content of each user-role message, like so:
{"role": "user", "content": "%USERNAME% said %COMMAND%"}
{"role": "user", "content": "John Doe said hows it going fam"}
My bot remembers everything a certain person told it: favorite color, favorite number, what type of shoes they like. It even responds to that person with their history.
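The pattern described above can be sketched as a small helper that builds the attributed messages; the `attribute` function name is hypothetical, and the system prompt telling the model about the prefix convention is an assumption that seems to help attribution:

```python
def attribute(username, text):
    # Prepend the speaker's name inside `content`; the role stays "user",
    # since the API only accepts system/user/assistant roles.
    return {"role": "user", "content": f"{username} said {text}"}

messages = [
    {"role": "system", "content": (
        "You are a helpful assistant in a group chat. "
        "Each user message is prefixed with the speaker's name."
    )},
    attribute("gary", "Who won the world series in 2020?"),
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    attribute("steven", "Where was it played?"),
]
# This `messages` list can then be passed to the chat completion call.
```

Keeping the attribution inside `content` rather than `role` means the request stays valid while still giving the model enough context to track who said what.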
Perfect, I’ll try that pattern. Thanks to you both!
Hi @merefield
Here is how I do it in my lab: basically, I have a ChatConversation DB table. Draft chat_conversations table:
create_table "chat_conversations", force: :cascade do |t|
  t.string "description"
  t.string "user"
  t.string "messages"
  t.integer "total_tokens"
  t.integer "prompt_tokens"
  t.integer "completion_tokens"
  t.datetime "created_at", precision: 6, null: false
  t.datetime "updated_at", precision: 6, null: false
end
Note, I thought about serializing the token usage, but I kept the counts in separate columns so I could easily update and track them. However, there is nothing wrong with serializing the usage info.
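For readers outside Rails, here is a rough Python/sqlite equivalent of the table above, assuming the messages are stored as a JSON string (the sample row values are made up for illustration):

```python
import json
import sqlite3

# Rough sqlite equivalent of the Rails chat_conversations table above;
# column names mirror the schema, messages are JSON-serialized.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE chat_conversations (
        id INTEGER PRIMARY KEY,
        description TEXT,
        user TEXT,
        messages TEXT,
        total_tokens INTEGER,
        prompt_tokens INTEGER,
        completion_tokens INTEGER
    )
""")

messages = [{"role": "user", "content": "gary: Who won the world series in 2020?"}]
conn.execute(
    "INSERT INTO chat_conversations (description, user, messages, "
    "total_tokens, prompt_tokens, completion_tokens) VALUES (?, ?, ?, ?, ?, ?)",
    ("world series chat", "gary", json.dumps(messages), 57, 42, 15),
)

# Keeping the token columns separate (rather than serialized) makes
# per-field updates trivial:
conn.execute(
    "UPDATE chat_conversations SET total_tokens = total_tokens + ? WHERE id = 1",
    (30,),
)
row = conn.execute(
    "SELECT total_tokens FROM chat_conversations WHERE id = 1"
).fetchone()
```

This illustrates the trade-off mentioned above: separate integer columns allow cheap in-place updates and SQL-level aggregation, at the cost of a slightly wider table.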
Hope this helps.
Hello, I’ve been thinking the same thing. If done right, AI should learn faster! Can you elaborate on the framework?
How many people contribute to the vote? I assume one way to do this is by deploying in a small group of three, an odd number for a majority vote, since the AI needs an absolute value as validation. Then user profiling. Maybe one has to gather all three participants before deploying into the group. This is fascinating stuff, but not many are into this concept.
As the OpenAI team has stated, the progress from the API is not up to their par. I think they should narrow down the packages and standardize the AI with proper guidelines to follow, and they need to make it accessible to everyone as a group.
More data doesn’t necessarily mean good for AI. We as humans can forget about things, but not AI: it needs to go through strings of nonsense just to get to the point, and returns an unsatisfying answer.
Humans will always be wrong, but I guess we want them to be like us, right? So give them a bigger memory and let them learn by themselves… just like humans did.
You’ve lost me; this hasn’t got anything to do with voting. This is just about helping ChatGPT interact in a conversation with more than one human user.
I followed the suggestions on this Topic wrt user prompts and my Discourse Plugin is working fine.
I’ve marked the correct solution.
Came across this post here by googling. I can confirm that the “user:” prefix works, especially if it is already mentioned in the system role.
See here for an example: Devs & Architects Assembly powered by Polymindra - HackMD