Currently only the `user` role is supported by `Message`. Sometimes, though, I'd like to start and/or send messages in threads as another entity. This was possible with Chat Completions via both the `name` and `role` parameters, which was super convenient for allowing other entities (other assistants and/or humans) to move the conversation forward while keeping the assistant in the loop.

If there is currently a way to do this with the Assistants API and I missed it, please let me know. I'm loving the new update overall!
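For reference, this is roughly what the `role`/`name` combination looked like in Chat Completions: each message dict carries a `role` plus an optional `name` identifying the participant. A minimal sketch (the participant names here are made up for illustration):

```python
# Sketch: Chat Completions messages carry a `role` and an optional `name`,
# so several entities (humans, other assistants) can share one conversation.
# Names ("alice", "scheduler_bot", "bob") are illustrative assumptions.
def build_messages():
    return [
        {"role": "system", "content": "You are a helpful meeting assistant."},
        {"role": "user", "name": "alice", "content": "Can we move the demo to Friday?"},
        {"role": "assistant", "name": "scheduler_bot", "content": "Friday 2pm is free."},
        {"role": "user", "name": "bob", "content": "Works for me."},
    ]

messages = build_messages()
```

With the Assistants API's `Message`, only the first `role` value (`user`) is accepted, and there is no `name` field, which is exactly the limitation being discussed.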
I second this. It is also very much needed if we want to implement caching of assistant responses: when we find the assistant response in the cache, we want to insert it into the thread so everything continues as if the LLM had actually produced the message.
I second this as well. It would be helpful in scenarios where backend processing and the API response aren't synchronous but asynchronous.
An idea occurred to me: if the `role` parameter is not allowed to be anything other than `user`, we can simply embed our own role in the `content` field, and it works just fine! With this ability, I am now able to cache assistant responses in Langroid, see this code.

I keep an updated hash H of the conversation in the thread metadata, and cache the assistant response R at any stage so that C[H] = R. Before starting an assistant run, I check whether there is a cached R = C[H], and if there is, I insert R (the cached assistant response) into the thread as a message with the `user` role (since that is the only role allowed), but I prefix the content with `"ASSISTANT:..."`. This assistant spoofing is handled in the expected way, i.e. the assistant treats the ASSISTANT-labeled messages as if it had generated them itself. E.g., see this test.
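The scheme above can be sketched roughly like this (a simplified standalone sketch, not the actual Langroid implementation; the helper names and prefix format are illustrative assumptions):

```python
import hashlib

# Illustrative prefix; any unambiguous label the assistant can recognize works.
ASSISTANT_PREFIX = "ASSISTANT:"

def conversation_hash(messages):
    """Rolling hash H of the conversation so far (would live in thread metadata)."""
    h = hashlib.sha256()
    for role, content in messages:
        h.update(f"{role}|{content}".encode())
    return h.hexdigest()

def lookup_cached_response(cache, messages):
    """Check the cache C for a response R = C[H] at this conversation state."""
    return cache.get(conversation_hash(messages))

def spoofed_message(cached_response):
    """Wrap a cached assistant reply as a user-role message (the only role
    allowed), prefixed so the assistant reads it as its own prior turn."""
    return {"role": "user", "content": f"{ASSISTANT_PREFIX} {cached_response}"}

# Usage sketch: cache a response, then rebuild the same state and hit the cache.
cache = {}
convo = [("user", "What is 2+2?")]
cache[conversation_hash(convo)] = "2 + 2 = 4"

hit = lookup_cached_response(cache, convo)
if hit is not None:
    msg = spoofed_message(hit)  # would be appended to the thread instead of a run
```

The key design point is that the hash must be updated consistently on every turn, so that an identical conversation state always maps to the same cache key.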
Yeah, I am doing something similar! Though I'd love not to have to use a workaround like this.