Send updated context between audio input stopping and the model/agent processing the user's audio

I have an issue with conversational audio and injecting context into the session at the latest possible point. This context helps the agent answer a set of potential questions. I had assumed/hoped that sending the updated context as a new message (conversation.item.create with type: message, role: system) at the point the input_audio_buffer.speech_stopped event is received would give the model time to use the updated context when generating its response. But the model does not seem to be using this information when responding to the user's speech/question. Why is this? Am I missing something?
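Roughly, this is what my handler does (a minimal sketch; `fetch_latest_context` is a stand-in for my app-side lookup, and `ws` is the already-open Realtime API WebSocket):

```python
import json


def build_context_item_event(context_text: str) -> dict:
    """Build a conversation.item.create client event that carries the
    updated context as a system message."""
    return {
        "type": "conversation.item.create",
        "item": {
            "type": "message",
            "role": "system",
            "content": [{"type": "input_text", "text": context_text}],
        },
    }


async def on_server_event(ws, event: dict, fetch_latest_context):
    """Called for each server event; on speech_stopped, push the freshest
    context before the model starts generating its response."""
    if event.get("type") == "input_audio_buffer.speech_stopped":
        ctx = fetch_latest_context()  # hypothetical app-side lookup
        await ws.send(json.dumps(build_context_item_event(ctx)))
```

The intent is that the system item lands in the conversation before the automatic response is generated, but that does not appear to be happening.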

Looking at the other events being received, there does not seem to be another appropriate hook after the speech_started event. Maybe it can't be done this way, or maybe something is wrong in how I push the updated context to the model. I can't see many other options besides supplying a function (tool) the model can call to fetch the updated data itself, but I'm surprised the first approach doesn't work.
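For reference, the fallback I'm considering looks something like this: register a tool via session.update so the model can pull the data itself, then return the result as a function_call_output item when it calls the tool. The tool name and empty parameter schema here are illustrative assumptions, not anything from my actual app:

```python
def build_tools_session_update() -> dict:
    """session.update client event registering a tool the model can call
    to fetch fresh context on demand, instead of it being pushed."""
    return {
        "type": "session.update",
        "session": {
            "tools": [
                {
                    "type": "function",
                    "name": "get_latest_context",  # hypothetical tool name
                    "description": "Fetch the most recent context relevant "
                                   "to the user's question.",
                    "parameters": {"type": "object", "properties": {}},
                }
            ],
            "tool_choice": "auto",
        },
    }


def build_tool_result_event(call_id: str, result: str) -> dict:
    """conversation.item.create event returning the tool output to the
    model; after sending it, the client requests a new response
    (response.create) so the model can answer using the fetched data."""
    return {
        "type": "conversation.item.create",
        "item": {
            "type": "function_call_output",
            "call_id": call_id,
            "output": result,
        },
    }
```

This would work, but it adds a round trip on every question, which is why I'd prefer the pre-emptive injection on speech_stopped if it can be made to work.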