In the VS Code extension, the queuing system works as follows: when you add prompts while the model is running, each prompt is queued unless you press "Steer" on it. But sometimes a prompt is handed to the model immediately (as if you had pressed "Steer"), which completely wrecks the order of the prompts.
When this happens, the model gets confused and often abandons the original prompt. The only way to recover is to manually delete every item in the queue (you can't stop the run temporarily, as that just hands the next queued prompt to the model), explain to the model that the queue failed, paste in the first prompt again, and rebuild the entire queue.
Please implement these things in the queuing system:
Fix the bug where prompts unintentionally steer the model when they were supposed to land in the queue
Build a global pause button so you can stop the current prompt without advancing to the next prompt in the queue
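To make the requested behavior concrete, here is a minimal sketch of the desired queue semantics. All names (`PromptQueue`, `add`, `steer`, `pause`, `next`) are hypothetical and illustrative only; this is not the extension's actual API.

```typescript
// Hypothetical model of the desired queue behavior, not the real extension code.
type Prompt = { id: number; text: string };

class PromptQueue {
  private queue: Prompt[] = [];
  private paused = false;
  private current: Prompt | null = null;

  // Adding a prompt while the model is busy must ONLY enqueue it;
  // it must never reach the model unless steer() is explicitly called.
  add(p: Prompt): void {
    this.queue.push(p);
  }

  // Explicit, user-initiated steer: interrupt the current turn with this prompt.
  steer(p: Prompt): void {
    this.current = p;
  }

  // Global pause: stop the current prompt WITHOUT dequeuing the next one.
  pause(): void {
    this.paused = true;
    this.current = null;
  }

  resume(): void {
    this.paused = false;
  }

  // Called when the model finishes a turn; respects the pause flag,
  // so pausing never silently advances the queue.
  next(): Prompt | null {
    if (this.paused || this.queue.length === 0) return null;
    this.current = this.queue.shift()!;
    return this.current;
  }
}
```

The key invariants are that `add` never sends anything to the model, and `pause` blocks `next` from dequeuing, so the queue order survives a pause intact.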
In the last few months the pace and scale of changes to Codex have been increasing, so the new features are not so bulletproof; in return, we get more and better features faster.
The way I stay on top of this is to check the OpenAI Codex repository for commits and issues daily; it's even bookmarked in my browser.
While that will not immediately solve your problem, you can give a thumbs-up to the issues you see as a vote that they need attention.
For the most part, I think OpenAI is (or will be) automating most of its issue handling with the same tooling used to build Codex. Eventually it won't matter how many people report an issue: the AI will recognize the commonalities among the urgent and popular ones, create a fix, and that will just be part of the loop.