All questions about code are treated as rhetorical requests

I’m wondering if I’m the only one experiencing this incredibly annoying behavior in the current state of the models.

If I start a thread, ask for some code, and then ask a follow-up question, I never receive an answer to the question. Instead, the model always assumes it’s purely rhetorical, picks one possible answer (or the “leading” proposal in a leading question) as a directive/command, ignores the fact that I asked a question, and updates the code.

This is sometimes fine, if the model’s guess about why I asked is correct. But if it’s an honest question and I am genuinely confused, it will very often output nonsense code in response.

Or if I ask whether something is possible to implement and it is in fact not possible within the current spec of the language or a given framework, it will very rarely tell me that I’m wrong or explain where I’m confused. Instead, it provides very bad or wrong code, unless I write several very explicit sentences stating that I want the model to answer my question directly rather than update the code, etc.

I’ve noticed that GPT-4 does this less than GPT-4o, but GPT-4 is still doing it a lot more now than it used to…

If you have this problem, what are you currently doing about it?
Custom instructions seem extremely weak all of a sudden; I’ve tried putting some prompts for this there, but it hasn’t been very effective. If you have instructions that do work, I would love for you to share them.

Thank you,