Providing users the capability to add additional context to the prompt

I have designed an API endpoint which accepts the JSON payload below:

```json
{
  "question": "Do you have a security system available in your company?",
  "response": "Yes",
  "Goal": "If Yes, provide the details of the security protocol. If No, explain why it is not available.",
  "comment": "We have cloud security enabled"
}
```

I am using GPT-4 and instructing it, via the system prompt and user prompt, to validate the `response` and `comment` fields provided by the user. The comment should contain details that satisfy the defined goal. If the goal is met, the output is Yes; otherwise it is No.
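For reference, this is roughly how the validation prompt gets assembled (a simplified sketch; the prompt wording and function names here are illustrative, not my exact production prompts):

```python
# Sketch of the validation flow: turn the API payload into chat messages
# for the model. The prompt text here is illustrative only.

SYSTEM_PROMPT = (
    "You are a validator. Given a question, a Yes/No response, a goal, and a "
    "comment, decide whether the comment satisfies the goal. "
    "Answer only Yes or No."
)

def build_messages(payload: dict) -> list[dict]:
    """Build the system/user message pair from the request payload."""
    user_prompt = (
        f"Question: {payload['question']}\n"
        f"Response: {payload['response']}\n"
        f"Goal: {payload['Goal']}\n"
        f"Comment: {payload['comment']}\n"
        "Does the comment meet the goal? Answer Yes or No."
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

payload = {
    "question": "Do you have a security system available in your company?",
    "response": "Yes",
    "Goal": "If Yes, provide the details of the security protocol.",
    "comment": "We have cloud security enabled",
}
messages = build_messages(payload)
print(messages[0]["role"])  # system
```

The returned list is what gets sent as the `messages` parameter of the chat completions call.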

Our user prompt and system prompt are very generic. A few questions are domain-specific, and in those scenarios we allow an SME to add additional optional context to the prompt.

I tried this with a strict prompt to ensure the additional optional context only feeds in domain details, not instructions, and does not override the actual system prompt.

But what I am seeing is that if the additional optional context contains any instructions, the LLM follows them.

Can someone help?