Another thing you might want to try is re-formatting your prompt using markup.
My applications send large amounts of information in their LLM prompts, and while gpt-4-1106-preview reads them fairly well (I've not seen the issues you're having), gpt-3.5-turbo-16k consistently failed to comprehend them. I improved its comprehension by formatting the prompts in XML. API Prompt for gpt-3.5-turbo-16k - #12 by SomebodySysop
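For example, instead of sending everything as one undifferentiated blob, you can wrap each section of the prompt in labeled tags so the model can tell instructions, reference material, and the actual question apart. A minimal sketch (the tag names and helper here are just illustrative, not a required schema):

```python
# Sketch of wrapping prompt sections in XML-style tags.
# Tag names (<instructions>, <context>, <question>) are illustrative only.

def build_xml_prompt(instructions: str, context: str, question: str) -> str:
    """Wrap each prompt section in a labeled XML element so the model
    can distinguish instructions, reference material, and the question."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<context>\n{context}\n</context>\n"
        f"<question>\n{question}\n</question>"
    )

prompt = build_xml_prompt(
    instructions="Answer using only the context below.",
    context="Our return window is 30 days from delivery.",
    question="How long do I have to return an item?",
)
print(prompt)
```

The same idea works with any clearly delimited markup; the point is that explicit boundaries seem to help the weaker model keep track of which part of the prompt is which.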