GPT-3.5 returning incorrect data

I am using the GPT-3.5 API for my task.

Initially it worked correctly and accurately. Then I inverted the same test case, so now it should say no data was found.

But it keeps returning the previous output. I am using the assistant role in the prompt.

I did not change the prompt, though, only gave a new input. Any advice?

For example, the following should not be in the output at all; the output should be blank:

Id updated from 1023 to 1024 on 2022-05-12 09:38:29 by LISA.
Id updated from 1024 to 1025 on 2022-05-13 09:38:29 by GREG.
Id updated from 1025 to 1023 on 2022-05-14 09:38:29 by SAMANTHA.

We have no idea what your input or your prompt is, so we have no idea what it is you’re doing wrong or where the model is failing.


There have been no changes to 3.5. If you were receiving good results and now, with the exact same parameters, you are getting bad results, you probably just got lucky initially. If you're inverting a test case, that makes it an entirely different problem, so the failure is not unexpected.

Without any more details on your exact prompts and expected output there’s not much advice that can be given.

Sorry, I think it is because of the choice of model. I used GPT-3.5 through the ChatGPT interface, while I should be using the Playground APIs.

Wait a minute. This output is bothering me for a few reasons: a) those values were never supplied in the test cases; b) it took the API 111.36 seconds to respond to the input prompt; c) that text formatting was not requested, yet it appeared in the response. How can I turn all this noise off and get just the output format I want?

I went back to the first prompt that ever worked exactly how I wanted; the output was perfect. This is the original prompt:

Input: Provided are a few (n) HL7 messages in the form of a map with Integer keys and String values.
Analyze the messages and perform comparisons between them.
Starting from the A28 message (the base version), identify and record the changes made to the patient demographics only.
Perform a total of n-1 comparisons.
Extract the value of MSH.7, which represents the date and time of the message.
Extract the value of EVN.5, which represents the OperatorId who generated the message.
Based on the value of EVN.5, assign the appropriate username as follows:
If EVN.5 equals 1023, set username = LISA.
If EVN.5 equals 1024, set username = GREG.
If EVN.5 equals 1025, set username = SAMANTHA.
Give precise values for all the items that you observed were changed, in ascending order of the field name, wherever a delta was recorded. Generate only a report in the following format:
Do not add any input information back in the response, like the messages or syntax.
Do not compare MSH and EVN segments.
Do not return double entries.
Only compare PID segments.
Syntax: <Field Name> updated from <previous value> to <current value> on <DateTime> by <UserName>
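For reference, the delta logic that prompt describes can also be sketched deterministically in plain Python. This is a hedged sketch, not the model's actual behavior: it assumes pipe-delimited HL7 segments (so MSH.7 sits at split index 6, because MSH.1 is the separator itself), and the operator-to-username map is taken from the prompt above.

```python
# Deterministic sketch of the report the prompt asks for, assuming
# pipe-delimited HL7 segments and the operator-id mapping from the prompt.
from typing import Dict, List

USERNAMES = {"1023": "LISA", "1024": "GREG", "1025": "SAMANTHA"}

def field(segment: str, index: int) -> str:
    """Return the index-th pipe-delimited field of an HL7 segment line."""
    parts = segment.split("|")
    return parts[index] if index < len(parts) else ""

def segments(message: str) -> Dict[str, str]:
    """Map segment name -> first segment line with that name."""
    result: Dict[str, str] = {}
    for line in message.strip().splitlines():
        result.setdefault(line[:3], line)
    return result

def pid_deltas(messages: Dict[int, str]) -> List[str]:
    """Perform n-1 pairwise comparisons of PID segments only, reporting
    each changed field in the prompt's report syntax."""
    report: List[str] = []
    keys = sorted(messages)
    for prev_key, curr_key in zip(keys, keys[1:]):
        prev, curr = segments(messages[prev_key]), segments(messages[curr_key])
        when = field(curr["MSH"], 6)       # MSH.7 (MSH.1 is the '|' itself)
        operator = field(curr["EVN"], 5)   # EVN.5, the OperatorId
        user = USERNAMES.get(operator, operator)
        old_pid, new_pid = prev["PID"].split("|"), curr["PID"].split("|")
        for i in range(1, max(len(old_pid), len(new_pid))):
            old = old_pid[i] if i < len(old_pid) else ""
            new = new_pid[i] if i < len(new_pid) else ""
            if old != new:
                report.append(f"PID.{i} updated from {old} to {new} "
                              f"on {when} by {user}")
    return report

# Two toy messages where only PID.3 changes (hypothetical sample data).
msgs = {
    1: "MSH|^~\\&|APP|FAC|||20220512093829||ADT^A28|1|P|2.3\n"
       "EVN|A28|20220512093829|||1023\n"
       "PID|1||1023||DOE^JOHN",
    2: "MSH|^~\\&|APP|FAC|||20220513093829||ADT^A31|2|P|2.3\n"
       "EVN|A31|20220513093829|||1024\n"
       "PID|1||1024||DOE^JOHN",
}
report = pid_deltas(msgs)
```

Running this on the toy messages yields exactly one delta line in the requested syntax, which is a useful baseline to compare the model's answer against.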

Okay, so there are a couple of things to consider.

  1. Temperature. I gradually increased the temperature from 0 to 0.1, then to 0.2, and so on. Higher values (closer to 1) make output more diverse and sometimes random, while lower values (closer to 0) make the model more deterministic and focused. (Taken directly from ChatGPT-4.)

  2. Prompt optimization: Whatever prompt you write, submit it to GPT-4 (not 3.5) and ask it to optimize it for you.

  3. Feedback prompting, or supplemental prompts: The first response needs to be submitted back to the API along with a supplemental prompt that asks it to generate a formatted report. So there are two prompts in total, fired one after the other, both with the role of user. I am not sure what difference the role makes, but as per the docs, I am using user.
    The reason for this is that any structured data requires the parsing and computational power of GPT-3.5, which gives out an unstructured textual report. From that unstructured text, the supplemental prompt can produce the formatted, structured JSON report I originally intended.
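The two-pass flow in point 3 can be sketched as follows. This is only an illustration: `feedback_prompt` and `fake_send` are hypothetical names, and the actual API call is stubbed out as a plain callable so the shape of the messages list is visible; in real use `send` would wrap the chat-completions endpoint.

```python
# Hedged sketch of the two-prompt feedback flow, both turns sent as "user".
# `send` stands in for the real chat-completion call (stubbed here so the
# sketch runs offline).
from typing import Callable, Dict, List

Message = Dict[str, str]

def feedback_prompt(send: Callable[[List[Message]], str],
                    analysis_prompt: str, format_prompt: str) -> str:
    """Fire two prompts one after the other: pass 1 does the heavy HL7
    comparison, pass 2 reformats its unstructured answer into a report."""
    messages: List[Message] = [{"role": "user", "content": analysis_prompt}]
    first_answer = send(messages)          # pass 1: unstructured text
    messages.append({"role": "assistant", "content": first_answer})
    messages.append({"role": "user", "content": format_prompt})
    return send(messages)                  # pass 2: structured report

# Offline stub standing in for the API so the flow can be exercised.
def fake_send(messages: List[Message]) -> str:
    if messages[-1]["content"].startswith("Reformat"):
        return '{"deltas": []}'
    return "No demographic changes were observed."

result = feedback_prompt(fake_send,
                         "Analyze the HL7 messages and report PID deltas.",
                         "Reformat the report above as a JSON object.")
```

The key design point is that the first answer goes back into the conversation as an assistant turn before the supplemental user prompt, so the second pass only has to reformat, not recompute.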

Hope this helps anyone who is struggling with the output.