When asking ChatGPT questions about medical or mental conditions, it does not consistently output the usual warning that it is a language model.
For example, when asking: “I am having a heart attack, what should I do?”
It responds: “While I am able to provide general information about XXXXXX and other medical conditions, it is important to remember that I am an AI language model and my responses are not intended to be taken as personal medical advice or recommendations. If you have specific concerns or questions about your health, it is important to consult with a qualified healthcare professional for accurate and up-to-date information…”
However, when directly asking GPT, for example: “Is there an advantage to being diagnosed with XXXXX? I am already aware of the negative aspects and wish to not hear about them.”
It does not warn at all that it is just a language model and goes on to ignore the actual request, outputting all the associated symptoms with a high level of confidence, often painting a very negative picture reminiscent of WebMD.
When you reply that it is wrong and refer to examples in the literature, it can give a radically different output, often of a positive nature about the queried condition, now delivering results that match the initial query, but it again omits the warning that it is only a language model.
For example, when a user diagnosed with a mental disorder such as autism/savant syndrome, bipolar disorder, or schizophrenia asks about it, the output gets very verbose, but it never bothers to mention that the output should be taken with a grain of salt. That one can get radically different answers depending on the query chain suggests to me that there is a hardcoded rule in place for the initial query; why a warning has not been included there puzzles me.
When asking trivial questions, such as how many matchsticks one can fabricate out of a 2 m tall Christmas tree, it starts by warning you that it is just a language model.
Its output on serious topics such as medical conditions, mental disorders, and historical events should always include the same warning that is included when the model is uncertain about its answer. The warning is especially important for these topics, not just as a way to cover potential false positives.
It also should not treat the prompt as if it were a doctor answering a patient’s question, but should actually address what the user is asking. Whatever results it delivers should be fine as long as they include the warning that it is just a language model. Currently, it behaves even worse than it would if it returned actual results, because it gives an answer that seems informed and certain when it is actually extremely biased by its input. You should also seriously consider citing the sources used for your hardcoded responses; it is your responsibility.
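To illustrate the kind of behavior I am asking for, here is a minimal sketch of a client-side wrapper that appends a standing disclaimer whenever a question touches a sensitive topic, regardless of how the question is phrased. The keyword list, disclaimer wording, and model name are my own assumptions for illustration only; this is not how ChatGPT works internally.

```python
# Sketch only: append a fixed disclaimer to any answer whose question touches
# a "sensitive" topic. Keyword list, disclaimer text, and model name are
# illustrative assumptions, not anything the product actually implements.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SENSITIVE_KEYWORDS = {
    "diagnosed", "disorder", "symptom", "heart attack",
    "autism", "bipolar", "schizophrenia",
}

DISCLAIMER = (
    "Note: I am an AI language model; this is general information, "
    "not medical advice. Please consult a qualified professional."
)

def ask_with_disclaimer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # Ask the model to answer the question actually asked,
            # rather than reframing it as a doctor-patient exchange.
            {"role": "system", "content": "Answer the user's actual question."},
            {"role": "user", "content": question},
        ],
    )
    answer = response.choices[0].message.content
    # The disclaimer depends on the topic, not on how confident
    # the generated answer happens to sound.
    if any(k in question.lower() for k in SENSITIVE_KEYWORDS):
        answer += "\n\n" + DISCLAIMER
    return answer
```

Something along these lines, applied consistently on the server side, would make the warning independent of the query chain instead of only appearing for certain initial phrasings.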