Be careful when you scrutinize the dissertations and undergraduate theses of today's students, written with ChatGPT, as they will definitely be full of nonsense.
The images below show the kind of bad information ChatGPT provides.
A simple conversation: where is Finetti made?
First it tells me it is made in Romania. Then it tells me it is made in Greece. Then it tells me it is made in Bulgaria.
This is how ChatGPT mixes up information, erroneously. Please do not use AI for medical theses, because God forbid what a mess it would make!
After two years, it still mixes up the information. But I think it went wrong here for one reason: it searches the internet for information based on your IP. So ChatGPT took your Romanian IP and searched only within that country. Then you gave it Greece, and it searched for the information in Greece. Then you asked if it is produced in Bulgaria, and it searched in Bulgaria. And so on. I don't know, but it may be that each of these countries has its own Finetti production site.
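If that hypothesis is right, the failure is easy to reproduce in miniature. Below is a toy Python sketch, assuming a search scoped to the user's apparent country; the corpus, region tags, and function name are invented for illustration and are not taken from any real ChatGPT internals:

```python
# Toy model of IP-scoped search: if the assistant only looks at pages
# "local" to the user's apparent country, each country's local pages can
# give a different "made in ..." answer. All documents and region tags
# below are invented for illustration.

CORPUS = [
    {"region": "RO", "text": "Finetti is produced at a plant in Romania."},
    {"region": "GR", "text": "Finetti is manufactured at a facility in Greece."},
    {"region": "BG", "text": "Finetti is made at a production site in Bulgaria."},
]

def region_scoped_search(query: str, user_region: str) -> str:
    """Return the first matching document, but only from the user's region,
    mimicking a search restricted by IP geolocation."""
    for doc in CORPUS:
        if doc["region"] == user_region and "finetti" in doc["text"].lower():
            return doc["text"]
    return "No local result found."

# The same question yields three different answers, one per apparent location.
for region in ("RO", "GR", "BG"):
    print(region, "->", region_scoped_search("where is Finetti made?", region))
```

Each "user" asks the identical question and gets a different, locally plausible answer, which would match the Romania/Greece/Bulgaria sequence above.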
But really, you have to be very careful with the books and specialized articles you write with it, because they can contain very big errors. And in medicine, law, accounting, physics, chemistry, and so on, mistakes are not acceptable.
Also, it can very easily get things wrong if the documents it was trained on contain contradictory information, or if it lacks the information and has to infer it.
For example, if it has one document that says “first aid in case you get punched in the nose” and another that says “physical aggression is prohibited”, and you ask it “is it legal to punch someone in the nose?”, it answers “Yes, it is legal” and cites the first-aid document.
ChatGPT probably treats the first-aid document, where punching in the nose is explicitly mentioned, as a “stronger” match than the prohibition of physical aggression in the other document, which it does not associate with punching in the nose. A toy sketch of that kind of keyword-overlap ranking follows below.
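Here is a minimal Python sketch of that ranking effect, assuming a naive keyword-overlap retriever; real systems use learned embeddings rather than word counts, but the failure mode is analogous. Both documents are the invented examples from above:

```python
# Why a keyword-overlap retriever ranks the first-aid document above the
# prohibition: the legality question simply shares more words with the
# first-aid text. Documents and scoring are invented for illustration.

DOCS = {
    "first_aid": "first aid in case you get punched in the nose",
    "law": "physical aggression is prohibited",
}

def overlap_score(query: str, text: str) -> int:
    """Count how many distinct query words also appear in the document."""
    return len(set(query.lower().split()) & set(text.lower().split()))

query = "is it legal to punch someone in the nose"
scores = {name: overlap_score(query, text) for name, text in DOCS.items()}
best = max(scores, key=scores.get)

print(scores)              # {'first_aid': 3, 'law': 1}
print("retrieved:", best)  # 'first_aid' wins, despite saying nothing about legality
```

The document about the law shares only the word “is” with the question, while the first-aid text shares “in”, “the”, and “nose”, so the irrelevant document wins the retrieval and ends up cited as the answer.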