Welcome to the community!
Have you read up on how large language models work?
There are still a lot of “hallucinations” - i.e. the model confidently giving bad information - and how often that happens depends on your settings.
Are you using ChatGPT? Or another model?
Another useful link is the ELIZA Effect…
ETA:
GPT-3 predicts plausible responses: text that reads as reasonable but may not be true. This is called information hallucination and is an open problem in the research space. For example, if you ask it to ‘Describe what it was like when humans landed on the Sun’, it will respond as though that actually happened. If you ask it to complete some task (e.g. send an email or print the current directory), it may respond as though it has some power to act outside the conversation. It doesn’t: GPT is only a text-in, text-out system and has no additional capabilities.
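To make the “text-in, text-out” point concrete, here is a minimal sketch using the legacy openai Python package (pre-1.0 Completion API); the model name, prompt, and placeholder API key are just illustrative assumptions, not anything from your setup. Whatever the prompt asks for, all that ever comes back is generated text:

```python
import openai

# Assumes the legacy openai Python package (0.x) and a real API key in place of the placeholder.
openai.api_key = "YOUR_API_KEY"

# The model receives text and returns text -- nothing is executed on your machine.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Print the current working directory.",
    max_tokens=50,
    temperature=0,
)

# The "result" is just predicted text; no directory was ever read.
print(response.choices[0].text)
```

If the reply looks like a directory listing or a confirmation that an email was sent, that is still just text the model predicted, not an action it performed.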
In this case, the advice you received from ChatGPT and Davinci may simply be incorrect. Please refer to our documentation for guidance.