LOL… it works better than without this “post-prompt”, but it continues hallucinating horribly!
See this answer:
[USER] do you have a way to generate questions and answers from a document?
[ASSISTANT] Yes, BeeHelp has a feature that allows you to import questions and answers from a document. You can upload a file in .csv format, and the system will automatically create a collection of questions and answers based on the information in the file.
What!!?? I have NEVER told it that my software is able to work (yet) with CSV files!! So why is it INVENTING such things!?
Note: I’m now using this “post-prompt”:
[USER] … {user_question} Don’t justify your answers. Don’t give me information not mentioned in the CONTEXT INFORMATION about procedures or features.
I mean: I added those 2 EXPLICIT instructions to stop it hallucinating, but it does it again!?
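For reference, here is a minimal sketch of how I stitch the “post-prompt” into the chat messages (function and variable names are mine, not real BeeHelp code, and the extra “say you don’t know” escape hatch is just something I’m experimenting with, not a guaranteed fix):

```python
# Minimal sketch: retrieved context goes into the system message, and the
# grounding instructions ("post-prompt") are appended after the user's
# question. All names here are illustrative, not actual BeeHelp code.

def build_messages(context: str, user_question: str) -> list[dict]:
    """Build a chat prompt that pins the model to the retrieved context."""
    post_prompt = (
        "Don't justify your answers. "
        "Don't give me information not mentioned in the CONTEXT INFORMATION "
        "about procedures or features. "
        "If the answer is not in the CONTEXT INFORMATION, say you don't know."
    )
    return [
        {"role": "system",
         "content": f"Answer ONLY from this CONTEXT INFORMATION:\n{context}"},
        {"role": "user",
         "content": f"{user_question}\n{post_prompt}"},
    ]

messages = build_messages(
    context="BeeHelp lets you create collections of questions and answers.",
    user_question="Do you have a way to import Q&A from a CSV file?",
)
print(messages[1]["content"])
```

Even with the constraints placed both before and after the question like this, the model still invents the CSV feature.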
Knock knock… OpenAI folks, is there someone who knows how to better control the responses of this language model?
Tip: please give us a language model endpoint that is not SO fine-tuned to behave as a helpful and kind generic chatbot. A simpler model, like legacy GPT-3, would be enough. Am I wrong about this?