So I'm using the GPT-4 API (Azure) to create reports for clients, and it constantly adds disclaimers and other unwanted text at both ends of the report. I've explained in the prompt that the customer is trained, is a subject matter expert, and is aware of GPT's limitations and occasional hallucinations, and that there is really no need for the extra text currently sitting outside the report. I even described the audience type and said it could put a disclaimer inside the report, but framed as an expert-specific caveat rather than "consult a professional". Whatever I try, GPT is having none of it and refuses to budge. Is there a prompt, something in the config, or some company whitelist that controls this? The responses look a bit amateur hour… I'm writing a regex to deal with the problem, but that seems a bit extreme for (just a report) prompt.
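For reference, this is roughly the post-processing I'm sketching. It's just a rough cut, and it assumes the disclaimers show up as separate leading or trailing paragraphs containing phrases like the ones below (that phrase list is illustrative, not exhaustive):

```python
import re

# Phrases that tend to mark the boilerplate I keep seeing; extend as needed.
DISCLAIMER = re.compile(
    r"(as an ai|consult a (qualified )?professional|"
    r"please note that|it'?s important to (note|remember))",
    re.IGNORECASE,
)

def strip_disclaimers(report: str) -> str:
    """Drop leading/trailing paragraphs that look like generic disclaimers."""
    paragraphs = [p for p in report.split("\n\n") if p.strip()]
    # Trim from the front while the first paragraph looks like a disclaimer.
    while paragraphs and DISCLAIMER.search(paragraphs[0]):
        paragraphs.pop(0)
    # Trim from the back the same way.
    while paragraphs and DISCLAIMER.search(paragraphs[-1]):
        paragraphs.pop()
    return "\n\n".join(paragraphs)
```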
Yeah, it's annoying. We have a thread here, How to stop models returning "preachy" conclusions, where we discussed strategies for intercepting that stuff.
It is indeed. "Preachy conclusions" … hilarious. Thanks
I've just read through the other threads on the topic and agree that the behaviour is annoying. In practice, I've found that simply including a specific sentence in my prompt along the lines of "Your output consists only of…" has been pretty successful in getting rid of it (using the GPT-4-Turbo API via Azure). My prompt also gives GPT an expert persona, though I'm not sure that makes a difference for this particular phenomenon.
It might be use-case specific, but that simple phrase did it for me. That said, the solutions proposed in the other threads are also pretty good and worth a look.
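For what it's worth, here's a minimal sketch of how I wire that sentence in. The endpoint, key, API version, and deployment name are placeholders for your own Azure resource, and the exact prompt wording is just illustrative:

```python
from openai import AzureOpenAI

# Placeholder values; substitute your own Azure OpenAI resource details.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-KEY",
    api_version="2024-02-01",
)

SYSTEM_PROMPT = (
    "You are a senior analyst writing for other subject matter experts. "
    "Your output consists only of the report body itself: no preamble, "
    "no closing remarks, and no generic disclaimers. Any caveats belong "
    "inside the report, phrased for an expert audience."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # your Azure deployment name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Write the client report for <topic>."},
    ],
)
print(response.choices[0].message.content)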
Thanks jr, I tried something similar… I think it's because the prompt is focused on generating new content; the other areas of reporting don't throw up that problem. But your point gave me an idea: provide a pre-existing document to be improved upon. GPT might be more forgiving in that scenario than the create-from-scratch scenario I'm presenting it with…
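Something like this is what I'm going to try, just a sketch of the "revise an existing document" framing; the draft and prompt text are purely illustrative:

```python
# Untested idea: give the model an existing draft to revise, to see whether
# it is less inclined to wrap the output in disclaimers when editing rather
# than creating from scratch.
existing_draft = """# Q3 Client Report
Summary of findings goes here.
"""

messages = [
    {
        "role": "system",
        "content": (
            "You are revising an existing report written by one expert for "
            "another. Return only the revised document, nothing before or "
            "after it."
        ),
    },
    {
        "role": "user",
        "content": "Revise the report below with the updated figures:\n\n"
        + existing_draft,
    },
]
```

These messages would go into the same chat.completions.create call as in jr's example above.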