Friends, I would appreciate it if anyone could give me a suggestion.
When I ask GPT-4 to output an ancient Chinese poem, it often makes one up. When I point out its mistake, it gives another wrong answer. I tried asking it not to output anything without first confirming the answer, but that didn't seem to work.
So, what prompt are you using, and what is the name of the poem? Have you tried simply asking for the poem by name?
Keep in mind, there is always a risk that GPT will "hallucinate," i.e., make up things that don't exist or aren't true. This can be mitigated by phrasing your request in particular ways (prompting), but you should always be careful and recognize that these are inherent limitations of the model.
Its training on Chinese-language data may also be limited. We don't know what was fed into the model, so depending on the poem's obscurity (and the characters it contains), it may simply not appear in the training data at all.