One of the components of a project I am working on is a paraphrasing prompt. With a few-shot approach the results are amazing, except for one detail. Take a look at the prompts below:
O: Write a sentence that paraphrases the following sentence:
S: The time has come to pay the rent.
R: It is about time to give money to the landlord.
O: Write a sentence that paraphrases the following sentence:
S: We have to catch the train before it is too late.
R: We need to hurry and get to the train station before the train leaves.
S: Whenever it is raining we have to wear raincoats.
R: We have to put on raincoats whenever it rains.
S: What is a knowledge graph?
R: A knowledge graph is a database of interconnected information about real-world entities.
O is for Order, S for Sentence, R for Response. I gave GPT-3 only one shot (the first OSR).
Only the last response was inexact, probably because of its interrogative nature. Even removing the question mark and the word “What” still leaves GPT-3 producing unsatisfying answers.
Do you have any idea about a well-formed prompt for paraphrasing a question?
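I do not have a definitive answer, but here is a sketch of one thing worth trying (the function name and the exact Order wording are my own untested suggestions, not a recipe I have benchmarked): make the Order explicitly forbid answering, and use a one-shot example whose R is itself a question, so the model sees that a question maps to a question.

```python
def build_paraphrase_prompt(target_question: str) -> str:
    """Assemble a one-shot O/S/R prompt whose Order makes it explicit
    that the question must be rephrased, not answered."""
    order = ("O: Rewrite the following question as a different question "
             "with the same meaning. Do not answer it.\n")
    # One worked example where R is itself a question, so a question
    # in S is mapped to a question in R rather than to an answer.
    example = (order
               + "S: What is a knowledge graph?\n"
               + "R: How would you define a knowledge graph?\n")
    return example + order + f"S: {target_question}\nR:"

print(build_paraphrase_prompt("Where can I find the train schedule?"))
```

My guess is that the two ingredients doing the work are the explicit “Do not answer it” in the Order and the question-shaped R in the example, but that is only a hypothesis.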
That is what I like: not having to re-order. I noticed that a rebarbative prompt, like one a child is summoned with, might be effective. I have profound questions about generalizing about generalization, which GPT-3 is able to perform when a well-formed prompt is submitted.
My concern was only about an LLM being able to distinguish between paraphrasing a sentence that is a question and perceiving it as a question to be answered. Maybe being able to make this distinction is one step toward AGI and artificial consciousness.
I am following this topic with much interest as I have a similar requirement, except that I need multiple paraphrases for a given statement (the more different, the better).
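For the multiple-paraphrases case, one cheap trick (a sketch under my own assumptions, not something I have benchmarked) is to sample several completions and keep the ones that share the fewest words with the original, for example by Jaccard distance over word sets:

```python
import re

def jaccard_distance(a: str, b: str) -> float:
    """1 minus the Jaccard similarity of the two sentences' word sets."""
    sa = set(re.findall(r"\w+", a.lower()))
    sb = set(re.findall(r"\w+", b.lower()))
    return 1.0 - len(sa & sb) / len(sa | sb)

def most_diverse(original: str, candidates: list[str], n: int = 3) -> list[str]:
    """Return the n candidate paraphrases that overlap least with the original."""
    return sorted(candidates,
                  key=lambda c: jaccard_distance(original, c),
                  reverse=True)[:n]
```

Word overlap is a crude proxy for “different” (it ignores meaning entirely), so for stricter filtering you would want an embedding-based distance instead; this is just the simplest self-contained version.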