What is the primary subject in the question: “List all student names having more than 50% score”. Respond with just the subject.
Its response was: “student names having more than 50% score”
This may be grammatically correct (I'm not sure), but for practical purposes I'd expect the response to be just 'student' or 'student name'.
Google Bard (while still experimental) responded as below:
Here is a breakdown of the question:

* **"List all"** - This is the command to list all student names.
* **"student names"** - This is the primary subject of the question.
* **"having more than 50% score"** - This is the secondary subject of the question. It is the criteria used to filter the list of student names.
Is there a way to make the OpenAI LLM respond more accurately?
I'd argue that "student names" is the correct response to the question you posed.
If you wanted a semantic breakdown of the sentence, you can ask for one. I'd also argue that the Bard answer is not the correct response to the question posed.
I think there is a degree of subjectivity to this question-and-response pair: you asked for a "subject", not "subject(s)", and "List all …" is not a question, it is an instruction.
If Bard is working for you then that’s great, but I find the result less than ideal.
Yes, I agree. I was expecting 'student names' too.
The OpenAI GPT chat interface (at least with GPT-3.5) gives the entire 'predicate' of the sentence as the 'primary subject'.
So was wondering if there is some configuration that would improve this.
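There is no single configuration switch for this, but few-shot prompting usually steers the model toward the short answer you expect. Below is a minimal sketch of such a payload; the system message, the example question, and the model name are illustrative assumptions, not anything from this thread:

```python
# Sketch: pin down what "primary subject" should mean via few-shot examples.
# All example strings and the model name are illustrative assumptions.

def build_messages(question: str) -> list[dict]:
    """Build a chat payload that nudges the model toward a bare head noun."""
    return [
        {
            "role": "system",
            "content": (
                "You are a grammar assistant. Reply with only the bare head "
                "noun phrase of the subject, with no modifiers or clauses."
            ),
        },
        # One worked example fixes the expected answer format.
        {"role": "user",
         "content": 'Primary subject of: "Show all orders shipped last week"'},
        {"role": "assistant", "content": "orders"},
        # The actual question goes last.
        {"role": "user", "content": f'Primary subject of: "{question}"'},
    ]

messages = build_messages("List all student names having more than 50% score")

# With the official openai client, this payload would be sent roughly as:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-3.5-turbo", messages=messages, temperature=0
#   )
# A temperature of 0 makes the short, deterministic answer more likely.
```

In my experience the few-shot example matters more than any sampling parameter here; the model imitates the answer shape it was just shown.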