Question-answer generation with a given context

Hi, I am looking to use gpt-3.5-turbo for question-answer generation given a context paragraph; ideally it should generate a few questions.

I am wondering if anyone has an idea of what the prompt would look like if I want to use few-shot examples to guide it.

Also, what tokenizer is gpt-3.5 using? I would like to get the token length of the context in case chunking is required.

Thanks!

All the chat completion models (so far) seem to be using cl100k_base.
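
For counting tokens, the tiktoken library exposes that encoding directly. A minimal sketch of counting and chunking a context paragraph (the 2,000-token budget is an arbitrary placeholder):

```python
# Count tokens in a context paragraph, assuming the cl100k_base encoding
# used by gpt-3.5-turbo.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

context = "Your context paragraph goes here..."
tokens = encoding.encode(context)
print(f"Context is {len(tokens)} tokens")

# If the context exceeds your budget, split it into chunks before prompting.
MAX_TOKENS = 2000  # arbitrary budget; adjust for your prompt + completion size
if len(tokens) > MAX_TOKENS:
    chunks = [
        encoding.decode(tokens[i : i + MAX_TOKENS])
        for i in range(0, len(tokens), MAX_TOKENS)
    ]
```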

I’ve had a hard time generating “good” questions based on context. I end up settling on prompts of the form “Generate two questions on this paragraph, testing reading comprehension:” and the questions are invariably “what is the X and what are some uses of it?”
I’ve tried few-shot examples, variations without the “reading comprehension” phrasing, and a few other things, but it hasn’t really moved the needle.
If you find something good, I’d love to see it 🙂

Hi!

I just went in “blind” and created a super simple prompt to generate the shots for the final question-answer example. Take a look at the last message for a quick evaluation.

I’ve got to say, this seems so easy that I might be missing something.

Give me a heads-up if this isn’t what you’re looking for.
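
For a concrete starting point, a few-shot setup along these lines might look roughly like this, using the pre-1.0 openai Python library; the system prompt, example context, and example questions are invented placeholders, so swap in shots from your own domain:

```python
# Sketch of a few-shot question-generation prompt with the Chat Completions API.
import openai

system = (
    "You generate 3 reading-comprehension questions with answers "
    "from the paragraph the user provides. Format each as 'Q:' / 'A:'."
)

# One example shot (add more to steer style and difficulty).
example_context = (
    "The Great Barrier Reef is the world's largest coral reef system, "
    "stretching over 2,300 km off the coast of Queensland, Australia."
)
example_output = (
    "Q: Roughly how long is the Great Barrier Reef?\n"
    "A: Over 2,300 km.\n"
    "Q: Off which Australian state does the reef lie?\n"
    "A: Queensland."
)

new_context = "Your context paragraph goes here..."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": example_context},
        {"role": "assistant", "content": example_output},
        {"role": "user", "content": new_context},
    ],
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```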

One approach would be to first do a query that asks for all the main points in the content to be summarized as single-sentence bullet points. Then submit another request that says:

“for each point, rephrase it as a fill-in-the-blank question by removing one of the most important nouns from the sentence and replacing it with ???, or by rephrasing it as a question”

If all else fails, try: “You are a school teacher making a 10-question test on the following material, so first find the 10 key points, and then ask a question about each one.”
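
A rough two-call sketch of the bullet-point approach above, again with the pre-1.0 openai library; the `chat` helper and exact prompt wording are just for illustration:

```python
# Step 1: summarize to bullet points. Step 2: turn each point into a question.
import openai

def chat(prompt: str) -> str:
    """Single-turn helper around the Chat Completions API."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

content = "Your source material goes here..."

# Step 1: extract the main points as single-sentence bullets.
bullets = chat(
    "Summarize all the main points in the following content as "
    f"single-sentence bullet points:\n\n{content}"
)

# Step 2: turn each bullet into a fill-in-the-blank or rephrased question.
questions = chat(
    "For each point below, rephrase it as a fill-in-the-blank question by "
    "removing one of the most important nouns and replacing it with ???, "
    f"or by rephrasing it as a question:\n\n{bullets}"
)

print(questions)
```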