Summarization / Insight Generation Output: First Come, Most Important?

I’m thinking about how GPT models behave when we ask them to generate x bullet points as a summary, or y insights, from a long text.

Let’s say you have a corpus of 3,000 customer reviews, and you want to summarize it or generate some insights from it, getting a few bullet points as output.

With a prompt like:

> Summarize the reviews in 3 bullet points:

or:

> Based on these reviews, generate 4 insights:

In this case, can we say the first point the model generates tends to be more “important” than the later ones?
For example, can we say that the topic covered by the first bullet point appears more frequently in the corpus?

Are there any explanations or hypotheses about this question?
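One way to make this question concrete is to check, after the fact, whether the bullets come out ordered by corpus frequency. Here is a minimal sketch with a toy corpus and hand-picked keywords; a real test would need proper keyword extraction and the actual model output, both of which are assumed away here:

```python
import re

def topic_frequency(reviews, keyword):
    """Count how many reviews mention the keyword (case-insensitive)."""
    pattern = re.compile(re.escape(keyword), re.IGNORECASE)
    return sum(1 for review in reviews if pattern.search(review))

# Toy corpus standing in for the 3,000 reviews.
reviews = (
    ["Battery life is terrible."] * 5
    + ["Great screen quality!"] * 3
    + ["Shipping was slow."] * 2
)

# Suppose the model returned three bullets in this order, and we extracted
# one keyword per bullet (hand-picked here for the sketch).
bullet_keywords = ["battery", "screen", "shipping"]

counts = [topic_frequency(reviews, kw) for kw in bullet_keywords]
print(counts)  # → [5, 3, 2]

# True iff the bullets are ordered by corpus frequency.
print(all(a >= b for a, b in zip(counts, counts[1:])))  # → True
```

Run over many summaries of shuffled or resampled corpora, a check like this would give an actual answer rather than an intuition.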


I think the best way to think about this is to ask: what would you do?

If you were a large neural net, what would your reasoning be for the first bullet point or insight? Mine would be that it’s the first thing of note I saw in the document. If I were diligent, I might even read the whole thing and pick out the 3 or 4 main points: which things stand out the most? What topics are all the other topics pointing at? What is constantly referred to in the text? What key terms are used over and over again?

The AI has learned what humans do with text and language and produced a model of that. So when you ask “will the first bullet point be the most mentioned?”, I’d answer by asking: is that what happens with most bullet points you read? I’d actually say it’s probably not the most mentioned, but the first chronologically — or perhaps reverse chronologically, depending on how your memory works.
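The two hypotheses (first-mentioned vs. most-mentioned) can in principle be separated by a shuffle test: re-order the corpus and see whether the first bullet moves. A sketch, with a hypothetical stub standing in for the real model call:

```python
from itertools import permutations

def summarize_first_bullet(reviews):
    """Hypothetical stub in place of an LLM call: returns the topic of the
    first review it reads, i.e. a purely position-driven summarizer."""
    return reviews[0]

reviews = ["battery", "screen", "shipping", "battery", "battery"]

# Run the summarizer over every ordering of the toy corpus.
first_bullets = {summarize_first_bullet(list(p)) for p in permutations(reviews)}

# A position-driven summarizer's first bullet changes with the order:
print(sorted(first_bullets))  # → ['battery', 'screen', 'shipping']
# A frequency-driven one would always surface 'battery' (3 of 5 reviews).
```

With a real model you would shuffle a sample of orderings rather than enumerate them all, but the logic is the same: if the first bullet is stable under shuffling, frequency is driving it; if it tracks whatever comes first, position is.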
