Using GPT-4 8k to emulate a scientific reviewer

Hi, I am using the API to create automated reviews of short papers. These can be a useful starting point for actual proper reviews. However, the 8k context limit restricts the approach to small papers of 2 to 3 pages.

I wonder if there is a way to use openai.ChatCompletion iteratively to process the paper page by page within the limit. Otherwise I will have to wait for 32k access :slight_smile:
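One common workaround is a rolling-summary loop: split the paper into page-sized chunks, feed each chunk to the model together with condensed review notes carried over from the previous call, and do one final call to turn the accumulated notes into a review. Here is a minimal sketch of that idea, assuming the pre-1.0 `openai` package (the `ChatCompletion` interface mentioned above) and approximating page boundaries by paragraph splits; the function names, the character budget, and the prompts are illustrative, not a tested recipe.

```python
def chunk_pages(text, max_chars=6000):
    """Split text into roughly page-sized chunks on paragraph boundaries.

    max_chars is a crude stand-in for a token budget; for precise control
    you would count tokens (e.g. with tiktoken) instead of characters.
    """
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks


def iterative_review(paper_text, model="gpt-4"):
    """Fold the paper through the model one chunk at a time, carrying
    forward condensed notes so each call fits the 8k context window."""
    import openai  # pip install "openai<1" for the ChatCompletion interface

    notes = ""
    for chunk in chunk_pages(paper_text):
        resp = openai.ChatCompletion.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": ("You are a scientific reviewer. Update your "
                             "running review notes with the new excerpt; "
                             "keep the notes under 500 words.")},
                {"role": "user",
                 "content": f"Notes so far:\n{notes}\n\nNext excerpt:\n{chunk}"},
            ],
        )
        notes = resp["choices"][0]["message"]["content"]

    # Final pass: turn the accumulated notes into a polished review.
    final = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user",
                   "content": ("Write a structured review of the paper "
                               f"based on these notes:\n{notes}")}],
    )
    return final["choices"][0]["message"]["content"]
```

The trade-off is that each page is judged against a compressed view of what came before, so cross-page details can get lost; capping the notes' length is what keeps every call inside the window regardless of paper size.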

Here is an example of what I get for this paper: Biases in Author Recognition | blog@CACM | Communications of the ACM

(base) cbm@MacBook-Pro-Touch2 autoreview % python 3535190.pdf

Title: A Review of “Biases in Author Recognition” by Carlos Baquero and Rosa Cabecinhas

The reviewed paper, “Biases in Author Recognition,” takes a look at various factors and biases affecting author recognition and contribution attribution in academic papers. The paper begins by exploring the economy of attention – selective attention in the context of reading papers – and discusses alphabetic order and contribution/role order as prevalent authorship conventions. It then delves into the impact of preferential attachment, the Matthew effect, and the Matilda effect on author recognition.

On the topic of alphabetic order and contribution/role order, the paper cites research by Fernandes and Cortez (2020) that shows a lack of a clear trend in author order conventions in computer science. While providing examples of journals that require author contribution statements, the authors could have explored more solutions to address the ambiguity in authorship-related practices within the computer science domain.

When discussing preferential attachment in citation practices, Baquero and Cabecinhas cite Barabási and Albert (1999) and Klemm and Eguíluz (2002) and connect the concept to the popularity of academic papers. Nonetheless, the authors might have considered a more recent analysis of citation practices in computer science specifically or proposed ways to mitigate the biases, such as alternative citation metrics.

Regarding the Matthew effect, the authors provide a clear explanation, citing Merton (1968) and Merton (1988). While they discuss how the Matthew effect negatively impacts less renowned or younger authors, further exploration into strategies to counter this tendency in author attribution would enrich the paper.

The section on early success in grant applications provides interesting insights, with Bol, de Vaan, and van de Rijt’s (2018) research showing that early grant winners subsequently secure double the funding of those just below the funding threshold. However, the authors could offer more actionable advice to early-career researchers to overcome such biases.

The final section introduces the Matilda effect and brings forth some poignant examples of women’s omission in the history of science from Rossiter (1993). Despite acknowledging the progress made in addressing gender bias, readers might benefit from a deeper analysis of methods to reduce the influence of the Matilda effect and promote equality in author recognition.

Overall, the paper presents a solid analysis of various biases in author recognition, exposing well-established practices in the computer science domain. However, the authors could have taken the opportunity to explore potential solutions more concretely, thus making the paper more actionable for researchers affected by these issues.