Discussion thread for "Foundational must read GPT/LLM papers"

It’s an interesting paper, but it looks like an attempt to lay claim to a fairly simple idea: interleaving requests for authoritative material with reasoning about the results improves the accuracy of responses.
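To make the idea concrete, here’s a minimal sketch of that interleaved loop. The `llm()` and `search()` helpers are hypothetical stand-ins for a chat-completion call and a document-retrieval call, not any specific API:

```python
def llm(prompt: str) -> str:
    """Stand-in for a chat-completion API call (hypothetical)."""
    raise NotImplementedError

def search(query: str) -> str:
    """Stand-in for retrieval from an authoritative source (hypothetical)."""
    raise NotImplementedError

def answer(question: str, max_steps: int = 4) -> str:
    """Interleave retrieval with reasoning until the model commits to an answer."""
    evidence = ""
    for _ in range(max_steps):
        step = llm(
            f"Question: {question}\n"
            f"Evidence so far:\n{evidence}\n"
            "Reply 'SEARCH: <query>' to request more material, "
            "or 'ANSWER: <answer>' once the evidence is sufficient."
        )
        if step.startswith("ANSWER:"):
            return step[len("ANSWER:"):].strip()
        # Fetch the requested material and fold it into the context.
        evidence += "\n- " + search(step[len("SEARCH:"):].strip())
    # Out of steps: force a final answer from the evidence gathered so far.
    return llm(f"Question: {question}\nEvidence:\n{evidence}\nFinal answer:")
```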

Did it really require a paper? I’m also deeply skeptical of any conclusion beyond “this seems to work better in some situations.”

We have a Discord where I’ve provided some extensive examples of this, but one technique I’d like to see a ‘paper’, or at least a blog post, about is just having GPT-4 generate several alternative responses and then pick the best one. The advantage of this approach is that it can be applied to any Q&A task, and evaluation would be straightforward.
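Roughly what I have in mind, as a minimal sketch: sample several candidates at high temperature, then have the model select one greedily. The `llm()` helper and its `temperature` parameter are assumptions standing in for whatever completion API you use:

```python
def llm(prompt: str, temperature: float = 0.7) -> str:
    """Stand-in for a chat-completion API call (hypothetical)."""
    raise NotImplementedError

def best_of_n(question: str, n: int = 5) -> str:
    # Sample n diverse candidate answers at high temperature.
    candidates = [llm(f"Answer concisely: {question}", temperature=1.0)
                  for _ in range(n)]
    # Ask the model, greedily, to pick the best candidate by number.
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    choice = llm(
        f"Question: {question}\n"
        f"Candidate answers:\n{numbered}\n"
        "Reply with only the number of the best answer.",
        temperature=0.0,
    )
    return candidates[int(choice.strip()) - 1]
```

Picking by number keeps the selection step mechanical to check, which is what makes evaluation of the overall technique straightforward.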

It’s a fairly simple idea, so hopefully a paper has already been written on it. GPT-4 wasn’t much help finding one, so if it exists it was probably written after its training cutoff.