Build Talk: State of GPT - Andrej Karpathy

Massively insightful talk by Andrej… He goes over how GPT is trained, gives some love to LLaMA, and offers some prompting tips. MUST WATCH!!!


Here's the YouTube link for easier access:

My biggest learning on prompting is that GPT (and LLMs in general) are trained on data of varying quality. The goal of an LLM is to model that entire distribution - a fancy way of saying it isn't designed to give only really good answers, but to be able to produce good, really good, bad, and really bad ones. So if you just want the best solution to a problem, it helps to use prompts like "how would someone with the communication skills of Richard Feynman teach xyz". An LLM could just as well answer "how would someone with really bad communication skills teach xyz" - and there may be cases where that's what you want.
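To make the idea concrete, here's a minimal sketch of that quality-conditioning trick: wrap the question in a persona framing so the model samples from the part of its learned distribution you actually want. The helper name and prompt wording are just my own illustration, not anything from the talk.

```python
def condition_prompt(question: str, persona: str) -> str:
    """Frame a question as being answered by a given persona,
    steering the model toward that region of its training distribution."""
    return f"How would {persona} answer the following?\n\n{question}"

question = "Explain how transformers use attention."

# Steer toward the high-quality end of the distribution...
good = condition_prompt(
    question, "someone with the communication skills of Richard Feynman"
)

# ...or, if that's genuinely what you need, the low-quality end.
bad = condition_prompt(
    question, "someone with really bad communication skills"
)

print(good)
```

The same question goes in both times; only the persona prefix changes which slice of the distribution the model imitates.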
