We are Eureka Labs and we are building a new kind of school that is AI native.
How can we approach an ideal experience for learning something new? For example, in the case of physics one could imagine working through very high quality course materials together with Feynman, who is there to guide you every step of the way. Unfortunately, subject matter experts who are deeply passionate, great at teaching, infinitely patient and fluent in all of the world’s languages are also very scarce and cannot personally tutor all 8 billion of us on demand.
If you don’t know who Karpathy is, here’s a blurb from Google:
Andrej Karpathy is a Slovak-Canadian computer scientist who served as the director of artificial intelligence and Autopilot Vision at Tesla. He co-founded and formerly worked at OpenAI, where he specialized in deep learning and computer vision.
I highly recommend this to anyone planning to use any sort of LLM, such as GPT. In my opinion, Karpathy has produced the highest-quality, easiest-to-follow, start-to-finish training materials for Large Language Models.
Update June 25. To clarify, the course will take some time to build. There is no specific timeline. Thank you for your interest but please do not submit Issues/PRs.
I suspect that they will be made available via GitHub, as that is how the first class was released. However, do not be surprised if there is a new GitHub repository for these, since it is not uncommon for such materials to be posted in a repository tied to an organization rather than to an individual user.
The idea behind GitHub repositories for classes like this is that the course becomes a living document: it can be updated over time, and one can go back to previous versions when needed.
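As a minimal sketch of what "going back to previous versions" looks like in practice, here is a self-contained git demo. The repository name, file, and tag are hypothetical placeholders, not anything from the actual course:

```shell
# Demo: a "living course" repo where an earlier snapshot can be restored.
# Everything here is a made-up example (course-demo, notes.md, v1.0).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q course-demo && cd course-demo
git config user.email demo@example.com
git config user.name demo

echo "lesson 1" > notes.md
git add notes.md && git commit -qm "initial release"
git tag v1.0                      # mark the first published version

echo "lesson 1 (revised)" > notes.md
git commit -qam "revise lesson"   # the course keeps evolving

git checkout -q v1.0              # go back to the earlier version
cat notes.md                      # prints: lesson 1
```

Tags (or simply commit hashes) let a learner pin the exact version of the materials they started with, even as the course is updated.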
As the post was an announcement, we should just wait for more to come.
Personally, if I were not in the middle of another course, I would spend time with this one.