Are LLMs the Beginning or End of NLP?


I hadn’t heard of the Ghostbuster AI detection tool before today…

Dan Klein (UC Berkeley)…

Large Language Models and Transformers
I’ll talk about three major tensions in NLP resulting from rapid advances of large language models. First, we are in the middle of a switch from vertical research on tasks (parsing, coreference, sentiment) to the kind of horizontal tech stacks that exist elsewhere in CS. Second, there is a fundamental tension between the factors that drive machine learning (scaled, end-to-end optimization of monoliths) and the factors that drive human software engineering (modularity, abstraction, interoperability). Third, modern models can be stunning on some axes while showing major gaps on others – they can, in different ways, simultaneously be general, fragile, or dangerous. I’ll give an NLP perspective on these issues along with some possible solution directions.

Well, here’s my no-time-to-watch-the-video-right-now hot-take on the question…

  • Natural language processing has been a thing for all of about 70 years.
  • Natural language processing has more time ahead of it than behind it.
  • In 100 years, LLMs will be but a blip in the storied history of natural language processing — a thing we barbaric cave-people used before “real” natural language processing became a thing.

It’s like asking in 1991 if the Neo Geo AES was the beginning or end of video games.

I think we have barely scratched the surface of what is possible, and if quantum computing ever becomes an accessible tool, everything we can currently imagine in the realm of NLP will seem absurdly quaint.


Yeah, quick summary: he says we’re at the end of the beginning…

Just thought others might enjoy when they have the time.