The evidence shows there are real challenges in building suitable, successful systems, but it also shows the approach is far from useless. Ironically, nothing about AI is absolute.
I feel ya man.
This risk is not new. Since the early ’80s, the debate over aftermarket providers encroaching on [seemingly] “sacred” territory has raged on. Companies insensitive to the delicate balance and advantages of having partners soon find themselves without partners. And aftermarket providers aloof to free-market forces soon find themselves without customers. It requires great skill and agility to thrive in the aftermarket of any industry.
That’s the skill part - you need to forecast probable moves by your competitors and infrastructure providers. For any given idea or approach, you must weigh the likelihood that your disruptive or cool idea will be displaced by others who might do it better. You are not competing only with the infrastructure providers; peers in your aftermarket want to eat your lunch as well.
Known unknowns are rife in any segment where magical advancements are being made at an increasing pace. This was true in 1980 when Jobs and Wozniak were changing the landscape. It was a climate of vastly unknown outcomes. Today is no different.
During those years, I placed a bet that no single communications protocol would emerge soon that worked with all personal computers. I built LapLink, which thrived for almost a decade until Ethernet killed it. Sure, it still exists today, but not in its original spirit. We helped ~250 million users move information between desktops and laptops, and then it was wholly replaced by something better.
Today, I’m putting a lot of trust in one very simple idea - the compression of the time and skilled effort required to bind the power of LLMs to deeply specific information about work. Big players will make similar investments, and one (or many) may end my investment overnight. However, I’m betting big players won’t see what I see - at least not initially for the “S” in SMB.
A good example is the segment being explored by ChatPDF and CustomGPT. I’m putting a lot of effort into one of these players because only one has a vision that aligns with my own. But I’m also clear-eyed about this investment. It could end in ashes because I’m actually an aftermarket supporter two levels of abstraction from the infrastructure providers - i.e., very high risk indeed.
LLMs are designed like the blood-brain barrier in most creatures; they are isolated because trained models are the product of lots of energy plus lots of silicon. They take massive amounts of power and data assimilation to become intelligent, and once trained, they are frozen. This architecture will exist for a while. As such, the only reasonable pathway to create dynamic interaction between LLMs and real-time data is by interfacing with external information systems.
Plugins are that interface, but building plugins in a no-code (LLM) fashion is very different from building the pipeline’s last mile. Something has to glue the circulatory system to the brain. It’s not surprising to see Zapier on the short list of partners; it has the integration adhesive to about five thousand endpoints, and OpenAI needs those endpoints. OpenAI can certainly train its LLMs to build apps for those endpoints over time, but an additional element is needed - business logic.
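To make the “last mile” concrete, here is a minimal sketch of what a plugin handler supplies that the base model cannot: a lookup against a live external system plus domain business logic applied before the answer reaches the LLM. Everything here - the inventory table, the SKUs, the volume-discount rule - is a hypothetical illustration, not any real plugin API:

```python
import json

# Stand-in for a real-time external information system the LLM cannot see.
# (Hypothetical data; in practice this would be a database or API call.)
INVENTORY = {
    "widget-a": {"stock": 42, "list_price": 19.99},
    "widget-b": {"stock": 0, "list_price": 5.49},
}

def plugin_handler(sku: str, quantity: int) -> str:
    """Handle a plugin call: fetch live data, apply business logic,
    and return JSON the LLM can ground its answer in."""
    item = INVENTORY.get(sku)
    if item is None:
        return json.dumps({"error": f"unknown sku: {sku}"})

    # Business logic the base model doesn't know - the domain expert's
    # contribution: a stock check and an illustrative volume discount.
    available = item["stock"] >= quantity
    price = item["list_price"] * quantity
    if quantity >= 10:
        price *= 0.9  # hypothetical 10% volume discount

    return json.dumps({
        "sku": sku,
        "available": available,
        "quoted_price": round(price, 2),
    })

print(plugin_handler("widget-a", 10))
# → {"sku": "widget-a", "available": true, "quoted_price": 179.91}
```

The LLM handles language; the handler supplies the facts and the rules. That division of labor is the glue layer this section describes.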
This is where you, me, and millions of developers and domain experts have a slight advantage concerning plugins.
Indeed. And without this symbiotic collaboration, plugins would not work well. This is where aftermarket opportunities exist and will likely exist for about a decade. This real-world → LLM-world context-building requirement is critical to making it all work.