Product Idea of the day: Caching OpenAI APIs

It occurs to me that there are a lot of things on the web that we’re all summarizing, and perhaps even asking GPT4 the same questions about.

As with pretty much everything in computing, the question becomes: when should we start putting a cache in front of it?

I don’t know the legality of this product and perhaps the only valid provider of it is OpenAI themselves, but I think there is potentially force multiplying value here.

While some caching may already exist, chat history makes it tricky to cache that much, so I don’t know how much OAI is doing this just yet. When/if they do, I hope they open this up to competition, even if it’s just to select partners, as there is a tonne of innovation to be done here, not to mention room to compete on price.

For example:

  • a simple article summary endpoint which, given the URL and title of an article, can provide different types of summaries at different lengths, along with embedding vectors.
  • basic search engine questions for common technical problems, like ‘how do i take a screenshot’.
  • common coding questions
  • lots of interesting problems to be solved around querying the cache, with the recent work in vector databases likely being relevant
  • The internal value of being able to analyze the cache is massive, of course, though that value is captured mostly by the cache owner rather than by its customers.
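To make the basic mechanism concrete, here is a minimal sketch of an exact-match cache in front of an LLM API. Everything here is hypothetical (the class, the `call_fn` stand-in, the parameter names); the one real idea is keying on a stable hash of the normalized request, so identical prompts with identical parameters come back for free.

```python
import hashlib
import json

class PromptCache:
    """Minimal exact-match cache for LLM API calls (illustrative sketch)."""

    def __init__(self):
        self._store = {}

    def _key(self, model, prompt, **params):
        # Sort keys so logically identical requests hash identically.
        payload = json.dumps(
            {"model": model, "prompt": prompt, "params": params},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

    def get_or_call(self, model, prompt, call_fn, **params):
        key = self._key(model, prompt, **params)
        if key in self._store:
            return self._store[key], True   # cache hit: zero model cost
        result = call_fn(model, prompt, **params)  # the expensive call
        self._store[key] = result
        return result, False                # cache miss: paid for compute

# Usage with a fake API call standing in for the real one:
cache = PromptCache()
fake_api = lambda model, prompt, **p: f"summary of {prompt!r}"
first, hit1 = cache.get_or_call("gpt-4", "https://example.com/article", fake_api)
second, hit2 = cache.get_or_call("gpt-4", "https://example.com/article", fake_api)
```

The second call returns the stored summary without touching the model, which is the whole cost argument in miniature. A real proxy would also need TTLs, model-version keys, and the freshness labelling discussed below.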

A lot of these queries should be able to come back from a cache with zero cost rather than going to a transformer. There would also be a lot of downstream/second order products that could arise as folks build / leverage these lower cost ‘caching proxies’.
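The querying side is where the vector-database work comes in: near-duplicate questions (‘how do i take a screenshot’ vs. ‘how to screenshot’) should hit the same cache entry. A hedged sketch of that lookup, using plain cosine similarity over embeddings and a threshold that would need tuning in practice; the toy 3-d vectors stand in for real embeddings from an embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SemanticCache:
    """Returns a cached answer when a new query's embedding is close
    enough to a previously answered one. Linear scan for illustration;
    a vector database would replace this at scale."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold  # similarity cutoff: a tunable guess
        self.entries = []           # list of (embedding, answer) pairs

    def add(self, embedding, answer):
        self.entries.append((embedding, answer))

    def lookup(self, embedding):
        best, best_sim = None, -1.0
        for emb, answer in self.entries:
            sim = cosine(embedding, emb)
            if sim > best_sim:
                best, best_sim = answer, sim
        return best if best_sim >= self.threshold else None

# Toy vectors standing in for real embeddings:
cache = SemanticCache(threshold=0.95)
cache.add([1.0, 0.0, 0.1], "Press PrtScn, or Cmd+Shift+3 on macOS.")
close = cache.lookup([0.98, 0.02, 0.11])  # near-duplicate question: hit
far = cache.lookup([0.0, 1.0, 0.0])       # unrelated question: miss
```

The threshold is the interesting product decision: too loose and you serve wrong answers from cache, too tight and you pay for the transformer on every rephrasing.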

It will be critically important to always ensure the user is aware of what is cached and what is fresh transformer output. Losing that trust would be somewhat disastrous for everyone concerned.

This idea is somewhat related to my post on AAO and knowledge bases, which you can read about here - SEO becomes AAO - autonomous agent optimisation?

I believe there would be a lot of value for knowledge bases to (at least in part) provide cached results from OAI. For example, let’s say I wanted to use GPT4 summaries of thousands of articles in some research I’m doing with the GPT4 API. It seems silly to pay all that money and use all that compute for something that could be productively re-used by others.