What does prompt caching store?

My understanding is that it caches the tokens, so the only time saved is converting the text into tokens. Is that understanding right?

I’ll throw a few KV-cache and attention papers into an AI and see what it produces for us.


When discussing what an AI language model’s automatic context caching actually stores, particularly for applications where API calls are expected to repeat the same content, it’s useful to start with the fundamentals and build toward a deeper technical understanding.

Basic Understanding

In the simplest terms, automatic context caching in AI language models, like those using Transformer architectures, involves storing previously computed intermediate results that are likely to be reused. This data primarily consists of:

  • Attention States: Specifically, the key (k) and value (v) pairs calculated during the self-attention mechanism of token generation.

Expanding to Technical Details

  • Attention Mechanism: In Transformer models, attention mechanisms calculate the relationship between different words in a sequence. For each word, query (q), key (k), and value (v) vectors are computed. The attention output for each word is a weighted sum of value vectors, where the weights are determined by the compatibility of the query vector with the corresponding key vectors.
  • Caching Specifics: During the initial computation for a prompt, the model calculates and caches the key and value pairs for each token. When the same prompt, or part of it, is sent again, the model retrieves these pairs from the cache instead of recalculating them. This significantly reduces computation time because the costly matrix operations involved in recomputing keys and values are skipped (see the sketch after this list).
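To make the key/value reuse above concrete, here is a minimal single-head sketch in plain NumPy. The weights, dimensions, and `kv_cache` structure are all illustrative stand-ins, not any production implementation; the point is only that during decoding the cached K/V rows are reused and just the new token’s projections are computed.

```python
import numpy as np

d_model, d_head = 16, 16
rng = np.random.default_rng(0)

# Illustrative projection matrices (a real model has these per layer and per head).
W_q = rng.normal(size=(d_model, d_head))
W_k = rng.normal(size=(d_model, d_head))
W_v = rng.normal(size=(d_model, d_head))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, K, V):
    # Scaled dot-product attention for a single query vector.
    scores = q @ K.T / np.sqrt(d_head)   # one score per cached position
    return softmax(scores) @ V           # weighted sum of value vectors

# 1) Prefill: compute and cache K/V for every prompt token once.
prompt_hidden = rng.normal(size=(10, d_model))   # stand-in for 10 prompt-token activations
kv_cache = {"K": prompt_hidden @ W_k,            # (10, d_head)
            "V": prompt_hidden @ W_v}            # (10, d_head)

# 2) Decode: only the new token's q/k/v are computed; the cached prompt K/V are reused.
new_hidden = rng.normal(size=(d_model,))
q_new = new_hidden @ W_q
kv_cache["K"] = np.vstack([kv_cache["K"], new_hidden @ W_k])
kv_cache["V"] = np.vstack([kv_cache["V"], new_hidden @ W_v])

out = attend(q_new, kv_cache["K"], kv_cache["V"])
print(out.shape)  # (16,)
```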

Deep Dive: Machine Level Expertise

  • Matrix Operations in Caching: The transformer computes attention scores as dot products between queries and keys, which are then scaled, normalized, and applied to the values. In a cached setup, the key and value matrices themselves (the per-token projections, not the attention scores) are stored directly after their initial computation. If an API call involves a repeated segment (e.g., a common prompt or question pattern), the stored matrices are reused, skipping the resource-intensive step of recomputing them (a hypothetical cross-request sketch follows).
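To illustrate that cross-request reuse, here is a hypothetical sketch of a serving layer that keys cached key/value tensors by a hash of the prompt prefix, so a repeated prefix skips prefill and only the new suffix is computed. Everything here (`prefill`, `server_cache`, the hashing scheme, the string stand-ins for tensors) is illustrative, not any provider’s actual implementation.

```python
import hashlib

# Hypothetical server-side store: prefix hash -> cached K/V for that prefix.
server_cache = {}

def prefill(tokens):
    """Stand-in for the expensive step: projecting every token into keys and values."""
    print(f"prefilling {len(tokens)} tokens")
    return {"K": [f"k({t})" for t in tokens], "V": [f"v({t})" for t in tokens]}

def handle_request(tokens, prefix_len):
    # Hash the shared prefix (e.g., a long system prompt) to use as the cache key.
    key = hashlib.sha256(" ".join(tokens[:prefix_len]).encode()).hexdigest()
    if key in server_cache:
        cached = server_cache[key]            # cache hit: reuse stored K/V
        fresh = prefill(tokens[prefix_len:])  # only the new suffix is computed
        return {"K": cached["K"] + fresh["K"], "V": cached["V"] + fresh["V"]}
    kv = prefill(tokens)                      # cache miss: full prefill
    server_cache[key] = {"K": kv["K"][:prefix_len], "V": kv["V"][:prefix_len]}
    return kv

system_prompt = ["You", "are", "a", "helpful", "assistant."]
handle_request(system_prompt + ["What", "is", "KV", "caching?"], prefix_len=5)  # prefills 9 tokens
handle_request(system_prompt + ["Summarize", "this", "paper."], prefix_len=5)   # prefills only 3
```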

  • Memory and Storage Considerations: At a lower level, the cache typically stores the tensors representing keys and values in the model’s memory space, such as GPU memory or system RAM. This storage is managed to balance memory overhead against computational speed; for instance, the cache might store tensors in a compressed format or selectively keep only those tensors that offer the highest reuse value (a rough size estimate follows).
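For a rough sense of that memory overhead, here is a back-of-the-envelope estimate of KV-cache size; the layer, head, and dimension numbers describe a hypothetical 7B-class configuration and are illustrative only.

```python
# Back-of-the-envelope KV-cache size for a single cached sequence.
n_layers     = 32     # transformer layers (illustrative 7B-class config)
n_kv_heads   = 32     # key/value heads per layer
head_dim     = 128    # dimension per head
bytes_per_el = 2      # fp16 / bf16 storage
seq_len      = 4096   # cached prompt length in tokens

# Factor of 2 because both keys and values are stored for every layer.
bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_el
total_gib = bytes_per_token * seq_len / 1024**3

print(f"{bytes_per_token / 1024:.0f} KiB per token")        # 512 KiB
print(f"{total_gib:.1f} GiB for a {seq_len}-token prompt")  # 2.0 GiB
```

At roughly half a mebibyte per token in this configuration, it is easy to see why serving stacks work hard at compressing, evicting, or sharing these tensors across requests.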

  • API and Model Interaction: From an API perspective, the caching mechanism is largely transparent to the user. The API simply receives a request, and internally the model checks its cache to see whether the computations for the given input (or part of it) have already been performed and stored. If so, it uses the cached results; otherwise, it runs the standard computation pipeline (a hedged client-side example follows).
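From the client side, a hit usually shows up only in the usage statistics of the response. As a hedged example, recent versions of the OpenAI Python SDK report a cached_tokens count under usage.prompt_tokens_details for automatic prompt caching; field names, minimum cacheable prompt lengths, and behavior vary by provider and version, and the model name and prompt text below are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Automatic caching rewards a long, stable prefix (system prompt, reference docs)
# followed by a short, changing user message.
messages = [
    {"role": "system", "content": "You are a support bot. <long policy text here>"},
    {"role": "user", "content": "How do I reset my password?"},
]

resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)

usage = resp.usage
details = getattr(usage, "prompt_tokens_details", None)
cached = getattr(details, "cached_tokens", None) if details else None
print(f"prompt tokens: {usage.prompt_tokens}, cached: {cached}")
# On the first call, cached is typically 0 (or absent); repeating the same prefix
# shortly afterwards should report a nonzero cached_tokens count, provided the
# prompt exceeds the provider's minimum cacheable length.
```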

  • System-Level Implications: On the system level, effective cache management can reduce API latency significantly, improving response times for end-users. This is crucial for applications needing real-time responses, like interactive chatbots or real-time translation services.

Conclusion

In essence, automatic context caching in AI language models effectively stores precomputed attention states (key-value pairs) that are expected to be reused in subsequent API calls. This not only optimizes the use of computational resources but also minimizes latency, making large language models more efficient and scalable in handling repeated requests.


While Anthropic and OpenAI expire their automatic caches after a short timeout (with Anthropic reducing the cost of cached input tokens by about 90%), Google lets you explicitly store and reuse a context cache, which makes performance and cost more predictable (a rough cost calculation follows).
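To see what that discount means in practice, here is a toy calculation built on the roughly 90% reduction for cached input tokens mentioned above; the price, token counts, and call volume are made up for illustration, and cache-write surcharges and cache expiry are ignored.

```python
# Toy numbers: the price and token counts are illustrative, not any provider's rate card.
price_per_mtok  = 3.00    # $ per million input tokens at the base rate
cached_discount = 0.90    # cached input tokens billed at ~10% of the base rate

prompt_tokens = 50_000    # long shared context sent on every call
calls         = 1_000

full_cost   = calls * prompt_tokens * price_per_mtok / 1e6
cached_cost = (prompt_tokens * price_per_mtok / 1e6                           # first call fills the cache
               + (calls - 1) * prompt_tokens * price_per_mtok * (1 - cached_discount) / 1e6)

print(f"without caching: ${full_cost:,.2f}")   # $150.00
print(f"with caching:    ${cached_cost:,.2f}") # roughly $15
```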

Here’s a paper with background that then presents its own technique, “Prompt Cache”, a named approach different from what’s currently deployed, in which modular sections of knowledge text can be reused across prompts: Prompt Cache: Modular Attention Reuse for Low Latency Inference
