Beyond RAG: Task-aware KV cache compression for comprehensive knowledge reasoning

Submitted to arXiv, 6 March 2025

Abstract: Incorporating external knowledge into large language models (LLMs) enhances their utility across diverse applications, but existing methods have trade-offs. Retrieval-Augmented Generation (RAG) fetches evidence via similarity search, but key information may fall outside the top-ranked results. Long-context models can process multiple documents but are computationally expensive and limited by context window size. Inspired by students condensing study material for open-book exams, we propose task-aware key-value (KV) cache compression, which compresses external knowledge in a zero- or few-shot setup. This enables LLMs to reason efficiently over a compacted representation of all relevant information. Experiments show our approach outperforms both RAG and task-agnostic compression methods. On LongBench v2, it improves accuracy by up to 7 absolute points over RAG at a 30x compression rate, while reducing inference latency from 0.43s to 0.16s. A synthetic dataset highlights that RAG performs well when sparse evidence suffices, whereas task-aware compression is superior for tasks requiring broad knowledge.
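To make the idea concrete, the sketch below illustrates one way a task-aware KV cache compression step could look: cached key/value vectors for the external documents are scored against a task/query representation, and only the top-scoring fraction is kept. This is a minimal illustration in PyTorch, not the method evaluated in the paper; the function name compress_kv_cache, the dot-product scoring rule, and the keep_ratio parameter are assumptions made for this example.

import torch

def compress_kv_cache(keys, values, task_query, keep_ratio=1 / 30):
    """Keep only the cache entries most relevant to the task query.

    keys, values : [seq_len, d] cached key/value vectors for the external context
    task_query   : [d] vector representing the task or question
    keep_ratio   : fraction of entries retained (1/30 ~ a 30x compression rate)
    """
    seq_len, d = keys.shape
    # Attention-style relevance score of each cached position w.r.t. the task.
    scores = (keys @ task_query) / d ** 0.5              # [seq_len]
    k = max(1, int(seq_len * keep_ratio))
    kept = torch.topk(scores, k).indices.sort().values   # preserve original order
    return keys[kept], values[kept]

# Toy usage: 10,000 cached positions reduced to ~333 (about 30x smaller).
torch.manual_seed(0)
K, V = torch.randn(10_000, 64), torch.randn(10_000, 64)
q = torch.randn(64)
K_c, V_c = compress_kv_cache(K, V, q)
print(K_c.shape, V_c.shape)

In this toy setup the "task awareness" comes entirely from scoring cache entries against the query vector before pruning; a task-agnostic compressor would instead drop entries without looking at the task at all.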
Type: Report
Date: 2025-03-06
Department: Data Science
Eurecom Ref: 8151
Copyright: © EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in Submitted to arXiv, 6 March 2025 and is available at:
Permalink: https://www.eurecom.fr/publication/8151