
amazon-science/information-preservation-in-prompt-compression

Understanding and Improving the Information Preservation in Prompt Compression for LLMs


This repository provides resources developed within the following article:

W. Łajewska, M. Hardalov, L. Aina, N. A. John, H. Su, L. Màrquez. Understanding and Improving the Information Preservation in Prompt Compression for LLMs. In: Findings of the Association for Computational Linguistics: EMNLP 2025.

The preprint of this paper is available on arXiv.

Summary

Recent advancements in large language models (LLMs) have enabled their successful application to a broad range of tasks. However, in information-intensive tasks, the prompt length can grow fast, leading to increased computational requirements, performance degradation, and induced biases from irrelevant or redundant information. Recently, various prompt compression techniques have been introduced to optimize the trade-off between reducing input length and retaining performance. We propose a holistic evaluation framework that allows for in-depth analysis of prompt compression methods. We focus on three key aspects, besides compression ratio: (i) downstream task performance, (ii) grounding in the input context, and (iii) information preservation. Using our framework, we analyze state-of-the-art soft and hard compression methods and show that some fail to preserve key details from the original prompt, limiting performance on complex tasks. By identifying these limitations, we are able to improve one soft prompting method by controlling compression granularity, achieving up to +23% in downstream performance, +8 BERTScore points in grounding, and 2.7× more entities preserved in compression. Ultimately, we find that the best effectiveness/compression rate trade-off is achieved with soft prompting combined with sequence-level training.
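The granularity control mentioned above can be illustrated with a small, hypothetical sketch: instead of compressing an entire retrieved context into a single soft embedding, the context is split into smaller units and each unit is compressed separately, so more embeddings (and more detail) survive compression. The encoder, chunking strategy, and model name below are illustrative assumptions, not the exact setup from the paper.

```python
# Illustrative sketch only: finer compression granularity keeps more
# information by producing one soft embedding per chunk instead of one
# per document. The encoder below is a stand-in, not the paper's model.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in encoder

def compress(context: str, granularity: str = "document"):
    """Return one embedding per compression unit (coarse or fine)."""
    if granularity == "document":
        units = [context]                                   # 1 embedding total
    elif granularity == "sentence":
        units = [s.strip() for s in context.split(".") if s.strip()]
    else:
        raise ValueError(f"unknown granularity: {granularity}")
    return encoder.encode(units)                            # (num_units, dim)

context = "Marie Curie won two Nobel Prizes. She was born in Warsaw in 1867."
print(compress(context, "document").shape)   # coarse: (1, 384)
print(compress(context, "sentence").shape)   # fine:   (2, 384)
```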

Get Started

Refer to Dockerfile for required packages.

Response Generation (w/ Prompt Compression)

All the scripts developed for experimenting with different response generation methods with prompt compression are described in detail here.

To run experiments on a smaller scale locally, use the scripts available in local-scripts.
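As a rough orientation before diving into the full scripts, the general pattern these experiments follow is compress-then-generate: the retrieved context is first compressed (by a hard or soft method) and the compressed prompt is then passed to the LLM. The sketch below uses Hugging Face transformers with a placeholder compressor and an arbitrary model name; it is not the repository's actual pipeline.

```python
# Minimal compress-then-generate sketch (assumptions: the model name and the
# trivial truncation-based compressor are placeholders for illustration).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"   # assumed example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

def compress_context(context: str, ratio: float = 0.5) -> str:
    """Placeholder hard compressor: keep the first `ratio` of the tokens."""
    ids = tokenizer.encode(context, add_special_tokens=False)
    kept = ids[: max(1, int(len(ids) * ratio))]
    return tokenizer.decode(kept)

question = "Where was Marie Curie born?"
context = "Marie Curie was born in Warsaw in 1867. She later moved to Paris."
prompt = f"Context: {compress_context(context)}\nQuestion: {question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```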

xRAG Training

To learn more about our training procedures, see here.

Evaluation Framework

Details about the benchmarking data used in our experiments, along with all the preprocessing scripts, are available in dataset_processing/.

Our prompt compression evaluation framework captures the following three dimensions:

  • downstream task performance
  • grounding
  • information preservation

More details about running specific scripts can be found here. To run reconstruction experiments, use the following script: xrag_reconstruction.py.
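The sketch below shows one way the grounding and information-preservation dimensions could be quantified: BERTScore between the model response and the original context for grounding, and named-entity recall for information preservation. The metric choices and libraries here are assumptions for illustration; the repository's evaluation scripts may implement these dimensions differently.

```python
# Illustrative metric sketch (assumed libraries: bert-score and spaCy with
# the en_core_web_sm model); not the repository's exact evaluation code.
import spacy
from bert_score import score

nlp = spacy.load("en_core_web_sm")

def grounding_score(response: str, context: str) -> float:
    """BERTScore F1 of the response against the original context."""
    _, _, f1 = score([response], [context], lang="en")
    return f1.item()

def entity_preservation(original: str, compressed: str) -> float:
    """Fraction of named entities in the original that survive compression."""
    orig_ents = {e.text.lower() for e in nlp(original).ents}
    comp_ents = {e.text.lower() for e in nlp(compressed).ents}
    return len(orig_ents & comp_ents) / max(1, len(orig_ents))

context = "Marie Curie was born in Warsaw in 1867 and later worked in Paris."
compressed = "Marie Curie, born Warsaw 1867."
print(entity_preservation(context, compressed))  # e.g. 0.75
print(grounding_score("Marie Curie was born in Warsaw.", context))
```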

Citation

If you use the resources presented in this repository, please cite:

@inproceedings{Lajewska:2025:EMNLP,
	author = {{\L}ajewska, Weronika and Hardalov, Momchil and Aina, Laura and John, Neha Anna and Su, Hang and M\`arquez, Llu\'{i}s},
	title = {Understanding and Improving Information Preservation in Prompt Compression for LLMs},
	year = {2025},
	booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2025},
	series = {EMNLP ’25}
}

Contact

Should you have any questions, please contact Weronika Łajewska at lajewska[AT]amazon.lu (with [AT] replaced by @).

Security

See CONTRIBUTING for more information.

License

This library is licensed under the CC-BY-NC-4.0 License.
