Silicon Sonnets

Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization

January 30, 2024
Show Notes

This paper studies the effect of pruning on hallucinations in large language models used for abstractive summarization. It finds that pruned models rely more heavily on the source document, producing summaries with greater lexical overlap with the source and fewer hallucinated details.
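
To make the idea of lexical overlap concrete, here is a minimal sketch (not the paper's actual metric; the function name and example texts are hypothetical) that measures the fraction of summary n-grams not found in the source, a rough proxy for content the model introduced on its own:

```python
from collections import Counter

def novel_ngram_fraction(source: str, summary: str, n: int = 2) -> float:
    """Fraction of summary n-grams that never appear in the source.

    A higher value means the summary contains more material absent from
    the source, which is often used as a rough hallucination signal.
    """
    def ngrams(text: str) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    src_grams, summ_grams = ngrams(source), ngrams(summary)
    total = sum(summ_grams.values())
    if total == 0:
        return 0.0
    novel = sum(c for g, c in summ_grams.items() if g not in src_grams)
    return novel / total

if __name__ == "__main__":
    source = "The committee approved the budget on Tuesday after a short debate."
    summary = "The committee approved the budget after a brief debate on Friday."
    print(f"novel bigram fraction: {novel_ngram_fraction(source, summary):.2f}")
```

Under this kind of measure, a summary that copies more of its wording from the source (as the pruned models in the paper tend to do) scores lower, i.e. fewer novel n-grams and lower hallucination risk.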