Abstract
We systematically investigate lightweight strategies to adapt large language models (LLMs) for the task of radiology report summarization (RRS). Specifically, we focus on domain adaptation via pretraining (on natural language, biomedical text, or clinical text) and via discrete prompting or parameter-efficient fine-tuning. We consistently achieve the best performance by maximally adapting to the task: pretraining on clinical text and fine-tuning on RRS examples. Importantly, this method fine-tunes a mere 0.32% of parameters, distributed throughout the model, in contrast to end-to-end fine-tuning (100% of parameters). Additionally, we study the effect of in-context examples and out-of-distribution (OOD) training before concluding with a radiologist reader study and qualitative analysis. Our findings highlight the importance of domain adaptation in RRS and provide valuable insights toward developing effective natural language processing solutions for clinical tasks.
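A parameter-efficient setup that tunes roughly 0.32% of parameters spread across the model is consistent with low-rank adapter (LoRA) fine-tuning. Below is a minimal sketch, assuming the Hugging Face `peft` library and a T5-style sequence-to-sequence model; the model name, LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`), and the findings/impression pair are illustrative placeholders, not the paper's actual configuration:

```python
# Hedged sketch: LoRA-style parameter-efficient fine-tuning for
# findings -> impression summarization. Hyperparameters are assumed.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "google/flan-t5-base"  # placeholder; any seq2seq LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Wrap the frozen base model with low-rank adapters; only the adapter
# weights (a small fraction of all parameters) are trainable.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                         # adapter rank (assumed)
    lora_alpha=32,               # scaling factor (assumed)
    lora_dropout=0.1,
    target_modules=["q", "v"],   # T5 attention projections (assumed)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the trainable fraction

# One training-style forward pass on a toy findings -> impression pair.
findings = "summarize: Lungs are clear. No pleural effusion or pneumothorax."
impression = "No acute cardiopulmonary abnormality."
inputs = tokenizer(findings, return_tensors="pt")
labels = tokenizer(impression, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # gradients reach only LoRA weights
```

Because the base weights stay frozen, only the small adapter matrices inserted into each attention block receive gradients, which is what keeps the trainable fraction well under one percent.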
| Field | Value |
| --- | --- |
| Original language | English |
| Journal | Proceedings of the Annual Meeting of the Association for Computational Linguistics |
| Pages (from-to) | 449-460 |
| Number of pages | 12 |
| ISSN | 0736-587X |
| Publication status | Published - 2023 |
| Externally published | Yes |
| Event | 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, BioNLP 2023 - Toronto, Canada |
| Duration | 13 Jul 2023 → … |
Conference
| Field | Value |
| --- | --- |
| Conference | 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, BioNLP 2023 |
| Country/Territory | Canada |
| City | Toronto |
| Period | 13/07/2023 → … |