Generating Query Focused Summaries without Fine-tuning the Transformer-based Pre-trained Models
Published in arXiv, 2023
Deen Abdullah, Shamanth Nayak, Gandharv Suri, Yllias Chali
Fine-tuning Natural Language Processing (NLP) models for each new data set demands substantial computational time, with an associated increase in carbon footprint and cost. Fine-tuning does, however, help pre-trained models adapt to new data sets; so what if we skip the fine-tuning step and generate summaries using just the pre-trained models, reducing computational time and cost? In this paper, we omit the fine-tuning step and investigate whether a Maximal Marginal Relevance (MMR)-based approach can help pre-trained models produce query-focused summaries directly from a new data set that was not used to pre-train the models.
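To illustrate the general idea behind MMR-based selection (not the paper's exact pipeline), the sketch below greedily picks sentences that are relevant to a query while penalizing redundancy with already-selected sentences. The bag-of-words cosine similarity, the lambda value, and the summary length are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of Maximal Marginal Relevance (MMR) sentence selection for
# query-focused extractive summarization. Illustrative only: the similarity
# function, lambda, and k below are assumptions, not the authors' settings.

import math
from collections import Counter


def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts (illustrative choice)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0


def mmr_select(query: str, sentences: list[str], k: int = 3, lam: float = 0.7) -> list[str]:
    """Greedily pick k sentences balancing query relevance and novelty:
    score(s) = lam * sim(s, query) - (1 - lam) * max_{t in selected} sim(s, t)
    """
    selected: list[str] = []
    candidates = list(sentences)
    while candidates and len(selected) < k:
        best = max(
            candidates,
            key=lambda s: lam * cosine_sim(s, query)
            - (1 - lam) * max((cosine_sim(s, t) for t in selected), default=0.0),
        )
        selected.append(best)
        candidates.remove(best)
    return selected


if __name__ == "__main__":
    query = "effects of fine-tuning on computational cost"
    sentences = [
        "Fine-tuning large models increases computational time and cost.",
        "Pre-trained models can be applied directly to new data sets.",
        "Fine-tuning also raises the carbon footprint of NLP experiments.",
        "Summaries can be ranked by their relevance to the query.",
    ]
    print(mmr_select(query, sentences, k=2))
```

In such a setup, the selected sentences could then be passed to an off-the-shelf pre-trained summarizer without any fine-tuning; how the paper combines MMR with the pre-trained models is described in the full text.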