Adapting Pre-trained Language Models to Low-Resource Text Simplification: The Path Matters @CoLLAs 2022

Our long paper “Adapting Pre-trained Language Models to Low-Resource Text Simplification: The Path Matters” by Cristina Garbacea and Qiaozhu Mei has been accepted at the 1st Conference on Lifelong Learning Agents (CoLLAs), which will be held in Montreal, Canada, from August 18th to 23rd, 2022. If you are attending the conference, please stop by on Thursday, August 18th, between 11 am and 2 pm to learn more about our work. The abstract of the paper is below:

“We frame the problem of text simplification from a task and domain adaptation perspective, where neural language models are pre-trained on large-scale corpora and then adapted to new tasks in different domains through limited training examples. We investigate the performance of two popular vehicles of task and domain adaptation: meta-learning and transfer learning (in particular, fine-tuning), in the context of low-resource text simplification that involves a diversity of tasks and domains. We find that when directly adapting a Web-scale pre-trained language model to low-resource text simplification tasks, fine-tuning based methods present a competitive advantage over meta-learning approaches. Surprisingly, adding an intermediate stop in the adaptation path between the source and target, an auxiliary dataset and task that allow for the decomposition of the adaptation process into multiple steps, significantly improves performance on the target task. The performance is, however, sensitive to the selection and ordering of the adaptation strategy (task adaptation vs. domain adaptation) in the two steps. When such an intermediate dataset is not available, one can build a “pseudo-stop” using the target domain/task itself. Our extensive analysis serves as a preliminary step towards bridging these two popular paradigms of few-shot adaptive learning and towards developing more structured solutions to task/domain adaptation in a novel setting.”
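
To make the two-step adaptation path concrete, here is a minimal sketch of the idea, assuming Hugging Face Transformers and PyTorch: a pre-trained sequence-to-sequence model is first fine-tuned on an auxiliary (intermediate) simplification corpus and then fine-tuned again on the small target corpus. The model name, sentence pairs, and hyper-parameters are illustrative placeholders rather than the configuration used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "facebook/bart-base"  # placeholder for a Web-scale pre-trained seq2seq LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def make_batches(pairs, batch_size=8, max_len=128):
    """Tokenize (complex, simple) sentence pairs into mini-batches."""
    batches = []
    for i in range(0, len(pairs), batch_size):
        chunk = pairs[i:i + batch_size]
        enc = tokenizer([src for src, _ in chunk], padding=True,
                        truncation=True, max_length=max_len, return_tensors="pt")
        labels = tokenizer([tgt for _, tgt in chunk], padding=True,
                           truncation=True, max_length=max_len,
                           return_tensors="pt").input_ids
        labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
        enc["labels"] = labels
        batches.append(enc)
    return batches

def fine_tune(model, pairs, epochs=3, lr=3e-5):
    """One adaptation step: plain fine-tuning on a parallel simplification corpus."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in make_batches(pairs):
            loss = model(**batch).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model

# Step 1: the intermediate stop, an auxiliary simplification dataset
# (toy examples here; in practice a larger general-domain corpus).
auxiliary_pairs = [
    ("The committee reached a unanimous decision.", "The committee all agreed."),
]
model = fine_tune(model, auxiliary_pairs)

# Step 2: final adaptation to the low-resource target domain/task.
target_pairs = [
    ("Hypertension is a major risk factor for stroke.",
     "High blood pressure can cause strokes."),
]
model = fine_tune(model, target_pairs)
```

The variations the paper studies, such as swapping the order of task and domain adaptation in the two steps, or replacing the auxiliary corpus with a “pseudo-stop” built from the target domain/task itself, would slot into the same two-step skeleton.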

For more details, please see our paper, talk, slides, and poster.