Conference Papers, Year: 2022

Can we guide a multi-hop reasoning language model to incrementally learn at each single-hop?

Abstract

Despite the success of state-of-the-art pretrained language models (PLMs) on a range of multi-hop reasoning tasks, they still suffer from a limited ability to transfer learning from simple to complex tasks and vice versa. We argue that one step toward overcoming this limitation is to better understand the behavior of PLMs at each hop of the inference chain. Our key underlying idea is to mimic human-style reasoning: we envision the multi-hop reasoning process as a sequence of explicit single-hop reasoning steps. To endow PLMs with incremental reasoning skills, we propose a set of inference strategies over relevant facts and distractors that allow us to build automatically generated training datasets. Using the SHINRA and ConceptNet resources jointly, we empirically show the effectiveness of our proposal on multiple-choice question answering and reading comprehension, with relative accuracy improvements of 68.4% and 16.0%, respectively, over classic PLMs.
Main file: 2022.coling-1.125.pdf (2.09 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03885173, version 1 (05-12-2022)

Identifiers

  • HAL Id: hal-03885173, version 1

Cite

Jesus Lovon, Jose G. Moreno, Romaric Besançon, Olivier Ferret, Lynda Tamine. Can we guide a multi-hop reasoning language model to incrementally learn at each single-hop?. 29th International Conference on Computational Linguistics (COLING 2022), Oct 2022, Gyeongju, South Korea. pp.1455-1466. ⟨hal-03885173⟩