A Philosophically-Informed Contribution to the Generalization Problem of Neural Natural Language Inference: Shallow Heuristics, Bias, and the Varieties of Inference

Transformer-based pre-trained language models (PLMs) currently dominate the field of Natural Language Inference (NLI). It is also becoming increasingly clear that these models might not be learning the actual underlying task, namely NLI, during training. Rather, they learn what is often called bias, or shallow heuristics, leading to the problem of generalization. In this article, building on the philosophy of logic, we discuss the central concepts in which this problem is couched, survey the proposed solutions, including those based on natural logic, and propose our own dataset based on syllogisms as a contribution to addressing the problem.
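The generalization issue described in the abstract can be made concrete by probing an off-the-shelf NLI model. The sketch below is illustrative only: it assumes the publicly available roberta-large-mnli checkpoint from the HuggingFace Hub, and the premise–hypothesis pair is a HANS-style lexical-overlap probe, not an item from the syllogism dataset proposed in the paper.

```python
# Minimal sketch: probing an MNLI-trained PLM with a lexical-overlap pair.
# Assumption: the publicly available "roberta-large-mnli" checkpoint; any
# MNLI fine-tuned model would serve the same illustrative purpose.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# High word overlap, but the premise does NOT entail the hypothesis.
premise = "The doctor visited the lawyer."
hypothesis = "The lawyer visited the doctor."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()

# Label names are read from the checkpoint's own config.
for idx, label in model.config.id2label.items():
    print(f"{label}: {probs[idx]:.3f}")
```

A model that has latched onto the lexical-overlap heuristic will tend to assign high probability to entailment on pairs like this one, even though the inferential relation does not hold.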

Reto Gubelmann, Christina Niklaus, Siegfried Handschuh

16 Dec 2022

Item Type: Journal paper
Journal Title:
Language: English
Subject Areas: computer science