[📝 Paper] [🤗 Demo] Transformers are dramatically pushing the boundaries of machines’ natural language reasoning abilities. We’ve asked: Can we use current pre-trained language models to reconstruct arguments? Argument reconstruction is a central critical thinking skill (see for example Joe Lau’s Critical Thinking Web). At the same time, it ... Read more 06 Dec 2021 - 2 minute read
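To give a flavor of the idea, here is a minimal sketch of how one might prompt a pre-trained text2text model to reconstruct an informal argument; the model name, prompt template, and example argument are illustrative assumptions, not the setup from the paper:

```python
# Minimal sketch: prompting a pre-trained text2text model to reconstruct
# an informal argument as explicit premises and a conclusion.
# Model name and prompt format are illustrative assumptions only.
from transformers import pipeline

generator = pipeline("text2text-generation", model="t5-base")

argument = (
    "Socrates is mortal, because he is a man, "
    "and all men are mortal."
)
prompt = (
    "Reconstruct the following argument as numbered premises "
    f"and a conclusion: {argument}"
)
print(generator(prompt, max_length=128)[0]["generated_text"])
```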
[📝 Paper] We have contributed the task Formal Fallacies and Syllogisms to the BIG-bench 🪑 project, a collaborative benchmark intended to probe large language models and to extrapolate their future capabilities. The following spotlight presentation at the Workshop on Enormous Language Models explains our task’s motivation and design: ... Read more 11 May 2021 - less than 1 minute read
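For illustration, a BIG-bench style multiple-choice item for such a task might look as follows; the field names follow BIG-bench’s JSON task schema, but this concrete example is a mock-up invented for exposition, not an item from our task:

```python
# Illustrative mock-up of a BIG-bench multiple-choice example.
# "input" holds the argument plus the question; "target_scores"
# marks the correct answer option with 1.
example = {
    "input": (
        "All dogs are mammals. Some pets are dogs. "
        "Therefore, some pets are mammals. "
        "Is this argument deductively valid or invalid?"
    ),
    "target_scores": {"valid": 1, "invalid": 0},
}
```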
[📝 Paper] [💻 Code] In our recent paper, we develop a natural-language agent-based model of argumentation (ABMA). Its artificial deliberative agents (ADAs) are constructed with the help of so-called neural language models recently developed in AI and computational linguistics (and which we’ve explored here and here). ADAs are equipped with a mi... Read more 15 Apr 2021 - 1 minute read
Thinking Aloud is a well-studied and widely used meta-cognitive strategy for improving one’s reasoning skills. In our paper “Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2” we explore whether neural language models like GPT-2 can similarly (self-)improve their performance on a reasoning task by elaborating... Read more 24 Mar 2021 - 1 minute read
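As a rough sketch of the “thinking aloud” setup, dynamic context generation can be implemented as two-step decoding with GPT-2: first the model elaborates freely on the problem, then it answers conditioned on the problem plus its own elaboration. The prompts below are illustrative assumptions, not the templates used in the paper:

```python
# Sketch of "thinking aloud" as two-step generation with GPT-2.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

problem = "If all squigs are blorks and Tim is a squig, what is Tim?"

# Step 1: dynamic context generation (free elaboration of the problem).
elaboration = generate(
    problem + " Let's think about this step by step.",
    max_new_tokens=60,
)[0]["generated_text"]

# Step 2: answer, conditioned on the self-generated context.
answer = generate(
    elaboration + " So, the answer is",
    max_new_tokens=10,
)[0]["generated_text"]
print(answer)
```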
Neural language models such as GPT-2 and GPT-3 display a breathtaking skill in generating sensible texts, and achieve state-of-the-art results in a variety of natural language processing (NLP) tasks. But can these systems reason? Or, more precisely, can they successfully engage in the linguistic practice of giving and taking reasons? In our pap... Read more 15 Sep 2020 - 8 minute read