[📝 Paper] Doxastic conservatism is the idea that, when forced to change one’s beliefs (e.g., because of novel evidence), one should retain as many previously held beliefs as possible. Yet, is this really a sensible policy? Or couldn’t it make sense to revise one’s position more drastically (than strictly required) in order to attain an over... Read more 17 Jan 2022 - 3 minute read
[📝 Paper] [💻 Code] In “Making Reflective Equilibrium Precise: A Formal Model,” co-authored with Claus Beisbart and Georg Brun, we present a formal, computational model of reflective equilibrium (RE). The basic thrust is to explicate the method of RE as a process of step-wise “belief” revision that modifies – alternately – an agent’s current co... Read more 12 Jan 2022 - less than 1 minute read
[📝 Paper] [🤗 Demo] Transformers are dramatically pushing the boundaries of machines’ natural language reasoning abilities. We’ve asked: Can we use current pre-trained language models for reconstructing arguments? Argument reconstruction is a central critical thinking skill (see, for example, Joe Lau’s Critical Thinking Web). At the same time, it ... Read more 06 Dec 2021 - 2 minute read
[📝 Paper] We have contributed the task Formal Fallacies and Syllogisms to the BIG-bench 🪑 project, a collaborative benchmark intended to probe large language models and to extrapolate their future capabilities. The following spotlight presentation at the Workshop on Enormous Language Models explains our task’s motivation and design: ... Read more 11 May 2021 - less than 1 minute read
[📝 Paper] [💻 Code] In our recent paper, we develop a natural-language agent-based model of argumentation (ABMA). Its artificial deliberative agents (ADAs) are constructed with the help of neural language models recently developed in AI and computational linguistics (which we’ve explored here and here). ADAs are equipped with a mi... Read more 15 Apr 2021 - 1 minute read