DebateLab@KIT computational philosophy projects

BIG-bench contribution: Formal Fallacies

We have contributed the task Formal Fallacies and Syllogisms to the BIG-bench 🪑 project, a collaborative benchmark intended to probe large language models and to extrapolate their future capabilities. The following spotlight presentation at the Workshop on Enormous Language Models explains our task's motivation and design: Read more

Natural-Language Multi-Agent Simulations of Argumentative Opinion Dynamics

In our recent paper, we develop a natural-language agent-based model of argumentation (ABMA). Its artificial deliberative agents (ADAs) are constructed with the help of so-called neural language models recently developed in AI and computational linguistics (and which we've explored here and here). ADAs are equipped with a minimalist belief system... Read more

Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2

Thinking Aloud is a well-studied and widely used meta-cognitive strategy for improving one's reasoning skills. In our paper "Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2" we explore whether neural language models like GPT-2 can similarly (self-)improve their performance on a reasoning task by elaborating... Read more

Can Neural Language Models (Learn to) Argue?

Neural language models such as GPT-2 and GPT-3 display breathtaking skill in generating sensible text and achieve state-of-the-art results in a variety of natural language processing (NLP) tasks. But can these systems reason? Or, more precisely, can they successfully engage in the linguistic practice of giving and taking reasons? In our paper... Read more