DebateLab@KIT computational philosophy projects

Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2

Thinking Aloud is a well-studied and widely used meta-cognitive strategy for improving one's reasoning skills. In our paper "Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2" we explore whether neural language models like GPT-2 can similarly (self-)improve their performance on a reasoning task by elaborating... Read more

Can Neural Language Models (Learn to) Argue?

Neural language models such as GPT-2 and GPT-3 display breathtaking skill in generating sensible text and achieve state-of-the-art results in a variety of natural language processing (NLP) tasks. But can these systems reason? Or, more precisely, can they successfully engage in the linguistic practice of giving and taking reasons? In our pap... Read more