Thinking aloud is a well-studied and widely used metacognitive strategy for improving one’s reasoning skills. In our paper “Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2,” we explore whether neural language models like GPT-2 can similarly (self-)improve their performance on a reasoning task by elaborating the problem before providing an answer.
We create a simple multi-hop deductive inference task and evaluate various “elaboration strategies,” e.g., including different sample elaborations in the prompt, or querying the model iteratively.
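The two strategies above can be sketched in a model-agnostic way. This is a minimal illustration, not the paper’s actual implementation: the helper names (`few_shot_prompt`, `answer_with_elaboration`) and the prompt wording are hypothetical, and any text-generation function (e.g., a GPT-2 wrapper) can be passed in as `generate`.

```python
def few_shot_prompt(problem, sample_elaborations):
    """Strategy 1 (sketch): prepend worked sample elaborations to the
    problem so the model can imitate the elaboration style."""
    examples = "\n\n".join(sample_elaborations)
    return f"{examples}\n\n{problem}\nLet's think this through:"

def answer_with_elaboration(problem, generate, n_steps=2):
    """Strategy 2 (sketch): query the model iteratively, appending each
    generated elaboration to the context before asking for the answer."""
    context = problem
    for _ in range(n_steps):
        context += "\n" + generate(context)   # model elaborates on its own context
    return generate(context + "\nTherefore,")  # final answer query

# Usage with a trivial stub standing in for a real language model:
stub = lambda ctx: f"[elaboration on {len(ctx)} chars of context]"
print(answer_with_elaboration("All men are mortal. Socrates is a man.", stub))
```

The key design point is that the model’s own generations become part of the prompt for subsequent queries, which is what makes the context “dynamic.”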
Our main findings are:
As a follow-up to this study, we’re thinking about how a neural language model might acquire (be taught) the general competence to elaborate effectively and sensibly on a given problem.

Written on March 24th, 2021 by Gregor Betz