Over-Reasoning and Redundant Calculation of Large Language Models

Cheng-Han Chiang, Hung-yi Lee

Main Track: Interpretability and Model Analysis in NLP (Oral Paper)

Session 10: Interpretability and Model Analysis in NLP (Oral)
Conference Room: Carlson
Conference Time: March 20, 11:00-12:30 (CET) (Europe/Malta)
Abstract: Large language models (LLMs) can solve problems step by step. While this chain-of-thought (CoT) reasoning boosts LLMs' performance, it is unclear whether LLMs know when to use CoT and whether the CoT they generate is always necessary to answer the question. This paper shows that LLMs tend to generate redundant calculations and reasoning on a manually constructed math QA dataset, GSM8K-Zero. GSM8K-Zero is constructed such that the questions can be answered without any calculation, but LLMs, including Llama-2 models and Claude-2, tend to generate lengthy and unnecessary calculations to answer the questions. We also conduct experiments to explain why LLMs generate redundant calculations and reasoning.
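To make the notion of "redundant calculation" concrete, the sketch below is a minimal illustration, not the authors' released code or the actual GSM8K-Zero construction: the example question, the gold answer, and the regex-based step counter are all hypothetical. It shows a GSM8K-Zero-style question whose answer is already stated in the question, and counts explicit arithmetic expressions in a (hypothetical) model response, treating any such expression as unnecessary.

```python
import re

# Hypothetical GSM8K-Zero-style question: the answer ("3 apples") is stated
# directly in the question, so no calculation is needed to answer it.
question = (
    "Josh had 5 apples and gave 2 to Anna, leaving him with 3 apples. "
    "How many apples does Josh have now?"
)
gold_answer = "3"

def count_redundant_calculations(response: str) -> int:
    """Count explicit arithmetic expressions (e.g., '5 - 2 = 3') in a response.

    For a question whose answer is already given, every such expression is
    treated as a redundant calculation.
    """
    return len(re.findall(r"\d+(?:\.\d+)?\s*[-+*/]\s*\d+(?:\.\d+)?\s*=", response))

# Hypothetical verbose model response that re-derives the answer already
# stated in the question.
response = (
    "Josh started with 5 apples. After giving away 2, he has 5 - 2 = 3 apples. "
    "So the answer is 3."
)

print("redundant calculations:", count_redundant_calculations(response))  # -> 1
print("answer correct:", gold_answer in response)                         # -> True
```

The regex counter is only a stand-in for whatever redundancy measure the paper actually uses on real LLM outputs; its purpose here is just to show that a response to a calculation-free question can be both correct and needlessly long.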