Discriminator-Guided Multi-step Reasoning with Language Models

05/24/2023
by Muhammad Khalifa, et al.

In the context of multi-step reasoning, language model (LM) probabilities are often miscalibrated: solutions assigned high probability are not always correct. As a result, greedy decoding, the standard decoding method for reasoning tasks, often yields incorrect solutions. Moreover, methods such as self-consistency and verifiers rely on sampling from the LM distribution and do not address the underlying issue. To address this, we introduce Guiding Multi-step ReAsoning with a CorrectnEss Discriminator (GRACE), a stepwise decoding approach that steers the model toward producing correct reasoning steps. GRACE employs a discriminator model, trained to distinguish correct steps from invalid ones, to adjust decoding preferences based on the correctness of each reasoning step. Importantly, GRACE requires no fine-tuning or re-training of the LM. Across four popular math reasoning benchmarks, GRACE yields significant improvements in both final-answer accuracy and step correctness, outperforming both greedy decoding and self-consistency. (Our code is available at https://github.com/mukhal/grace.)
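The abstract describes the decoding loop only at a high level. The sketch below illustrates one way discriminator-guided stepwise selection could be structured: at each step, candidate next reasoning steps are proposed by the LM, scored by a correctness discriminator, and re-ranked by a combination of LM log-probability and discriminator score. The `generate_candidates`, `lm_logprob`, and `disc_score` interfaces, the additive scoring rule, and the `beta` weight are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of discriminator-guided stepwise decoding (GRACE-style).
# The callables below are assumed interfaces, not the authors' actual API.

from typing import Callable, List


def discriminator_guided_decode(
    question: str,
    generate_candidates: Callable[[str], List[str]],  # proposes next-step candidates given the prefix
    lm_logprob: Callable[[str, str], float],          # log p_LM(step | prefix)
    disc_score: Callable[[str, str], float],          # discriminator correctness score for (prefix, step)
    beta: float = 1.0,                                # weight on the discriminator signal (assumed)
    max_steps: int = 10,
    stop_marker: str = "[ANSWER]",
) -> List[str]:
    """At each step, pick the candidate maximizing LM log-probability
    plus a weighted discriminator correctness score."""
    prefix = question
    solution: List[str] = []
    for _ in range(max_steps):
        candidates = generate_candidates(prefix)
        if not candidates:
            break
        # Re-rank candidates: combine fluency (LM) with step correctness (discriminator).
        best = max(
            candidates,
            key=lambda step: lm_logprob(prefix, step) + beta * disc_score(prefix, step),
        )
        solution.append(best)
        prefix = prefix + "\n" + best
        if stop_marker in best:
            break
    return solution
```

In this sketch, `beta` trades off the LM's own preference against the discriminator's correctness judgment; because only the decoding-time scoring changes, the underlying LM itself needs no fine-tuning, consistent with the abstract's claim.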
