arxiv:2501.01264

ProgCo: Program Helps Self-Correction of Large Language Models

Published on Jan 2
· Submitted by dongguanting on Jan 3
Abstract

Self-correction aims to enable large language models (LLMs) to self-verify and self-refine their initial responses without external feedback. However, LLMs often fail to self-verify effectively or to generate correct feedback, which misleads refinement and causes self-correction to fail, especially on complex reasoning tasks. In this paper, we propose Program-driven Self-Correction (ProgCo). First, program-driven verification (ProgVe) realizes complex verification logic and extensive validation through self-generated, self-executed verification pseudo-programs. Then, program-driven refinement (ProgRe) takes the feedback from ProgVe and conducts dual reflection and refinement on both the response and the verification program, mitigating the misleading effect of incorrect feedback on complex reasoning tasks. Experiments on three instruction-following and mathematical benchmarks show that ProgCo achieves effective self-correction and can further improve performance when combined with real program tools.
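To make "verification pseudo-program" concrete, here is a hypothetical example of the kind of check a ProgVe-style prompt might self-generate for an instruction-following task; the task, function name, and constraints are illustrative assumptions, not taken from the paper.

```python
# Hypothetical verification pseudo-program for the instruction
# "write a poem with exactly four lines, each ending in a comma".
# In ProgVe the LLM itself executes such a program step by step;
# running it in a real interpreter is optional.
def verify(response: str) -> bool:
    lines = [line.strip() for line in response.split("\n") if line.strip()]
    if len(lines) != 4:  # constraint 1: exactly four lines
        return False
    # constraint 2: every line ends in a comma
    return all(line.endswith(",") for line in lines)
```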

Community

Paper submitter

We propose ProgCo for effective self-correction, consisting of two components: ProgVe and ProgRe. Our contributions are threefold (a minimal sketch of the full loop follows the list):

(1) We propose ProgVe, a method enabling LLMs to self-generate and self-execute validation programs for self-verification.

(2) We propose ProgRe, a self-refinement method that is robust to incorrect feedback and performs dual optimization of both the response and the verification program.

(3) Experiments and analyses on three datasets verify the effectiveness of our method in self-correction.
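As referenced above, here is a minimal Python sketch of how ProgVe and ProgRe could be composed into one self-correction loop. The `llm` callable, the prompt wording, and the PASS/FAIL convention are our own assumptions for illustration; the paper's actual prompts and execution protocol may differ.

```python
from typing import Callable

# Stand-in for any chat-completion call: prompt in, text out.
LLM = Callable[[str], str]

def progco(llm: LLM, task: str, max_rounds: int = 3) -> str:
    """Sketch of a ProgCo-style loop: verify with a self-generated
    pseudo-program (ProgVe), then dually refine the response and the
    program (ProgRe)."""
    response = llm(f"Solve the task:\n{task}")
    # ProgVe: the model writes a verification pseudo-program for the task.
    program = llm(
        "Write a verification pseudo-program that checks whether a "
        f"candidate answer satisfies this task:\n{task}"
    )
    for _ in range(max_rounds):
        # ProgVe: the model also acts as the interpreter of its own program.
        feedback = llm(
            "Execute this verification program step by step on the answer "
            "and report PASS or FAIL with reasons.\n"
            f"Program:\n{program}\nAnswer:\n{response}"
        )
        if "PASS" in feedback:
            break
        # ProgRe, part 1: refine the response in light of the feedback.
        response = llm(
            f"Task:\n{task}\nPrevious answer:\n{response}\n"
            f"Verification feedback:\n{feedback}\nProduce a corrected answer."
        )
        # ProgRe, part 2: reflect on the program too, since the feedback
        # itself may be wrong in complex reasoning tasks.
        program = llm(
            "Reflect on whether this verification program correctly checks "
            "the task, and output an improved version.\n"
            f"Task:\n{task}\nProgram:\n{program}\nFeedback:\n{feedback}"
        )
    return response
```

The key design point is the dual update at the end of each round: because feedback from self-verification can itself be wrong on complex reasoning tasks, the sketch refines the verification program alongside the response rather than trusting the feedback unconditionally.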


