ProgCo: Program Helps Self-Correction of Large Language Models
Abstract
Self-Correction aims to enable large language models (LLMs) to self-verify and self-refine their initial responses without external feedback. However, LLMs often fail to self-verify effectively and instead generate incorrect feedback, which misleads subsequent refinement and causes self-correction to fail, especially in complex reasoning tasks. In this paper, we propose Program-driven Self-Correction (ProgCo). First, program-driven verification (ProgVe) achieves complex verification logic and extensive validation through self-generated, self-executing verification pseudo-programs. Then, program-driven refinement (ProgRe) receives feedback from ProgVe and conducts dual reflection and refinement on both responses and verification programs, mitigating the misleading effects of incorrect feedback in complex reasoning tasks. Experiments on three instruction-following and mathematical benchmarks show that ProgCo achieves effective self-correction and can further enhance performance when combined with real program tools.
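To make the idea of a verification pseudo-program concrete, here is a small, hypothetical example of the kind of check ProgVe might have the model self-generate and then execute step by step. The task ("answer in exactly 3 bullet points, each under 15 words"), the function name, and the checks are our own illustration, not taken from the paper.

```python
# A hypothetical verification pseudo-program of the kind ProgVe has the
# model self-generate and then "execute" step by step. The task, function
# name, and checks are illustrative assumptions, not the paper's.

def verify(response: str) -> tuple[bool, str]:
    """Return (passed, feedback) for the illustrative formatting task."""
    bullets = [line for line in response.splitlines()
               if line.strip().startswith("-")]
    if len(bullets) != 3:
        return False, f"expected 3 bullet points, found {len(bullets)}"
    for i, bullet in enumerate(bullets, start=1):
        if len(bullet.split()) > 15:
            return False, f"bullet {i} exceeds 15 words"
    return True, "all checks passed"
```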
Community
We propose ProgCo for effective self-correction, consisting of two components: ProgVe and ProgRe (a sketch of the overall loop follows the list below). Our contributions are threefold:
(1) We propose ProgVe, a method enabling LLMs to self-generate and self-execute validation programs for self-verification.
(2) We propose ProgRe, a self-refinement method that is robust to incorrect feedback and jointly optimizes both the response and the verification program.
(3) Experiments and analyses on three datasets verify the effectiveness of our method in self-correction.
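The following is a minimal sketch of how the two components might compose into the overall ProgCo loop. Here `generate` is a hypothetical stand-in for any LLM completion call, and the prompt wording, round limit, and PASS/FAIL convention are our assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the ProgCo loop under stated assumptions: `generate`
# stands in for an LLM completion call; prompts, the round limit, and the
# PASS/FAIL convention are illustrative, not the authors' implementation.

from typing import Callable

def progco(task: str, generate: Callable[[str], str],
           max_rounds: int = 3) -> str:
    response = generate(f"Solve the task:\n{task}")
    # ProgVe step 1: self-generate a verification pseudo-program.
    program = generate(
        f"Write a verification pseudo-program that checks a response "
        f"to this task:\n{task}")
    for _ in range(max_rounds):
        # ProgVe step 2: the model itself "executes" the program on the
        # response and reports PASS or FAIL with feedback.
        feedback = generate(
            f"Execute this verification program on the response and "
            f"report PASS or FAIL with reasons.\n"
            f"Program:\n{program}\nResponse:\n{response}")
        if "PASS" in feedback:
            return response
        # ProgRe: dual reflection and refinement. Refine the response...
        response = generate(
            f"Refine the response using this feedback.\nTask:\n{task}\n"
            f"Response:\n{response}\nFeedback:\n{feedback}")
        # ...and also reflect on the verification program itself, so a
        # wrong program does not keep producing misleading feedback.
        program = generate(
            f"Reflect on this verification program and fix it if its "
            f"feedback seems wrong.\nProgram:\n{program}\n"
            f"Feedback:\n{feedback}")
    return response
```

In practice, each `generate` call would wrap a chat-completion request to whichever model is being corrected.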
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision (2024)
- PerfCodeGen: Improving Performance of LLM Generated Code with Execution Feedback (2024)
- MC-NEST - Enhancing Mathematical Reasoning in Large Language Models with a Monte Carlo Nash Equilibrium Self-Refine Tree (2024)
- AlphaVerus: Bootstrapping Formally Verified Code Generation through Self-Improving Translation and Treefinement (2024)
- GReaTer: Gradients over Reasoning Makes Smaller Language Models Strong Prompt Optimizers (2024)
- EDA-Aware RTL Generation with Large Language Models (2024)
- Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning (2024)