fill out the readme with github version
README.md
---
tags:
- code
size_categories:
- n<1K
---

# DafnyBench: A Benchmark for Formal Software Verification

Dataset & code for our paper [DafnyBench: A Benchmark for Formal Software Verification]()
<br>

## Overview 📊

DafnyBench is the largest benchmark of its kind for training and evaluating machine learning systems for formal software verification, with over 750 Dafny programs.
<br><br>


## Usage 💻

- <b>Dataset</b>: The DafnyBench dataset (782 programs) can be found in the `DafnyBench` directory, which contains the `ground_truth` set & the `hints_removed` set (with hints, i.e. annotations, removed). See the loading sketch below the task overview figure.
- <b>Evaluation</b>: Evaluate LLMs on DafnyBench by asking a model to fill the missing hints back into a test file from the `hints_removed` set and checking whether the reconstructed program can be verified by Dafny. Please refer to the `eval` directory.
<br>


<p align="center">
  <img src="assets/task_overview.jpg" width="600px"/>
</p>
<br><br>
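
To make the dataset layout concrete, here is a minimal Python sketch that pairs each hints-removed test file with its ground-truth counterpart in a local clone. The `_no_hints` suffix is inferred from the `Clover_abs_no_hints.dfy` example used in the evaluation steps and should be treated as an assumption; check the directory contents for the actual naming.
```
from pathlib import Path

# Assumes you are in the root of a local clone of this repository.
root = Path("DafnyBench")

# Assumption: hints-removed files mirror ground-truth files with a `_no_hints`
# suffix, as in the `Clover_abs_no_hints.dfy` example below.
for stripped in sorted((root / "hints_removed").glob("*.dfy")):
    original = root / "ground_truth" / stripped.name.replace("_no_hints", "")
    if original.exists():
        print(f"{stripped.name} -> {original.name}")
```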


## Set Up for Evaluation 🔧

1. Install Dafny on your machine by following [this tutorial](https://dafny.org/dafny/Installation)
2. Clone & `cd` into this repository
3. Set up the environment by running the following lines:
```
python -m venv stats
source stats/bin/activate
pip install -r requirements.txt
cd eval
```
4. Set the environment variable for the root directory:
```
export DAFNYBENCH_ROOT=
```
5. Set the environment variable for the path to the Dafny executable on your machine (for example, `/opt/homebrew/bin/Dafny`):
```
export DAFNY_PATH=
```
6. If you're evaluating an LLM through API access, set the corresponding API key. For example:
```
export OPENAI_API_KEY=
```
7. You can evaluate an LLM on a single test program, such as:
```
python fill_hints.py --model "gpt-4o" --test_file "Clover_abs_no_hints.dfy" --feedback_turn 3 --dafny_path "$DAFNY_PATH"
```
or on the entire dataset (a sketch of the underlying Dafny check follows these steps):
```
export model_to_eval='gpt-4o'
./run_eval.sh
```
<br>
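
For reference, the core check behind these evaluation scripts is simply running Dafny on each reconstructed file. The sketch below is an illustrative approximation using `subprocess` and the `dafny verify` command; the actual logic lives in `fill_hints.py` and `run_eval.sh` and may differ in flags, timeouts, and error handling.
```
import os
import subprocess

def dafny_verifies(dfy_path: str) -> bool:
    """Return True if Dafny verifies the given program."""
    dafny = os.environ["DAFNY_PATH"]  # set in step 5 above
    # `dafny verify` exits with a non-zero code when verification fails.
    result = subprocess.run(
        [dafny, "verify", dfy_path],
        capture_output=True,
        text=True,
        timeout=600,  # illustrative timeout, not the harness's actual setting
    )
    return result.returncode == 0

# Hypothetical reconstructed file path, for illustration only.
print(dafny_verifies("results/reconstructed_files/example.dfy"))
```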
## Contents 📁

- `DafnyBench`
  - A collection of 782 Dafny programs. Each program has a `ground_truth` version that is fully verified with Dafny & a `hints_removed` version that has hints (i.e. annotations) removed
- `eval`
  - Contains scripts to evaluate LLMs on DafnyBench
- `results`
  - `results_summary` - Dataframes that summarize LLMs' success on every test program (see the sketch after this list)
  - `reconstructed_files` - LLM outputs with hints filled back in
- `analysis` - Contains a notebook for analyzing the results
<br>
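
As a starting point for the analysis, a minimal pandas sketch for summarizing one results dataframe is shown below. The file name and the `verified` column are hypothetical placeholders; consult the actual files in `results/results_summary` and the `analysis` notebook for the real schema.
```
import pandas as pd

# Hypothetical file and column names -- inspect results/results_summary for the real ones.
df = pd.read_csv("results/results_summary/gpt-4o_results.csv")

# Fraction of test programs for which the reconstructed file verified.
n_verified = int(df["verified"].sum())
print(f"Verified {n_verified} / {len(df)} programs ({n_verified / len(df):.1%})")
```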
|