Update README.md
At the end of evaluation, the script will print the metrics and store the entire run in a log file. If you want to add your model to the leaderboard, please create a PR with the log file of the run and details about the model.
If we use the existing README.md files in the repositories as the golden output, we would get a score of 56.6 on this benchmark. We can validate this by running the evaluation script with the `--oracle` flag. The oracle run log is available [here]().
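The overall `Score` column in the leaderboard below appears to combine the individual metric columns, presumably via the `weights` dictionary that appears earlier in this README. Here is a minimal sketch of such a weighted aggregation; the weight values, key names, and the `composite_score` helper are illustrative assumptions, not the repository's actual configuration:

```python
# Minimal sketch of aggregating per-metric scores into one composite score.
# The weight values and key names below are illustrative assumptions; the
# real ones live in the `weights` dictionary used by the evaluation script.

def composite_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-metric scores, each on a 0-100 scale."""
    total = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in weights) / total

# Assumed weights over the leaderboard's metric columns:
weights = {
    "bleu": 0.10, "rouge-1": 0.10, "rouge-2": 0.10, "rouge-3": 0.10,
    "cosine-sim": 0.15, "structural-sim": 0.10, "info-ret": 0.15,
    "code-consistency": 0.10, "readability": 0.10,
}

# Per-metric values for one hypothetical run:
metrics = {
    "bleu": 2.55, "rouge-1": 15.27, "rouge-2": 4.97, "rouge-3": 14.86,
    "cosine-sim": 41.09, "structural-sim": 23.94, "info-ret": 72.82,
    "code-consistency": 6.73, "readability": 43.34,
}

print(f"composite score: {composite_score(metrics, weights):.2f}")
```

With the repository's real weights, the same aggregation should reproduce the `Score` column from the per-metric columns of the table below.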
# Leaderboard

| Model | Score | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-3 | Cosine-Sim | Structural-Sim | Info-Ret | Code-Consistency | Readability | Logs |
|:-----:|:-----:|:----:|:-------:|:-------:|:-------:|:----------:|:--------------:|:--------:|:----------------:|:-----------:|:----:|
| gpt-4o-mini-2024-07-18 | 32.51 | 2.55 | 15.27 | 4.97 | 14.86 | 41.09 | 23.94 | 72.82 | 6.73 | 43.34 | [link]() |
| gpt-4o-2024-08-06 | 32.51 | 2.55 | 15.27 | 4.97 | 14.86 | 41.09 | 23.94 | 72.82 | 6.73 | 43.34 | [link]() |
| gemini-1.5-flash-8b-exp-0827 | 32.12 | 1.36 | 14.66 | 3.31 | 14.14 | 38.31 | 23.00 | 70.00 | 7.43 | 46.47 | [link]() |
| gemini-1.5-flash-exp-0827 | 33.43 | 1.66 | 16.00 | 3.88 | 15.33 | 41.87 | 23.60 | 76.50 | 7.86 | 43.34 | [link]() |
| gemini-1.5-pro-exp-0827 | 32.51 | 2.55 | 15.27 | 4.97 | 14.86 | 41.09 | 23.94 | 72.82 | 6.73 | 43.34 | [link]() |
| oracle-maximum-score | 32.51 | 2.55 | 15.27 | 4.97 | 14.86 | 41.09 | 23.94 | 72.82 | 6.73 | 43.34 | [link]() |