codelion committed · Commit 36d5034 · verified · 1 Parent(s): 314657b

Update README.md

Files changed (1): README.md (+44 -0)
README.md CHANGED
@@ -25,4 +25,48 @@ configs:
    path: data/train-*
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- summarization
tags:
- code
size_categories:
- n<1K
---

# Generate README Eval

The generate-readme-eval is a dataset (train split) and benchmark (test split) to evaluate the effectiveness of LLMs
when summarizing entire GitHub repos in the form of a README.md file. The dataset is curated from the top 400 real Python
repositories on GitHub with at least 1000 stars and 100 forks. The script used to generate the dataset can be found [here](_script_for_gen.py).
For the dataset we restrict ourselves to GitHub repositories that are less than 100k tokens in size, so that the entire repo
fits in the context of an LLM in a single call. The `train` split of the dataset can be used to fine-tune your own model; the results
reported here are for the `test` split.
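
If the dataset is published on the Hugging Face Hub, both splits can be loaded with the `datasets` library. This is a minimal sketch; the repository ID below is an assumption based on the dataset name, so adjust it to wherever the dataset is actually hosted.

```
from datasets import load_dataset

# Repo ID assumed from the dataset name; change it if the dataset is hosted elsewhere.
ds = load_dataset("codelion/generate-readme-eval")

train_split = ds["train"]  # for fine-tuning your own model
test_split = ds["test"]    # for running the benchmark

print(train_split)
print(test_split)
```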

To evaluate an LLM on the benchmark we can use the evaluation script given [here](_script_for_eval.py). During evaluation we prompt
the LLM to generate a structured README.md file using the entire contents of the repository (`repo_content`). We evaluate the output
response from the LLM by comparing it with the actual README file of that repository across several different metrics.
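
As a rough sketch only (not the exact prompt or client used by `_script_for_eval.py`), and continuing from the loading snippet above, the evaluation loop boils down to something like the following, where `generate_readme` is a placeholder for whichever LLM client you call:

```
def build_prompt(repo_content):
    # repo_content is the dataset field holding the full contents of the repository.
    return (
        "You are given the full contents of a GitHub repository.\n"
        "Write a well-structured README.md for it.\n\n"
        f"{repo_content}"
    )

for example in test_split:
    prompt = build_prompt(example["repo_content"])
    generated_readme = generate_readme(prompt)  # placeholder: call your LLM of choice
    # ...then score generated_readme against the repository's actual README
```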

In addition to the traditional NLP metrics like BLEU, ROUGE scores and cosine similarity, we also compute custom metrics
that capture structural similarity, code consistency, readability and information retrieval (from code to README). The final score
is generated by taking a weighted average of the metrics. The weights used for the final score are shown below.

```
weights = {
    'bleu': 0.1,
    'rouge-1': 0.033,
    'rouge-2': 0.033,
    'rouge-l': 0.034,
    'cosine_similarity': 0.1,
    'structural_similarity': 0.1,
    'information_retrieval': 0.2,
    'code_consistency': 0.2,
    'readability': 0.2
}
```
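
Since the weights sum to 1.0, the final score is simply the dot product of the per-metric scores and their weights, assuming each metric is normalized to the 0 to 1 range. A small illustration using the `weights` dictionary above with made-up metric values:

```
# Hypothetical per-metric scores for one generated README (all values made up).
metric_scores = {
    'bleu': 0.25,
    'rouge-1': 0.40,
    'rouge-2': 0.30,
    'rouge-l': 0.35,
    'cosine_similarity': 0.70,
    'structural_similarity': 0.60,
    'information_retrieval': 0.55,
    'code_consistency': 0.50,
    'readability': 0.65,
}

final_score = sum(weights[name] * score for name, score in metric_scores.items())
print(f"final score: {final_score:.3f}")  # 0.530 for these example values
```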

At the end of the evaluation the script will print the metrics and store the entire run in a log file. If you want to add your model to the
leaderboard, please create a PR with the log file of the run and details about the model.

# Leaderboard