kaikaidai committed on
Commit 9897c61 · verified · 1 Parent(s): 9b05683

Update common.py

Files changed (1):
  1. common.py +21 -18

common.py CHANGED
@@ -91,61 +91,64 @@ Atla is an applied research organization that trains models as evaluators to cap
  <br><br>
  # Our Mission

- By creating advanced evaluation models, we enable AI developers to identify and fix risks, leading to safer, more reliable AI that can be trusted and widely used. Our aim is to surpass the current state-of-the-art evaluation methods by training models specifically for evaluation. AIs will probably become very powerful, and perform tasks that are difficult for us to verify. We want to enable humans to oversee AI systems that are solving tasks too difficult for humans to evaluate. We have written more about [our approach to scalable oversight](https://www.atla-ai.com/post/scaling-alignment) on our blog.
  <br><br>
  # Judge Arena Policy

  ## Overview

- Judge Arena is an open-source platform dedicated to improving the standard of evaluation of generative AI models in their role as judges. Users can run evals and assess anonymized responses from two competing model judges, choosing the better judgement or declaring a tie. This policy outlines our commitments to maintain a fair, open, and collaborative environment :)

  ## Transparency

  - **Open-Source**: Judge Arena's code is open-source and available on GitHub. We encourage contributions from the community and anyone can replicate or modify the platform to suit their needs. We use proprietary model provider APIs where provided and Together AI's API to serve leading open-source models.
- - **Methodology**: All processes related to model evaluation, rating calculations, and model selection are openly documented. We'd like to ensure that our ranking system is understandable and reproducible by others!
  - **Data Sharing**: Periodically, we'll share 20% of the collected evaluation data with the community. The data collected from Judge Arena is restricted to an anonymized user ID, the final prompt sent, the model responses, the user vote, and the timestamp.

  ## Model Inclusion Criteria

  Judge Arena is specifically designed to assess AI models that function as evaluators (a.k.a judges). This includes but is not limited to powerful general-purpose models and the latest language models designed for evaluation tasks. Models are eligible for inclusion if they meet the following criteria:

- - **Judge Capability**: The model should possess the ability to score AND critique responses, content, or other models' outputs effectively.
- - **Adaptable:** The model must be prompt-able to be evaluate in different scoring formats, for different criteria.
  - **Accessibility**:
  - **Public API Access**: Models accessible through public APIs without restrictive barriers.
  - **Open-Source Models**: Models with publicly available weights that can be downloaded and run by the community.

  ## Leaderboard Management

- - **ELO Ranking System**: Models are ranked on a public leaderboard based on aggregated user evaluations. We use an ELO rating system to rank AI judges on the public leaderboard. Each model begins with an initial rating of 1500 (as is used by the International Chess Federation), and we use a K-factor of 32 to determine the maximum rating adjustment after each evaluation.
  - **Minimum Period**: Listed models remain accessible on Judge Arena for a minimum period of two weeks so they can be comprehensively evaluated.
  - **Deprecation Policy**: Models may be removed from the leaderboard if they become inaccessible or are no longer publicly available.

- This policy might be updated to reflect changes in our practices or in response to community feedback.
-
  # FAQ

- **Isn't this the same as Chatbot Arena?**

  We are big fans of what the LMSYS team have done with Chatbot Arena and fully credit them for the inspiration to develop this. We were looking for a dynamic leaderboard that graded on AI judge capabilities and didn't manage to find one, so we created Judge Arena. This UI is designed especially for evals; to match the format of the model-based eval prompts that you would use in your LLM evaluation / monitoring tool.

- **What are the Evaluator Prompt Templates based on?**
-
- As a quick start, we've set up templates that cover the most popular evaluation metrics out there on LLM evaluation / monitoring tools, often known as 'base metrics'. The data samples used in these were randomly picked from popular datasets from academia - [ARC](https://huggingface.co/datasets/allenai/ai2_arc), [Preference Collection](https://huggingface.co/datasets/prometheus-eval/Preference-Collection), [RewardBench](https://huggingface.co/datasets/allenai/reward-bench), [RAGTruth](https://arxiv.org/abs/2401.00396).
-
- These templates are designed as a starting point to showcase how to interact with the Judge Arena, especially for those less familiar with using LLM judges.
-
- **Why should I trust this leaderboard?**

  We have listed out our efforts to be fully transparent in the policies above. All of the code for this leaderboard is open-source and can be found on our [Github](https://github.com/atla-ai/judge-arena).

- **Who funds this effort?**

  Atla currently funds this out of our own pocket. We are looking for API credits (with no strings attached) to support this effort - please get in touch if you or someone you know might be able to help.

- **What is Atla working on?**

  We are training a general-purpose evaluator that you will soon be able to run in this Judge Arena. Our next step will be to open-source a powerful model that the community can use to run fast and accurate evaluations.
  <br><br>
  # Get in touch
  Feel free to email us at [[email protected]](mailto:[email protected]) or leave feedback on our [Github](https://github.com/atla-ai/judge-arena)!"""

  <br><br>
  # Our Mission

+ By creating advanced evaluation models, we enable AI developers to identify and fix risks, leading to safer, more reliable AI that can be trusted and widely used. Our aim is to surpass the current state-of-the-art evaluation methods by training models specifically for evaluation. AIs will probably become very powerful, and perform tasks that are difficult for us to verify. We want to enable humans to oversee AI systems that are solving tasks too difficult for humans to evaluate.
+ Read more about [our approach to scalable oversight](https://www.atla-ai.com/post/scaling-alignment) on our blog.
  <br><br>
  # Judge Arena Policy

  ## Overview

+ Judge Arena is an open-source platform dedicated to determining which models make the best judges. Users can run evals and assess anonymized responses from two competing model judges, choosing the better judgement or declaring a tie. This policy outlines our commitments to maintain a fair and open environment :)

  ## Transparency

  - **Open-Source**: Judge Arena's code is open-source and available on GitHub. We encourage contributions from the community and anyone can replicate or modify the platform to suit their needs. We use proprietary model provider APIs where provided and Together AI's API to serve leading open-source models.
+ - **Methodology**: All processes related to model evaluation, rating calculations, and model selection are openly documented.
  - **Data Sharing**: Periodically, we'll share 20% of the collected evaluation data with the community. The data collected from Judge Arena is restricted to an anonymized user ID, the final prompt sent, the model responses, the user vote, and the timestamp.
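As a concrete reading of the **Data Sharing** bullet above, a shared record carries only the listed fields. The field names and types in this sketch are illustrative assumptions, not Judge Arena's actual export schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SharedVoteRecord:
    """Hypothetical shape of one shared Judge Arena record (assumed names/types)."""
    anonymized_user_id: str   # anonymized user ID, no account details
    final_prompt: str         # the final evaluation prompt sent to both judges
    response_a: str           # anonymized response from judge A
    response_b: str           # anonymized response from judge B
    user_vote: str            # "A", "B", or "tie"
    timestamp: datetime       # when the vote was cast
```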

  ## Model Inclusion Criteria

  Judge Arena is specifically designed to assess AI models that function as evaluators (a.k.a judges). This includes but is not limited to powerful general-purpose models and the latest language models designed for evaluation tasks. Models are eligible for inclusion if they meet the following criteria:

+ - **Judge Capability**: The model should possess the ability to score AND critique other models' outputs effectively.
+ - **Promptable:** The model must be promptable to evaluate in different scoring formats, for different criteria.
  - **Accessibility**:
  - **Public API Access**: Models accessible through public APIs without restrictive barriers.
  - **Open-Source Models**: Models with publicly available weights that can be downloaded and run by the community.
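To make the **Judge Capability** and **Promptable** criteria above concrete, here is one way a judge model could be prompted to both critique and score a response. The template wording and the 1-5 scale are assumptions for illustration only, not the prompts Judge Arena ships with:

```python
# Hypothetical judge prompt -- illustrates "score AND critique" and being
# promptable for different criteria and scoring formats. Not Judge Arena's template.
JUDGE_PROMPT = """You are an expert evaluator.

Criterion: {criterion}
Scoring format: an integer from 1 (poor) to 5 (excellent)

User input:
{user_input}

Model response to evaluate:
{model_response}

Write a brief critique, then give your score on a new line as "Score: <1-5>"."""

# Swapping the criterion or the scoring-format line yields a different eval
# without retraining the judge -- that is what "promptable" means here.
prompt = JUDGE_PROMPT.format(
    criterion="factual accuracy",
    user_input="When did the Apollo 11 mission land on the Moon?",
    model_response="Apollo 11 landed on the Moon on 20 July 1969.",
)
```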

  ## Leaderboard Management

+ - **ELO Ranking System**: Models are ranked on a public leaderboard based on aggregated user evaluations, using an ELO rating system. Each model begins with an initial rating of 1200, and we use a K-factor of 32 to determine the maximum rating adjustment after each evaluation.
  - **Minimum Period**: Listed models remain accessible on Judge Arena for a minimum period of two weeks so they can be comprehensively evaluated.
  - **Deprecation Policy**: Models may be removed from the leaderboard if they become inaccessible or are no longer publicly available.
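For reference, a minimal sketch of the Elo update described in the **ELO Ranking System** bullet above, using the stated initial rating of 1200 and K-factor of 32. The function names and tie handling are illustrative assumptions, not Judge Arena's implementation:

```python
# Standard Elo update, sketched with the parameters stated above (assumed details).
INITIAL_RATING = 1200
K_FACTOR = 32

def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected probability that judge A beats judge B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, score_a: float) -> tuple[float, float]:
    """Apply one vote: score_a is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    delta = K_FACTOR * (score_a - expected_score(rating_a, rating_b))
    return rating_a + delta, rating_b - delta

# Two fresh judges, user prefers A: each rating moves by 16 points.
print(update_elo(INITIAL_RATING, INITIAL_RATING, 1.0))  # (1216.0, 1184.0)
```

The K-factor caps the per-vote swing at 32 points, which is reached only when the outcome is maximally surprising.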

+ *This policy might be updated to reflect changes in our practices or in response to community feedback.*
+ <br><br>
  # FAQ

+ - **Isn't this the same as Chatbot Arena?**

  We are big fans of what the LMSYS team have done with Chatbot Arena and fully credit them for the inspiration to develop this. We were looking for a dynamic leaderboard that graded on AI judge capabilities and didn't manage to find one, so we created Judge Arena. This UI is designed especially for evals; to match the format of the model-based eval prompts that you would use in your LLM evaluation / monitoring tool.

+ - **Why should I trust this leaderboard?**

  We have listed out our efforts to be fully transparent in the policies above. All of the code for this leaderboard is open-source and can be found on our [Github](https://github.com/atla-ai/judge-arena).

+ - **Who funds this effort?**

  Atla currently funds this out of our own pocket. We are looking for API credits (with no strings attached) to support this effort - please get in touch if you or someone you know might be able to help.

+ - **What is Atla working on?**

  We are training a general-purpose evaluator that you will soon be able to run in this Judge Arena. Our next step will be to open-source a powerful model that the community can use to run fast and accurate evaluations.
  <br><br>
  # Get in touch
  Feel free to email us at [[email protected]](mailto:[email protected]) or leave feedback on our [Github](https://github.com/atla-ai/judge-arena)!"""
+
+
+
+ #**What are the Evaluator Prompt Templates based on?**
+
+ #As a quick start, we've set up templates that cover the most popular evaluation metrics out there on LLM evaluation / monitoring tools, often known as 'base metrics'. The data samples used in these were randomly picked from popular datasets from academia - [ARC](https://huggingface.co/datasets/allenai/ai2_arc), [Preference Collection](https://huggingface.co/datasets/prometheus-eval/Preference-Collection), [RewardBench](https://huggingface.co/datasets/allenai/reward-bench), [RAGTruth](https://arxiv.org/abs/2401.00396).
+
+ #These templates are designed as a starting point to showcase how to interact with the Judge Arena, especially for those less familiar with using LLM judges.