Updated language
common.py CHANGED
@@ -12,7 +12,7 @@ BATTLE_RULES = """
 - Both AIs stay anonymous - if either reveals its identity, the duel is void
 - Choose the LLM judge that most aligns with your judgement
 - If both score the same - choose the critique that you prefer more!
-<br
+<br>
 """

 # CSS Styles
@@ -39,12 +39,12 @@ CSS_STYLES = """
 # Default Eval Prompt
 EVAL_DESCRIPTION = """
 ## 📝 Instructions
-**Precise evaluation criteria
+**Precise evaluation criteria lead to more consistent and reliable judgments.** A good Evaluator Prompt should include the following elements:
 - Evaluation criteria
 - Scoring rubric
 - (Optional) Examples\n

-**Any variables you define in your prompt using {{double curly braces}} will automatically map to the corresponding input fields under "Sample to evaluate" section on the right.**
+**Any variables you define in your prompt using {{double curly braces}} will automatically map to the corresponding input fields under the "Sample to evaluate" section on the right.**

 <br><br>
 """
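
The updated EVAL_DESCRIPTION tells users that {{double curly braces}} placeholders in an Evaluator Prompt map to input fields in the "Sample to evaluate" panel. As a rough illustration only (this is not the Space's actual code; the prompt text, the regex, and the helper names extract_variables / fill_prompt are assumptions), that mapping could be implemented with a simple placeholder scan and substitution:

import re

# Rough sketch only: how {{variable}} placeholders in an Evaluator Prompt
# could be discovered and then filled from the "Sample to evaluate" fields.
# The prompt text and helper names below are illustrative, not taken from common.py.

EVAL_PROMPT = """Rate the response on a 1-5 scale.
Input: {{input}}
Response: {{response}}
"""

PLACEHOLDER = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def extract_variables(prompt: str) -> list[str]:
    """Return every {{name}} placeholder, so the UI can render one field per variable."""
    return PLACEHOLDER.findall(prompt)

def fill_prompt(prompt: str, fields: dict[str, str]) -> str:
    """Substitute each placeholder with the value of the matching input field."""
    return PLACEHOLDER.sub(lambda m: fields[m.group(1)], prompt)

print(extract_variables(EVAL_PROMPT))  # ['input', 'response']
print(fill_prompt(EVAL_PROMPT, {
    "input": "Summarize the article in one sentence.",
    "response": "The article argues that ...",
}))

Keeping the placeholder syntax to bare {{name}} tokens means the same regex can drive both the field auto-detection and the final substitution, which is presumably why the instructions call the mapping automatic.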