GihhArwtw committed
Commit b5b5498 · 1 Parent(s): 2dbd962

update readmes and eval script; the same with official DriveLM track
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
57
  # Video files - compressed
58
  *.mp4 filter=lfs diff=lfs merge=lfs -text
59
  *.webm filter=lfs diff=lfs merge=lfs -text
60
+ ground_truth/drivelm_val.json filter=lfs diff=lfs merge=lfs -text
COMPETITION_DESC.md CHANGED
@@ -1 +1,56 @@
1
- Sample competition description
1
+ # **Driving with Language Official Leaderboard**
2
+
3
+ ## Overview
4
+
5
+ Welcome to the official leaderboard of `driving with language`.
6
+
7
+ By incorporating the language modality, this task connects Vision Language Models (VLMs) with autonomous driving systems. Models bring in the reasoning ability of LLMs to make decisions, pursuing generalizable and explainable driving behavior. Given multi-view images as input, models are required to answer questions covering various aspects of driving.
8
+
9
+ Besides the official leaderboard, if you want to participate in the PRCV driving-with-language challenge, it is a **strict requirement** to register your team by filling in this <a href="https://docs.google.com/forms/d/e/1FAIpQLSef_L4L9jXV_88pXkuFmaloifhRuFjVARbjsV-8GWETc6aNCA/viewform" target="_blank">Google Form</a>. The registration information can be edited until **TBA**. If you just want to submit your result to the official leaderboard, you can ignore this Google Form.
10
+
11
+ If you want to participate in the PRCV driving-with-language challenge, please follow <a href="http://prcv.cn/?competition_130/" target="_blank">PRCV challenge general rules</a>. If you just want to submit your result to the official leaderboard, please check the <a href="https://opendrivelab.com/challenge2024/#general_rules" target="_blank">general rules</a> and <a href="https://github.com/OpenDriveLab/DriveLM/tree/main/challenge" target="_blank">track details</a>. For now we inherit the general rules from the CVPR AGC 2024.
12
+
13
+ ## Specific Rules
14
+
15
+ - We do not restrict the input modality or the number of history frames used for model inference. However, we do not allow using any human-labelled annotations or nuScenes-provided ground-truth annotations (including but not limited to bounding boxes, maps, and lidar segmentation). Also please note that our baseline model only uses camera input.
16
+ - Using offline labels extracted from the question text is prohibited. Please see the <a href="https://docs.google.com/document/d/1QguVBhv03lIEsbrNOKqx5MyDpaS-fPxhjEWR39pBMTw/edit#heading=h.42gawos02r5l" target="_blank">statement</a>.
17
+
18
+ ## Important Dates
19
+
20
+ | Important Event | Date |
21
+ | --- | --- |
22
+ | **Test Server Open** | July 12, 2024 |
23
+ | **Leaderboard Public** | July 12, 2024 |
24
+ | **Test Server Close** | TBA |
25
+
26
+
27
+ ## Baseline
28
+ Everything you need is in our <a href="https://github.com/OpenDriveLab/DriveLM/tree/main/challenge" target="_blank">DriveLM Challenge</a> repo.
29
+
30
+
31
+ ## Dataset
32
+
33
+ This track is based on the **DriveLM** dataset we proposed. Please refer to the `Dataset` tab of the competition space.
34
+
35
+ ## Primary Metrics
36
+ - Language Evaluation
37
+
38
+ - Sub-metrics (BLEU, ROUGE_L, CIDEr): standard unsupervised automated metrics for Natural Language Generation (NLG).
39
+
40
+ - Accuracy
41
+
42
+ - Ratio of correctly predicted samples to the total number of samples.
43
+
44
+ - ChatGPT Score
45
+
46
+ - ChatGPT is used to score the similarity between the predicted answer and the ground-truth answer.
47
+
48
+ - Match Score
49
+
50
+ - Ratio of correctly predicted important objects to the total number of objects.
51
+
52
+ The **final score** is a weighted average of the above scores, with ChatGPT Score, Language Score, Match Score, and Accuracy weighted 0.4, 0.2, 0.2, and 0.2, respectively.
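+
+ For reference, the sketch below mirrors the normalization and weighting logic of the evaluation script (`metric.py` in this repository); the component scores are placeholders, not real results.
+
+ ```python
+ # Minimal sketch of the final-score computation (placeholder component scores).
+ chatgpt = 72.0    # ChatGPT Score, 0-100
+ language = 0.55   # combined Language Score (BLEU / ROUGE_L / CIDEr), roughly in [0, 1]
+ match = 48.0      # Match Score, 0-100
+ accuracy = 0.6    # Accuracy, in [0, 1]
+
+ components = [chatgpt / 100.0, language, match / 100.0, accuracy]
+ weights = [0.4, 0.2, 0.2, 0.2]
+ final_score = sum(w * s for w, s in zip(weights, components))
+ print(final_score)  # 0.614
+ ```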
53
+
54
+
55
+ ## Submission Instructions
56
+ Participants are expected to submit their predictions as a Hugging Face model hub repo. Please refer to **Submission Information** for detailed steps.
DATASET_DESC.md CHANGED
@@ -1 +1,16 @@
1
- Sample dataset description
1
+ # DriveLM for Driving with Language
2
+ - <a href="https://github.com/OpenDriveLab/DriveLM" target="_blank">Github</a> | <a href="https://arxiv.org/abs/2312.14150" target="_blank">Paper</a>
3
+
4
+ - Point of contact: [Chonghao (司马崇昊)](mailto:[email protected])
5
+
6
+ ## Dataset Description
7
+
8
+ Please visit <a href="https://github.com/OpenDriveLab/DriveLM" target="_blank">DriveLM: Driving with Graph Visual Question Answering</a> for details.
9
+
10
+ ## Dataset Download
11
+
12
+ <a href="https://github.com/OpenDriveLab/DriveLM/tree/main/challenge" target="_blank">The baseline code</a> can run on both the full-scale and the demo train data. All the code in the challenge repo (including baseline training / inference and evaluation) supports the demo train data, which is in the same format as the full-scale train data.
13
+
14
+ For dataset download, you can visit the following pages.
15
+
16
+ - <a href="https://github.com/OpenDriveLab/DriveLM/tree/main/challenge#drivelm" target="_blank">DriveLM-nuScenes version-1.1 dataset download</a>.
RULES.md CHANGED
@@ -1 +1,10 @@
1
- Sample rules
1
+ # Rules
2
+
3
+ ## General Rules
4
+
5
+ If you just want to submit your result to the official leaderboard, please check the <a href="https://opendrivelab.com/challenge2024/#general_rules" target="_blank">general rules</a> and <a href="https://github.com/OpenDriveLab/DriveLM/tree/main/challenge" target="_blank">track details</a>. For now we inherit the general rules from the CVPR AGC 2024.
6
+
7
+ ## Specific Rules
8
+
9
+ - We do not restrict the input modality or the number of history frames used for model inference. However, we do not allow using any human-labelled annotations or nuScenes-provided ground-truth annotations (including but not limited to bounding boxes, maps, and lidar segmentation). Also please note that our baseline model only uses camera input.
10
+ - Using offline labels extracted from the question text is prohibited. Please see the <a href="https://docs.google.com/document/d/1QguVBhv03lIEsbrNOKqx5MyDpaS-fPxhjEWR39pBMTw/edit#heading=h.42gawos02r5l" target="_blank">statement</a>.
SUBMISSION_DESC.md CHANGED
@@ -1 +1,103 @@
1
- Sample submission description
1
+ # Submission
2
+
3
+ ## Submission Instruction
4
+
5
+ Please refer to the [challenge README](https://github.com/OpenDriveLab/DriveLM/blob/main/challenge/README.md) on Github to prepare data and train your model. Please evaluate your [output.json](https://github.com/OpenDriveLab/DriveLM/blob/main/challenge/output.json) locally before submitting to the test server.
6
+
7
+ 1. Prepare your result
8
+
9
+ Open [prepare_submission.py](https://github.com/OpenDriveLab/DriveLM/blob/main/challenge/prepare_submission.py) and fill in the following information starting at line 4:
10
+ ```
11
+ method = "" # <str> -- name of the method
12
+ team = "" # <str> -- name of the team, !!!identical to the Google Form!!!
13
+ authors = [""] # <list> -- list of str, authors
14
+ email = "" # <str> -- e-mail address
15
+ institution = "" # <str> -- institution or company
16
+ country = "" # <str> -- country or region
17
+ ```
18
+ While other fields can change between submissions, make sure you <font color=red> always use the team name submitted on the Google registration form for the `team` field, NOT the anonymous team name to be shown on the leaderboard</font>.
19
+ Then run this file:
20
+ ```bash
21
+ # make sure you are under ./challenge
22
+ python prepare_submission.py
23
+ ```
24
+ This will generate `submission.json` with your information and result. An [example](https://github.com/OpenDriveLab/DriveLM/blob/main/challenge/submission.json) is given in this folder.
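+
+ For reference, the evaluation script (`metric.py` in this space) reads the `team` field and the `results` list, where each entry has an `id` of the form `{scene_token}_{frame_token}_{question_index}` and an `answer` string. A minimal, abridged sketch of that structure (all values below are placeholders; the remaining metadata fields are filled in by `prepare_submission.py`):
+
+ ```python
+ # Abridged sketch of the structure the evaluation script expects (placeholder values).
+ submission = {
+     "team": "my-team",  # must match the team name from the Google registration form
+     # ... method / authors / email / institution / country added by prepare_submission.py ...
+     "results": [
+         {"id": "<scene_token>_<frame_token>_0", "answer": "There is a pedestrian crossing ahead."},
+         # one entry per question in the validation set
+     ],
+ }
+ ```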
25
+
26
+ 2. Upload your result as **a Hugging Face model**
27
+
28
+ Click your profile picture on the top right of the Hugging Face website, and select `+ New Model`. Create a new model repository, and upload the `submission.json` file.
29
+
30
+ Note that private models are also accepted by the competition space.
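+
+ If you prefer to create the repository and upload programmatically, here is a minimal sketch using `huggingface_hub` (the repo id below is a placeholder):
+
+ ```python
+ # Sketch: create a (private) model repo and upload submission.json to it.
+ from huggingface_hub import HfApi
+
+ api = HfApi()  # assumes you are already logged in, e.g. via `huggingface-cli login`
+ repo_id = "your-username/drivelm-submission"  # placeholder repo name
+ api.create_repo(repo_id=repo_id, repo_type="model", private=True, exist_ok=True)
+ api.upload_file(
+     path_or_fileobj="submission.json",
+     path_in_repo="submission.json",
+     repo_id=repo_id,
+     repo_type="model",
+ )
+ ```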
31
+
32
+ 3. Submit your result and evaluate on the test server
33
+
34
+ Click `New Submission` on the left panel of the competition space. Paste the link of the Hugging Face model you created under `Hub model`. Then click `Submit`.
35
+
36
+ <font color=red> Note: you can make up to 3 submissions per day. </font>
37
+
38
+
39
+ ## FAQ
40
+
41
+ ### How to View My Submissions?
42
+
43
+ You can check the status of your submissions in the `My submissions` tab of the competition space.
44
+
45
+ Please refer to [these slides](https://docs.google.com/presentation/d/1bicxoR_L3t05p5xw-qZM0Dj5KdJhjynqLM0Rck0qdcI/edit?usp=sharing) for an explanation of each score.
46
+
47
+ You can select a submission and click `Update Selected Submissions` at the bottom to update its evaluation status on the private leaderboard. Please note that <font color=red>the public score and the private score are exactly the same</font> in our case, so please ignore the descriptions in the `My Submissions` tab.
48
+
49
+ ### The `New Submission` page shows `Invalid Token` when I click `Submit`, what should I do?
50
+
51
+ This means you are no longer logged in to the current competition space, or the space has automatically logged you out due to inactivity (more than a day).
52
+
53
+ Please refresh the page, click `Login with Hugging Face` at the bottom of the left panel, and resubmit.
54
+
55
+ ### Can I Submit Without Making My Submission Public?
56
+
57
+ Of course. The competition space accepts Hugging Face private models. In fact, we recommend that participants submit private models to keep their submissions private.
58
+
59
+ ### Will My Evaluation Status Be Visible to Others?
60
+
61
+ The public leaderboard will be open with the best results of all teams about a week before the competition ends.
62
+
63
+ **Note that** you can change your team name even after the competition ends. Thus, if you want to stay anonymous on the public leaderboard, you can first use a temporary team name and change it to your real team name after the competition ends.
64
+
65
+ ### My evaluation status shows `Failed`, how can I get the error message?
66
+
67
+ First, make sure your submission is in the correct format as described in [Submission Instruction](#submission-instruction) and that you uploaded the correct Hugging Face **model** link (in the format `Username/model`) in `New Submission`.
68
+
69
+ The error message is listed in the `Submission Comment` column under the `My Submissions` tab.
70
+
71
+ ### I cannot visit the `My Submissions` page, what should I do?
72
+
73
+ Chances are that you are not logged in to the current competition space.
74
+
75
+ Please refresh the page and click `Login with Hugging Face` at the bottom of the left panel.
76
+
77
+ ### If I encounter a reshape error, what should I do?
78
+
79
+ You should first refer to this [location](https://github.com/OpenDriveLab/DriveLM/blob/main/challenge/evaluation.py#L89). Most of the reshape errors occur here.
80
+
81
+
82
+ ### Finally, which dataset do we submit to the competition?
83
+
84
+ Please refrain from using demo data. Instead, utilize the [validation data](https://drive.google.com/file/d/1fsVP7jOpvChcpoXVdypaZ4HREX1gA7As/view?usp=sharing) for inference and submission to the evaluation server.
85
+
86
+ ### I encountered KeyError: 'b789de07180846cc972118ee6d1fb027_b0e6fd5561454b2789c853e5350557a8_0' in my Submission Comment, what should I do?
87
+ If you see a random UUID in the Submission Comment, the error happens on [this line](https://github.com/OpenDriveLab/DriveLM/blob/030265cb243dd5b88bd0e20130c1a72e68bcf14e/challenge/evaluation.py#L178); you can try to reproduce the error locally. Most likely, this is due to not using the validation data mentioned above.
88
+
89
+ ### My submission is stuck at `Processing`, what should I do?
90
+ This is likely due to a server error on the Hugging Face side; please wait for a while and submit again. If the error persists, contact our challenge host below.
91
+
92
+ ### My error information is not listed here, what should I do?
93
+ If you confirm that the submission format is correct, please contact the challenge host [Chonghao Sima](mailto:[email protected]) via email. Please include the **Submission ID** of the corresponding submission in the email. The Submission ID can be found in the `My Submissions` tab.
94
+
95
+ ```
96
+ Email Subject:
97
+ [OFFICIAL DRIVELM] Failed submission - {Submission ID}
98
+ Body:
99
+ Your Name: {}
100
+ Team Name: {}
101
+ Institution / Company: {}
102
+ Email: {}
103
+ ```
ground_truth/drivelm_val.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5632298786bf2c670832a1054962e6870c3814610c987716728be2954c96225d
3
+ size 10838947
metric.py ADDED
@@ -0,0 +1,636 @@
1
+ import re
2
+ import argparse
3
+ import json
4
+ import time
5
+ import copy
6
+ import traceback
7
+ import random
8
+ import requests
9
+ import numpy as np
10
+ import language_evaluation
11
+ from multiprocessing import Pool
12
+ import openai
13
+ from huggingface_hub import HfApi, hf_hub_download
14
+
15
+ USE_INTERNAL = True
16
+
17
+ GROUND_TRUTH = "ground_truth/drivelm_val.json"
18
+ FORMAT = "json"
19
+
20
+ """Error handling code"""
21
+ TEAM_MESSAGE_TEMPLATE = "The team name in your submission is [<TEAM_NAME>].\n"
22
+ def update_teamname_to_submission_comment(params, team_name):
23
+ hfapi = HfApi()
24
+ team_info_in_repo = "submission_info/{}.json".format(params.team_id)
25
+ team_info_file = hf_hub_download(
26
+ repo_id=params.competition_id,
27
+ filename=team_info_in_repo,
28
+ token=params.token,
29
+ repo_type="dataset",
30
+ )
31
+ team_info = json.load(open(team_info_file, "r"))
32
+
33
+ for sub_info in team_info["submissions"]:
34
+ if sub_info["submission_id"] == params.submission_id:
35
+ sub_info["submission_comment"] = TEAM_MESSAGE_TEMPLATE.replace("<TEAM_NAME>", team_name) + sub_info["submission_comment"]
36
+ break
37
+
38
+ with open(team_info_file, "w") as f:
39
+ json.dump(team_info, f, indent=4)
40
+
41
+ hfapi.upload_file(
42
+ path_or_fileobj=team_info_file,
43
+ path_in_repo=team_info_in_repo,
44
+ repo_id=params.competition_id,
45
+ repo_type="dataset",
46
+ token=params.token,
47
+ )
48
+ return
49
+
50
+ ERROR_MESSAGE_TEMPLATE = "[ERROR] [<ERROR_MESSAGE>]\n"
51
+ def update_error_message_to_submission_comment(params, error_message):
52
+ hfapi = HfApi()
53
+ team_info_in_repo = "submission_info/{}.json".format(params.team_id)
54
+ team_info_file = hf_hub_download(
55
+ repo_id=params.competition_id,
56
+ filename=team_info_in_repo,
57
+ token=params.token,
58
+ repo_type="dataset",
59
+ )
60
+ team_info = json.load(open(team_info_file, "r"))
61
+
62
+ for sub_info in team_info["submissions"]:
63
+ if sub_info["submission_id"] == params.submission_id:
64
+ sub_info["submission_comment"] = ERROR_MESSAGE_TEMPLATE.replace("[<ERROR_MESSAGE>]", error_message) + sub_info["submission_comment"]
65
+ break
66
+
67
+ with open(team_info_file, "w") as f:
68
+ json.dump(team_info, f, indent=4)
69
+
70
+ hfapi.upload_file(
71
+ path_or_fileobj=team_info_file,
72
+ path_in_repo=team_info_in_repo,
73
+ repo_id=params.competition_id,
74
+ repo_type="dataset",
75
+ token=params.token,
76
+ )
77
+ return
78
+
79
+ def exception_handler_decorator(func):
80
+ def wrapper(params):
81
+ try:
82
+ return func(params)
83
+ except Exception as e:
84
+ hfapi = HfApi()
85
+ team_info_in_repo = "submission_info/{}.json".format(params.team_id)
86
+ team_info_file = hf_hub_download(
87
+ repo_id=params.competition_id,
88
+ filename=team_info_in_repo,
89
+ token=params.token,
90
+ repo_type="dataset",
91
+ )
92
+ team_info = json.load(open(team_info_file, "r"))
93
+
94
+ for sub_info in team_info["submissions"]:
95
+ if sub_info["submission_id"] == params.submission_id:
96
+ sub_info["error_message"] = str(e) + '\n\n' + traceback.format_exc()
97
+ break
98
+
99
+ with open(team_info_file.replace('.json', '_error.json'), "w") as f:
100
+ json.dump(sub_info, f, indent=4)
101
+
102
+ hfapi.upload_file(
103
+ path_or_fileobj=team_info_file.replace('.json', '_error.json'),
104
+ path_in_repo=f'submission_error/{params.submission_id}.json',
105
+ repo_id=params.competition_id,
106
+ repo_type="dataset",
107
+ token=params.token,
108
+ )
109
+ raise e
110
+ return wrapper
111
+
112
+ """DriveLM Specific"""
113
+ # API_KEYS = ["sk-NuE4a50TeXtSPVom099111B600C9435eAdCc445fE3FfFa72"] # need openai.api_base = "https://api.chatweb.plus/v1"
114
+
115
+ API_KEYS = [
116
+ # "sk-proj-s8DmEUz3c0fMsXNvoK2eT3BlbkFJgxqZ2ZN35VfGbf6pz32K", # batch 7
117
+ # "sk-proj-WJA3qO4cTryRDhOEpB4QT3BlbkFJTxgvO3xyMOEoWsga3iIj",
118
+ # "sk-proj-UXLnPZ54GqsFt8PUSUcfT3BlbkFJbcBmGdsnkUxKv9DMUUCv",
119
+ # "sk-proj-STphQ681iiXwW3ooY0sKT3BlbkFJYlqJFZr3lVqRNVdFh87a",
120
+ # "sk-proj-TnvZchmH06y3cKzixZ8XT3BlbkFJfMog1yTeQn1sX1kgLDw7",
121
+ # "sk-proj-GBuvgZ7HbVcrgXWMx7poT3BlbkFJBPq7Wjl3DC7WCizEgO1y",
122
+ # "sk-proj-iNV3Na6hlyFAVcDRAasiT3BlbkFJJavS6Od5RdDFyefuBRwq", # batch 8
123
+ # "sk-proj-DKvaprWQ8QSuV3Dd2LUmT3BlbkFJueAfFMUp1LZOeRoes1vD",
124
+ # "sk-proj-HHziJI12Spjjj0UeFc4ET3BlbkFJ9DjK4XKuLtBPCj88kMsq",
125
+ # "sk-proj-zImCrO7m5gjswwZhbMpOT3BlbkFJthf6TsqCiphXB4DtQxUG",
126
+ # "sk-proj-KWKBI0kUMINMetoYwJWST3BlbkFJ3EdKYkLUkm4tzXcV8dbl",
127
+ # "sk-proj-kv1aJThY7iupJ6qXdZpXT3BlbkFJSVIN1D2oJk7n60DXKngX",
128
+ # "sk-proj-wtVxu5lh9rKbl1FUBXwOT3BlbkFJ2mF0RsqOEzfUpaKGVQ45",
129
+ # "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", # batch 9
130
+ # "sk-wrFnTE0zlU1UngGmE8Dd1eC4880142Cd99A3CeB33eAe8d1e", # need openai.api_base = "https://api.claudeshop.top/"
131
+ # "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",
132
+ # "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",
133
+ # "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",
134
+ # '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',
135
+ # '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',
136
+ # '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',
137
+ # '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',
138
+ ######################### new api key ##############################
139
+ 'sk-wrFnTE0zlU1UngGmE8Dd1eC4880142Cd99A3CeB33eAe8d1e',
140
+ 'sk-DLon77JND74AgaCLKZoS0kZAdmUb3jU3oTzSvHfclS40flhS',
141
+ 'sk-sXnrftCjtiduDy40ecIT3h0Xu0H8YM8dWATM06TLH7Lt0zpv',
142
+ ]
143
+
144
+ class KeyManager:
145
+ def __init__(self):
146
+ self.keys = API_KEYS
147
+ # Initialize all keys as "unused"
148
+ self.status = ['unused' for _ in API_KEYS]
149
+
150
+ def get_key(self):
151
+ # Try to find an unused key first
152
+ unused_indices = [i for i, s in enumerate(self.status) if s == 'unused']
153
+ if unused_indices:
154
+ index = random.choice(unused_indices)
155
+ self.status[index] = 'using'
156
+ return self.keys[index], index
157
+
158
+ print("No unused key available! Assigning a key currently in use.")
159
+ # If no unused key is available, try a 'using' key
160
+ using_indices = [i for i, s in enumerate(self.status) if s == 'using']
161
+ if using_indices:
162
+ index = random.choice(using_indices)
163
+ return self.keys[index], index
164
+
165
+ # No suitable key is left
166
+ raise Exception("No available key left!")
167
+
168
+ def set_fail(self, index):
169
+ if 0 <= index < len(self.keys):
170
+ self.status[index] = 'fail'
171
+ else:
172
+ raise Exception("Error: Index out of bounds")
173
+
174
+ def __str__(self):
175
+ return "\n".join(f"{self.keys[i]}: {self.status[i]}" for i in range(len(self.keys)))
176
+
177
+ key_manager = KeyManager()
178
+
179
+
180
+ class GPTEvaluation:
181
+ def __init__(self, api_keys):
182
+ self.api_keys = api_keys
183
+ self._key_use = random.randint(0, len(self.api_keys)-1)
184
+ self._switch_key()
185
+
186
+ def _switch_key(self):
187
+ self._key_use = (self._key_use + 1) % len(self.api_keys)
188
+ openai.api_key = self.api_keys[self._key_use]
189
+ print("Switched to key: ", self._key_use)
190
+ # openai.api_base = "https://api.claudeshop.top/"
191
+
192
+ def call_chatgpt(self, chatgpt_messages, max_tokens=40, model="gpt-3.5-turbo"): # default model: gpt-3.5-turbo
193
+ response = openai.chat.completions.create(
194
+ model=model, messages=chatgpt_messages, temperature=0.6, max_tokens=max_tokens
195
+ )
196
+ reply = response.choices[0].message.content
197
+ total_tokens = response.usage.total_tokens
198
+ return reply, total_tokens
199
+
200
+ def prepare_chatgpt_message(self, prompt):
201
+ system_message = "an evaluator who rates my answer based on the correct answer"
202
+ messages = [{"role": "system", "content": system_message}]
203
+ messages.append({"role": "user", "content": "{}".format(prompt)})
204
+
205
+ return messages
206
+
207
+ def forward(self, data):
208
+ answer, GT = data
209
+ prompts = "Rate my answer based on the correct answer out of 100, with higher scores indicating that the answer is closer to the correct answer, and you should be accurate to single digits like 62, 78, 41,etc. Output the number only, no need for explanation. "
210
+ prompts = prompts + "This is the correct answer: " + GT + ". This is my answer: " + answer
211
+
212
+ output = ""
213
+ messages = self.prepare_chatgpt_message(prompts)
214
+ reply, total_tokens = self.call_chatgpt(messages, max_tokens=3000)
215
+
216
+ time.sleep(2) # default 1
217
+
218
+ output += reply
219
+ output += "\n\n"
220
+
221
+ output = output[:-2]
222
+
223
+ return output
224
+
225
+ class GPTEvaluationInternal:
226
+ def __init__(self):
227
+ self.api_key, self.key_idx = key_manager.get_key()
228
+ print("Initial key id: ", self.key_idx)
229
+ self.query_count = 0
230
+ # self.client = openai.Client(api_key=api_key)
231
+
232
+ self.prompts_p1 = ["Rate my answer based on the correct answer out of 100, ", "Please score my answer out of 100 compared with correct answer, "]
233
+ self.prompts_p2 = ["with higher scores indicating that the answer is closer to the correct answer, ", "higher is better, ", ""]
234
+ self.prompts_p3 = ["you should be accurate to single digits like 62, 78, 41, etc. ", "be accurate to integer value. ", ""]
235
+ self.prompts_p4 = ["Output the number only, no need for explanation. ", "Please respond the number only. ", "Please answer the score only. "]
236
+
237
+ def call_chatgpt(self, chatgpt_messages, max_tokens=40, model="gpt-3.5-turbo"): # default model: gpt-3.5-turbo
238
+ while True:
239
+ try:
240
+ # old
241
+ # client = openai.Client(api_key=self.api_key)
242
+ # response = client.chat.completions.create(
243
+ # model=model, messages=chatgpt_messages, temperature=0.6, max_tokens=max_tokens
244
+ # )
245
+
246
+ # new
247
+ Baseurl = "https://api.claudeshop.top"
248
+ url = Baseurl + "/v1/chat/completions"
249
+
250
+ headers = {
251
+ 'Accept': 'application/json',
252
+ 'Authorization': f'Bearer {self.api_key}',
253
+ 'User-Agent': 'Apifox/1.0.0 (https://apifox.com)',
254
+ 'Content-Type': 'application/json'
255
+ }
256
+
257
+ payload = json.dumps({
258
+ "model": model,
259
+ "messages": [
260
+ {
261
+ "role": "system",
262
+ "content": chatgpt_messages[0]["content"]
263
+ },
264
+ {
265
+ "role": "user",
266
+ "content": chatgpt_messages[1]["content"]
267
+ }
268
+ ],
269
+ "temperature": 0.6,
270
+ "max_tokens": max_tokens,
271
+ })
272
+
273
+ response = requests.request("POST", url, headers=headers, data=payload)
274
+ content = response.json()
275
+
276
+ except Exception as e:
277
+ print("Failed prompt: ", chatgpt_messages)
278
+ print("Key id ", self.key_idx, " fail with error after querying ", self.query_count, " times: ", e)
279
+ print(type(e))
280
+ if ("We've encountered an issue with repetitive patterns in your prompt." in str(e)):
281
+ return '00', 0
282
+ key_manager.set_fail(self.key_idx)
283
+ self.api_key, self.key_idx = key_manager.get_key()
284
+ self.query_count = 0
285
+ continue
286
+ break
287
+
288
+ self.query_count += 1
289
+
290
+ # old
291
+ # reply = response.choices[0].message.content
292
+ # total_tokens = response.usage.total_tokens
293
+
294
+ # new
295
+ print(content)
296
+ reply = content['choices'][0]['message']['content']
297
+ total_tokens = content['usage']['total_tokens']
298
+
299
+ return reply, total_tokens
300
+
301
+ def prepare_chatgpt_message(self, prompt):
302
+ system_message = "an evaluator who rates my answer based on the correct answer"
303
+ messages = [{"role": "system", "content": system_message}]
304
+ messages.append({"role": "user", "content": "{}".format(prompt)})
305
+
306
+ return messages
307
+
308
+ def forward(self, chunk_data):
309
+ # self.client = client
310
+ outputs = []
311
+ for data in chunk_data:
312
+ answer, GT = data
313
+ # prompts = random.choice(self.prompts_p1) + random.choice(self.prompts_p2) + random.choice(self.prompts_p3) + random.choice(self.prompts_p4)
314
+ prompts = "Rate my answer based on the correct answer out of 100, with higher scores indicating that the answer is closer to the correct answer, and you should be accurate to single digits like 62, 78, 41,etc. Output the number only, no need for explanation. "
315
+ prompts = prompts + "This is the correct answer: " + GT + ". This is my answer: " + answer
316
+
317
+ output = ""
318
+ messages = self.prepare_chatgpt_message(prompts)
319
+ reply, total_tokens = self.call_chatgpt(messages, max_tokens=3000)
320
+
321
+ time.sleep(0.5) # default 0.25
322
+
323
+ output += reply
324
+ output += "\n\n"
325
+
326
+ output = output[:-2]
327
+
328
+ outputs.append(output)
329
+ return outputs
330
+
331
+
332
+ class evaluation_suit():
333
+ def __init__(self):
334
+ self.language_eval = language_evaluation.CocoEvaluator(coco_types=["BLEU", "ROUGE_L", "CIDEr"])
335
+ self.num_process = 3 # default: 32
336
+
337
+ if USE_INTERNAL:
338
+ self.chatgpt_eval = []
339
+ for i in range(self.num_process):
340
+ self.chatgpt_eval.append(GPTEvaluationInternal())
341
+ else:
342
+ # API_KEYS = API_KEYS_FAST
343
+ # API_KEYS.extend(API_KEYS_SLOW)
344
+ self.chatgpt_eval = GPTEvaluation(API_KEYS)
345
+ self.GPT = []
346
+ self.accuracy = {"answer": [], "GT": []}
347
+ self.language = {"answer": [], "GT": []}
348
+ self.language_score_keys = []
349
+ self.match = {"match": {"answer": [], "GT": []}, "GPT": []}
350
+
351
+ def eval_acc(self):
352
+ scores = []
353
+ for i in range(len(self.accuracy["answer"])):
354
+ answer = self.accuracy["answer"][i]
355
+ GT = self.accuracy["GT"][i]
356
+ if answer == GT:
357
+ scores.append(1.0)
358
+ else:
359
+ scores.append(0.0)
360
+
361
+ scores = sum(scores) / len(scores)
362
+ return scores
363
+
364
+ def eval_chatGPT(self, data):
365
+ remain_attempts = len(self.chatgpt_eval.api_keys)
366
+ while remain_attempts > 0:
367
+ try:
368
+ with Pool(3) as p: # Change the number based on your CPU cores
369
+ scores = p.map(self.chatgpt_eval.forward, data)
370
+ scores = list(map(float, scores))
371
+ except Exception as e:
372
+ print("This key fail with error: ", e)
373
+ remain_attempts -= 1
374
+ if remain_attempts == 0:
375
+ print("All keys failed!")
376
+ raise e
377
+ else:
378
+ self.chatgpt_eval._switch_key()
379
+ continue
380
+ break
381
+
382
+ scores = sum(scores) / len(scores)
383
+ return scores
384
+
385
+ def apply_function(self, task):
386
+ func, chunk = task
387
+ return func(chunk)
388
+
389
+ def eval_chatGPT_internal(self, data):
390
+
391
+ chunk_size = len(data) // self.num_process
392
+ tasks = [(self.chatgpt_eval[i].forward, data[i * chunk_size : (i + 1) * chunk_size] if i < self.num_process - 1 else data[i * chunk_size :]) for i in range(self.num_process)]  # last chunk takes the remainder so no samples are dropped
393
+
394
+ with Pool(self.num_process) as p:
395
+ scores_chunked = p.map(self.apply_function, tasks)
396
+ scores = [score for chunk in scores_chunked for score in chunk]
397
+ scores = list(map(float, scores))
398
+
399
+ scores = sum(scores) / len(scores)
400
+ return scores
401
+
402
+ def eval_language(self):
403
+ """
404
+ return the dict evaluation results
405
+ """
406
+ answer = self.language["answer"]
407
+ GT = self.language["GT"]
408
+ results_gen = self.language_eval.run_evaluation(answer, GT)
409
+ results_gen_dict = {
410
+ f"language/{k}": v for k, v in results_gen.items()
411
+ }
412
+ self.language_score_keys = list(results_gen_dict.keys())
413
+ return results_gen_dict
414
+
415
+ def eval_match(self):
416
+ outs1 = []
417
+ for i in range(len(self.match["match"]["answer"])):
418
+ answer = self.match["match"]["answer"][i]
419
+ GT = self.match["match"]["GT"][i]
420
+ _, F1_score = self.match_result(answer, GT)
421
+ outs1.append(F1_score * 100)
422
+
423
+ outs1 = sum(outs1) / len(outs1)
424
+ if USE_INTERNAL:
425
+ outs2 = self.eval_chatGPT_internal(self.match["GPT"])
426
+ else:
427
+ outs2 = self.eval_chatGPT(self.match["GPT"])
428
+ scores = (outs1 + outs2) / 2.0
429
+ return scores
430
+
431
+ def eval_graph(self, question):
432
+ # check if answer in self.graph
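+ # Only evaluate a follow-up QA if every coordinate pair referenced in its question
+ # was matched against the ground-truth key objects of the first QA (see set_graph).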
433
+ question_nums = re.findall(r'\d+\.\d+', question)
434
+ question_nums = np.array([list(map(float, x.split()))[0] for x in question_nums]).reshape(-1, 2)
435
+ question_nums = [list(i) for i in question_nums]
436
+ for q in question_nums:
437
+ if q not in self.graph:
438
+ return False
439
+ return True
440
+
441
+ def match_result(self, answer, GT):
442
+ """
443
+ answer: [[1.,2.], [2., 3.]]
444
+ GT: [[1., 2.], [2., 3.]]
445
+ """
446
+ answer_nums = re.findall(r'\d+\.\d+', answer)
447
+ GT_nums = re.findall(r'\d+\.\d+', GT)
448
+ # transform string into float
449
+ if len(answer_nums) % 2 != 0:
450
+ answer_nums = answer_nums[:-1]
451
+ answer_nums = np.array([list(map(float, x.split()))[0] for x in answer_nums]).reshape(-1, 2)
452
+ GT_nums = np.array([list(map(float, x.split()))[0] for x in GT_nums]).reshape(-1, 2)
453
+ length = len(GT_nums)
454
+
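+ # Greedy matching: pair each predicted point with its nearest unmatched ground-truth point
+ # (L1 distance); a pair closer than 16 counts as a true positive, and F1 is computed from
+ # the resulting precision and recall.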
455
+ matched_out = []
456
+ true_positives = 0
457
+ false_positives = 0
458
+ false_negatives = 0
459
+ for pred in answer_nums:
460
+ closest_distance = float('inf')
461
+ closest_gt = None
462
+ closest_id = None
463
+ for i, gt in enumerate(GT_nums):
464
+ distance = np.sum(np.abs(pred - gt))
465
+ if distance < closest_distance:
466
+ closest_distance = distance
467
+ closest_gt = gt
468
+ closest_id = i
469
+
470
+ if closest_distance < 16:
471
+ true_positives += 1
472
+ matched_out.append(closest_gt)
473
+ GT_nums = np.delete(GT_nums, closest_id, axis=0)
474
+ else:
475
+ false_positives += 1
476
+
477
+ false_negatives = length - true_positives
478
+ precision = true_positives / (true_positives + false_positives + 1e-8)
479
+ recall = true_positives / (true_positives + false_negatives + 1e-8)
480
+ F1 = 2 * precision * recall / (precision + recall + 1e-8)
481
+
482
+ return matched_out, F1
483
+
484
+ def set_graph(self, answer, GT):
485
+ self.graph, _ = self.match_result(answer, GT)
486
+ self.graph = [list(i) for i in self.graph]
487
+
488
+ def forward(self, tag, answer, GT):
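+ # tag values per QA: 0 -> accuracy, 1 -> ChatGPT score, 2 -> language metrics, 3 -> match score.
+ # A single QA pair can carry several tags.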
489
+ if 0 in tag:
490
+ self.accuracy["answer"].append(answer)
491
+ self.accuracy["GT"].append(GT)
492
+ if 1 in tag:
493
+ self.GPT.append((answer, GT))
494
+ if 2 in tag:
495
+ self.language["GT"].append(GT)
496
+ self.language["answer"].append(answer)
497
+ if 3 in tag:
498
+ self.match["match"]["GT"].append(GT)
499
+ self.match["match"]["answer"].append(answer)
500
+ self.match["GPT"].append((answer, GT))
501
+
502
+
503
+ def evaluation(self):
504
+ print("evaluation start!")
505
+ scores = {}
506
+ scores["accuracy"] = self.eval_acc()
507
+ print("USE_INTERNAL: ", USE_INTERNAL)
508
+ if USE_INTERNAL:
509
+ scores["chatgpt"] = self.eval_chatGPT_internal(self.GPT)
510
+ else:
511
+ scores["chatgpt"] = self.eval_chatGPT(self.GPT)
512
+ scores.update(self.eval_language())
513
+ scores["match"] = self.eval_match()
514
+
515
+ return scores
516
+
517
+
518
+ @exception_handler_decorator
519
+ def compute(params, quiet=True):
520
+ try:
521
+ print("Team name is: ", params.team_id)
522
+ # if "29857a24" in params.team_id:
523
+ # global USE_INTERNAL
524
+ # USE_INTERNAL = True
525
+ submission_filename = "submissions/{}-{}.{}".format(params.team_id, params.submission_id, FORMAT)
526
+ print(submission_filename)
527
+
528
+ submission = hf_hub_download(
529
+ repo_id=params.competition_id,
530
+ filename=submission_filename,
531
+ token=params.token,
532
+ repo_type="dataset",
533
+ )
534
+ except Exception as e:
535
+ error_message = "submission.json not found in the repository, or it cannot be loaded."
536
+ update_error_message_to_submission_comment(params, error_message)
537
+ raise e
538
+
539
+ with open(submission, 'r') as f:
540
+ pred_file = json.load(f)
541
+ team_name = pred_file.get('team', None)
542
+ pred_file = pred_file["results"]
543
+ pred_file = {pred_file[i]["id"]: pred_file[i] for i in range(len(pred_file))}
544
+
545
+ if team_name is not None:
546
+ update_teamname_to_submission_comment(params, team_name)
547
+ else:
548
+ update_error_message_to_submission_comment(params, "Team name not found in the submission file.")
549
+
550
+ ground_truth = hf_hub_download(
551
+ repo_id=params.competition_id,
552
+ filename=GROUND_TRUTH,
553
+ token=params.token,
554
+ repo_type="dataset",
555
+ )
556
+
557
+ with open(ground_truth, 'r') as f:
558
+ test_file = json.load(f)
559
+
560
+ print("Submission and Ground Truth downloaded.")
561
+ print("Evaluating...")
562
+
563
+ try:
564
+ evaluation = evaluation_suit()
565
+ output = {"accuracy": [], "chatgpt": [], "language": [], "match": []}
566
+ for scene_id in test_file.keys():
567
+ scene_data = test_file[scene_id]['key_frames']
568
+
569
+ for frame_id in scene_data.keys():
570
+ frame_data_qa = scene_data[frame_id]['QA']
571
+ first_flag = True
572
+
573
+ for i, qa in enumerate(frame_data_qa["perception"] + frame_data_qa["prediction"] + frame_data_qa["planning"] + frame_data_qa["behavior"]):
574
+ question = qa['Q']
575
+ GT = qa['A']
576
+ tag = qa['tag']
577
+ idx = scene_id + "_" + frame_id + "_" + str(i)
578
+ predict = pred_file[idx]["answer"]
579
+ if first_flag:
580
+ first_flag = False
581
+ evaluation.set_graph(predict, GT)
582
+ evaluation.forward(tag, predict, GT)
583
+ else:
584
+ if evaluation.eval_graph(question):
585
+ res = evaluation.forward(tag, predict, GT)
586
+
587
+ output = evaluation.evaluation()
588
+ print("accuracy score: ", output["accuracy"])
589
+ print("chatgpt score: ", output["chatgpt"])
590
+ print("match score: ", output["match"])
591
+ print("language score:")
592
+ for key in evaluation.language_score_keys:
593
+ print(key, output[key])
594
+
595
+ # Normalize to 0-1 and combine the scores: chatgpt, language, match, accuracy
596
+ scores = []
597
+ weights = [0.4, 0.2, 0.2, 0.2]
598
+
599
+ # chatGPT
600
+ score = output["chatgpt"] / 100.
601
+ scores.append(score)
602
+
603
+ # language
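+ # Assuming the CocoEvaluator key order Bleu_1..Bleu_4, ROUGE_L, CIDEr: the four BLEU scores
+ # are averaged, CIDEr is scaled by 1/10 to bring it roughly into [0, 1], and the three
+ # metric families are then averaged with equal weight (hence the factor 1/3).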
604
+ score = 0
605
+ for idx, key in enumerate(evaluation.language_score_keys):
606
+ if idx < 4:
607
+ score += output[key] / 4. / 3.
608
+ elif idx == 4:
609
+ score += output[key] / 3.
610
+ else:
611
+ score += output[key] / 10. / 3.
612
+
613
+ scores.append(score)
614
+
615
+ # match
616
+ score = output["match"] / 100.
617
+ scores.append(score)
618
+
619
+ # accuracy
620
+ score = output["accuracy"]
621
+ scores.append(score)
622
+
623
+ final_score = sum([x * y for x, y in zip(scores, weights)])
624
+ output["final_score"] = final_score
625
+
626
+ except Exception as e:
627
+ error_message = "Evaluation failed. " + str(e)
628
+ update_error_message_to_submission_comment(params, error_message)
629
+ raise e
630
+
631
+ evaluation = {
632
+ "public_score": output,
633
+ "private_score": output
634
+ }
635
+
636
+ return evaluation
requirements.txt ADDED
@@ -0,0 +1,3 @@
1
+ openai==1.23.2
2
+
3
+ git+https://github.com/bckim92/language-evaluation.git
solution.csv DELETED
@@ -1,32 +0,0 @@
1
- id,pred,split
2
- 0,1,public
3
- 1,0,private
4
- 2,0,private
5
- 3,1,private
6
- 4,0,public
7
- 5,1,private
8
- 6,1,public
9
- 7,1,private
10
- 8,0,public
11
- 9,0,private
12
- 10,0,private
13
- 11,0,private
14
- 12,1,private
15
- 13,0,private
16
- 14,1,public
17
- 15,1,private
18
- 16,1,private
19
- 17,0,private
20
- 18,0,private
21
- 19,0,public
22
- 20,0,private
23
- 21,0,private
24
- 22,1,private
25
- 23,1,public
26
- 24,0,private
27
- 25,0,private
28
- 26,0,public
29
- 27,1,private
30
- 28,1,private
31
- 29,0,private
32
- 30,0,public