Modalities: Tabular
Formats: csv
Libraries: Datasets, pandas
LabChameleon committed · verified · commit e000a7e · 1 parent: f42cb14

Update README.md

Files changed (1): README.md +1 -6
README.md CHANGED
@@ -227,11 +227,6 @@ configs:
   - "pbt/cc_pendulum_sac.csv"
 ---
 
-<script
-  src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"
-  type="text/javascript">
-</script>
-
 # The ARLBench Performance Dataset
 
 [ARLBench](https://github.com/automl/arlbench) is a benchmark for hyperparameter optimization in Reinforcement Learning.
@@ -241,6 +236,6 @@ These runs could be used to meta-learn information about the hyperparameter land
 In detail, it contains 10 runs each of the landscape data for PPO, DQN and SAC on the Atari-5 environments, four XLand gridworlds, four Brax walkers, five classic control environments and two Box2D environments.
 Additionally, it contains 3 runs each of the optimization algorithms PBT, SMAC, SMAC with Multi-Fidelity and Random Search for each algorithm and environment pair.
 The dataset follows the mapping:
-$$\text{Training Budget and Seed, Hyperparameter Configuration} \mapsto \text{Training Performance}$$.
+$$\text{Training Budget and Seed, Hyperparameter Configuration} \mapsto \text{Training Performance}$$
 For the optimization runs, it additionally includes the key *optimization seed* to distinguish configurations between the 5 optimization runs.
 For more information, refer to the [ARLBench](https://arxiv.org/abs/2409.18827) paper.
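Since the dataset ships as CSV files and lists pandas among its supported libraries, the mapping above can be explored with a plain pandas workflow. The sketch below uses illustrative, assumed column names (`budget`, `seed`, `learning_rate`, `performance`) rather than the dataset's actual schema; in practice one would read a file from the repository, e.g. `pd.read_csv("pbt/cc_pendulum_sac.csv")`, and substitute the real column names.

```python
import pandas as pd

# Toy rows mimicking the dataset's mapping:
#   (training budget, seed, hyperparameter configuration) -> training performance.
# Column names are illustrative assumptions, not the dataset's actual schema.
rows = [
    {"budget": 100_000, "seed": 0, "learning_rate": 3e-4, "performance": -210.0},
    {"budget": 100_000, "seed": 1, "learning_rate": 3e-4, "performance": -190.0},
    {"budget": 100_000, "seed": 0, "learning_rate": 1e-3, "performance": -340.0},
    {"budget": 100_000, "seed": 1, "learning_rate": 1e-3, "performance": -320.0},
]
df = pd.DataFrame(rows)

# Mean performance per (budget, configuration), averaged over training seeds --
# a typical first step when studying hyperparameter landscapes.
mean_perf = (
    df.groupby(["budget", "learning_rate"])["performance"]
      .mean()
      .reset_index()
)
print(mean_perf)
```

For the optimization runs, the same groupby would additionally include the *optimization seed* key to keep the separate optimization trajectories apart.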