Since we performed several thousand runs on the benchmark to find meaningful HPO test settings in RL, we collected them in this dataset for future use.
These runs can be used to meta-learn information about the hyperparameter landscape or to warm-start HPO tools.

In detail, the dataset contains 10 runs each of landscape data for PPO, DQN, and SAC on the Atari-5 environments, four XLand gridworlds, four Brax walkers, five classic control environments, and two Box2D environments.
Additionally, it contains 3 runs each of the optimization algorithms PBT, SMAC, SMAC with Multi-Fidelity, and Random Search for every algorithm and environment pair.
The dataset follows the mapping: $$(\text{training budget}, \text{seed}, \text{hyperparameter configuration}) \mapsto \text{training performance}$$.
For the optimization runs, it additionally includes the key *optimization seed* to distinguish configurations between the individual optimization runs.
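As a minimal sketch of how this mapping could be read programmatically, assuming the runs are hosted on the Hugging Face Hub and loadable with the `datasets` library (the repository id, split, and column names below are illustrative assumptions, not the confirmed schema):

```python
from datasets import load_dataset

# Repository id, split, and column names are hypothetical placeholders;
# substitute the actual values from this dataset card.
ds = load_dataset("<this-dataset-repo-id>", split="train")

# Rebuild the mapping described above:
# (training budget, seed, hyperparameter configuration) -> training performance.
landscape = {}
for row in ds:
    key = (row["budget"], row["seed"], str(row["config"]))
    landscape[key] = row["performance"]
```

For the optimization runs, the *optimization seed* key would be appended to the same tuple to keep configurations from different optimization runs distinct.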
For more information, refer to the [ARLBench](https://arxiv.org/abs/2409.18827) paper.