-
2023 in AutoRL
A retrospective on 2023 in AutoRL research written by a collective of AutoRL researchers. Redirects to the post on the AutoRL blog.
-
Why You Should Try Science Communication During Your PhD
SciComm can be great, even at a PhD level. Here's why.
-
Contextualize Me – The Case for Context in Reinforcement Learning
Carolin Benjamins, Theresa Eimer, Frederik Schubert, Aditya Mohan, Sebastian Döhler, André Biedenkapp, Bodo Rosenhahn, Frank Hutter and Marius Lindauer TLDR: We can model and investigate generalization in RL with contextual RL and our benchmark library CARL. In theory, we cannot achieve optimal performance without adding context, and in our experiments we saw that using context […]
-
Hyperparameter Tuning in Reinforcement Learning is Easy, Actually
Hyperparameter optimization tools perform well in Reinforcement Learning, outperforming grid searches with less than 10% of the budget. If not reported correctly, however, hyperparameter tuning can heavily skew future comparisons.
-
Self-Paced Context Evaluation for Contextual Reinforcement Learning
RL agents, just like humans, often benefit from a difficulty curve in learning [Matiisen et al. 2017, Fuks et al. 2019, Zhang et al. 2020]. Progressing from simple task instances, e.g. walking on flat surfaces or reaching goals very close to the agent, to more difficult ones lets the agent accomplish much harder […]