-
Introducing Hypersweeper: Bridging the HPO Gap Between AutoML Research and ML Practitioners
The lack of widespread adoption of AutoML tools in the broader ML community has been a recurring topic of discussion within the field. Is this due to a lack of trust in these systems? Do our benchmarks fail to reflect real-world use cases? Or is it simply too difficult to find and implement state-of-the-art methods? […]
-
2023 in AutoRL
A retrospective on 2023 in AutoRL research written by a collective of AutoRL researchers. Redirects to the post on the AutoRL blog.
-
Why You Should Try Science Communication During Your PhD
SciComm can be great, even at a PhD level. Here's why.
-
Contextualize Me – The Case for Context in Reinforcement Learning
Carolin Benjamins, Theresa Eimer, Frederik Schubert, Aditya Mohan, Sebastian Döhler, André Biedenkapp, Bodo Rosenhahn, Frank Hutter and Marius Lindauer
TLDR: We can model and investigate generalization in RL with contextual RL and our benchmark library CARL. In theory, we cannot achieve optimal performance without adding context, and in our experiments we saw that using context […]
-
Hyperparameter Tuning in Reinforcement Learning is Easy, Actually
Hyperparameter optimization tools perform well on reinforcement learning, outperforming grid search with less than 10% of its budget. If not reported correctly, however, any hyperparameter tuning can heavily skew future comparisons.