PUBLIKATIONSSERVER

Towards robust and domain agnostic reinforcement learning competitions

W.H. Guss, S. Milani, N. Topin, B. Houghton, S. Mohanty, A. Melnik, A. Harter, B. Buschmaas, B. Jaster, C. Berganski, D. Heitkamp, M. Henning, H. Ritter, C. Wu, X. Hao, Y. Lu, H. Mao, Y. Mao, C. Wang, M. Opanowicz, A. Kanervisto, Y. Schraner, C. Scheller, X. Zhou, L. Liu, D. Nishio, T. Tsuneda, K. Ramanauskas, G. Juceviciute, ArXiv (2021).

Article | Published | English
Author(s)
Guss, William Hebgen; Milani, Stephanie; Topin, Nicholay; Houghton, Brandon; Mohanty, Sharada; Melnik, Andrew; Harter, Augustin; Buschmaas, Benoit; Jaster, Bjarne (FH Bielefeld); Berganski, Christoph; Heitkamp, Dennis; Henning, Marko; et al.
Abstract
Reinforcement learning competitions have formed the basis for standard research benchmarks, galvanized advances in the state-of-the-art, and shaped the direction of the field. Despite this, a majority of challenges suffer from the same fundamental problems: participant solutions to the posed challenge are usually domain-specific, biased to maximally exploit compute resources, and not guaranteed to be reproducible. In this paper, we present a new framework of competition design that promotes the development of algorithms that overcome these barriers. We propose four central mechanisms for achieving this end: submission retraining, domain randomization, desemantization through domain obfuscation, and the limitation of competition compute and environment-sample budget. To demonstrate the efficacy of this design, we proposed, organized, and ran the MineRL 2020 Competition on Sample-Efficient Reinforcement Learning. In this work, we describe the organizational outcomes of the competition and show that the resulting participant submissions are reproducible, non-specific to the competition environment, and sample/resource efficient, despite the difficult competition task.
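The abstract names domain randomization as one of the four proposed mechanisms: the environment's parameters are re-sampled each episode so that submissions cannot overfit to one fixed domain. The paper's actual implementation is not reproduced here; the following is a minimal, hypothetical sketch of the idea, with all class names (`ToyEnv`, `DomainRandomizationWrapper`) invented for illustration and no relation to the MineRL API.

```python
import random


class ToyEnv:
    """Minimal stand-in environment (hypothetical; not the MineRL API).

    The agent starts at position 0 and must reach `goal` by taking
    steps of -1 or +1.
    """

    def __init__(self, goal=10):
        self.goal = goal
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action is -1 or +1
        self.pos += action
        done = self.pos >= self.goal
        reward = 1.0 if done else 0.0
        return self.pos, reward, done


class DomainRandomizationWrapper:
    """Re-sample the environment's parameters on every reset, so an
    agent trained through this wrapper must generalize across domains
    rather than memorize one fixed configuration."""

    def __init__(self, env_cls, goal_range=(5, 15), seed=None):
        self.env_cls = env_cls
        self.goal_range = goal_range
        self.rng = random.Random(seed)
        self.env = None

    def reset(self):
        # Draw a fresh domain parameter for this episode.
        goal = self.rng.randint(*self.goal_range)
        self.env = self.env_cls(goal=goal)
        return self.env.reset()

    def step(self, action):
        return self.env.step(action)
```

In this sketch only a single scalar (the goal distance) is randomized; in practice the same wrapper pattern can re-sample textures, dynamics, or observation encodings, which also supports the abstract's "desemantization through domain obfuscation" when the sampled variations strip away domain-specific cues.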
Year of Publication
2021
Journal Title
arXiv

Cite

Guss, William Hebgen; Milani, Stephanie; Topin, Nicholay; Houghton, Brandon; Mohanty, Sharada; Melnik, Andrew; Harter, Augustin; Buschmaas, Benoit; et al.: Towards robust and domain agnostic reinforcement learning competitions. In: arXiv (2021)
Guss WH, Milani S, Topin N, et al. Towards robust and domain agnostic reinforcement learning competitions. arXiv. 2021. doi:10.48550/ARXIV.2106.03748
Guss, W. H., Milani, S., Topin, N., Houghton, B., Mohanty, S., Melnik, A., … Juceviciute, G. (2021). Towards robust and domain agnostic reinforcement learning competitions. ArXiv. https://doi.org/10.48550/ARXIV.2106.03748
@article{Guss_Milani_Topin_Houghton_Mohanty_Melnik_Harter_Buschmaas_Jaster_Berganski_etal_2021, title={Towards robust and domain agnostic reinforcement learning competitions}, DOI={10.48550/ARXIV.2106.03748}, journal={arXiv}, publisher={arXiv}, author={Guss, William Hebgen and Milani, Stephanie and Topin, Nicholay and Houghton, Brandon and Mohanty, Sharada and Melnik, Andrew and Harter, Augustin and Buschmaas, Benoit and Jaster, Bjarne and Berganski, Christoph and et al.}, year={2021} }
Guss, William Hebgen, Stephanie Milani, Nicholay Topin, Brandon Houghton, Sharada Mohanty, Andrew Melnik, Augustin Harter, et al. “Towards Robust and Domain Agnostic Reinforcement Learning Competitions.” ArXiv, 2021. https://doi.org/10.48550/ARXIV.2106.03748.
W. H. Guss et al., “Towards robust and domain agnostic reinforcement learning competitions,” arXiv, 2021.
Guss, William Hebgen, et al. “Towards Robust and Domain Agnostic Reinforcement Learning Competitions.” ArXiv, arXiv, 2021, doi:10.48550/ARXIV.2106.03748.

Link(s) to Full Text(s)
URL
Access Level
Restricted Closed Access
