offline-rl-neurips.github.io - Offline Reinforcement Learning Workshop


The website for the 3rd offline RL workshop at NeurIPS 2022 can be found at offline-rl-neurips.github.io/2022. The website for the 2nd offline RL workshop at NeurIPS 2021 can be found at offline-rl-neurips.github.io/2021. The remarkable success of deep learning has been driven by the availability of large and diverse datasets such as ImageNet. In contrast, the common paradigm in reinforcement learning (RL) assumes that an agent frequently interacts with the environment and learns using its own collected experience.

Offline RL, by contrast, focuses on training agents from logged data with no further environment interaction. Offline RL promises to bring forward a data-driven RL paradigm and carries the potential to scale up end-to-end learning approaches to real-world decision-making tasks such as robotics, recommendation systems, dialogue generation, autonomous driving, healthcare systems, and safety-critical applications. Recently, successful deep RL algorithms have been adapted to the offline RL setting and demonstrated promising results.
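The defining constraint above — learning only from logged transitions, with no new environment interaction — can be illustrated with a minimal sketch. The toy dataset, MDP size, and hyperparameters below are illustrative assumptions, not part of any workshop material; the sketch runs tabular Q-learning by sweeping a fixed transition log.

```python
import numpy as np

# Hypothetical logged dataset of (state, action, reward, next_state)
# transitions for a toy 3-state, 2-action MDP. In offline RL, this log
# is all the agent ever sees; it never queries the environment.
dataset = [
    (0, 1, 0.0, 1),
    (1, 1, 0.0, 2),
    (2, 0, 1.0, 2),
    (0, 0, 0.0, 0),
    (1, 0, 0.0, 0),
]

def offline_q_learning(dataset, n_states=3, n_actions=2,
                       gamma=0.9, alpha=0.5, epochs=200):
    """Tabular Q-learning that repeatedly sweeps a fixed dataset --
    no new transitions are collected during training."""
    q = np.zeros((n_states, n_actions))
    for _ in range(epochs):
        for s, a, r, s_next in dataset:
            # Standard TD target, computed from logged data only.
            target = r + gamma * q[s_next].max()
            q[s, a] += alpha * (target - q[s, a])
    return q

q = offline_q_learning(dataset)
policy = q.argmax(axis=1)  # greedy policy derived purely from the log
```

Note that the value estimates here are only trustworthy for state-action pairs the log actually covers; handling actions absent from the dataset (distributional shift) is precisely the algorithmic challenge that dedicated offline RL methods address.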

Goal of the workshop: Our goal is to bring attention to offline RL, both from within and from outside the RL community (e.g., causal inference, optimization, self-supervised learning); discuss algorithmic challenges that need to be addressed; discuss potential real-world applications as well as limitations and challenges; and come up with concrete problem statements and evaluation protocols, inspired by real-world applications, for the research community to work on. In particular, we are interested in bringing together researchers from these communities.
