Denis Yarats


I am a PhD student at New York University advised by Rob Fergus and Lerrel Pinto and at Facebook AI Research advised by Alessandro Lazaric.
I am also a visiting PhD student at UC Berkeley RLL with Pieter Abbeel.

My research aims to make reinforcement learning practical by learning effective visual representations, improving sample efficiency,
enabling unsupervised exploration, learning from demonstrations, and designing better network architectures.

[Google Scholar] [GitHub] [CV] [denisyarats at cs dot nyu dot edu]

Publications

ExORL: Exploratory Data for Offline Reinforcement Learning.
Denis Yarats*, David Brandfonbrener*, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, Lerrel Pinto.
Submitted to ICML 2022.
[PDF] [arXiv] [Code]

CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery.
Michael Laskin, Hao Liu, Xue Bin Peng, Denis Yarats, Aravind Rajeswaran, Pieter Abbeel.
Submitted to ICML 2022.
[PDF] [arXiv] [Blog]

Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning.
Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto.
ICLR 2022.
[PDF] [arXiv] [Code]

URLB: Unsupervised Reinforcement Learning Benchmark.
Michael Laskin*, Denis Yarats*, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel.
NeurIPS 2021.
[PDF] [arXiv] [Code] [Blog]

Reinforcement Learning with Prototypical Representations.
Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto.
ICML 2021. SSL-RL Workshop at ICLR 2021 (Oral).
[PDF] [arXiv] [Code]

Learning Navigation Skills for Legged Robots with Learned Robot Embeddings.
Joanne Truong, Denis Yarats, Tianyu Li, Franziska Meier, Sonia Chernova, Dhruv Batra, Akshara Rai.
arXiv 2020.
[PDF] [arXiv]

On the Model-Based Stochastic Value Gradient for Continuous Reinforcement Learning.
Brandon Amos, Sam Stanton, Denis Yarats, Andrew Wilson.
L4DC 2021 (Oral).
[PDF] [arXiv] [Website]

Automatic Data Augmentation for Generalization in Deep Reinforcement Learning.
Roberta Raileanu, Max Goldstein, Denis Yarats, Ilya Kostrikov, Rob Fergus.
NeurIPS 2021. BIG Workshop at ICML 2020 (Oral).
[PDF] [arXiv] [Code] [Website]

Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels.
Denis Yarats*, Ilya Kostrikov*, Rob Fergus.
ICLR 2021 (Spotlight).
[PDF] [arXiv] [Code] [Website]

On the Adequacy of Untuned Warmup for Adaptive Optimization.
Jerry Ma, Denis Yarats.
AAAI 2021.
[PDF] [arXiv]

Improving Sample Efficiency in Model-Free Reinforcement Learning from Images.
Denis Yarats, Amy Zhang, Ilya Kostrikov, Brandon Amos, Joelle Pineau, Rob Fergus.
AAAI 2021.
[PDF] [arXiv] [Code] [Website]

The Differentiable Cross-Entropy Method.
Brandon Amos, Denis Yarats.
ICML 2020.
[PDF] [arXiv] [Website]

Generalized Inner Loop Meta-Learning.
Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov,
Franziska Meier, Douwe Kiela, Kyunghyun Cho, Soumith Chintala.
arXiv 2019.
[PDF] [arXiv] [Code] [Website]

Hierarchical Decision Making by Generating and Following Natural Language Instructions.
Hengyuan Hu*, Denis Yarats*, Qucheng Gong, Yuandong Tian, Mike Lewis.
NeurIPS 2019.
[PDF] [arXiv] [Code] [Website] [Blog]

Quasi-hyperbolic momentum and Adam for deep learning.
Jerry Ma, Denis Yarats.
ICLR 2019.
[PDF] [arXiv] [Code] [Website]

Hierarchical Text Generation and Planning for Strategic Dialogue.
Denis Yarats, Mike Lewis.
ICML 2018.
[PDF] [arXiv] [Code]

Deal or No Deal? End-to-End Learning for Negotiation Dialogues.
Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, Dhruv Batra.
EMNLP 2017.
[PDF] [arXiv] [Code]

Convolutional Sequence to Sequence Learning.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann Dauphin.
ICML 2017.
[PDF] [arXiv] [Code]

Open-source Code

DrQ-v2: Improved Data-Augmented RL.
[Code]

URLB: Unsupervised Reinforcement Learning Benchmark.
[Code]

DrQ: Data Regularized Q.
[Code]

PyTorch implementation of Soft Actor-Critic.
[Code]

OpenAI Gym wrapper for the DeepMind Control Suite.
[Code]

End-to-End Negotiator.
[Code]