I am a master’s student in Technology Management for Innovation at The University of Tokyo, working on Deep Reinforcement Learning. I am advised by Yutaka Matsuo and mentored by Shixiang Shane Gu.

My research interests focus on data-driven control for real-world applications and on the nature of environments in deep reinforcement learning.

Preprints

  1. Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu.
    Generalized Decision Transformer for Offline Hindsight Information Matching
    arXiv preprint arXiv:2111.10364, 2021.
    [arxiv] [code] [website]

  2. Shixiang Shane Gu, Manfred Diaz, C. Daniel Freeman, Hiroki Furuta, Seyed Kamyar Seyed Ghasemipour, Anton Raichuk, Byron David, Erik Frey, Erwin Coumans, Olivier Bachem.
    Braxlines: Fast and Interactive Toolkit for RL-driven Behavior Generation Beyond Reward Maximization
    arXiv preprint arXiv:2110.04686, 2021.
    [arxiv] [code]

Conference Publications

  1. Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Shane Gu.
    Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning
    Neural Information Processing Systems (NeurIPS 2021).
    [arxiv] [code]

  2. Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu.
    Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning
    International Conference on Machine Learning (ICML 2021).
    [arxiv] [code]

  3. Tatsuya Matsushima*, Hiroki Furuta*, Yutaka Matsuo, Ofir Nachum, Shixiang Gu. (*Equal Contribution)
    Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization
    International Conference on Learning Representations (ICLR 2021).
    [openreview] [code]

Workshop Papers

  1. Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu.
    Generalized Decision Transformer for Offline Hindsight Information Matching
    NeurIPS 2021 Deep Reinforcement Learning Workshop.

  2. Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu.
    Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning
    ICLR 2021 Workshop on Never-Ending RL. (Contributed Talk)

  3. Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Shane Gu.
    A Unified View of Inference-based Off-Policy RL: Decoupling Algorithmic and Implementational Sources of Performance Differences
    NeurIPS 2020 Deep Reinforcement Learning Workshop.

  4. Tatsuya Matsushima*, Hiroki Furuta*, Yutaka Matsuo, Ofir Nachum, Shixiang Gu. (*Equal Contribution)
    Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization
    NeurIPS 2020 Offline Reinforcement Learning Workshop.

  5. Tatsuya Matsushima*, Hiroki Furuta*, Yutaka Matsuo, Ofir Nachum, Shixiang Gu. (*Equal Contribution)
    Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization
    Bay Area Machine Learning Symposium 2020.

Talks

  1. Hiroki Furuta. “Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning”. NeurIPS Meetup Japan 2021.

Academic Activities

  1. Co-organizer of the Ecological Theory of RL Workshop at NeurIPS 2021.

  2. Reviewer for International Conference on Learning Representations (ICLR), 2022.

  3. Reviewer for Neural Information Processing Systems (NeurIPS), 2021.

  4. Reviewer for International Conference on Machine Learning (ICML), 2021.