Conference Publications

  1. Open X-Embodiment Collaboration, et al.
    Open X-Embodiment: Robotic Learning Datasets and RT-X Models
    IEEE International Conference on Robotics and Automation (ICRA 2024).
    [arxiv] [website]

  2. Izzeddin Gur*, Hiroki Furuta*, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust. (*Equal Contribution)
    A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis
    International Conference on Learning Representations (ICLR 2024) (Oral, 1.2% of 7262 submissions).
    [arxiv]

  3. Hiroki Furuta, Kuang-Huei Lee, Ofir Nachum, Yutaka Matsuo, Aleksandra Faust, Shixiang Shane Gu, Izzeddin Gur.
    Multimodal Web Navigation with Instruction-Finetuned Foundation Models
    International Conference on Learning Representations (ICLR 2024).
    [arxiv] [website]

  4. Hiroki Furuta, Yusuke Iwasawa, Yutaka Matsuo, Shixiang Shane Gu.
    A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation
    International Conference on Learning Representations (ICLR 2023) (Notable-top-25%, 8.0% of 4966 submissions).
    [arxiv] [code] [website]

  5. Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu.
    Generalized Decision Transformer for Offline Hindsight Information Matching
    International Conference on Learning Representations (ICLR 2022) (Spotlight, 6.8% of 3391 submissions).
    [arxiv] [code] [website]

  6. Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Shane Gu.
    Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning
    Neural Information Processing Systems (NeurIPS 2021).
    [arxiv] [code]

  7. Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu.
    Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning
    International Conference on Machine Learning (ICML 2021).
    [arxiv] [code]

  8. Tatsuya Matsushima*, Hiroki Furuta*, Yutaka Matsuo, Ofir Nachum, Shixiang Gu. (*Equal Contribution)
    Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization
    International Conference on Learning Representations (ICLR 2021).
    [openreview] [code]

Journal Publications

  1. So Kuroki, Tatsuya Matsushima, Junpei Arima, Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu, Yujin Tang.
    Collective Intelligence for 2D Push Manipulations With Mobile Robots
    IEEE Robotics and Automation Letters (RA-L), 2023.
    [paper]

Preprints

  1. Hiroki Furuta, Gouki Minegishi, Yusuke Iwasawa, Yutaka Matsuo.
    Interpreting Grokked Transformers in Complex Modular Arithmetic
    arXiv preprint arXiv:2402.16726, 2024.
    [arxiv] [code]

  2. Kuang-Huei Lee, Xinyun Chen, Hiroki Furuta, John Canny, Ian Fischer.
    A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts
    arXiv preprint arXiv:2402.09727, 2024.
    [arxiv] [website]

  3. Hiroki Furuta, Yutaka Matsuo, Aleksandra Faust, Izzeddin Gur.
    Exposing Limitations of Language Model Agents in Sequential-Task Compositions on the Web
    arXiv preprint arXiv:2311.18751, 2023.
    [arxiv] [code]

  4. Shixiang Shane Gu, Manfred Diaz, C. Daniel Freeman, Hiroki Furuta, Seyed Kamyar Seyed Ghasemipour, Anton Raichuk, Byron David, Erik Frey, Erwin Coumans, Olivier Bachem.
    Braxlines: Fast and Interactive Toolkit for RL-driven Behavior Generation Beyond Reward Maximization
    arXiv preprint arXiv:2110.04686, 2021.
    [arxiv] [code]

Workshop Presentations

  1. Kuang-Huei Lee, Xinyun Chen, Hiroki Furuta, John Canny, Ian Fischer.
    A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts
    ICLR 2024 Workshop on Large Language Model (LLM) Agents*.

  2. Hiroki Furuta, Gouki Minegishi, Yusuke Iwasawa, Yutaka Matsuo.
    Interpreting Grokked Transformers in Complex Modular Arithmetic
    ICLR 2024 Workshop Bridging the Gap Between Practice and Theory in Deep Learning* (Oral).

  3. Hiroki Furuta, Yutaka Matsuo, Aleksandra Faust, Izzeddin Gur.
    Exposing Limitations of Language Model Agents in Sequential-Task Compositions on the Web
    NeurIPS 2023 Foundation Models for Decision Making Workshop*
    ICLR 2024 Workshop on Large Language Model (LLM) Agents*.

  4. Open X-Embodiment Collaboration, et al.
    Open X-Embodiment: Robotic Learning Datasets and RT-X Models
    CoRL 2023 2nd Workshop on Language and Robot Learning (LangRob): Language as Grounding*
    CoRL 2023 Towards Generalist Robots: Learning Paradigms for Scalable Skill Acquisition* (Oral)
    NeurIPS 2023 6th Robot Learning Workshop: Pretraining, Fine-Tuning, and Generalization with Large Scale Models*.

  5. Hiroki Furuta, Ofir Nachum, Kuang-Huei Lee, Yutaka Matsuo, Shixiang Shane Gu, Izzeddin Gur.
    Instruction-Finetuned Foundation Models for Multimodal Web Navigation
    ICLR 2023 Workshop on Multimodal Representation Learning* (Spotlight)
    ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models*
    ICLR 2023 Workshop on Reincarnating Reinforcement Learning*.

  6. Hiroki Furuta, Yusuke Iwasawa, Yutaka Matsuo, Shixiang Shane Gu.
    Control Graph as Unified IO for Morphology-Task Generalization
    NeurIPS 2022 3rd Offline Reinforcement Learning Workshop: Offline RL as a “Launchpad”* (Contributed Talk)
    NeurIPS 2022 Foundation Models for Decision Making Workshop*.

  7. Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu.
    Generalized Decision Transformer for Offline Hindsight Information Matching
    NeurIPS 2021 Deep Reinforcement Learning Workshop*.

  8. Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu.
    Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning
    ICLR 2021 Workshop on Never-Ending RL* (Contributed Talk).

  9. Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Shane Gu.
    A Unified View of Inference-based Off-Policy RL: Decoupling Algorithmic and Implementational Sources of Performance Differences
    NeurIPS 2020 Deep Reinforcement Learning Workshop*.

  10. Tatsuya Matsushima*, Hiroki Furuta*, Yutaka Matsuo, Ofir Nachum, Shixiang Gu. (*Equal Contribution)
    Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization
    NeurIPS 2020 Offline Reinforcement Learning Workshop*
    Bay Area Machine Learning Symposium 2020*.