Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization
Published in IROS, 2022
Recommended citation: Chieko Imai, Minghao Zhang, Yuchen Zhang, Marcin Kierebinski, Ruihan Yang, Yuzhe Qin, and Xiaolong Wang (2022). Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). https://arxiv.org/abs/2109.14549
Authors: Chieko Imai, Minghao Zhang, Yuchen Zhang, Marcin Kierebinski, Ruihan Yang, Yuzhe Qin, and Xiaolong Wang (2022)
Developing robust vision-guided controllers for quadrupedal robots in complex environments, with various obstacles, dynamic surroundings, and uneven terrain, is very challenging. While Reinforcement Learning (RL) provides a promising paradigm for learning agile locomotion skills from visual inputs in simulation, deploying the resulting policy in the real world remains difficult. Our key insight is that, aside from the observation domain gap between simulation and the real world, the latency of the control pipeline is a major cause of this difficulty. In this paper, we propose Multi-Modal Delay Randomization (MMDR) to address this issue when training RL agents. Specifically, we randomize the selection in time of both the proprioceptive states and the visual observations, simulating the latency of the real-world control system. We train the RL policy for end-to-end control in a physical simulator, and it can be deployed directly on a real A1 quadruped robot running in the wild. We evaluate our method in diverse outdoor environments with complex terrain and obstacles, and show that the robot maneuvers smoothly at high speed, avoids obstacles, and achieves significant improvements over the baselines.
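As a rough illustration of the core idea, the sketch below keeps a short history buffer per sensing modality and, at every control step, feeds the policy a randomly delayed sample from each buffer instead of the freshest observation. The class name, buffer lengths, and delay ranges are hypothetical choices for this sketch, not the paper's exact implementation.

```python
import random
from collections import deque

class MultiModalDelayRandomizer:
    """Minimal sketch of multi-modal delay randomization.

    Buffer sizes and per-modality delay ranges are illustrative
    assumptions, not values taken from the paper.
    """

    def __init__(self, max_delay_proprio=4, max_delay_vision=8):
        # Ring buffers holding the most recent simulated observations.
        self.proprio_buf = deque(maxlen=max_delay_proprio + 1)
        self.vision_buf = deque(maxlen=max_delay_vision + 1)

    def push(self, proprio_state, visual_obs):
        # Store the freshest simulator outputs each control step.
        self.proprio_buf.append(proprio_state)
        self.vision_buf.append(visual_obs)

    def sample(self):
        # Independently randomize how stale each modality appears,
        # mimicking the unequal sensing/processing latencies of the
        # real control pipeline. Assumes push() was called at least once.
        d_p = random.randint(0, len(self.proprio_buf) - 1)
        d_v = random.randint(0, len(self.vision_buf) - 1)
        return self.proprio_buf[-1 - d_p], self.vision_buf[-1 - d_v]
```

In a training loop, each simulation step would call push() with the fresh simulator outputs and pass the result of sample() to the policy, so the learned controller becomes robust to the stale, asynchronously arriving sensor data it will encounter on the physical robot.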