Coupling Vision and Proprioception for
Navigation of Legged Robots

Zipeng Fu*1    Ashish Kumar*2    Ananye Agarwal1    Haozhi Qi2    Jitendra Malik2    Deepak Pathak1
1Carnegie Mellon University          2UC Berkeley
CVPR 2022 (Best Paper Award at Multimodal Learning Workshop)

Coupling vision and proprioception helps navigation in situations where vision alone is not enough

Abstract

We exploit the complementary strengths of vision and proprioception to develop a point-goal navigation system for legged robots, called VP-Nav. Legged systems are capable of traversing more complex terrain than wheeled robots, but to fully utilize this capability, we need the high-level path planner in the navigation system to be aware of the walking capabilities of the low-level locomotion policy in varying environments. We achieve this by using proprioceptive feedback to ensure the safety of the planned path by sensing unexpected obstacles like glass walls, terrain properties like slipperiness or softness of the ground, and robot properties like extra payload that are likely missed by vision. The navigation system uses onboard cameras to generate an occupancy map and a corresponding cost map to reach the goal. A fast marching planner then generates a target path. A velocity command generator takes this as input to generate the desired velocity for the walking policy. A safety advisor module adds sensed unexpected obstacles to the occupancy map and environment-determined speed limits to the velocity command generator. We show superior performance compared to wheeled robot baselines and to ablations with disjoint high-level planning and low-level control. We also show the real-world deployment of VP-Nav on a quadruped robot with onboard sensors and computation.
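As a concrete illustration of the vision-side loop described above, below is a minimal Python sketch that turns an occupancy grid and a goal into a velocity command. It assumes a 2D grid and uses the scikit-fmm package for the fast-marching step; the function name plan_velocity, the grid conventions, and all gains are illustrative assumptions, not the authors' implementation.

import numpy as np
import skfmm  # pip install scikit-fmm

def plan_velocity(occupancy, goal_cell, robot_cell, robot_yaw,
                  cell_size=0.05, v_max=0.5, turn_gain=1.5):
    """Occupancy grid + goal -> (linear, angular) velocity command."""
    # 1. Zero level set at the goal; obstacle cells are masked out so the
    #    fast-marching front cannot pass through them (a simple cost map).
    phi = np.ones_like(occupancy, dtype=float)
    phi[goal_cell] = -1.0
    phi = np.ma.MaskedArray(phi, mask=occupancy.astype(bool))

    # 2. Fast marching: travel time from every free cell to the goal.
    speed = np.ones_like(occupancy, dtype=float)
    ttime = skfmm.travel_time(phi, speed, dx=cell_size)
    ttime = np.ma.filled(ttime, fill_value=float(np.max(ttime)))

    # 3. Descend the travel-time field to get the local motion direction.
    grad_r, grad_c = np.gradient(-ttime)
    d = np.array([grad_r[robot_cell], grad_c[robot_cell]])
    d = d / (np.linalg.norm(d) + 1e-6)

    # 4. Convert the direction into a velocity command for the walking policy
    #    (grid axes are assumed aligned with the world frame for simplicity).
    heading_err = np.arctan2(d[1], d[0]) - robot_yaw
    heading_err = (heading_err + np.pi) % (2 * np.pi) - np.pi
    v = v_max * max(0.0, float(np.cos(heading_err)))   # slow down while turning
    omega = turn_gain * heading_err
    return v, omega

In the full system, the safety advisor would further edit the occupancy map (sensed obstacles) and cap v (environment-determined speed limits) before the command reaches the walking policy.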



Project Video

Navigation in Cluttered Indoor Settings


The robot can navigate even with glass doors, rough terrain, and an extra 2 kg payload added during walking. Proprioception enables slowing down for safety.


The robot can navigate out of a cluttered lab environment full of objects of different shapes and textures under varying lighting conditions, and can recover from collisions with unexpected obstacles using proprioception.

Outdoor Navigation


Navigation in the wild with the goal of entering a building 30 m away. The robot needs to go through water puddles and lawns. To further increase the difficulty, a 2 kg mass is thrown at the robot and an unexpected human obstacle blindsides it. The robot overcomes these by coupling vision and proprioception.

Collision Detector


The robot collides with a glass door (invisible to the onboard camera), detects the collision using proprioception and then walks around it to reach the goal. The vision-only baseline fails.
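As a hedged sketch of how such a proprioceptive collision detector could look, the PyTorch module below maps a short history of proprioceptive observations to a collision probability. The architecture, input dimensions, and threshold are illustrative assumptions, one plausible realization of the detector described above rather than the authors' exact model.

import torch
import torch.nn as nn

class CollisionDetector(nn.Module):
    """Maps a short history of proprioceptive observations (e.g. joint
    positions, joint velocities, commanded vs. realized base velocity)
    to the probability that the robot is currently colliding."""
    def __init__(self, obs_dim=42, history_len=20, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim * history_len, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, proprio_history):           # (B, history_len, obs_dim)
        flat = proprio_history.flatten(start_dim=1)
        return torch.sigmoid(self.net(flat))      # collision probability in [0, 1]

# Usage sketch: when the predicted probability crosses a threshold, the unseen
# obstacle is added to the occupancy map in front of the robot so the planner
# replans around it, as in the glass-door example above.
detector = CollisionDetector()
history = torch.zeros(1, 20, 42)                  # placeholder proprioceptive history
if detector(history).item() > 0.7:                # threshold is illustrative
    pass  # mark the cells ahead of the robot as occupied and replan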

Vision + Proprioception vs. Only Vision


Avoiding a tree using proprioception. In low-light conditions, the depth camera is unable to detect the tree.


A human obstacle suddenly appears from outside the camera's field of view (FoV). The robot relies on proprioception to detect the collision and navigate around it.


A glass obstacle is invisible to the depth camera. By using proprioception, the robot can recover from the collision and successfully reach the goal.

Fall Predictor


The fall predictor outputs a high probability of falling when the robot walks on unstable planks, and the commanded velocity is reduced to safer values.
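One simple way such a speed limit could be applied, as a hedged sketch: scale the commanded velocity down as the predicted fall probability rises. The scaling rule, thresholds, and function name safe_velocity below are illustrative assumptions, not the paper's exact formulation.

def safe_velocity(v_cmd, fall_prob, v_min=0.1, p_low=0.2, p_high=0.8):
    """Linearly shrink the commanded speed as the predicted fall probability
    rises, never dropping below a small creep speed v_min."""
    if fall_prob <= p_low:
        return v_cmd                               # low risk: keep the planner's command
    if fall_prob >= p_high:
        return v_min                               # high risk: creep forward only
    alpha = (p_high - fall_prob) / (p_high - p_low)  # 1 -> 0 as risk grows
    return v_min + alpha * (v_cmd - v_min)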

Related Work

This work is part of our series of research on learning-based control of legged robots. Please check out the other works below.


RMA: Rapid Motor Adaptation for Legged Robots
Ashish Kumar, Zipeng Fu, Deepak Pathak, Jitendra Malik
RSS 2021


PDF | Video | Project Page
Minimizing Energy Consumption Leads to the Emergence of Gaits in Legged Robots
Zipeng Fu, Ashish Kumar, Jitendra Malik, Deepak Pathak
CoRL 2021


PDF | Video | Project Page

BibTeX

@inproceedings{fu2022coupling,
      title = {Coupling Vision and Proprioception for Navigation of Legged Robots}, 
      author = {Zipeng Fu and Ashish Kumar and Ananye Agarwal and Haozhi Qi and Jitendra Malik and Deepak Pathak},
      booktitle = {{CVPR}},
      year = {2022}
}