Roll-Drop: accounting for observation noise with a single parameter

Recommended citation: Luigi Campanaro, Daniele De Martini, Siddhant Gangapurwala, Wolfgang Merkt, and Ioannis Havoutis. Roll-Drop: accounting for observation noise with a single parameter. Learning for Dynamics & Control Conference (L4DC), 2023.

This paper proposes Roll-Drop, a simple strategy for sim-to-real transfer in Deep Reinforcement Learning (DRL) that uses dropout during simulation to account for observation noise at deployment without explicitly modelling its distribution for each state. DRL is a promising approach to controlling robots for highly dynamic, feedback-based manoeuvres, and accurate simulators are crucial for providing cheap and abundant data to learn the desired behaviour. Nevertheless, simulated data are noiseless and generally exhibit a distributional shift that challenges deployment on real machines, where sensor readings are affected by noise. The standard solution is to model this noise and inject it during training; while that requires thorough system identification, Roll-Drop improves robustness to sensor noise by tuning only a single parameter. We demonstrate an 80% success rate when up to 25% noise is injected into the observations, achieving twice the robustness of the baselines. We deploy the controller trained in simulation on a Unitree A1 platform and assess this improved robustness on the physical system.
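To make the idea concrete, the sketch below applies a Bernoulli dropout mask to an observation vector during a simulated rollout, governed by a single drop probability. This is only an illustrative assumption of how such a perturbation could look: the function name, the choice to drop raw observation components (rather than network activations), and the lack of inverted-dropout rescaling are not taken from the paper and may differ from its actual implementation.

```python
import numpy as np

def roll_drop_observation(obs: np.ndarray, drop_prob: float,
                          rng: np.random.Generator) -> np.ndarray:
    """Zero out each observation component with probability `drop_prob`.

    `drop_prob` plays the role of the single tunable parameter: it controls
    how aggressively observations are perturbed during simulated training.
    (Illustrative sketch; not the paper's reference implementation.)
    """
    keep_mask = rng.random(obs.shape) >= drop_prob  # keep entry with prob (1 - drop_prob)
    return obs * keep_mask

# Hypothetical usage inside a training rollout:
rng = np.random.default_rng(0)
obs = np.array([0.12, -0.34, 0.78, 0.05])          # e.g. joint positions / velocities
perturbed_obs = roll_drop_observation(obs, drop_prob=0.05, rng=rng)
```

At deployment no dropout is applied; the perturbation is used only in simulation so the learned policy tolerates corrupted sensor readings.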

Videos and supplementary material

Official publisher’s page

[pdf] [arXiv]
