POLAR-Sim: Augmenting NASA's POLAR dataset for data-driven lunar perception and rover simulation

Abstract

NASA's POLAR (Polar Optical Lunar Analog Reconstruction) dataset contains approximately 2,600 pairs of high dynamic range stereo photos captured across 12 varied terrain scenes, including areas with sparse or dense rock distributions, craters, and rocks of different sizes. The purpose of these photos is to spur research and development in robotics, AI-based perception, and autonomous navigation. Because actual photos from around the lunar poles are scarce, NASA Ames produced, on Earth and under controlled conditions, photos that resemble the conditions a rover would encounter while operating in these regions of the Moon.
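As a hedged illustration of working with the raw data, the Python sketch below loads one stereo pair. The directory layout, file names, and TIFF format are assumptions made for illustration; consult the dataset's documentation for the actual organization.

```python
# A minimal sketch of loading one HDR stereo pair, assuming a
# per-scene directory layout and TIFF file names that are purely
# illustrative -- the dataset's documentation defines the real layout.
from pathlib import Path

import cv2

DATA_ROOT = Path("polar")  # hypothetical dataset root

def load_stereo_pair(scene: int, frame: int):
    """Return the (left, right) images of one stereo frame."""
    scene_dir = DATA_ROOT / f"scene{scene:02d}"
    # IMREAD_ANYDEPTH keeps the high-bit-depth pixel values instead
    # of clipping them to 8 bits, which matters for HDR content.
    flags = cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR
    left = cv2.imread(str(scene_dir / f"left_{frame:04d}.tif"), flags)
    right = cv2.imread(str(scene_dir / f"right_{frame:04d}.tif"), flags)
    return left, right
```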
 
This dataset, named POLAR-Sim, provides bounding boxes and semantic segmentation information for all the photos in NASA's POLAR dataset, amounting to 23,000 labels and segmentation masks for rocks and the shadows they cast. Furthermore, we produced individual meshes for the ground and the rocks in each scene. This allows anyone with a camera model to generate synthetic images associated with any of the 12 scenarios of the POLAR dataset, as sketched below. Effectively, one can generate as many semantically labeled synthetic images as desired: from different viewpoints in the scene, with different exposure values, for different positions of the Sun, and with or without active illumination.
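The sketch below shows one way such a rendering pipeline could look. It is not the authors' tooling: the OBJ file names, the choice of the pyrender library, and all poses and parameters are assumptions made for illustration.

```python
# A hedged sketch of rendering one synthetic view from the per-scene
# meshes. File names, the renderer, and all numeric values below are
# illustrative assumptions, not part of the dataset.
import numpy as np
import trimesh
import pyrender

# Load the ground and rock meshes for one scene (hypothetical paths).
ground = trimesh.load("scene01/ground.obj", force="mesh")
rocks = trimesh.load("scene01/rocks.obj", force="mesh")

scene = pyrender.Scene(ambient_light=[0.0, 0.0, 0.0])
scene.add(pyrender.Mesh.from_trimesh(ground))
scene.add(pyrender.Mesh.from_trimesh(rocks))

# Pinhole camera 1.5 m above the terrain, tilted to look forward and
# slightly down (pyrender cameras look along their local -z axis).
camera = pyrender.PerspectiveCamera(yfov=np.deg2rad(60.0))
cam_pose = trimesh.transformations.rotation_matrix(np.deg2rad(75.0), [1, 0, 0])
cam_pose[:3, 3] = [0.0, -5.0, 1.5]
scene.add(camera, pose=cam_pose)

# Model the Sun as a distant directional light; tilting it ~85 degrees
# from vertical mimics the grazing illumination at the lunar poles.
sun = pyrender.DirectionalLight(color=np.ones(3), intensity=10.0)
sun_pose = trimesh.transformations.rotation_matrix(np.deg2rad(85.0), [1, 0, 0])
scene.add(sun, pose=sun_pose)

renderer = pyrender.OffscreenRenderer(viewport_width=1024, viewport_height=768)
color, depth = renderer.render(scene)  # color image plus a depth map
renderer.delete()
```

Varying the camera pose, the light rotation, or adding a light near the camera would emulate the different viewpoints, Sun positions, and active-illumination settings mentioned above; exposure can be varied when tone-mapping the rendered output.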
 
The benefit of this work is twofold. Using the photo annotations, one can train and test perception algorithms that operate on Moon photos; a sketch of consuming the labels follows below. Using the scene meshes, one can produce as much data as desired to train and test AI algorithms anticipated for use in lunar conditions. All the outcomes of this work are available in a public repository for unfettered use and distribution.
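To make the first use concrete, here is a minimal sketch of reading the labels for detector training or evaluation. The COCO-style JSON file name and layout are assumptions made for illustration; the public repository documents the actual annotation format.

```python
# A minimal sketch of consuming the rock/shadow labels. The file name
# "polar_sim_annotations.json" and the COCO-style layout are
# hypothetical; check the repository for the real format.
import json

with open("polar_sim_annotations.json") as f:  # hypothetical file name
    coco = json.load(f)

# Group the bounding boxes by image, keeping the rock and rock-shadow
# categories distinguishable through their category ids.
boxes_by_image = {}
for ann in coco["annotations"]:
    boxes_by_image.setdefault(ann["image_id"], []).append(
        (ann["category_id"], ann["bbox"])  # bbox as [x, y, width, height]
    )
```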