
Data from: SimPLE: A visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects

Cite this dataset

Bauza Villalonga, Maria; Bronars, Antonia; Rodriguez, Alberto (2024). Data from: SimPLE: A visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects [Dataset]. Dryad. https://doi.org/10.5061/dryad.vdncjsz3q

Abstract

Existing robotic systems have a clear tension between generality and precision. Deployed solutions for robotic manipulation tend to fall into the paradigm of one robot solving a single task, lacking precise generalization, i.e., the ability to solve many tasks without compromising on precision. This paper explores solutions for precise and general pick-and-place. In precise pick-and-place, i.e., kitting, the robot transforms an unstructured arrangement of objects into an organized arrangement, which can facilitate further manipulation. We propose simPLE (simulation to Pick Localize and PLacE) as a solution to precise pick-and-place. simPLE learns to pick, regrasp, and place objects precisely, given only the object CAD model and no prior experience. We develop three main components: task-aware grasping, visuotactile perception, and regrasp planning. Task-aware grasping computes affordances of grasps that are stable, observable, and favorable to placing. The visuotactile perception model relies on matching real observations against a set of simulated ones through supervised learning. Finally, we compute the desired robot motion by solving a shortest-path problem on a graph of hand-to-hand regrasps. On a dual-arm robot equipped with visuotactile sensing, we demonstrate pick-and-place of 15 diverse objects with simPLE. The objects span a wide range of shapes, and simPLE achieves successful placements into structured arrangements with 1 mm clearance over 90% of the time for 6 objects and over 80% of the time for 11 objects.
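
The regrasp planning step described in the abstract can be pictured as an ordinary shortest-path search. The sketch below is purely illustrative: the grasp IDs and edge costs are invented placeholders, not the paper's actual grasp affordances or planner, and `networkx` is just one convenient graph library.

```python
# Illustrative sketch of regrasp planning as a shortest-path problem:
# nodes are candidate grasps, edges are feasible hand-to-hand regrasps,
# and the planner finds the cheapest chain from a pick grasp to a grasp
# from which the object can be placed. All IDs and costs are hypothetical.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("pick_0", "g_1", 1.0),     # one hand-to-hand regrasp
    ("pick_0", "g_2", 1.5),
    ("g_1", "g_2", 0.2),
    ("g_1", "place_0", 1.0),
    ("g_2", "place_0", 0.5),    # "place_0" satisfies the placement tolerance
])

path = nx.shortest_path(G, "pick_0", "place_0", weight="weight")
print(path)  # ['pick_0', 'g_1', 'g_2', 'place_0'] for these costs
```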

README: SimPLE: a visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects

https://doi.org/10.5061/dryad.vdncjsz3q

This dataset contains code, object CAD models, and experimental results for the paper "SimPLE: A visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects".

Description of the data and file structure

We provide a zip folder containing the object CAD models as STL files, a zip folder containing the main repository for generating visuotactile perception models, and a PDF and an Excel sheet containing the experimental results for the precise placement trials.
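
As an illustration, here is a minimal sketch of inspecting one of the provided CAD models in Python. The directory and filename are hypothetical, and `trimesh` is just one common STL reader, not necessarily what the repository itself uses.

```python
# Load one of the provided STL object models and print basic geometry info.
# The path below is a hypothetical placeholder for a file from the zip folder.
import trimesh

mesh = trimesh.load("object_models/example_object.stl")
print(mesh.vertices.shape, mesh.faces.shape)  # vertex and triangle counts
print(mesh.bounds)                            # axis-aligned bounding box
```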

The file simPLE_experimenal_results.xlsx details the experimental outcome of each precise pick-and-place trial we ran. For each object, we provide the outcome (in the column "outcome") of each of 20 trials of our method (in the column "simPLE"). For five of the objects, we additionally provide the outcome of each of 20 trials per baseline (in the columns "vision baseline", "tactile baseline", and "DexNet baseline"). The outcome takes one of three values, success, near success, or failure, as defined in the paper. Cells are colored green for successful trials, orange for near successes, and red for failures. We also provide the number of hand-to-hand regrasps attempted per trial in the column "# regrasps".
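
A minimal sketch of loading and summarizing these results with pandas follows. The column names match the description above, but the one-row-per-trial layout and the "object" column are assumptions about the workbook's structure and may need adjusting to match the actual file.

```python
# Summarize per-object success rates from the results spreadsheet.
# Assumes one row per trial with "object" and "outcome" columns; the
# actual workbook layout may differ from this sketch.
import pandas as pd

df = pd.read_excel("simPLE_experimenal_results.xlsx")

# Fraction of trials per object whose outcome is "success"
# (each object has 20 trials, as described above).
success = df["outcome"] == "success"
rates = success.groupby(df["object"]).mean()
print(rates)
```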

Sharing/Access information

Some parts of the data can also be found on the project's website.

Funding

ABB (United States)

Magna International (United States)