At the core of Deep Learning is a set of computational techniques for identifying and exploiting patterns in datasets. Robots will only benefit from these techniques if we, as roboticists, provide sufficiently large datasets that are representative of the environments and tasks in which we anticipate our robots acting. The Progress Robot Object Perception Samples (PROPS) datasets have been created to support the development of deep learning models in a variety of robot perception contexts.
The PROPS datasets consist of downsampled versions of data collected with the ProgressLabeller annotation tool (Chen et al., 2022). The datasets focus on table-top scenes inspired by the environments a domestic service robot would be expected to encounter. Objects in these scenes are from the YCB Object and Model Set (Calli et al., 2015).
Course projects in DeepRob are built using the PROPS datasets.
This portion of the dataset is tailored to image classification tasks. Its format is based on that of the CIFAR-10 dataset. The PROPS Classification dataset contains 10 object categories with 50K training images and 10K testing images. Each image in the dataset is a 32x32 RGB color image. All images in the test set are taken from scenes not represented in the training set.
This portion of the dataset is tailored to object detection tasks. The PROPS Detection dataset contains 10 object categories with 2.5K training images and 2.5K validation images. Each image in the dataset is a 640x480 RGB color image. All images in the validation set are taken from scenes not represented in the training set.
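Detection labels pair each 640x480 image with per-object bounding boxes. A common step when preparing such annotations is converting a pixel-space box to normalized center coordinates, the target format used by many detectors. The helper below is an illustrative sketch, not the actual PROPS annotation schema; the (x, y, w, h) top-left-corner convention is an assumption.

```python
def box_to_center_norm(x, y, w, h, img_w=640, img_h=480):
    """Convert a pixel-space box (top-left corner x, y plus width, height)
    to normalized (cx, cy, w, h) with all values in [0, 1].

    Assumes the 640x480 image size used by the PROPS Detection dataset.
    """
    cx = (x + w / 2.0) / img_w   # box center, as a fraction of image width
    cy = (y + h / 2.0) / img_h   # box center, as a fraction of image height
    return cx, cy, w / img_w, h / img_h

# Example: a box covering the full image maps to a centered unit box.
full = box_to_center_norm(0, 0, 640, 480)
```

Normalizing boxes this way makes the targets independent of image resolution, which simplifies resizing images during data augmentation.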
This portion of the dataset is tailored to 6 degree-of-freedom (6-DoF) rigid-body object pose estimation. The PROPS Pose dataset contains 10 object categories with 500 training images and 500 validation images. Each image in the dataset is a 640x480 RGB color image. Aligned depth images and segmentation masks are also included in the dataset.
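A 6-DoF pose is a rotation plus a translation placing the object in the camera frame. The sketch below builds such a pose as a 4x4 homogeneous transform, applies it to an object-frame point, and projects the result into a 640x480 image with a pinhole camera model. The rotation, translation, and intrinsics values are illustrative assumptions, not the actual PROPS camera parameters.

```python
import numpy as np

# Illustrative 6-DoF pose: rotation of 45 degrees about the camera z-axis
# and a translation (in meters) placing the object ~0.8m in front of the camera.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, -0.05, 0.8])

# Assemble the 4x4 homogeneous transform from object frame to camera frame.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# Transform a point on the object (homogeneous coordinates) into the camera frame.
p_obj = np.array([0.02, 0.0, 0.0, 1.0])
p_cam = T @ p_obj

# Illustrative pinhole intrinsics (fx, fy, principal point) for a 640x480 image.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
uv = K @ p_cam[:3]
u, v = uv[0] / uv[2], uv[1] / uv[2]  # pixel coordinates of the projected point
```

This object-to-camera transform is exactly what a pose-estimation model is trained to predict; the aligned depth images and segmentation masks in the dataset provide the geometric supervision that makes recovering it tractable.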