
Automatic Data Generation for SORNet: PROPS Relation Dataset

Jace Aldrich
University of Michigan


Ariana Verges Alicea
University of Michigan


Hannah Ho
University of Michigan


[Figure: SORNet architecture as specified by the original authors (Yuan et al., 2021).]

Abstract

Spatial Object-Centric Representation Network (SORNet; Yuan et al., 2021) is a network architecture that takes an RGB image along with several canonical object views and outputs object-centric embeddings. The original authors trained and tested SORNet on their custom Leonardo and Kitchen datasets, as well as on the CLEVR dataset (Compositional Language and Elementary Visual Reasoning). We expanded SORNet's capability by training it on the PROPS dataset (Progress Robot Object Perception Samples), which was used extensively throughout this course. Training SORNet on PROPS allows us to test its capabilities on a real-world dataset and better understand how it performs in real-life applications.

Introduction

There is a plethora of applications for robots that can perform sequential tasks involving the manipulation of surrounding objects, ranging from object assembly to organizing, sorting, packing, and much more. To perform these tasks, however, robots need a way to recognize the positions of objects in the world frame and in relation to each other. Accurate predictions of the positional relationships between objects in a real-world setting are essential for performing those tasks, so we tackled applying SORNet to real-world data by training it on the PROPS dataset.

Algorithmic Extension

Our update to SORNet introduces an algorithmic extension designed to boost its performance on real-world data. By developing a base class that computes relations on 3D-pose or bounding-box datasets, we have made it possible for SORNet to process a diverse range of datasets containing scene images, identifiable objects, and 3D object coordinates. Users only need to overload a few methods that return image and object data, and their dataset will be compatible with SORNet. This enhancement notably streamlines the conversion of data into a format that SORNet can handle.
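The overload pattern described above can be sketched as follows. This is an illustrative, minimal version of the idea, not the exact API of our fork: the class and method names (`BaseRelationDataset`, `_get_object_positions`, `get_relations`) and the camera-frame convention (+x to the right, +z away from the camera) are assumptions for the sake of the example.

```python
# Minimal sketch of the base-class pattern: the base class owns the relation
# logic, and a subclass only supplies the data. Names and frame conventions
# here are illustrative assumptions, not the exact API of our fork.
class BaseRelationDataset:
    """Computes pairwise spatial relations from 3D object positions."""

    def _get_image(self, idx):
        raise NotImplementedError  # subclass returns the scene image

    def _get_object_positions(self, idx):
        raise NotImplementedError  # subclass returns {name: (x, y, z)}

    def get_relations(self, idx):
        # Left/right from x, in-front-of/behind from z, assuming a camera
        # frame with +x to the right and +z pointing away from the camera.
        objs = self._get_object_positions(idx)
        relations = {}
        for a, pa in objs.items():
            for b, pb in objs.items():
                if a == b:
                    continue
                relations[(a, b)] = (
                    pa[0] < pb[0],  # a is left of b
                    pa[0] > pb[0],  # a is right of b
                    pa[2] < pb[2],  # a is in front of b (closer to camera)
                    pa[2] > pb[2],  # a is behind b
                )
        return relations


class ToyDataset(BaseRelationDataset):
    """Minimal subclass: only the data-access method is overloaded."""

    def _get_object_positions(self, idx):
        # Hardcoded toy scene; a real subclass would read pose annotations.
        return {"mug": (0.0, 0.0, 0.5), "cracker_box": (0.3, 0.0, 0.2)}


rels = ToyDataset().get_relations(0)
print(rels[("mug", "cracker_box")])  # (True, False, False, True)
```

The subclass never touches the relation logic; it only answers "where are the objects in this scene?", which is the division of labor the framework aims for.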

[Figure: PROPS best-object canonical views.]

Results

As an example of our dataset conversion framework, SORNet was trained on the PROPS Pose dataset, with the resulting converted dataset named the PROPS Relation Dataset. The model was trained on relations describing whether objects in the dataset are "left", "right", "in front of", or "behind" other objects.

Example Predicate Classifications

[Figures: example predicate classification results on box and canned objects.]

Model Performance

With the PROPS Relation Dataset, SORNet achieved over 99% total validation accuracy and over 98% validation accuracy per object, demonstrating the efficacy of our approach. These results are almost identical to the results on CLEVR-CoGenT.

Full Size PROPS Data Validation Accuracy Percentages for all Relationships. Queries are in the form of "is [row] [relation] [column]?", e.g., "is the potted meat can behind the master chef can?"

| | Master Chef Can | Cracker Box | Sugar Box | Tomato Soup Can | Mustard Bottle | Tuna Fish Can | Gelatin Box | Potted Meat Can | Mug | Large Marker | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Master Chef Can | – | 99.30 | 99.77 | 98.80 | 98.90 | 98.77 | 98.65 | 99.20 | 99.15 | 98.85 | 99.04 |
| Cracker Box | 99.10 | – | 99.37 | 99.80 | 99.20 | 99.39 | 98.54 | 98.70 | 99.55 | 98.30 | 99.11 |
| Sugar Box | 99.20 | 99.14 | – | 99.09 | 99.37 | 98.89 | 99.75 | 99.32 | 99.54 | 99.20 | 99.28 |
| Tomato Soup Can | 98.40 | 99.65 | 99.26 | – | 99.40 | 98.87 | 98.86 | 99.60 | 99.00 | 99.15 | 99.13 |
| Mustard Bottle | 99.30 | 98.90 | 99.26 | 99.90 | – | 98.87 | 99.68 | 98.95 | 98.55 | 98.95 | 99.15 |
| Tuna Fish Can | 98.98 | 99.28 | 99.41 | 98.98 | 97.95 | – | 99.11 | 99.13 | 98.98 | 99.28 | 99.01 |
| Gelatin Box | 99.19 | 99.40 | 99.88 | 99.51 | 99.89 | 99.33 | – | 98.81 | 99.78 | 99.03 | 99.43 |
| Potted Meat Can | 99.20 | 98.70 | 99.03 | 99.75 | 98.30 | 99.38 | 98.81 | – | 98.90 | 98.20 | 98.92 |
| Mug | 98.80 | 99.45 | 99.49 | 98.80 | 98.70 | 98.92 | 99.51 | 99.65 | – | 99.45 | 99.20 |
| Large Marker | 98.30 | 98.10 | 99.43 | 99.20 | 98.95 | 99.23 | 99.24 | 99.20 | 99.55 | – | 99.03 |
| Average | 98.94 | 99.10 | 99.43 | 99.31 | 98.96 | 99.08 | 99.13 | 99.17 | 99.22 | 98.93 | 99.13 |
| Complete Average | 98.99 | 99.10 | 99.36 | 99.22 | 99.06 | 99.04 | 99.28 | 99.05 | 99.21 | 98.98 | |

Training Performance

With respect to the training process, the PROPS dataset had an initial period of little improvement that was much longer, relative to total training time, than CLEVR's: the figure below shows only the first ninth of CLEVR training, and CLEVR still improves much faster relative to its convergence. We theorize that real-world data contains more noise, leading to increased time to learn proper object embeddings.

[Figures: training curves — CLEVR dataset results (left) and PROPS dataset results (right).]

Other Dataset Use

We highly encourage you to check out our code and train SORNet on your own datasets! Check out our example usage in our SORNet fork and our example implementation for the PROPS dataset.

To use this framework with another dataset, simply create a new file and overload the BaseRelationDataset class, following the example in PropsRelationDataset. You need to overload each method that raises a "NotImplemented" error, in the same manner as the PROPS implementation does. If an existing dataset manager class is available, initialize it in the _init_parent_dataset() method of your derived class to make the implementation easier. Otherwise, load the appropriate file information in each class method, and the base class should handle the relation information automatically, provided that the camera frame uses the standard notation.
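The "wrap an existing dataset manager" path might look like the sketch below. Everything here is illustrative: `ExistingPoseDataset` is a hypothetical stand-in for whatever manager class your dataset already ships with, and in the real framework the derived class would inherit from BaseRelationDataset rather than defining its own access method from scratch.

```python
# Hypothetical sketch of reusing an existing dataset manager via the
# _init_parent_dataset() hook. ExistingPoseDataset is a stand-in; in the
# real framework MyRelationDataset would derive from BaseRelationDataset.
class ExistingPoseDataset:
    """Stand-in for a dataset manager class you already have."""

    def __init__(self):
        # One toy scene with two objects and their 3D positions.
        self._scenes = [{"mug": (0.0, 0.0, 0.5), "cracker_box": (0.3, 0.0, 0.2)}]

    def __len__(self):
        return len(self._scenes)

    def poses(self, idx):
        return self._scenes[idx]


class MyRelationDataset:
    """Derived class: implements only data access, so the base class's
    relation logic (not shown here) can do the rest."""

    def __init__(self):
        self._init_parent_dataset()

    def _init_parent_dataset(self):
        # Reuse the existing manager instead of re-parsing annotation files.
        self.parent = ExistingPoseDataset()

    def _get_object_positions(self, idx):
        return self.parent.poses(idx)


ds = MyRelationDataset()
print(sorted(ds._get_object_positions(0)))  # ['cracker_box', 'mug']
```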

If you would like to add further relations, simply overload the get_spatial_relations() method, and watch for any locations where a "4" was hardcoded for the number of relations.
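As a concrete (and purely illustrative) example, an overloaded relation function could add "above"/"below" on top of the original four relations. The function signature and relation names below are assumptions, not the exact fork API, and the sketch assumes a frame where +y points up:

```python
# Illustrative sketch of extending the relation set with "above"/"below".
# The signature and names are assumptions, not the exact fork API; the
# vertical axis is assumed to be +y (up).
def get_spatial_relations(pos_a, pos_b):
    return {
        "left": pos_a[0] < pos_b[0],
        "right": pos_a[0] > pos_b[0],
        "in front of": pos_a[2] < pos_b[2],
        "behind": pos_a[2] > pos_b[2],
        # Two new relations: any hardcoded "4" for the relation count
        # elsewhere in the pipeline would need to become 6.
        "above": pos_a[1] > pos_b[1],
        "below": pos_a[1] < pos_b[1],
    }


rels = get_spatial_relations((0.0, 0.2, 0.5), (0.3, 0.0, 0.2))
print(rels["above"])  # True
```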

Citation

If you found our work helpful, consider citing us with the following BibTeX reference:

@article{aldrich2024propsrelation,
  title = {Automatic Data Generation for SORNet: PROPS Relation Dataset},
  author = {Aldrich, Jace and Verges Alicea, Ariana and Ho, Hannah},
  year = {2024}
}

Contact

If you have any questions, feel free to contact Jace Aldrich, Ariana Verges Alicea, and Hannah Ho.