Reading data from a Database#

Instead of reading data directly from the running robot or from bag files, it is also possible to retrieve it from a database. This is usually faster during development and can also help with tf lookup problems. We use MongoDB as our database system.

Please also make sure that you have downloaded the ROS bag file from the previous tutorials and noted where you stored it.

Prerequisites#

Make sure to install MongoDB on your machine. On Ubuntu, this can be done with:

sudo apt install mongodb
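
If you want to verify that the MongoDB server is actually up and reachable before starting RoboKudo, a short check with the pymongo driver can help (you may need to install pymongo separately, e.g. via pip). This is only a minimal sketch and assumes MongoDB listens on the default localhost:27017; adjust the connection string if your setup differs.

# Minimal sketch: check that MongoDB is reachable on the default localhost:27017
import pymongo
from pymongo.errors import ServerSelectionTimeoutError

client = pymongo.MongoClient("mongodb://localhost:27017/", serverSelectionTimeoutMS=2000)
try:
    # server_info() forces a round trip to the server and raises if it is unreachable
    info = client.server_info()
    print(f"MongoDB is running, server version {info['version']}")
except ServerSelectionTimeoutError:
    print("Could not reach MongoDB on localhost:27017 -- is the service running?")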

Storing data: Idea#

The overall idea for using the database storage and preparing the necessary data import is as follows:

  • Start RoboKudo with the storage AE

  • Wait until it is fully initialized

  • Run the bag file with data you want to store in the DB OR get live sensor data from a real robot.

  • Close RoboKudo

Note

In the following, we will assume that you go through this tutorial with our test bag file. If you want to read data from other sensors, please look into the storage AE and adapt the CollectionReader and Camera config to your use case.

Storing data: Let’s try it out#

Start a roscore and RoboKudo, and then feed in the sensor data that shall be recorded. Please note that after starting RoboKudo, you should use the arrow keys in the visualizer window to switch to the ImagePreprocessor. This allows you to see the data that is being recorded.

roscore
rosrun robokudo main.py _ae=storage # wait until initialized. Go to ImagePreprocessor in the Visualizer window

# Get some sensor data. In this example, from a ros bag
rosbag play test.bag

Once the data has been captured, close RoboKudo. You can either wait until the bag file playback has finished, or cancel it early once you have recorded enough data.

Note

If you are not sure whether data is being read in, you can also execute rosrun rqt_py_trees rqt_py_trees to check that the Pipeline in the Behaviour Tree is updating and that the CollectionReader is receiving data.
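
After closing RoboKudo, you can also double-check directly in MongoDB that documents were actually stored. The following is only a sketch: since the database and collection names depend on how the storage AE is configured, it simply prints every database and collection it finds together with its document count.

# Sketch: list all databases/collections and their document counts,
# so you can spot the data that the storage AE has written.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
for db_name in client.list_database_names():
    db = client[db_name]
    for collection_name in db.list_collection_names():
        count = db[collection_name].count_documents({})
        print(f"{db_name}.{collection_name}: {count} documents")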

Reading data#

Simply use any Analysis Engine (robokudo/descriptors/analysis_engines) and configure its CollectionReader to use the Storage Interface.

Example:

rosrun robokudo main.py _ae=demo_from_storage

The key change in the AE mentioned above is highlighted here:

cr_storage_camera_config = robokudo.descriptors.camera_configs.config_mongodb_playback.CameraConfig()
cr_storage_config = CollectionReaderAnnotator.Descriptor(
    camera_config=cr_storage_camera_config,
    camera_interface=robokudo.io.storage_reader_interface.StorageReaderInterface(cr_storage_camera_config))

seq = robokudo.pipeline.Pipeline("StoragePipeline")
seq.add_children(
    [
        robokudo.annotators.outputs.ClearAnnotatorOutputs(),
        # Please note the descriptor argument, which gets the storage config
        CollectionReaderAnnotator(descriptor=cr_storage_config),
        ....

Bonus: Store and read multiple scenes in your database#

If you frequently switch between the scenes you need to work on, you can parametrize the CollectionReaderAnnotator and the StorageWriter to use specific data in your database. An example of this would be having multiple ROS bag files with different object setups or use cases and wanting to store all of them in your database.

To record data to a different database, create a StorageWriter.Descriptor(), set parameters.db_name on it, and pass the Descriptor to your StorageWriter initialization in the Analysis Engine (see our Tutorial on configuring your Annotator). To read from a specific database, adjust the db_name parameter in the CameraConfig of config_mongodb_playback.py in robokudo/descriptors/camera_configs.
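
As a rough sketch of what this could look like (the exact attribute paths are assumptions derived from the parameter names above, and "scene_breakfast_table" is just a hypothetical database name; check the StorageWriter and CameraConfig sources for the real API):

# Sketch only -- attribute paths follow the parameter names mentioned above,
# and "scene_breakfast_table" is a hypothetical database name.

# Writing: store the recorded data into a dedicated database, e.g. one per scene.
storage_writer_descriptor = StorageWriter.Descriptor()
storage_writer_descriptor.parameters.db_name = "scene_breakfast_table"
# ... then pass descriptor=storage_writer_descriptor to the StorageWriter in your storage AE.

# Reading: point the playback camera config at the same database
# (alternatively, edit db_name directly in config_mongodb_playback.py).
cr_storage_camera_config = robokudo.descriptors.camera_configs.config_mongodb_playback.CameraConfig()
cr_storage_camera_config.db_name = "scene_breakfast_table"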