robokudo.utils.annotator_helper

Functions

transform_pose_from_cam_to_world(...)

Transform a RoboKudo PoseAnnotation from camera coordinates to world coordinates. The transform is looked up from the CAS.

transform_cloud_from_cam_to_world(...)

Get the camera transform between camera and world and transform the incoming cloud from sensor coordinates to world coordinates.

transform_pose_from_world_to_cam(...)

Transform a RoboKudo PoseAnnotation from world coordinates to camera coordinates.

transform_cloud_from_world_to_cam(...)

Get the camera transform between world and the camera and transform the incoming cloud from world coordinates to sensor coordinates.

get_cam_to_world_transform_matrix(cas)

get_world_to_cam_transform_matrix(cas)

draw_bounding_boxes_from_object_hypotheses(img, ...)

Draw BoundingBoxes to an input image based on existing Object Hypotheses.

scale_cam_intrinsics(annotator)

If the color2depth ratio is not (1, 1), we will usually scale the color image to the same size as the depth image.

get_color_image(annotator)

Load and potentially resize the color image from the CAS.

resize_mask(→ numpy.typing.NDArray)

The mask is potentially created after the input image has been scaled down and may need to be resized back to the original resolution.

generate_source_name(→ str)

Helper to create the standard name for the 'source' field of Annotations.

Module Contents

robokudo.utils.annotator_helper.transform_pose_from_cam_to_world(cas: robokudo.cas.CAS, pose: robokudo.types.annotation.PoseAnnotation) → robokudo.types.annotation.PoseAnnotation

Transform a RoboKudo PoseAnnotation from camera coordinates to world coordinates. The transform is looked up from the CAS and may raise if the cam-to-world viewpoint is not present in the CAS.

Returns:

The transformed pose.
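A minimal usage sketch (the wrapper name pose_in_world and the surrounding error handling are illustrative, not part of the API):

    from typing import Optional

    from robokudo.cas import CAS
    from robokudo.types.annotation import PoseAnnotation
    from robokudo.utils.annotator_helper import transform_pose_from_cam_to_world

    def pose_in_world(cas: CAS, pose: PoseAnnotation) -> Optional[PoseAnnotation]:
        # The lookup may raise when the cam-to-world viewpoint is not
        # in the CAS, so guard the call.
        try:
            return transform_pose_from_cam_to_world(cas, pose)
        except Exception:
            return None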

robokudo.utils.annotator_helper.transform_cloud_from_cam_to_world(cas: robokudo.cas.CAS, cloud: open3d.geometry.PointCloud, transform_inplace: bool = False) → open3d.geometry.PointCloud

Get the camera transform between camera and world and transform the incoming cloud from sensor coordinates to world coordinates.

May raise if the viewpoint is not in the CAS.

Parameters:

transform_inplace – Controls whether the given ‘cloud’ is deep-copied before the transformation is done. Set to True if it is OK for the given cloud variable to be modified in place; set to False (default) to have a deepcopy performed beforehand.

Returns:

The transformed cloud.
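A usage sketch showing the default copy semantics (the wrapper cloud_in_world is illustrative):

    import open3d as o3d

    from robokudo.cas import CAS
    from robokudo.utils.annotator_helper import transform_cloud_from_cam_to_world

    def cloud_in_world(cas: CAS, cloud: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
        # transform_inplace=False (the default) deep-copies `cloud` first,
        # so the caller's point cloud stays in sensor coordinates.
        return transform_cloud_from_cam_to_world(cas, cloud, transform_inplace=False)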

robokudo.utils.annotator_helper.transform_pose_from_world_to_cam(cas: robokudo.cas.CAS, pose: robokudo.types.annotation.PoseAnnotation) → robokudo.types.annotation.PoseAnnotation

Transform a RoboKudo PoseAnnotation from world coordinates to camera coordinates. The transform is looked up from the CAS and may raise if the cam-to-world viewpoint is not present in the CAS.

Returns:

The transformed pose.

robokudo.utils.annotator_helper.transform_cloud_from_world_to_cam(cas: robokudo.cas.CAS, cloud: open3d.geometry.PointCloud, transform_inplace: bool = False) → open3d.geometry.PointCloud

Get the camera transform between world and the camera and transform the incoming cloud from world coordinates to sensor coordinates.

May raise if the viewpoint is not in the CAS.

Parameters:

transform_inplace – Controls whether the given ‘cloud’ is deep-copied before the transformation is done. Set to True if it is OK for the given cloud variable to be modified in place; set to False (default) to have a deepcopy performed beforehand.

Returns:

The transformed cloud.

robokudo.utils.annotator_helper.get_cam_to_world_transform_matrix(cas: robokudo.cas.CAS)
robokudo.utils.annotator_helper.get_world_to_cam_transform_matrix(cas: robokudo.cas.CAS)
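The return types are not annotated above; assuming both helpers return 4x4 homogeneous transform matrices (the usual convention for such transforms), a single 3D point could be mapped as in this sketch (point_cam_to_world is hypothetical):

    import numpy as np

    from robokudo.cas import CAS
    from robokudo.utils.annotator_helper import get_cam_to_world_transform_matrix

    def point_cam_to_world(cas: CAS, point_xyz: np.ndarray) -> np.ndarray:
        # Assumption: the helper returns a 4x4 homogeneous transform matrix.
        cam_to_world = get_cam_to_world_transform_matrix(cas)
        homogeneous = np.append(point_xyz, 1.0)  # [x, y, z, 1]
        return (cam_to_world @ homogeneous)[:3]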
robokudo.utils.annotator_helper.draw_bounding_boxes_from_object_hypotheses(img: numpy.typing.NDArray, object_hypotheses: list, text_function: Callable)

Draw BoundingBoxes to an input image based on existing Object Hypotheses.

Parameters:
  • img – A cv2 image represented as a numpy array

  • object_hypotheses – A list with robokudo Object Hypotheses

  • text_function – Use this parameter to create the label for each bounding box. It gets each ObjectHypothesis passed and must return a string.
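A sketch of a text_function (the label format and the attribute access are illustrative; adapt them to the fields your ObjectHypothesis instances actually carry):

    import numpy as np

    from robokudo.utils.annotator_helper import draw_bounding_boxes_from_object_hypotheses

    def label(hypothesis) -> str:
        # Hypothetical label; each ObjectHypothesis is passed in and a
        # string must come back.
        return f"obj {getattr(hypothesis, 'id', '?')}"

    def draw(img: np.ndarray, object_hypotheses: list) -> None:
        draw_bounding_boxes_from_object_hypotheses(img, object_hypotheses, label)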

robokudo.utils.annotator_helper.scale_cam_intrinsics(annotator: robokudo.annotators.core.BaseAnnotator)

If the color2depth ratio is not (1, 1), we will usually scale the color image to the same size as the depth image. This also requires an adjustment of the cam intrinsics, which this helper performs on the annotator. The annotator must have a self.cam_intrinsics attribute (see the sketch below).

Parameters:
  • annotator – The annotator whose self.cam_intrinsics should be scaled.
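A sketch of the expected setup; only the self.cam_intrinsics requirement comes from the description above, while the subclass and method names are assumptions:

    import robokudo.annotators.core
    from robokudo.utils import annotator_helper

    class MyAnnotator(robokudo.annotators.core.BaseAnnotator):
        def setup_intrinsics(self, cam_intrinsics):
            # scale_cam_intrinsics requires this attribute to exist.
            self.cam_intrinsics = cam_intrinsics
            # Rescales self.cam_intrinsics when the color2depth ratio
            # is not (1, 1).
            annotator_helper.scale_cam_intrinsics(self)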

robokudo.utils.annotator_helper.get_color_image(annotator: robokudo.annotators.core.BaseAnnotator)

Load and potentially resize the color image from the CAS.

This is useful, as the resolution of the RGB image might differ from the resolution of the depth image. This function resizes the color image to the resolution of the depth image.

This is important later on when we want to create a point cloud from the depth image and the color image and crop the point cloud to the bounding box of the object hypothesis, which is defined in the color image coordinate system.

Returns:

The resized color image.

Raises:

RuntimeError: If no color to depth ratio is provided by the camera driver.
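A guarded usage sketch (fetch_color_image is a hypothetical wrapper):

    from robokudo.utils import annotator_helper

    def fetch_color_image(annotator):
        # Returns the color image resized to the depth resolution, or None
        # when the camera driver provides no color-to-depth ratio.
        try:
            return annotator_helper.get_color_image(annotator)
        except RuntimeError:
            return None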

robokudo.utils.annotator_helper.resize_mask(annotator: robokudo.annotators.core.BaseAnnotator, mask: numpy.typing.NDArray) → numpy.typing.NDArray

The mask may have been created after the input image was scaled down (see get_scaled_color_image_for_depth_image; we usually scale the RGB image down). If that is the case, it has to be brought back to the original resolution. This method checks the usual error cases and decides whether a resize is needed at all.

Parameters:

mask – A binary image.

Returns:

The scaled version of mask, according to the COLOR2DEPTH_RATIO.
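A round-trip sketch combining get_color_image and resize_mask; the stand-in segmentation is purely illustrative:

    import numpy as np

    from robokudo.utils import annotator_helper

    def segment_and_upscale(annotator) -> np.ndarray:
        # Segment on the (possibly downscaled) color image, then bring the
        # binary mask back to the original resolution according to
        # COLOR2DEPTH_RATIO.
        color = annotator_helper.get_color_image(annotator)
        mask = (color.sum(axis=2) > 0).astype(np.uint8)  # stand-in segmentation
        return annotator_helper.resize_mask(annotator, mask)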

robokudo.utils.annotator_helper.generate_source_name(annotator: robokudo.annotators.core.BaseAnnotator) → str

Helper to create the standard name for the ‘source’ field of Annotations.

Parameters:

annotator – The annotator to generate the source name for

Returns:

The source name as string
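A usage sketch (tag_annotation is illustrative; assigning to the annotation's source attribute follows the description above):

    from robokudo.types.annotation import PoseAnnotation
    from robokudo.utils import annotator_helper

    def tag_annotation(annotator, pose: PoseAnnotation) -> PoseAnnotation:
        # Fill the annotation's 'source' field with the standardized name
        # for this annotator.
        pose.source = annotator_helper.generate_source_name(annotator)
        return pose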