robokudo.utils.o3d_helper
=========================

.. py:module:: robokudo.utils.o3d_helper


Functions
---------

.. autoapisummary::

   robokudo.utils.o3d_helper.put_obb_on_target_obb
   robokudo.utils.o3d_helper.transform_obb_relative_to_obb_center
   robokudo.utils.o3d_helper.get_obb_from_size_and_transform
   robokudo.utils.o3d_helper.get_2d_corner_points_from_3d_bb
   robokudo.utils.o3d_helper.get_2d_bounding_rect_from_3d_bb
   robokudo.utils.o3d_helper.draw_wireframe_of_obb_into_image
   robokudo.utils.o3d_helper.get_mask_from_pointcloud
   robokudo.utils.o3d_helper.scale_o3d_cam_intrinsics
   robokudo.utils.o3d_helper.o3d_cam_intrinsics_from_ros_cam_info
   robokudo.utils.o3d_helper.concatenate_clouds
   robokudo.utils.o3d_helper.get_cloud_from_rgb_depth_and_mask
   robokudo.utils.o3d_helper.create_line_for_visualization
   robokudo.utils.o3d_helper.create_sphere_from_translation


Module Contents
---------------

.. py:function:: put_obb_on_target_obb(input_obb, target_obb)

   We assume that positive z is 'up'.


.. py:function:: transform_obb_relative_to_obb_center(input_obb, target_obb, transform: numpy.typing.NDArray)


.. py:function:: get_obb_from_size_and_transform(bb_size: numpy.typing.NDArray, transform: numpy.typing.NDArray) -> open3d.geometry.OrientedBoundingBox

   :param bb_size: Numpy array with 3 elements for the BoundingBox size in X, Y, Z
   :param transform: 4x4 numpy array
   :return: o3d.geometry.OrientedBoundingBox


.. py:function:: get_2d_corner_points_from_3d_bb(cas: robokudo.cas.CAS, object_bb: open3d.geometry.OrientedBoundingBox) -> cv2.boundingRect


.. py:function:: get_2d_bounding_rect_from_3d_bb(cas: robokudo.cas.CAS, object_bb: open3d.geometry.OrientedBoundingBox) -> cv2.boundingRect

   Get a cv2.boundingRect which represents the space taken up by an open3d bounding box.
   Perspective projection and the color-to-depth ratio are taken into account.

   :param cas:
   :param object_bb:
   :return: cv2.boundingRect


.. py:function:: draw_wireframe_of_obb_into_image(cas: robokudo.cas.CAS, image: numpy.typing.NDArray, obb: open3d.geometry.OrientedBoundingBox) -> None


.. py:function:: get_mask_from_pointcloud(input_cloud, ref_image, cam_intrinsics: open3d.camera.PinholeCameraIntrinsic, mask_scale_factor=None, crop_to_ref=None)

   Generate a binary mask image by projecting the input_cloud onto the image plane of ref_image.

   TODO: Use the actual cam info here and not the PrimeSenseDefault

   :param input_cloud: 3D points which should be projected into the resulting mask image
   :param ref_image: A reference image for the dimensionality of the mask
   :param cam_intrinsics: Camera intrinsics that have been used to generate input_cloud
   :param mask_scale_factor: If your color and depth image sizes mismatch, you might have to scale
       your depth-based mask image to the dimensions of your color image. Please provide that factor
       here. Example: the Kinect has 1280x color images, whereas the depth images are only 640x.
       This would require a scale factor of 2.0.
   :param crop_to_ref: This is usually set to True if mask_scale_factor has been set. Crop the result
       of the scale operation to the size of ref_image (0:height) and (0:width)
   :return: A binary mask image for OpenCV
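The following is a minimal, hedged usage sketch for ``get_mask_from_pointcloud``. The synthetic
cloud, the 1280x960 placeholder color image and the PrimeSense default intrinsics are illustrative
assumptions, not values required by the module.

.. code-block:: python

   import numpy as np
   import open3d as o3d

   from robokudo.utils import o3d_helper

   # Placeholder data (assumption): a small synthetic cloud in front of the
   # camera and a 1280x960 'color' image that is only used for its dimensions.
   object_cloud = o3d.geometry.PointCloud()
   object_cloud.points = o3d.utility.Vector3dVector(
       np.random.uniform([-0.1, -0.1, 0.8], [0.1, 0.1, 1.0], size=(500, 3)))
   color_img = np.zeros((960, 1280, 3), dtype=np.uint8)

   # The cloud is assumed to come from a 640x480 depth image, so use the
   # matching PrimeSense default intrinsics; scale the depth-sized mask by 2.0
   # to reach the color resolution and crop it to the reference image size.
   depth_intrinsics = o3d.camera.PinholeCameraIntrinsic(
       o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
   mask = o3d_helper.get_mask_from_pointcloud(
       input_cloud=object_cloud,
       ref_image=color_img,
       cam_intrinsics=depth_intrinsics,
       mask_scale_factor=2.0,
       crop_to_ref=True)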
.. py:function:: scale_o3d_cam_intrinsics(cam_intrinsic: open3d.camera.PinholeCameraIntrinsic, scalex: float, scaley: float) -> open3d.camera.PinholeCameraIntrinsic

   Create and return a new cam intrinsic by scaling an input cam_intrinsic based on the scale
   factors scalex and scaley. Scaling is done by multiplying the factor with the relevant
   properties. Example: new_height = height * scaley

   :param cam_intrinsic:
   :param scalex: Scale factor for width/x
   :param scaley: Scale factor for height/y


.. py:function:: o3d_cam_intrinsics_from_ros_cam_info(cam_info)


.. py:function:: concatenate_clouds(clouds: list) -> open3d.geometry.PointCloud

   Combine a list of clouds into a single PointCloud, respecting the set colors.

   :param clouds: List of open3d.geometry.PointCloud
   :return: A single PointCloud, containing the same points and colors


.. py:function:: get_cloud_from_rgb_depth_and_mask(rgb_image: numpy.typing.NDArray, depth_image: numpy.typing.NDArray, mask: numpy.typing.NDArray, cam_intrinsics: open3d.camera.PinholeCameraIntrinsic, depth_truncate: float = 9.0, mask_true_val: int = 255) -> open3d.geometry.PointCloud

   Generate a pointcloud from an object mask, RGB and depth image.

   Important: The depth_image will be changed when executing this function. Provide a copy if
   necessary.

   Constraints:

   - RGB image, depth image and mask must be of the same resolution.

   :param rgb_image: An image in RGB order. Please note that cv2 normally uses BGR order, so you
       have to convert first.
   :param depth_image: A depth image
   :param mask: A numpy array of dtype np.uint8
   :param cam_intrinsics: Camera intrinsics for the rgb and depth image
   :param depth_truncate: During cloud creation, ignore points beyond this many meters
   :param mask_true_val: If 'mask' contains a value equal to 'mask_true_val', the corresponding
       value in 'depth_image' will be included. Otherwise it will be set to 0.


.. py:function:: create_line_for_visualization(origin: list, target: list, color: list) -> open3d.geometry.LineSet

   Create a LineSet that you can use to visualize a line between two points.

   :param origin: 3 element list: [x, y, z]
   :param target: 3 element list: [x, y, z]
   :param color: 3 element list: [r, g, b]
   :return: o3d.geometry.LineSet


.. py:function:: create_sphere_from_translation(origin: numpy.ndarray, color: list, radius: float)

   Create an o3d sphere for visualization.

   :param origin: 3D translation vector
   :param color: 3 element list with entries 0-1 for RGB
   :param radius: Radius of the sphere in meters
   :return: o3d sphere
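As a closing illustration, a hedged sketch of how ``get_cloud_from_rgb_depth_and_mask`` might be
combined with the visualization helpers. The flat synthetic input images, the millimeter depth
encoding and the chosen colors and radius are assumptions made for this example only.

.. code-block:: python

   import copy

   import numpy as np
   import open3d as o3d

   from robokudo.utils import o3d_helper

   # Synthetic placeholder inputs (assumption): a gray RGB image, a flat depth
   # image (here encoded in millimeters, i.e. 1.0 m everywhere), a full-frame
   # mask and the PrimeSense default intrinsics for 640x480.
   rgb = np.full((480, 640, 3), 128, dtype=np.uint8)
   depth = np.full((480, 640), 1000, dtype=np.uint16)
   object_mask = np.full((480, 640), 255, dtype=np.uint8)
   cam_intrinsics = o3d.camera.PinholeCameraIntrinsic(
       o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

   # The helper modifies the depth image in place, so hand over a copy.
   cloud = o3d_helper.get_cloud_from_rgb_depth_and_mask(
       rgb_image=rgb,
       depth_image=copy.deepcopy(depth),
       mask=object_mask,
       cam_intrinsics=cam_intrinsics,
       depth_truncate=3.0)

   # Visualization aids: a red line from the camera origin to the cloud
   # centroid and a small green sphere at the centroid itself.
   centroid = np.asarray(cloud.points).mean(axis=0)
   line = o3d_helper.create_line_for_visualization(
       [0.0, 0.0, 0.0], centroid.tolist(), [1.0, 0.0, 0.0])
   sphere = o3d_helper.create_sphere_from_translation(
       centroid, [0.0, 1.0, 0.0], 0.02)
   o3d.visualization.draw_geometries([cloud, line, sphere])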