I'm trying out converting only a region of interest of a depth image into a point cloud here: https://github.com/lucasw/image_manip/blob/fix_zero_frame_rate/image_manip_demo/scripts/open3d_viz.py (it's also a ROS node if anyone is interested), and it works fine: by offsetting the `cx` and `cy` intrinsic values, the sub-image is properly converted into 3D just like the full depth image. However, `geometry.PointCloud.create_from_depth_image()` doesn't appear to interpret array views like `depth_np[y0:y1, x0:x1]` correctly, so I added a `.copy()` - or is there a better way to do that without a copy?
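For reference, the ROI conversion boils down to something like this sketch (simplified from the linked script; `depth_np`, `fx`, `fy`, `cx`, `cy`, and the ROI bounds are illustrative names):

```python
import numpy as np
import open3d as o3d

# depth_np is the full HxW uint16 depth image; (x0, y0)-(x1, y1) is the ROI.
roi = depth_np[y0:y1, x0:x1]

# The legacy o3d.geometry.Image constructor wants a contiguous buffer, so a
# view into the full image has to be copied; np.ascontiguousarray() makes the
# copy explicit (it only copies when the input isn't already contiguous).
roi_img = o3d.geometry.Image(np.ascontiguousarray(roi))

# Shift the principal point so the ROI projects to the same 3D location it
# would occupy in the full image.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    x1 - x0, y1 - y0, fx, fy, cx - x0, cy - y0)

pcd = o3d.geometry.PointCloud.create_from_depth_image(roi_img, intrinsic)
```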
For some example data I have, the length of the point cloud's `.points` comes out a little less than the input image's `shape[0] * shape[1]` - I'm guessing because invalid points are discarded, but that means a pixel index that is valid in the image can't be used directly to index into the point cloud's points.
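Two ways I can think of to keep the pixel-to-point association (assuming `create_from_depth_image()`'s `project_valid_depth_only` flag is available in the installed Open3D build, and assuming points are emitted in row-major pixel order, which matches what I see):

```python
import numpy as np
import open3d as o3d

# Option A: keep a point for every pixel. Invalid pixels appear to come back
# with non-finite coordinates, and pixel (u, v) maps to index v * width + u.
pcd_dense = o3d.geometry.PointCloud.create_from_depth_image(
    depth_img, intrinsic, project_valid_depth_only=False)

# Option B: with the default (valid points only), rebuild the mapping from
# the valid-depth mask, assuming row-major emission order.
valid = depth_np > 0
pixel_to_point = np.full(depth_np.shape, -1, dtype=np.int64)
pixel_to_point[valid] = np.arange(np.count_nonzero(valid))
# pixel_to_point[v, u] indexes into pcd.points, or is -1 if discarded.
```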
I'm wondering if I could convert the whole image into a point cloud once and afterwards extract sections of it bounded by 2D array/pixel coordinates - or is the association with the original depth image array completely lost? A single full-image conversion would be more performant than converting cropped depth values n times for n regions of interest in cases where I have many overlapping rectangular sections of the image I want to extract point clouds for (see the back-projection sketch below for what I mean).
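By "convert once and slice afterwards" I mean a manual back-projection like this (a plain numpy sketch using the standard pinhole model, not an Open3D call; `fx`, `fy`, `cx`, `cy` are the full-image intrinsics), which keeps the (H, W) layout so any number of overlapping ROIs are just slices:

```python
import numpy as np
import open3d as o3d

def depth_to_organized_points(depth_np, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project a full depth image into an (H, W, 3) array of points,
    preserving the pixel layout so ROI extraction is a numpy slice."""
    h, w = depth_np.shape
    z = depth_np.astype(np.float64) / depth_scale
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # u: columns, v: rows
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack((x, y, z), axis=-1)

points = depth_to_organized_points(depth_np, fx, fy, cx, cy)

# Each ROI is a view into the organized array, projected only once.
roi_pts = points[y0:y1, x0:x1].reshape(-1, 3)
roi_pts = roi_pts[roi_pts[:, 2] > 0]  # drop invalid (zero-depth) pixels
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(roi_pts))
```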