`docs/services/vision/segmentation.md`
The types of segmenters supported are:
- [**Obstacles point cloud (`obstacles_pointcloud`)**](#configure-an-obstacles_pointcloud-segmenter): A segmenter that identifies well-separated objects above a flat plane.
- [**Object detector (`detector_3d_segmenter`)**](#configure-a-detector_3d_segmenter): This model takes 2D bounding boxes from an object detector and projects the pixels in the bounding box to points in 3D space.
- [**Obstacles depth (`obstacles_depth`)**](#configure-an-obstacles_depth-segmenter): A segmenter for depth cameras that returns the perceived obstacles as a set of 3-dimensional bounding boxes, each with a Pose as a vector.
## Configure an `obstacles_pointcloud` segmenter
The following parameters are available for a `detector_3d_segmenter`.
Click **Save config** and proceed to [test your segmenter](#test-your-segmenter).
## Configure an `obstacles_depth` segmenter
This segmenter model is for depth cameras and is best for motion planning with transient obstacles.
Use the segmenter to identify well-separated objects above a flat plane.
Configure an `obstacles_depth` segmenter:
{{< tabs >}}
{{% tab name="JSON Template" %}}
Add the following vision service object to the services array in your raw JSON configuration:
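The JSON template itself was not captured in this extraction. Based on the attributes documented below, a configuration object of roughly this shape is expected; the service name `<segmenter_name>` and all attribute values shown are placeholders, not defaults to copy verbatim:

```json
{
  "name": "<segmenter_name>",
  "type": "vision",
  "model": "obstacles_depth",
  "attributes": {
    "with_geometries": false,
    "h_min_m": 0.0,
    "h_max_m": 1.0,
    "theta_max_deg": 45,
    "return_pcds": false
  }
}
```

{{% /tab %}}
{{< /tabs >}}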
The following parameters are available for an `"obstacles_depth"` segmenter:
| Parameter | Inclusion | Description |
| --------- | --------- | ----------- |
|`with_geometries`| Required | Whether the segmenter returns multiple boxes (`true`) or a single point (`false`) within the `GeometryInFrame` object it captures. <br> Example: `false` </br> |
|`h_min_m`| Optional | The minimum vertical height in meters for an object to be considered an obstacle. <br> Default: `0.0` </br> |
|`h_max_m`| Optional | The maximum vertical height in meters at which an object is considered an obstacle. <br> Default: `1.0` </br> |
|`theta_max_deg`| Optional | The maximum slope in degrees at which a surface is still treated as the flat plane rather than an obstacle. <br> Default: `45` </br> |
|`return_pcds`| Optional | Whether to include pointclouds within the `GeometryInFrame` object captured by this segmenter. <br> Example: `false` </br> |
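To make the height and slope thresholds concrete, here is a small illustrative sketch of how a point's height above the plane and its local surface slope might be combined under these parameters. The function name and logic are hypothetical, written only to show how the thresholds interact; this is not Viam's implementation.

```python
def is_obstacle_point(height_m: float, slope_deg: float,
                      h_min_m: float = 0.0, h_max_m: float = 1.0,
                      theta_max_deg: float = 45.0) -> bool:
    """Illustrative only: classify one point against obstacles_depth-style thresholds.

    height_m: vertical height of the point above the detected flat plane, in meters.
    slope_deg: local surface slope at the point, in degrees.
    """
    # A surface gentle enough (slope <= theta_max_deg) is treated as ground.
    if slope_deg <= theta_max_deg:
        return False
    # Otherwise the point counts as an obstacle only inside the height band.
    return h_min_m <= height_m <= h_max_m


print(is_obstacle_point(0.5, 60.0))   # steep and within the height band -> True
print(is_obstacle_point(0.5, 30.0))   # gentle slope, treated as ground -> False
print(is_obstacle_point(1.5, 60.0))   # above h_max_m -> False
```

With the defaults shown, only steep surfaces between 0 and 1 meter above the plane count as obstacles.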
If you want to identify multiple boxes over the flat plane with your segmenter:
- First, [configure your frame system](/services/frame-system/#configuration) to specify the relative spatial orientation of the components of your robot, including your [camera](/components/camera/), within Viam's [frame system service](/services/frame-system/).
- After configuring your frame system, your camera populates its own `Properties` with these spatial intrinsic parameters from the frame system.
- You can get those parameters from your camera through the [camera API](/components/camera/#getproperties).
- Next, set your segmenter's attribute `with_geometries: true`.
- The segmenter then returns multiple boxes within the `GeometryInFrame` object it captures.
If you choose not to configure the frame system, you can still identify single points over the flat plane with your segmenter:
- Set `with_geometries: false`.
- The segmenter now returns a single point within the `GeometryInFrame` object it captures.
Click **Save config** and proceed to [test your segmenter](#test-your-segmenter).
## Test your segmenter
The following code uses the [`GetObjectPointClouds`](/services/vision/#getobjectpointclouds) method to run a segmenter vision model on an image from the robot's camera `"cam1"`:
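The code block referenced here did not survive extraction. A minimal sketch using the Viam Python SDK might look like the following; the robot address, API key, and the vision service name `"my_segmenter"` are placeholders you would replace with your own values, and running it requires a live, connected robot:

```python
import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient


async def main():
    # Placeholder credentials and address; substitute your robot's values.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>", api_key_id="<API-KEY-ID>"
    )
    robot = await RobotClient.at_address("<ROBOT-ADDRESS>", opts)

    # "my_segmenter" is an assumed name for your configured vision service.
    segmenter = VisionClient.from_robot(robot, "my_segmenter")

    # Run the segmenter on an image from the camera "cam1".
    objects = await segmenter.get_object_point_clouds("cam1")
    print(f"Found {len(objects)} objects")

    await robot.close()


asyncio.run(main())
```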