Commit c9dc76c
DOCS-982: Document obstacles_depth model of segmenter in vision service (#1721)
Co-authored-by: Naomi Pentrel <[email protected]>
1 parent 65ecdbd commit c9dc76c

1 file changed: +79 −0 lines changed


docs/services/vision/segmentation.md

@@ -19,6 +19,7 @@ The types of segmenters supported are:

- [**Obstacles point cloud (`obstacles_pointcloud`)**](#configure-an-obstacles_pointcloud-segmenter): A segmenter that identifies well-separated objects above a flat plane.
- [**Object detector (`detector_3d_segmenter`)**](#configure-a-detector_3d_segmenter): This model takes 2D bounding boxes from an object detector and projects the pixels in the bounding box to points in 3D space.
- [**Obstacles depth (`obstacles_depth`)**](#configure-an-obstacles_depth-segmenter): A segmenter for depth cameras that returns the perceived obstacles as a set of 3-dimensional bounding boxes, each with a Pose as a vector.

## Configure an `obstacles_pointcloud` segmenter

@@ -200,6 +201,84 @@ The following parameters are available for a `detector_3d_segmenter`.

Click **Save config** and proceed to [test your segmenter](#test-your-segmenter).

## Configure an `obstacles_depth` segmenter
This segmenter model is for depth cameras, and is best suited for motion planning with transient obstacles.
Use the segmenter to identify well-separated objects above a flat plane.
Configure an `obstacles_depth` segmenter:
{{< tabs >}}
{{% tab name="JSON Template" %}}
Add the following vision service object to the services array in your raw JSON configuration:
```json {class="line-numbers linkable-line-numbers"}
"services": [
  {
    "name": "<segmenter_name>",
    "type": "vision",
    "model": "obstacles_depth",
    "attributes": {
      "h_min_m": <number>,
      "h_max_m": <number>,
      "theta_max_deg": <number>,
      "return_pcds": <boolean>,
      "with_geometries": <boolean>
    }
  },
  ... // Other services
]
```
{{% /tab %}}
{{% tab name="JSON Example" %}}
```json {class="line-numbers linkable-line-numbers"}
"services": [
  {
    "name": "rc_segmenter",
    "type": "vision",
    "model": "obstacles_depth",
    "attributes": {
      "h_min_m": 0.0,
      "h_max_m": 1.0,
      "theta_max_deg": 45,
      "return_pcds": true,
      "with_geometries": true
    }
  }
]
```
{{% /tab %}}
{{< /tabs >}}
The following parameters are available for an `obstacles_depth` segmenter:
| Parameter | Inclusion | Description |
| --------- | --------- | ----------- |
| `with_geometries` | Required | Whether to return multiple boxes (`true`) or a single point (`false`) within the `GeometryInFrame` object captured by this segmenter. <br> Example: `false` </br> |
| `h_min_m` | Optional | The minimum vertical height in meters at which an object is considered an obstacle. <br> Default: `0.0` </br> |
| `h_max_m` | Optional | The maximum vertical height in meters at which an object is considered an obstacle. <br> Default: `1.0` </br> |
| `theta_max_deg` | Optional | The maximum slope, in degrees, at which a surface is still not considered an obstacle. <br> Default: `45` </br> |
| `return_pcds` | Optional | Whether to include point clouds within the `GeometryInFrame` object captured by this segmenter. <br> Example: `false` </br> |
If you want to identify multiple boxes over the flat plane with your segmenter:
- First, [configure your frame system](/services/frame-system/#configuration) to specify the relative spatial orientation of the components of your robot, including your [camera](/components/camera/), within Viam's [frame system service](/services/frame-system/).
- After configuring your frame system, your camera will populate its own `Properties` with these spatial intrinsic parameters from the frame system.
- You can get those parameters from your camera through the [camera API](/components/camera/#getproperties), as shown in the sketch after this list.
- Next, set your segmenter's attribute `with_geometries: true`.
- The segmenter now returns multiple boxes within the `GeometryInFrame` object it captures.
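A minimal sketch of reading those camera properties with the Viam Python SDK, assuming an API-key connection and a camera named `cam1` (the connection placeholders are illustrative, not part of this configuration):

```python {class="line-numbers linkable-line-numbers"}
import asyncio

from viam.robot.client import RobotClient
from viam.components.camera import Camera


async def main():
    # Connect to the robot; replace the placeholders with your own robot's
    # address and API key.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>", api_key_id="<API-KEY-ID>")
    robot = await RobotClient.at_address("<ROBOT-ADDRESS>", opts)

    # Get a handle to the depth camera used by the segmenter.
    cam = Camera.from_robot(robot, "cam1")

    # Once the frame system is configured, the camera's properties include
    # its intrinsic parameters.
    properties = await cam.get_properties()
    print(properties.intrinsic_parameters)

    await robot.close()


asyncio.run(main())
```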
If you choose not to configure the frame system, you can still identify single points over the flat plane with your segmenter:
- Set `with_geometries: false`.
- The segmenter now returns a single point within the `GeometryInFrame` object it captures.
Click **Save config** and proceed to [test your segmenter](#test-your-segmenter).
## Test your segmenter

The following code uses the [`GetObjectPointClouds`](/services/vision/#getobjectpointclouds) method to run a segmenter vision model on an image from the robot's camera `"cam1"`:
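A minimal Python sketch of such a request, assuming the Viam Python SDK's `VisionClient` and the `rc_segmenter` service from the example configuration above (connection placeholders are illustrative):

```python {class="line-numbers linkable-line-numbers"}
import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient


async def main():
    # Connect to the robot; replace the placeholders with your own robot's
    # address and API key.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>", api_key_id="<API-KEY-ID>")
    robot = await RobotClient.at_address("<ROBOT-ADDRESS>", opts)

    # Get a handle to the segmenter configured above.
    segmenter = VisionClient.from_robot(robot, "rc_segmenter")

    # Run the segmenter on the latest image from the camera "cam1" and
    # print how many obstacles it found.
    objects = await segmenter.get_object_point_clouds("cam1")
    print(f"Found {len(objects)} objects")

    await robot.close()


asyncio.run(main())
```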
