diff --git a/docs/data-ai/_index.md b/docs/data-ai/_index.md index c002e81b2e..e966f75431 100644 --- a/docs/data-ai/_index.md +++ b/docs/data-ai/_index.md @@ -58,7 +58,7 @@ You can also monitor your machines through teleop, power your application logic, {{% card link="/data-ai/ai/deploy/" noimage="true" %}} {{% card link="/data-ai/ai/run-inference/" noimage="true" %}} {{% card link="/data-ai/ai/alert/" noimage="true" %}} -{{% card link="/data-ai/ai/act/" noimage="true" %}} +{{% card link="/data-ai/ai/make-decisions-autonomously/" noimage="true" %}} {{< /cards >}} {{< /how-to-expand >}} diff --git a/docs/data-ai/ai/act.md b/docs/data-ai/ai/act.md deleted file mode 100644 index b66b36a3a7..0000000000 --- a/docs/data-ai/ai/act.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -linkTitle: "Act based on inferences" -title: "Act based on inferences" -weight: 70 -layout: "docs" -type: "docs" -description: "Use the vision service API to act based on inferences." -next: "/data-ai/train/upload-external-data/" ---- - -You can use the [vision service API](/dev/reference/apis/services/vision/) to get information about your machine's inferences and program behavior based on that. - -The following are examples of what you can do using a vision service alongside hardware: - -- [Line following robot](#program-a-line-following-robot): Using computer vision to follow objects or a pre-determined path -- [Accident prevention and quality assurance](#act-in-industrial-applications) - -## Program a line following robot - -For example, you can [program a line following robot](/tutorials/services/color-detection-scuttle/) that uses a vision service to follow a colored object. - -You can use the following code to detect and follow the location of a colored object: - -{{% expand "Click to view code" %}} - -```python {class="line-numbers linkable-line-numbers"} -async def connect(): - opts = RobotClient.Options.with_api_key( - # Replace "" (including brackets) with your machine's API key - api_key='', - # Replace "" (including brackets) with your machine's - # API key ID - api_key_id='' - ) - return await RobotClient.at_address("MACHINE ADDRESS", opts) - - -# Get largest detection box and see if it's center is in the left, center, or -# right third -def leftOrRight(detections, midpoint): - largest_area = 0 - largest = {"x_max": 0, "x_min": 0, "y_max": 0, "y_min": 0} - if not detections: - print("nothing detected :(") - return -1 - for d in detections: - a = (d.x_max - d.x_min) * (d.y_max-d.y_min) - if a > largest_area: - a = largest_area - largest = d - centerX = largest.x_min + largest.x_max/2 - if centerX < midpoint-midpoint/6: - return 0 # on the left - if centerX > midpoint+midpoint/6: - return 2 # on the right - else: - return 1 # basically centered - - -async def main(): - spinNum = 10 # when turning, spin the motor this much - straightNum = 300 # when going straight, spin motor this much - numCycles = 200 # run the loop X times - vel = 500 # go this fast when moving motor - - # Connect to robot client and set up components - machine = await connect() - base = Base.from_robot(machine, "my_base") - camera_name = "" - camera = Camera.from_robot(machine, camera_name) - frame = await camera.get_image(mime_type="image/jpeg") - - # Convert to PIL Image - pil_frame = viam_to_pil_image(frame) - - # Grab the vision service for the detector - my_detector = VisionClient.from_robot(machine, "my_color_detector") - - # Main loop. Detect the ball, determine if it's on the left or right, and - # head that way. 
Repeat this for numCycles - for i in range(numCycles): - detections = await my_detector.get_detections_from_camera(camera_name) - - answer = leftOrRight(detections, pil_frame.size[0]/2) - if answer == 0: - print("left") - await base.spin(spinNum, vel) # CCW is positive - await base.move_straight(straightNum, vel) - if answer == 1: - print("center") - await base.move_straight(straightNum, vel) - if answer == 2: - print("right") - await base.spin(-spinNum, vel) - # If nothing is detected, nothing moves - - await robot.close() - -if __name__ == "__main__": - print("Starting up... ") - asyncio.run(main()) - print("Done.") -``` - -{{% /expand%}} - -If you configured the color detector to detect red, your rover should detect and navigate towards any red objects that come into view of its camera. -Use something like a red sports ball or book cover as a target to follow to test your rover: - -
-{{
- -## Act in industrial applications - -You can also act based on inferences in an industrial context. -For example, you can program a robot arm to halt operations when workers enter dangerous zones, preventing potential accidents. - -The code for this would look like: - -```python {class="line-numbers linkable-line-numbers"} -detections = await detector.get_detections_from_camera(camera_name) -for d in detections: - if d.confidence > 0.6 and d.class_name == "PERSON": - arm.stop() -``` - -You can also use inferences of computer vision for quality assurance purposes. -For example, you can program a robot arm doing automated harvesting to use vision to identify ripe produce and pick crops selectively. - -The code for this would look like: - -```python {class="line-numbers linkable-line-numbers"} -classifications = await detector.get_classifications_from_camera( - camera_name, - 4) -for c in classifications: - if d.confidence > 0.6 and d.class_name == "RIPE": - arm.pick() -``` - -To get inferences programmatically, you will want to use the vision service API: - -{{< cards >}} -{{% card link="/dev/reference/apis/services/vision/" customTitle="Vision service API" noimage="True" %}} -{{< /cards >}} - -To implement industrial solutions in code, you can also explore the following component APIs: - -{{< cards >}} -{{< card link="/dev/reference/apis/components/arm/" customTitle="Arm API" noimage="True" >}} -{{< card link="/dev/reference/apis/components/base/" customTitle="Base API" noimage="True" >}} -{{< card link="/dev/reference/apis/components/camera/" customTitle="Camera API" noimage="True" >}} -{{< card link="/dev/reference/apis/components/gripper/" customTitle="Gripper API" noimage="True" >}} -{{< card link="/dev/reference/apis/components/motor/" customTitle="Motor API" noimage="True" >}} -{{< card link="/dev/reference/apis/components/sensor/" customTitle="Sensor API" noimage="True" >}} -{{< /cards >}} diff --git a/docs/data-ai/ai/make-decisions-autonomously.md b/docs/data-ai/ai/make-decisions-autonomously.md new file mode 100644 index 0000000000..7d901f0fb8 --- /dev/null +++ b/docs/data-ai/ai/make-decisions-autonomously.md @@ -0,0 +1,1059 @@ +--- +linkTitle: "Make decisions autonomously" +title: "Make decisions autonomously" +weight: 70 +layout: "docs" +type: "docs" +description: "Use the vision service API to act based on inferences." +next: "/data-ai/train/upload-external-data/" +aliases: + - /data-ai/ai/act/ +--- + +Use the [vision service API](/dev/reference/apis/services/vision/) to make inferences, then use [component APIs](/dev/reference/apis/#component-apis) to react to inferences with a machine. + +## Follow a line + +This module uses a vision service and a motor to program a machine to follow a line of a configurable color. + +### Prerequisites + +- An SBC, for example a Raspberry Pi 4 +- A wheeled base component such as a [SCUTTLE robot](https://www.scuttlerobot.org/shop/) +- A webcam +- Colored tape, to create a path for your robot + +### Configure your machine + +Follow the [setup guide](/operate/get-started/setup/) to create a new machine. + +Connect your SCUTTLE base to your SBC. 
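+The motor configuration below assumes the SCUTTLE motor drivers are wired to physical pins 15 and 16 (left motor) and 11 and 12 (right motor) on the Raspberry Pi; adjust the pin numbers to match your own wiring.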
+Add the following `components` configuration to create board, base, and motor components in Viam so you can control your SCUTTLE base: + +```json +{ + "name": "my-board", + "model": "pi", + "api": "rdk:component:board", + "attributes": {} +}, +{ + "name": "leftm", + "model": "gpio", + "api": "rdk:component:motor", + "attributes": { + "pins": { + "a": "15", + "b": "16" + }, + "board": "my-board", + "max_rpm": 200 + } +}, +{ + "name": "rightm", + "model": "gpio", + "api": "rdk:component:motor", + "attributes": { + "pins": { + "b": "11", + "dir": "", + "pwm": "", + "a": "12" + }, + "board": "my-board", + "max_rpm": 200 + } +}, +{ + "name": "scuttlebase", + "model": "wheeled", + "api": "rdk:component:base", + "attributes": { + "width_mm": 400, + "wheel_circumference_mm": 258, + "left": ["leftm"], + "right": ["rightm"] + } +} +``` + +Connect your webcam to your SBC. +Add the following `components` configuration for your webcam: + +```json +{ + "name": "my_camera", + "model": "webcam", + "api": "rdk:component:camera", + "attributes": { + "video_path": "" + } +} +``` + +Finally, add the following `services` configuration for your vision service, replacing the `detect_color` value with the color of your line: + +```json +{ + "name": "my_line_detector", + "api": "rdk:service:vision", + "model": "color_detector", + "attributes": { + "segment_size_px": 100, + "detect_color": "#19FFD9", // replace with the color of your line + "hue_tolerance_pct": 0.06 + } +} +``` + +### Create your module + +In a terminal, run the following command: + +```sh {id="terminal-prompt" class="command-line" data-prompt="$"} +viam module generate +``` + +Enter the following configuration for your new module: + +- **module name**: "autonomous_example_module" +- **language**: Python +- **visibility**: private +- **organization ID**: your organization ID, found on the Viam organization settings page +- **resource to be added to the module**: Generic Service +- **model name**: "line_follower" +- **Enable cloud build**: yes +- **Register module**: yes + +Create a file called reload.sh in the root directory of your newly-generated module. +Copy and paste the following code into reload.sh: + +```bash +#!/usr/bin/env bash + +# bash safe mode. look at `set --help` to see what these are doing +set -euxo pipefail + +cd $(dirname $0) +MODULE_DIR=$(dirname $0) +VIRTUAL_ENV=$MODULE_DIR/venv +PYTHON=$VIRTUAL_ENV/bin/python +./setup.sh + +# Be sure to use `exec` so that termination signals reach the python process, +# or handle forwarding termination signals manually +exec $PYTHON src/main.py $@ +``` + +In a terminal, run the following command to make reload.sh executable: + +```sh {id="terminal-prompt" class="command-line" data-prompt="$"} +chmod +x reload.sh +``` + +Edit your meta.json, replacing the `"entrypoint"`, `"build"`, and `"path"` fields as follows: + +```json {class="line-numbers linkable-line-numbers" data-start="13" data-line="1, 4, 6" } + "entrypoint": "reload.sh", + "first_run": "", + "build": { + "build": "rm -f module.tar.gz && tar czf module.tar.gz requirements.txt src/*.py src/models/*.py meta.json setup.sh reload.sh", + "setup": "./setup.sh", + "path": "module.tar.gz", + "arch": [ + "linux/amd64", + "linux/arm64" + ] + } +``` + +### Code + +Replace the contents of src/models/line_follower.py with the following code. +Replace the `` placeholder with your organization namespace. 
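+
+If you are unsure where this file lives, the generated module layout looks roughly like the following (a sketch based on the build globs in meta.json; the exact files can vary by generator version):
+
+```
+autonomous_example_module/
+├── meta.json
+├── requirements.txt
+├── setup.sh
+├── reload.sh        # the script you created above
+└── src/
+    ├── main.py
+    └── models/
+        └── line_follower.py
+```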
+
+```python {class="line-numbers linkable-line-numbers"}
+import asyncio
+from typing import Any, Mapping, Sequence, Tuple
+from typing_extensions import Self
+
+from viam.components.base import Base, Vector3
+from viam.components.camera import Camera
+from viam.logging import getLogger
+from viam.module.module import Module
+from viam.resource.base import ResourceBase
+from viam.resource.types import Model, ModelFamily
+from viam.resource.registry import Registry, ResourceCreatorRegistration
+from viam.proto.app.robot import ComponentConfig
+from viam.proto.common import ResourceName
+from viam.services.vision import VisionClient
+
+class LineFollower(Module, ResourceBase):
+    MODEL = Model(
+        ModelFamily("", "autonomous_example_module"), "line_follower")
+    LOGGER = getLogger(__name__)
+
+    def __init__(self, name: str):
+        super().__init__(name)
+        self.camera: Camera = None
+        self.base: Base = None
+        self.detector: VisionClient = None
+        self._running_loop = False
+        self._loop_task = None
+        self.linear_power = 0.35
+        self.angular_power = 0.3
+
+    @classmethod
+    def new_resource(cls,
+                     config: ComponentConfig,
+                     dependencies: Mapping[ResourceName, ResourceBase]) -> Self:
+        instance = cls(config.name)
+        instance.reconfigure(config, dependencies)
+        return instance
+
+    @classmethod
+    def validate(cls, config: ComponentConfig) -> Tuple[Sequence[str], Sequence[str]]:
+        camera_name = config.attributes.fields["camera_name"].string_value
+        detector_name = config.attributes.fields["detector_name"].string_value
+        base_name = config.attributes.fields["base_name"].string_value
+
+        dependencies = [camera_name, detector_name, base_name]
+        return dependencies, []
+
+    def reconfigure(self,
+                    config: ComponentConfig,
+                    dependencies: Mapping[ResourceName, ResourceBase]):
+        self.camera_name = config.attributes.fields["camera_name"].string_value
+        self.detector_name = config.attributes.fields["detector_name"].string_value
+        self.base_name = config.attributes.fields["base_name"].string_value
+
+        for dependency_name, dependency in dependencies.items():
+            if (dependency_name.subtype == "camera"
+                    and dependency_name.name == self.camera_name):
+                self.camera = dependency
+            elif (dependency_name.subtype == "vision"
+                    and dependency_name.name == self.detector_name):
+                self.detector = dependency
+            elif (dependency_name.subtype == "base"
+                    and dependency_name.name == self.base_name):
+                self.base = dependency
+
+        if not self.camera:
+            raise ValueError(f"Camera '{self.camera_name}' dependency not found.")
+        if not self.detector:
+            raise ValueError(f"Vision service '{self.detector_name}' dependency not found.")
+        if not self.base:
+            raise ValueError(f"Base '{self.base_name}' dependency not found.")
+
+        LineFollower.LOGGER.info("Reconfigured.")
+
+    async def start(self):
+        LineFollower.LOGGER.info("Starting color following...")
+        await self._start_color_following_internal()
+
+    async def close(self):
+        LineFollower.LOGGER.info("Stopping color following...")
+        await self._stop_color_following_internal()
+        LineFollower.LOGGER.info("Stopped.")
+
+    async def _color_following_loop(self):
+        LineFollower.LOGGER.info("Color following loop started.")
+
+        while self._running_loop:
+            try:
+                # Check for color in front
+                if await self._is_color_in_front():
+                    LineFollower.LOGGER.info("Moving forward.")
+                    await self.base.set_power(Vector3(y=self.linear_power), Vector3())
+                # Check for color to the left
+                elif await self._is_color_there("left"):
+                    LineFollower.LOGGER.info("Turning left.")
+                    await self.base.set_power(Vector3(), Vector3(z=self.angular_power))
+                # 
Check for color to the right + elif await self._is_color_there("right"): + LineFollower.LOGGER.info("Turning right.") + await self.base.set_power(Vector3(), Vector3(z=-self.angular_power)) + else: + LineFollower.LOGGER.info("No color detected. Stopping.") + await self.base.stop() + + except Exception as e: + LineFollower.LOGGER.error(f"Error in color following loop: {e}") + + await asyncio.sleep(0.05) + + LineFollower.LOGGER.info("Color following loop finished.") + await self.base.stop() + + async def _start_color_following_internal(self): + if not self._running_loop: + self._running_loop = True + self._loop_task = asyncio.create_task(self._color_following_loop()) + LineFollower.LOGGER.info("Requested to start color following loop.") + else: + LineFollower.LOGGER.info("Color following loop is already running.") + + async def _stop_color_following_internal(self): + if self._running_loop: + self._running_loop = False + if self._loop_task: + await self._loop_task + self._loop_task = None + LineFollower.LOGGER.info("Requested to stop color following loop.") + + async def _is_color_in_front(self) -> bool: + frame = await self.camera.get_image() + detections = await self.detector.get_detections(frame) + return any(detection.class_name == "target_color" for detection in detections) + + async def _is_color_there(self, location: str) -> bool: + frame = await self.camera.get_image() + if location == "left": + # Crop logic for left side + pass + elif location == "right": + # Crop logic for right side + pass + # Implement detection logic here + detections = await self.detector.get_detections(frame) + return any(detection.class_name == "target_color" for detection in detections) + +# Register your module +Registry.register_resource_creator( + LineFollower.MODEL, + ResourceCreatorRegistration(LineFollower.new_resource, LineFollower.validate) +) + +async def main(): + """ + Main entry point for the Viam module. + """ + await Module.serve() + +if __name__ == "__main__": + asyncio.run(main()) + LineFollower.LOGGER.info("Done.") +``` + +### Run your module + +Find the [Part ID](/dev/reference/apis/fleet/#find-part-id) for your machine. +To deploy your module on your machine, run the following command, replacing `` with your Part ID: + +```sh {id="terminal-prompt" class="command-line" data-prompt="$"} +viam module reload --part-id +``` + +Add the following `services` configuration for your new module: + +```json +{ + "name": "generic-1", + "api": "rdk:service:generic", + "model": ":autonomous_example_module:line_follower", + "attributes": { + "detector_name": "my_line_detector", + "camera_name": "my_camera" + } +} +``` + +Give your machine a few moments to load the new configuration, and you can begin testing your module. + +## Follow a colored object + +This module uses a vision service and a motor to program a machine to follow an object of a configurable color. + +### Prerequisites + +- An SBC, for example a Raspberry Pi 4 +- A wheeled base component such as a [SCUTTLE robot](https://www.scuttlerobot.org/shop/) +- A webcam +- Colored tape, to create a path for your robot + +### Configure your machine + +Follow the [setup guide](/operate/get-started/setup/) to create a new machine. + +Connect your SCUTTLE base to your SBC. 
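+The board, motor, and camera configuration below is the same as for the line follower above; only the base is named `my_base` instead of `scuttlebase`.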
+Add the following `components` configuration to create board, base, and motor components in Viam so you can control your SCUTTLE base: + +```json +{ + "name": "my-board", + "model": "pi", + "api": "rdk:component:board", + "attributes": {} +}, +{ + "name": "leftm", + "model": "gpio", + "api": "rdk:component:motor", + "attributes": { + "pins": { + "a": "15", + "b": "16" + }, + "board": "my-board", + "max_rpm": 200 + } +}, +{ + "name": "rightm", + "model": "gpio", + "api": "rdk:component:motor", + "attributes": { + "pins": { + "b": "11", + "dir": "", + "pwm": "", + "a": "12" + }, + "board": "my-board", + "max_rpm": 200 + } +}, +{ + "name": "my_base", + "model": "wheeled", + "api": "rdk:component:base", + "attributes": { + "width_mm": 400, + "wheel_circumference_mm": 258, + "left": ["leftm"], + "right": ["rightm"] + } +} +``` + +Connect your webcam to your SBC. +Add the following `components` configuration for your webcam: + +```json +{ + "name": "my_camera", + "model": "webcam", + "api": "rdk:component:camera", + "attributes": { + "video_path": "" + } +} +``` + +Add the following `services` configuration, replacing the `detect_color` value with the color of your object: + +```json +{ + "name": "my_object_detector", + "api": "rdk:service:vision", + "model": "my_object_detector", + "attributes": { + "segment_size_px": 100, + "detect_color": "#a13b4c", // replace with the color of your object + "hue_tolerance_pct": 0.06 + } +} +``` + +### Create your module + +In a terminal, run the following command: + +```sh {id="terminal-prompt" class="command-line" data-prompt="$"} +viam module generate +``` + +Enter the following configuration for your new module: + +- **module name**: "autonomous_example_module" +- **language**: Python +- **visibility**: private +- **organization ID**: your organization ID, found on the Viam organization settings page +- **resource to be added to the module**: Generic Service +- **model name**: "object_follower" +- **Enable cloud build**: yes +- **Register module**: yes + +Create a file called reload.sh in the root directory of your newly-generated module. +Copy and paste the following code into reload.sh: + +```bash +#!/usr/bin/env bash + +# bash safe mode. look at `set --help` to see what these are doing +set -euxo pipefail + +cd $(dirname $0) +MODULE_DIR=$(dirname $0) +VIRTUAL_ENV=$MODULE_DIR/venv +PYTHON=$VIRTUAL_ENV/bin/python +./setup.sh + +# Be sure to use `exec` so that termination signals reach the python process, +# or handle forwarding termination signals manually +exec $PYTHON src/main.py $@ +``` + +In a terminal, run the following command to make reload.sh executable: + +```sh {id="terminal-prompt" class="command-line" data-prompt="$"} +chmod +x reload.sh +``` + +Edit your meta.json, replacing the `"entrypoint"`, `"build"`, and `"path"` fields as follows: + +```json {class="line-numbers linkable-line-numbers" data-start="13" data-line="1, 4, 6" } + "entrypoint": "reload.sh", + "first_run": "", + "build": { + "build": "rm -f module.tar.gz && tar czf module.tar.gz requirements.txt src/*.py src/models/*.py meta.json setup.sh reload.sh", + "setup": "./setup.sh", + "path": "module.tar.gz", + "arch": [ + "linux/amd64", + "linux/arm64" + ] + } +``` + +### Code + +Replace the contents of src/models/object_follower.py with the following code. +Replace the `` placeholder with your organization namespace. 
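+
+At a high level, the model below polls the detector in a background loop, picks the largest bounding box, and spins or drives the base depending on whether that box sits in the left, center, or right third of the camera frame.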
+
+```python {class="line-numbers linkable-line-numbers"}
+import asyncio
+from typing import Any, Mapping, List, Literal, Sequence, Tuple
+from typing_extensions import Self
+
+from viam.components.base import Base
+from viam.components.camera import Camera
+from viam.logging import getLogger
+from viam.media.utils.pil import viam_to_pil_image
+from viam.module.module import Module
+from viam.resource.base import ResourceBase
+from viam.resource.types import Model, ModelFamily
+from viam.resource.registry import Registry, ResourceCreatorRegistration
+from viam.proto.app.robot import ComponentConfig
+from viam.proto.common import ResourceName
+from viam.services.vision import Detection, VisionClient
+
+class ObjectFollower(Module):
+    MODEL = Model(
+        ModelFamily("", "autonomous_example_module"), "object_follower")
+    LOGGER = getLogger(__name__)
+
+    def __init__(self, name: str):
+        super().__init__(name)
+        self.base: Base = None
+        self.camera: Camera = None
+        self.detector: VisionClient = None
+
+        self._running_loop = False
+        self._loop_task = None
+
+        self.spin_num = 10
+        self.straight_num = 300
+        self.vel = 500
+        self.num_cycles = 200
+
+    @classmethod
+    def new_resource(cls,
+                     config: ComponentConfig,
+                     dependencies: Mapping[ResourceName, ResourceBase]) -> Self:
+        instance = cls(config.name)
+        instance.reconfigure(config, dependencies)
+        return instance
+
+    @classmethod
+    def validate(cls,
+                 config: ComponentConfig) -> Tuple[Sequence[str], Sequence[str]]:
+        camera_name = config.attributes.fields["camera_name"].string_value
+        detector_name = config.attributes.fields["detector_name"].string_value
+        base_name = config.attributes.fields["base_name"].string_value
+
+        dependencies = [camera_name, detector_name, base_name]
+        return dependencies, []
+
+    def reconfigure(self,
+                    config: ComponentConfig,
+                    dependencies: Mapping[ResourceName, ResourceBase]):
+        self.camera_name = config.attributes.fields["camera_name"].string_value
+        self.detector_name = config.attributes.fields["detector_name"].string_value
+        self.base_name = config.attributes.fields["base_name"].string_value
+
+        for dependency_name, dependency in dependencies.items():
+            if (dependency_name.subtype == "camera"
+                    and dependency_name.name == self.camera_name):
+                self.camera = dependency
+            elif (dependency_name.subtype == "vision"
+                    and dependency_name.name == self.detector_name):
+                self.detector = dependency
+            elif (dependency_name.subtype == "base"
+                    and dependency_name.name == self.base_name):
+                self.base = dependency
+
+        if not self.camera:
+            raise ValueError(f"Camera '{self.camera_name}' dependency not found.")
+        if not self.detector:
+            raise ValueError(f"Vision service '{self.detector_name}' dependency not found.")
+        if not self.base:
+            raise ValueError(f"Base '{self.base_name}' dependency not found.")
+
+        ObjectFollower.LOGGER.info("Reconfigured.")
+
+    async def start(self):
+        """
+        Called when the module starts. Get references to components.
+        """
+        ObjectFollower.LOGGER.info(f"'{self.name}' starting...")
+        await self.start_object_tracking()
+        ObjectFollower.LOGGER.info(f"'{self.name}' started.")
+
+    async def close(self):
+        """
+        Called when the module is shutting down. Clean up tasks.
+        """
+        ObjectFollower.LOGGER.info(f"'{self.name}' closing...")
+        await self.stop_object_tracking()
+        ObjectFollower.LOGGER.info(f"'{self.name}' closed.")
+
+    def left_or_right(self,
+                      detections: List[Detection],
+                      midpoint: float) -> Literal[0, 1, 2, -1]:
+        """
+        Get largest detection box and see if its center is in the left, center, or right third.
+        Returns 0 for left, 1 for center, 2 for right, -1 if nothing detected.
+ """ + largest_area = 0 + largest_detection: Detection = None + + if not detections: + return -1 + + for d in detections: + area = (d.x_max - d.x_min) * (d.y_max - d.y_min) + if area > largest_area: + largest_area = area + largest_detection = d + + if largest_detection is None: + return -1 + + centerX = largest_detection.x_min + (largest_detection.x_max - largest_detection.x_min) / 2 + + if centerX < midpoint - midpoint / 6: + return 0 # on the left + elif centerX > midpoint + midpoint / 6: + return 2 # on the right + else: + return 1 # basically centered + + async def _object_tracking_loop(self): + """ + The core object tracking and base control logic loop. + """ + ObjectFollower.LOGGER.info("Object tracking control loop started.") + + initial_frame = await self.camera.get_image(mime_type="image/jpeg") + pil_initial_frame = viam_to_pil_image(initial_frame) + midpoint = pil_initial_frame.size[0] / 2 + + cycle_count = 0 + while (self._running_loop + and (self.num_cycles == 0 or cycle_count < self.num_cycles)): + try: + detections = await self.detector.get_detections_from_camera(self.camera_name) + + answer = self.left_or_right(detections, midpoint) + + if answer == 0: + ObjectFollower.LOGGER.info("Detected object on left, spinning left.") + await self.base.spin(self.spin_num, self.vel) + await self.base.move_straight(self.straight_num, self.vel) + elif answer == 1: + ObjectFollower.LOGGER.info("Detected object in center, moving straight.") + await self.base.move_straight(self.straight_num, self.vel) + elif answer == 2: + ObjectFollower.LOGGER.info("Detected object on right, spinning right.") + await self.base.spin(-self.spin_num, self.vel) + await self.base.move_straight(self.straight_num, self.vel) + else: + ObjectFollower.LOGGER.info("No object detected, stopping base.") + await self.base.stop() + + except Exception as e: + ObjectFollower.LOGGER.info(f"Error in object tracking loop: {e}") + + cycle_count += 1 + await asyncio.sleep(0.1) + + ObjectFollower.LOGGER.info( + "Object tracking loop finished or stopped.") + await self.base.stop() + self._running_loop = False + + async def start_object_tracking(self): + """ + Starts the background loop for object tracking and base control. + """ + if not self._running_loop: + self._running_loop = True + self._loop_task = asyncio.create_task(self._object_tracking_loop()) + ObjectFollower.LOGGER.info("Requested to start object tracking loop.") + else: + ObjectFollower.LOGGER.info("Object tracking loop is already running.") + + async def stop_object_tracking(self): + """ + Stops the background loop for object tracking and base control. + """ + if self._running_loop: + self._running_loop = False + if self._loop_task: + await self._loop_task # complete current iteration, exit + self._loop_task = None + ObjectFollower.LOGGER.info("Requested to stop object tracking loop.") + else: + ObjectFollower.LOGGER.info("Object tracking loop is not running.") + +# Register your module +Registry.register_resource_creator( + ObjectFollower.MODEL, + ResourceCreatorRegistration( + ObjectFollower.new_resource, ObjectFollower.validate) +) + + +async def main(): + """ + Main entry point for the Viam module. + """ + await Module.serve() + +if __name__ == "__main__": + asyncio.run(main()) + ObjectFollower.LOGGER.info("Done.") +``` + +### Run your module + +Find the [Part ID](/dev/reference/apis/fleet/#find-part-id) for your machine. 
+To deploy your module on your machine, run the following command, replacing `<PART-ID>` with your Part ID:
+
+```sh {id="terminal-prompt" class="command-line" data-prompt="$"}
+viam module reload --part-id <PART-ID>
+```
+
+Add the following `services` configuration for your new model:
+
+```json
+{
+  "name": "generic-1",
+  "api": "rdk:service:generic",
+  "model": ":autonomous_example_module:object_follower",
+  "attributes": {
+    "camera_name": "my_camera",
+    "detector_name": "my_object_detector",
+    "base_name": "my_base"
+  }
+}
+```
+
+Give your machine a few moments to load the new configuration, and you can begin testing your module.
+
+## Notify when a certain object appears in a video feed
+
+This module sends a notification when a vision service detects an object.
+This example detects people wearing hard hats, but you can use a different ML model or vision service to detect any object with the same logic.
+
+### Prerequisites
+
+- An SBC, for example a Raspberry Pi 4
+- A webcam
+
+### Configure your machine
+
+Follow the [setup guide](/operate/get-started/setup/) to create a new machine.
+
+Connect your camera to your SBC.
+Add the following `components` configuration for your camera:
+
+```json
+{
+  "name": "my_camera",
+  "model": "webcam",
+  "api": "rdk:component:camera",
+  "attributes": {
+    "video_path": ""
+  }
+}
+```
+
+Add the following `services` configuration:
+
+```json
+{
+  "name": "hard_hat_detector_vision_service",
+  "api": "rdk:service:vision",
+  "model": "viam-labs:vision:yolov8",
+  "attributes": {
+    "model_location": "keremberke/yolov8n-hard-hat-detection"
+  }
+}
+```
+
+### Create your module
+
+In a terminal, run the following command:
+
+```sh {id="terminal-prompt" class="command-line" data-prompt="$"}
+viam module generate
+```
+
+Enter the following configuration for your new module:
+
+- **module name**: "autonomous_example_module"
+- **language**: Python
+- **visibility**: private
+- **organization ID**: your organization ID, found on the Viam organization settings page
+- **resource to be added to the module**: Generic Service
+- **model name**: "email_notifier"
+- **Enable cloud build**: yes
+- **Register module**: yes
+
+Create a file called reload.sh in the root directory of your newly-generated module.
+Copy and paste the following code into reload.sh:
+
+```bash
+#!/usr/bin/env bash
+
+# bash safe mode. look at `set --help` to see what these are doing
+set -euxo pipefail
+
+cd $(dirname $0)
+MODULE_DIR=$(dirname $0)
+VIRTUAL_ENV=$MODULE_DIR/venv
+PYTHON=$VIRTUAL_ENV/bin/python
+./setup.sh
+
+# Be sure to use `exec` so that termination signals reach the python process,
+# or handle forwarding termination signals manually
+exec $PYTHON src/main.py $@
+```
+
+In a terminal, run the following command to make reload.sh executable:
+
+```sh {id="terminal-prompt" class="command-line" data-prompt="$"}
+chmod +x reload.sh
+```
+
+Edit your meta.json, replacing the `"entrypoint"`, `"build"`, and `"path"` fields as follows:
+
+```json {class="line-numbers linkable-line-numbers" data-start="13" data-line="1, 4, 6" }
+  "entrypoint": "reload.sh",
+  "first_run": "",
+  "build": {
+    "build": "rm -f module.tar.gz && tar czf module.tar.gz requirements.txt src/*.py src/models/*.py meta.json setup.sh reload.sh",
+    "setup": "./setup.sh",
+    "path": "module.tar.gz",
+    "arch": [
+      "linux/amd64",
+      "linux/arm64"
+    ]
+  }
+```
+
+### Code
+
+Replace the contents of src/models/email_notifier.py with the following code. 
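+The model runs a background monitoring loop that you start and stop through the generic service's `DoCommand` method (for example with `{"start_monitoring": true}` or `{"stop_monitoring": true}`), and emails you over SMTP the first time a detection appears.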
+Replace the `` placeholder with your organization namespace. + +```python +import asyncio +import os +from typing import List, Mapping, Any + +from viam.robot.client import RobotClient +from viam.components.camera import Camera +from viam.services.vision import VisionClient +from viam.module.module import Module +from viam.resource.types import Model +from viam.resource.registry import Registry, ResourceCreatorRegistration +from viam.proto.app.v1 import ComponentConfig +from viam.services.generic import Generic +import smtplib +from email.mime.text import MIMEText + +class EmailNotifier(Module, Generic): + MODEL = Model( + ModelFamily("", "autonomous_example_module"), "email_notifier") + + + def __init__(self, name: str): + super().__init__(name) + self.camera: Camera = None + self.detector: VisionClient = None + self.notification_sent: bool = False + + # Email configuration + self.sender_email: str = os.getenv("SENDER_EMAIL", "your_email@example.com") + self.sender_password: str = os.getenv("SENDER_PASSWORD", "your_email_password") + self.receiver_email: str = os.getenv("RECEIVER_EMAIL", "recipient_email@example.com") + self.smtp_server: str = os.getenv("SMTP_SERVER", "smtp.example.com") + self.smtp_port: int = int(os.getenv("SMTP_PORT", 587)) + + self._running_loop = False + self._loop_task = None + + @classmethod + def new_resource(cls, config: ComponentConfig): + module = cls(config.name) + if "camera_name" in config.attributes.fields: + module.camera_name = config.attributes.fields["camera_name"].string_value + if "detector_name" in config.attributes.fields: + module.camera_name = config.attributes.fields["detector_name"].string_value + if "sender_email" in config.attributes.fields: + module.sender_email = config.attributes.fields["sender_email"].string_value + if "sender_password" in config.attributes.fields: + module.sender_password = config.attributes.fields["sender_password"].string_value + if "receiver_email" in config.attributes.fields: + module.receiver_email = config.attributes.fields["receiver_email"].string_value + if "smtp_server" in config.attributes.fields: + module.smtp_server = config.attributes.fields["smtp_server"].string_value + if "smtp_port" in config.attributes.fields: + module.smtp_port = int(config.attributes.fields["smtp_port"].number_value) + + return module + + async def start(self): + EmailNotifier.LOGGER.info(f"'{self.name}' starting...") + self.camera = await Camera.from_robot(self.robot, self.camera_name) + self.detector = await VisionClient.from_robot(self.robot, self.detector_name) + EmailNotifier.LOGGER.info(f"'{self.name}' started. 
Monitoring for detections.") + + async def close(self): + EmailNotifier.LOGGER.info(f"'{self.name}' closing...") + await self._stop_detection_monitoring_internal() + EmailNotifier.LOGGER.info(f"'{self.name}' closed.") + + def _send_email(self, subject: str, body: str): + try: + msg = MIMEText(body) + msg['Subject'] = subject + msg['From'] = self.sender_email + msg['To'] = self.receiver_email + + with smtplib.SMTP(self.smtp_server, self.smtp_port) as server: + server.starttls() + server.login(self.sender_email, self.sender_password) + server.send_message(msg) + EmailNotifier.LOGGER.info(f"Email sent successfully to {self.receiver_email}: '{subject}'") + self.notification_sent = True + except Exception as e: + EmailNotifier.LOGGER.info(f"Failed to send email: {e}") + self.notification_sent = False + + async def _detection_monitoring_loop(self): + EmailNotifier.LOGGER.info("Detection monitoring loop started.") + + while self._running_loop: + try: + detections = + await self.detector.get_detections_from_camera(self.camera_name) + + if detections and not self.notification_sent: + subject = "Viam Module Alert: Detection Found!" + body = "A detection was found by the vision service." + EmailNotifier.LOGGER.info("Detection found. Sending email notification...") + self._send_email(subject, body) + elif not detections and self.notification_sent: + EmailNotifier.LOGGER.info("No detections found. Resetting notification status.") + self.notification_sent = False + elif detections and self.notification_sent: + EmailNotifier.LOGGER.info("Detection still present, but notification already sent.") + else: + EmailNotifier.LOGGER.info("No detections.") + + except Exception as e: + EmailNotifier.LOGGER.info(f"Error in detection monitoring loop: {e}") + + await asyncio.sleep(5) + + EmailNotifier.LOGGER.info("Detection monitoring loop finished or stopped.") + self.notification_sent = False + + async def _start_detection_monitoring_internal(self): + if not self._running_loop: + self._running_loop = True + self._loop_task = asyncio.create_task(self._detection_monitoring_loop()) + EmailNotifier.LOGGER.info("Requested to start detection monitoring loop.") + return {"status": "started"} + else: + EmailNotifier.LOGGER.info("Detection monitoring loop is already running.") + return {"status": "already_running"} + + async def _stop_detection_monitoring_internal(self): + if self._running_loop: + self._running_loop = False + if self._loop_task: + await self._loop_task + self._loop_task = None + EmailNotifier.LOGGER.info("Requested to stop detection monitoring loop.") + return {"status": "stopped"} + else: + EmailNotifier.LOGGER.info("Detection monitoring loop is not running.") + return {"status": "not_running"} + + async def do_command(self, + command: Mapping[str, Any], *, + timeout: float | None = None, **kwargs) -> Mapping[str, Any]: + if "start_monitoring" in command: + EmailNotifier.LOGGER.info("Received 'start_monitoring' command via do_command.") + return await self._start_detection_monitoring_internal() + elif "stop_monitoring" in command: + EmailNotifier.LOGGER.info("Received 'stop_monitoring' command via do_command.") + return await self._stop_detection_monitoring_internal() + else: + raise NotImplementedError(f"Command '{command}' not recognized.") + +# Register your module +Registry.register_resource_creator( + Generic.SUBTYPE, + EmailNotifier.MODEL, + ResourceCreatorRegistration(EmailNotifier.new_resource, EmailNotifier.validate_config) +) + +async def main(): + await Module.serve() + +if __name__ == 
"__main__": + asyncio.run(main()) + EmailNotifier.LOGGER.info("Done.") +``` + +### Run your module + +Find the [Part ID](/dev/reference/apis/fleet/#find-part-id) for your machine. +To deploy your module on your machine, run the following command, replacing `` with your Part ID: + +```sh {id="terminal-prompt" class="command-line" data-prompt="$"} +viam module reload --part-id +``` + +Add the following `services` configuration for your new model: + +```json +{ + "name": "generic-1", + "api": "rdk:service:generic", + "model": ":autonomous_example_module:email_notifier", + "attributes": { + "detector_name": "hard_hat_detector_vision_service", + "camera_name": "my_camera" + } +} +``` + +Define the `sender_email`, `sender_password`, `receiver_email`, `smtp_server`, and `smtp_port` variables in the model attributes or using environment variables on your machine. + +Give your machine a few moments to load the new configuration, and you can begin testing your module. diff --git a/docs/data-ai/reference/apis/_index.md b/docs/data-ai/reference/apis/_index.md new file mode 100644 index 0000000000..4e279ce2f8 --- /dev/null +++ b/docs/data-ai/reference/apis/_index.md @@ -0,0 +1,8 @@ +--- +linkTitle: "APIs" +title: "APIs" +weight: 30 +layout: "empty" +type: "docs" +empty_node: true +--- diff --git a/docs/data-ai/reference/data-client.md b/docs/data-ai/reference/apis/data-client.md similarity index 69% rename from docs/data-ai/reference/data-client.md rename to docs/data-ai/reference/apis/data-client.md index e283551be4..aa18f91fd1 100644 --- a/docs/data-ai/reference/data-client.md +++ b/docs/data-ai/reference/apis/data-client.md @@ -1,8 +1,10 @@ --- title: "Upload and retrieve data with Viam's data client API" -linkTitle: "Data client API" +linkTitle: "Data client" weight: 30 type: "docs" layout: "empty" canonical: "/dev/reference/apis/data-client/" +aliases: + - /data-ai/reference/data-client/ --- diff --git a/docs/data-ai/reference/data-management-client.md b/docs/data-ai/reference/apis/data-management-client.md similarity index 59% rename from docs/data-ai/reference/data-management-client.md rename to docs/data-ai/reference/apis/data-management-client.md index 9b25725d0b..47e51a7588 100644 --- a/docs/data-ai/reference/data-management-client.md +++ b/docs/data-ai/reference/apis/data-management-client.md @@ -1,8 +1,10 @@ --- title: "Data management API" -linkTitle: "Data management API" +linkTitle: "Data management" weight: 30 type: "docs" layout: "empty" canonical: "/dev/reference/apis/services/data/" +aliases: + - /data-ai/reference/data-management-client/ --- diff --git a/docs/data-ai/reference/ml-model-client.md b/docs/data-ai/reference/apis/ml-model-client.md similarity index 62% rename from docs/data-ai/reference/ml-model-client.md rename to docs/data-ai/reference/apis/ml-model-client.md index beeb82c808..9fb5ae2e0b 100644 --- a/docs/data-ai/reference/ml-model-client.md +++ b/docs/data-ai/reference/apis/ml-model-client.md @@ -1,8 +1,10 @@ --- title: "ML model API" -linkTitle: "ML model API" +linkTitle: "ML model" weight: 30 type: "docs" layout: "empty" canonical: "/dev/reference/apis/services/ml/" +aliases: + - /data-ai/reference/ml-model-client/ --- diff --git a/docs/data-ai/reference/ml-training-client.md b/docs/data-ai/reference/apis/ml-training-client.md similarity index 65% rename from docs/data-ai/reference/ml-training-client.md rename to docs/data-ai/reference/apis/ml-training-client.md index 60053e550e..fa05383b2d 100644 --- a/docs/data-ai/reference/ml-training-client.md +++ 
b/docs/data-ai/reference/apis/ml-training-client.md @@ -1,8 +1,10 @@ --- title: "Work with ML training jobs with Viam's ML training API" -linkTitle: "ML training client API" +linkTitle: "ML training client" weight: 40 type: "docs" layout: "empty" canonical: "/dev/reference/apis/services/ml/" +aliases: + - /data-ai/reference/ml-training-client/ --- diff --git a/docs/data-ai/reference/vision-client.md b/docs/data-ai/reference/apis/vision-client.md similarity index 62% rename from docs/data-ai/reference/vision-client.md rename to docs/data-ai/reference/apis/vision-client.md index d28e356b3e..13d2fce799 100644 --- a/docs/data-ai/reference/vision-client.md +++ b/docs/data-ai/reference/apis/vision-client.md @@ -1,8 +1,10 @@ --- title: "Vision service API" -linkTitle: "Vision service API" +linkTitle: "Vision service" weight: 30 type: "docs" layout: "empty" canonical: "/dev/reference/apis/services/vision/" +aliases: + - /data-ai/reference/vision-client/ --- diff --git a/docs/data-ai/reference/architecture.md b/docs/data-ai/reference/architecture.md index 17dfcd3acc..a91182e328 100644 --- a/docs/data-ai/reference/architecture.md +++ b/docs/data-ai/reference/architecture.md @@ -1,7 +1,7 @@ --- linkTitle: "Machine-cloud architecture" title: "Viam architecture" -weight: 1000 +weight: 20 layout: "docs" type: "docs" layout: "empty" diff --git a/docs/data-ai/reference/mlmodel-design.md b/docs/data-ai/reference/mlmodel-design.md index 44c8a54384..2b5598c304 100644 --- a/docs/data-ai/reference/mlmodel-design.md +++ b/docs/data-ai/reference/mlmodel-design.md @@ -1,7 +1,7 @@ --- title: "Design your ML models for vision" linkTitle: "ML model service design" -weight: 60 +weight: 10 type: "docs" tags: ["data management", "ml", "model training", "vision"] description: "Design your ML Model service to work with Viam's vision services." diff --git a/docs/data-ai/reference/triggers-configuration.md b/docs/data-ai/reference/triggers-configuration.md index aa094b7d25..aa37fb6220 100644 --- a/docs/data-ai/reference/triggers-configuration.md +++ b/docs/data-ai/reference/triggers-configuration.md @@ -1,7 +1,7 @@ --- title: "Trigger configuration" linkTitle: "Trigger configuration" -weight: 60 +weight: 20 type: "docs" tags: ["data management", "trigger", "webhook"] description: "Detailed information about how to configure triggers and webhooks." diff --git a/docs/data-ai/train/train-tflite.md b/docs/data-ai/train/train-tflite.md index 30045ec4d6..4b5de1ab4c 100644 --- a/docs/data-ai/train/train-tflite.md +++ b/docs/data-ai/train/train-tflite.md @@ -167,8 +167,9 @@ To capture images of edge cases and re-train your model using those images, comp ## Next steps -Now your machine can make inferences about its environment. -The next step is to [deploy](/data-ai/ai/deploy/) the ML model and then [act](/data-ai/ai/act/) or [alert](/data-ai/ai/alert/) based on these inferences. +Now you can [deploy](/data-ai/ai/deploy/) your ML model. +Once deployed, you can use your ML model to make inferences on your machine. +Then, you [alert](/data-ai/ai/alert/) or even [make decisions](/data-ai/ai/make-decisions-autonomously/) based on these inferences. 
See the following tutorials for examples of using machine learning models to make your machine do things based on its inferences about its environment: diff --git a/docs/data-ai/train/upload-external-data.md b/docs/data-ai/train/upload-external-data.md index 457ef41077..658602b1d1 100644 --- a/docs/data-ai/train/upload-external-data.md +++ b/docs/data-ai/train/upload-external-data.md @@ -15,7 +15,7 @@ aliases: - /data-ai/ai/advanced/ date: "2024-12-04" description: "Upload data to Viam from your local computer or mobile device using the data client API, Viam CLI, or Viam mobile app." -prev: "/data-ai/ai/act/" +prev: "/data-ai/ai/make-decisions-autonomously/" --- When you configure the data management service, Viam automatically uploads data from the default directory `~/.viam/capture` and any directory you configured. diff --git a/docs/operate/mobility/use-input-to-act.md b/docs/operate/mobility/use-input-to-act.md index c99ca5416f..ec02227cbb 100644 --- a/docs/operate/mobility/use-input-to-act.md +++ b/docs/operate/mobility/use-input-to-act.md @@ -51,7 +51,7 @@ readings = await my_sensor.get_readings() Other common inputs include the methods of a [board](/dev/reference/apis/components/board/) (`GetGPIO`, `GetPWM`, `PWMFrequency`, `GetDigitalInterruptValue`, and `ReadAnalogReader`), or a [power sensor](/dev/reference/apis/components/power-sensor/) (`GetVoltage`, `GetCurrent`, `GetPower`, and `GetReadings`). You can also use camera input, for example to detect objects and pick them up with an arm. -See [Act based on inferences](/data-ai/ai/act/) for relevant examples. +See [Make decisions autonomously](/data-ai/ai/make-decisions-autonomously/) for relevant examples. If you want to send alerts based on computer vision or captured data, see [Alert on inferences](/data-ai/ai/alert/) or [Alert on data](/data-ai/data/advanced/alert-data/). diff --git a/docs/tutorials/services/webcam-line-follower-robot.md b/docs/tutorials/services/webcam-line-follower-robot.md index dd73fdc79e..be3416e4a2 100644 --- a/docs/tutorials/services/webcam-line-follower-robot.md +++ b/docs/tutorials/services/webcam-line-follower-robot.md @@ -219,46 +219,46 @@ Next, navigate to the **CONFIGURE** tab of your machine's page. 1. **Add a vision service.** -Next, add a vision service [detector](/dev/reference/apis/services/vision/#detections): + Next, add a vision service [detector](/dev/reference/apis/services/vision/#detections): -Click the **+** (Create) icon next to your machine part in the left-hand menu and select **Component or service**. -Select type `vision` and model `color detector`. -Enter `green_detector` for the name, then click **Create**. - -In your vision service’s panel, select the color your vision service will be detecting, as well as a hue tolerance and a segment size (in pixels). -Use a color picker like [colorpicker.me](https://colorpicker.me/) to approximate the color of your line and get the corresponding rgb or hex value. -We used `rgb(25,255,217)` or `#19FFD9` to match the color of our green electrical tape, and specified a segment size of 100 pixels with a tolerance of 0.06, but you can tweak these later to fine tune your line follower. - -2. Click **Save** in the top right corner of the screen. + Click the **+** (Create) icon next to your machine part in the left-hand menu and select **Component or service**. + Select type `vision` and model `color detector`. + Enter `green_detector` for the name, then click **Create**. -3. 
(optional) **Add a `transform` camera as a visualizer** + In your vision service’s panel, select the color your vision service will be detecting, as well as a hue tolerance and a segment size (in pixels). + Use a color picker like [colorpicker.me](https://colorpicker.me/) to approximate the color of your line and get the corresponding rgb or hex value. + We used `rgb(25,255,217)` or `#19FFD9` to match the color of our green electrical tape, and specified a segment size of 100 pixels with a tolerance of 0.06, but you can tweak these later to fine tune your line follower. -If you'd like to see the bounding boxes that the color detector identifies in a live stream, you'll need to configure a [transform camera](/operate/reference/components/camera/transform/). -This isn't another piece of hardware, but rather a virtual "camera" that takes in the stream from the webcam we just configured and outputs a stream overlaid with bounding boxes representing the color detections. +1. Click **Save** in the top right corner of the screen. -Click the **+** (Create) icon next to your machine part in the left-hand menu and select **Component or service**. -Add a [transform camera](/operate/reference/components/camera/transform/) with type `camera` and model `transform`. -Name it `transform_cam` and click **Create**. +1. (optional) **Add a `transform` camera as a visualizer** -Click **{}** (Switch to advanced) in the top right of the camera's configuration panel to switch to advanced mode. -Replace the attributes JSON object (`{}`) with the following object which specifies the camera source that the `transform` camera will be using and defines a pipeline that adds the defined `detector`: + If you'd like to see the bounding boxes that the color detector identifies in a live stream, you'll need to configure a [transform camera](/operate/reference/components/camera/transform/). + This isn't another piece of hardware, but rather a virtual "camera" that takes in the stream from the webcam we just configured and outputs a stream overlaid with bounding boxes representing the color detections. -```json -{ - "source": "my_camera", - "pipeline": [ - { - "type": "detections", - "attributes": { - "detector_name": "green_detector", - "confidence_threshold": 0.6 - } - } - ] -} -``` + Click the **+** (Create) icon next to your machine part in the left-hand menu and select **Component or service**. + Add a [transform camera](/operate/reference/components/camera/transform/) with type `camera` and model `transform`. + Name it `transform_cam` and click **Create**. + + Click **{}** (Switch to advanced) in the top right of the camera's configuration panel to switch to advanced mode. + Replace the attributes JSON object (`{}`) with the following object which specifies the camera source that the `transform` camera will be using and defines a pipeline that adds the defined `detector`: + + ```json + { + "source": "my_camera", + "pipeline": [ + { + "type": "detections", + "attributes": { + "detector_name": "green_detector", + "confidence_threshold": 0.6 + } + } + ] + } + ``` -4. Click **Save** in the top right corner of the screen. +1. Click **Save** in the top right corner of the screen. {{% /tab %}} {{% tab name="JSON" %}} @@ -393,7 +393,7 @@ To make your rover follow your line, you need to install Python and the Viam Pyt python3 --version ``` -2. Install the [Viam Python SDK](https://python.viam.dev/) by running +1. 
Install the [Viam Python SDK](https://python.viam.dev/) by running ```sh {class="command-line" data-prompt="$"} pip install viam-sdk @@ -429,7 +429,7 @@ To make your rover follow your line, you need to install Python and the Viam Pyt 1. In your Pi terminal, navigate to the directory where you’d like to save your code. Run, nano rgb_follower.py (or replace rgb_follower with the your desired filename). - 2. Paste all your code into this file. + 1. Paste all your code into this file. Press **CTRL + X** to close the file. Type **Y** to confirm file modification, then press enter to finish. @@ -520,11 +520,11 @@ The code you are using has several functions: The `main` function connects to the robot and initializes each component, then performs the following tasks: 1. If the color of the line is detected in the top center of the camera frame, the rover drives forward. -2. If the color is not detected in the top center, it checks the left side of the camera frame for the color. +1. If the color is not detected in the top center, it checks the left side of the camera frame for the color. If it detects the color on the left, the robot turns left. If it doesn’t detect the color on the left, it checks the right side of the camera frame, and turns right if it detects the color. -3. Once the line is back in the center front of the camera frame, the rover continues forward. -4. When the rover no longer sees any of the line color anywhere in the front portion of the camera frame, it stops and the program ends. +1. Once the line is back in the center front of the camera frame, the rover continues forward. +1. When the rover no longer sees any of the line color anywhere in the front portion of the camera frame, it stops and the program ends. ```python {class="line-numbers linkable-line-numbers"} async def main(): @@ -577,7 +577,7 @@ async def main(): To run the program: 1. Position the rover so that its camera can see the colored line. -2. If you have saved the code on your Pi, SSH into it by running: +1. If you have saved the code on your Pi, SSH into it by running: ```sh {class="command-line" data-prompt="$"} ssh @.local @@ -604,8 +604,8 @@ Along the way, you have learned how to configure a wheeled base, camera, and col If you are wondering what to do next, why not try one of the following ideas: 1. Automatically detect what color line the robot is on and follow that. -2. Use two differently colored lines that intersect and make the robot switch from one line to the other. -3. Put two rovers on intersecting lines and write code to keep them from crashing into each other. +1. Use two differently colored lines that intersect and make the robot switch from one line to the other. +1. Put two rovers on intersecting lines and write code to keep them from crashing into each other. ## Troubleshooting