src/content/tutorials/en/speak-with-your-hands.mdx (+24 -24)

@@ -39,16 +39,16 @@ You Will Need:
In past tutorials with interactivity, we learned how to use tools like the mouse and keyboard (physical objects) to interact with the computer. These tools have helped us talk with computers for many years. But now, we don't always need them. In this tutorial, we're going to learn how to control our p5.js drawings directly with our hands, like a magician waving her wand!
To do this, we will use the [ml5.js library](https://ml5js.org/) and a *machine learning model* called [HandPose](https://docs.ml5js.org/#/reference/handpose). The ml5.js HandPose model figures out the positions of your hand on the computer screen.
*Machine learning* is like teaching a computer to learn and make choices by showing it lots of examples. The code looks at the examples and creates connections, kind of like how we learn. If we want to teach it to know the difference between cats and dogs, we show it lots of pictures and tell it which ones are cats and which ones are dogs. The more examples you show the machine learning model, the better it gets. You could then show it a new picture, and it would be able to tell if it's a cat or a dog. That's machine learning in action. For more examples of machine learning, watch [this video](https://www.youtube.com/watch?v=5q87K1WaoFI\&t=665s) on YouTube.
The [ml5.js](https://ml5js.org/) [HandPose](https://docs.ml5js.org/#/reference/handpose) machine learning model can recognize a hand in a picture and identify points on each finger. When we create interactive sketches with the mouse, we extract the mouse cursor position and use built-in variables like `mouseX` and `mouseY` to control things on the screen. The ml5.js library lets us do something similar, but with your hand movements via a webcam.
### Step 1 - Set up ml5.js
- Open the [ml5.js HandPose Image](https://editor.p5js.org/AsukaMinato/sketches/KI4OqvacU) example in the p5.js Web Editor. Make a copy and name it something like "Handpose Sketch".
@@ -113,7 +113,7 @@ function drawKeypoints() {
Let's dive into how [ml5.js](https://ml5js.org/) works, especially with the [HandPose](https://docs.ml5js.org/#/reference/handpose) model. [ml5.js](https://ml5js.org/) is a tool made to help us all use machine learning easily. It works great with p5.js, which means our computer can "see, hear, and understand" stuff using a camera, just like we do!
The [HandPose](https://docs.ml5js.org/#/reference/handpose) model uses image recognition algorithms to recognize your hand. It can spot your palm and fingers and keep track of them as you move your hand around in front of the camera. It can only detect one hand at a time, but it can identify 21 different key-points on the hand in 3D space. This means that it gives us the x-y-z coordinates of each point. These key-points mark key parts of your palm and fingers.
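
To make that concrete, here is a small look at the data, assuming the current (v1) ml5.js handPose API; the exact field names here are an assumption based on the ml5.js reference and may differ slightly from the tutorial's code:

```js
// Callback that receives HandPose results (a sketch, not the tutorial's exact code).
function gotHands(results) {
  // results is an array with one object per detected hand
  if (results.length > 0) {
    let hand = results[0];
    console.log(hand.handedness);       // e.g. "Right" (assumed field name)
    console.log(hand.keypoints.length); // 21 key-points
    console.log(hand.keypoints[8]);     // e.g. { x, y, name: "index_finger_tip" }
    console.log(hand.keypoints3D[8]);   // the same point with x, y, z coordinates
  }
}
```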
Now let’s start with the static image to see this in action.
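
As a preview of what that looks like, here is a minimal sketch of the static-image flow. It assumes the ml5.js v1 API (`ml5.handPose()` and `detect()`) and a hypothetical photo named `hand.jpg` uploaded to the sketch; the editor example linked above is the authoritative version:

```js
let handPose;
let img;
let hands = [];

function preload() {
  handPose = ml5.handPose();   // load the HandPose model
  img = loadImage("hand.jpg"); // hypothetical uploaded photo of a hand
}

function setup() {
  createCanvas(640, 480);
  handPose.detect(img, gotHands); // run detection once on the still image
}

function gotHands(results) {
  hands = results; // store the detected hands
}

function draw() {
  image(img, 0, 0, width, height);
  // mark each of the 21 key-points with a small circle
  for (let hand of hands) {
    for (let keypoint of hand.keypoints) {
      fill(0, 255, 0);
      noStroke();
      circle(keypoint.x, keypoint.y, 10);
    }
  }
}
```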
@@ -224,7 +224,7 @@ function drawKeypoints() {
- First, upload a small .png file (100x100 pixels max) that has a transparent background. We create a new variable to store this image and then load it in the `preload()` function.
- Next, we place the image at the same location as each ellipse in the previous code example.
- Next, we adjust the points to center the image at each fingertip point, as the sketch after this list shows. By default, the image renders with the fingertip point at the top-left corner of the hat image, which is the image's (0,0) point. To center the hat image on that point, we subtract half the width of the image from the x-coordinate of the fingertip point and subtract half the height of the image from the y-coordinate of the fingertip point.
- Finally, we render `hatImage` instead of drawing the ellipses at the end of the `drawKeypoints()` function.
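
A minimal sketch of that centering arithmetic, using a hypothetical `hat.png` and a hypothetical `drawHatAt()` helper that would be called once per fingertip key-point:

```js
let hatImage;

function preload() {
  // hypothetical 100x100 .png with a transparent background
  hatImage = loadImage("hat.png");
}

// Hypothetical helper, called from drawKeypoints() for each fingertip key-point.
function drawHatAt(keypoint) {
  // image() anchors at the top-left corner by default (imageMode(CORNER)),
  // so shift left by half the width and up by half the height
  // to center the hat on the fingertip.
  let x = keypoint.x - hatImage.width / 2;
  let y = keypoint.y - hatImage.height / 2;
  image(hatImage, x, y);
}
```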
#### Example: Using a live Webcam feed
In this example, we replace the static image with a live webcam feed.
Let’s go back to ml5.js HandPose and open the [Webcam example in the p5.js Web Editor](https://editor.p5js.org/ml5/sketches/Handpose_Webcam). Make a copy and name it something like “Handpose Webcam Sketch”.
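
For reference, a stripped-down version of the webcam flow might look like this, again assuming the ml5.js v1 API (`detectStart()` keeps running the model on each video frame); the editor sketch linked above is the authoritative version:

```js
let handPose;
let video;
let hands = [];

function preload() {
  handPose = ml5.handPose(); // load the HandPose model
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); // start the webcam
  video.size(640, 480);
  video.hide(); // hide the raw feed; we draw it to the canvas ourselves
  handPose.detectStart(video, gotHands); // detect continuously on the live feed
}

function gotHands(results) {
  hands = results; // latest detections
}

function draw() {
  image(video, 0, 0, width, height);
  // mark each detected key-point
  for (let hand of hands) {
    for (let keypoint of hand.keypoints) {
      fill(0, 255, 0);
      noStroke();
      circle(keypoint.x, keypoint.y, 10);
    }
  }
}
```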
@@ -510,7 +510,7 @@ function drawKeypoints() {
@@ -630,14 +630,14 @@ Give each finger a different hat!
### Step 3 - Adding interactivity
Now that we understand the data that the ml5.js HandPose model provides to us, we can use it to interact with our sketches. There are some fun examples of interacting with sketches in the [Conditionals and Interactivity](https://p5js.org/tutorials/conditionals-and-interactivity/) tutorial. Let’s start with some of these examples - but use our hands instead of the mouse.
When we interacted with our sketches using the mouse, we used the `mouseX` and `mouseY` variables to track the x-y coordinates of the cursor. Now, we will use the x-y coordinates of the tip of a finger.
#### Example: Move a ball with your index finger
In this example, we control a circle on the screen with our index finger.
Let’s start with the “Using a live Webcam feed” example.
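
Building on the webcam sketch, a minimal `draw()` for this idea might look like the following; key-point 8 is the tip of the index finger in the HandPose key-point list, and the tutorial's full version lives in its `drawObject()` function:

```js
// Assumes the video and hands variables from the webcam sketch above.
function draw() {
  image(video, 0, 0, width, height);
  if (hands.length > 0) {
    // key-point 8 is the tip of the index finger
    let tip = hands[0].keypoints[8];
    fill(255, 0, 255);
    noStroke();
    circle(tip.x, tip.y, 50); // the ball follows the fingertip
  }
}
```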
@@ -709,7 +709,7 @@ function drawObject() {
@@ -1054,7 +1054,7 @@ Sketch by [Akif Kazi](https://www.instagram.com/designer_akifkazi), Student at K
### Next Steps
Explore the [PoseNet](https://archive-docs.ml5js.org/#/reference/posenet) and [Facemesh](https://docs.ml5js.org/#/reference/facemesh) models in the ml5.js library. The PoseNet model detects points on the whole body, just as HandPose does for the hand. The Facemesh model returns points on the face. Get familiar with the data that these models return and use it in your sketches.
<Callout>
@@ -1068,6 +1068,6 @@ Using the Facemesh model, create the following: