Smart mirror helper with speech recognition, LED control, and MagicMirror integration, written in C#
To configure different aspects of the application you can use appsettings.json. Here is an example of the settings file:
```json
{
  "MagicMirrorRunner": {
    "WorkingDirectory": "<MagicMirror root folder path on target device>"
  },
  "LED": {
    "Count": <Total count of LEDs>,
    "Missing": [<Indices of LEDs that are missing/not working in the strip>],
    "BusId": <Physical SPI bus id (0)>,
    "ChipSelectLine": <SPI chip select line (0)>
  },
  "SpeechRecognition": {
    "ActivationRecognitionTablePath": "<path to .table file for keyword recognition>",
    "SubscriptionKey": "<ApiKey from Azure Portal>",
    "Region": "<Region from Azure Portal>"
  },
  "ButtonControls": {
    "LedGPIO": <GPIO pin number>,
    "DisplayGPIO": <GPIO pin number>
  }
}
```
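As an illustration, these sections can be bound to plain option classes with the standard .NET configuration binder. This is a minimal sketch assuming the Microsoft.Extensions.Configuration packages; the `LedOptions` class name is hypothetical:

```csharp
using Microsoft.Extensions.Configuration;

// Hypothetical options class mirroring the "LED" section above
public sealed class LedOptions
{
    public int Count { get; set; }
    public int[] Missing { get; set; } = System.Array.Empty<int>();
    public int BusId { get; set; }
    public int ChipSelectLine { get; set; }
}

IConfiguration configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

// Bind the "LED" section to the options class
LedOptions? led = configuration.GetSection("LED").Get<LedOptions>();
```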
For debugging and deployment you can check the deployment readme located here. I'm using a Raspberry Pi as the target device.
To control what is going on with your SmartMirror you can use commands, either via voice or keyboard input. For now, the application supports these commands:

- `light on`: enable LED lights
- `light off`: disable LED lights
- `color {colorName or code}`: set the LED color to the one specified in the command
- `sound test`: play the Success sound (used for test purposes)
You can use a keyboard to pass the commands listed above to the SmartMirror app. It uses the KeyboardListener background service to listen for commands in the background; appropriate responses will be printed in the console.
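A hedged sketch of how such a listener could be structured as a .NET BackgroundService; the actual KeyboardListener implementation in the repo may differ:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Hypothetical sketch of a keyboard listener background service
public sealed class KeyboardListener : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Read a command line, e.g. "light on" or "color red"
            string? command = await Task.Run(Console.ReadLine, stoppingToken);
            if (string.IsNullOrWhiteSpace(command)) continue;

            // Dispatch to the command handler and print the response
            Console.WriteLine($"Executing: {command}");
        }
    }
}
```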
The SmartMirror app is designed to work with a WS2812B LED strip. To control the strip, the app uses the ws2812b binding from this NuGet package over the SPI interface.
Each pixel is set separately, so to change the color of the whole LED strip, each pixel must be set to that color.
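For illustration, here is a minimal sketch of painting the whole strip one color, assuming the Ws2812b binding from the Iot.Device.Bindings NuGet package and an SPI device on bus 0, chip select 0 (matching the defaults in appsettings.json):

```csharp
using System.Device.Spi;
using System.Drawing;
using Iot.Device.Ws28xx;

// SPI settings required to drive the WS2812B protocol over SPI
var spiSettings = new SpiConnectionSettings(busId: 0, chipSelectLine: 0)
{
    ClockFrequency = 2_400_000,
    Mode = SpiMode.Mode0,
    DataBitLength = 8
};

using SpiDevice spi = SpiDevice.Create(spiSettings);
int ledCount = 30; // "Count" from the LED section of appsettings.json
var strip = new Ws2812b(spi, ledCount);

// Each pixel is set separately; paint the whole strip red
var image = strip.Image;
for (int i = 0; i < ledCount; i++)
{
    image.SetPixel(i, 0, Color.Red);
}
strip.Update(); // push the buffer to the strip
```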
The app uses some third-party external processes to perform some of its actions; for example, MagicMirror is run from inside the application.
The currently active speech recognition technology is Microsoft's, but it is also possible to use DeepSpeech (not tested).
For speech recognition the app uses Microsoft Cognitive Services (GitHub).
Speech recognition is split into two parts: keyword recognition and actual command recognition.
Keyword recognition works offline and uses Assets/mirror_activation.table, a table created in the Azure Portal and trained to use "Hey mirror" as the activation word.
After the application recognizes the keyword, it starts listening for one of the commands described above; if the command is recognized successfully by Microsoft Azure Cognitive Services, it is executed.
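A hedged sketch of this two-stage flow, assuming the Microsoft.CognitiveServices.Speech NuGet package; the placeholder values correspond to the SpeechRecognition section of appsettings.json:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// Stage 1: offline keyword spotting using the .table model
var keywordModel = KeywordRecognitionModel.FromFile("Assets/mirror_activation.table");
using var keywordRecognizer = new KeywordRecognizer(AudioConfig.FromDefaultMicrophoneInput());
KeywordRecognitionResult keyword = await keywordRecognizer.RecognizeOnceAsync(keywordModel);
Console.WriteLine($"Keyword detected: {keyword.Text}");

// Stage 2: online command recognition via Azure Cognitive Services
var speechConfig = SpeechConfig.FromSubscription("<SubscriptionKey>", "<Region>");
using var speechRecognizer = new SpeechRecognizer(speechConfig);
SpeechRecognitionResult command = await speechRecognizer.RecognizeOnceAsync();

if (command.Reason == ResultReason.RecognizedSpeech)
{
    Console.WriteLine($"Command: {command.Text}"); // e.g. "light on"
}
```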
All actions are confirmed with an appropriate sound.
Be aware that the Microsoft Cognitive Services NuGet package works only with ARM64. Tested on Ubuntu Server 18.04 and 20.04; it will probably work the same on the latest 22.04.
Another option for speech recognition is DeepSpeech; it works offline and does not require an active internet connection.
To use DeepSpeech, a custom C# wrapper that supports .NET 6 should be created (PR for .NET Core support).
This approach has not been tested in real life.
The application source code contains a commented-out DeepSpeech service as a starting point.
A service that starts and manages the MagicMirror Electron application. It uses the C# Process class to execute the `npm run start` command to start MagicMirror.
You can set the working directory in appsettings.json.
MagicMirror itself should be configured separately by following its guides.
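A minimal sketch of such a runner, assuming the working directory comes from the MagicMirrorRunner section of appsettings.json:

```csharp
using System.Diagnostics;

// Hypothetical sketch: start MagicMirror via "npm run start"
// in the configured working directory
var startInfo = new ProcessStartInfo
{
    FileName = "npm",
    Arguments = "run start",
    WorkingDirectory = "<MagicMirror root folder path>", // MagicMirrorRunner:WorkingDirectory
    UseShellExecute = false
};

using var magicMirror = Process.Start(startInfo);
magicMirror?.WaitForExit(); // keep the service alive while MagicMirror runs
```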
The app uses two types of audio players: one for Linux and one for Windows.
Simple playback for a specific file. The SmartMirror app currently has two sounds to play:

- Recognition success (Assets/success.wav)
- Recognition failed (Assets/error.wav)
The SmartMirror app provides a file path to the platform-specific player, which does all the magic for the playback.
On Linux the app uses aplay.
On Windows the app uses NAudio.
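For illustration, a hedged sketch of what the Linux player could look like, shelling out to the aplay CLI (the class name is hypothetical):

```csharp
using System.Diagnostics;

// Hypothetical Linux player: delegates playback to aplay
public sealed class LinuxAudioPlayer
{
    public void Play(string filePath) // e.g. "Assets/success.wav"
    {
        using var process = Process.Start(new ProcessStartInfo
        {
            FileName = "aplay",
            Arguments = $"\"{filePath}\"",
            UseShellExecute = false
        });
        process?.WaitForExit();
    }
}
```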