Privacy-Friendly Face Recognition On Constrained Devices
Three entities are involved in the testing: a master device, a server (simulating a drone), and a client (simulating a mobile device). They should be on the same network so that the master can SSH into the server and client and run SFE on them.
The master device does not need to be a powerful machine: it only orchestrates the operations, and its specifications do not influence test results. A Raspberry Pi 2 performed just fine.
I was running:
- master: Raspberry Pi 3 Model B (aarch64), DietPi v8.20.1, Debian GNU/Linux 12 (bookworm)
- client and server: Raspberry Pi 3 Model B+, Raspberry Pi OS (Debian GNU/Linux 11, bullseye), kernel Linux 6.1.21-v8+, arm64
First, follow the setup guide. Then execute the main script `pyscripts/master.py` on the master device. It handles the communication with the server and client.
The master device:
- reads the config file
- prepares images from the image database
- creates face embedding shares and sends them to server and client (a toy sketch of the sharing idea follows this list)
- orchestrates SFE execution between server and client
- saves results
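
The sharing step can be pictured as splitting each embedding into two random-looking byte strings that only reveal the original when combined. The snippet below is a toy illustration of XOR-based secret sharing over the raw float bytes, not the project's exact code; the actual share format is defined in `pyscripts/pffrocd.py`.

```python
import numpy as np

def create_xor_shares(embedding):
    """Toy illustration: split a float32 embedding into two XOR shares.

    Each share alone looks like uniformly random bytes; XORing the two
    shares back together reconstructs the original embedding exactly.
    """
    raw = np.frombuffer(embedding.astype(np.float32).tobytes(), dtype=np.uint8)
    share0 = np.frombuffer(np.random.bytes(len(raw)), dtype=np.uint8)
    share1 = share0 ^ raw
    return share0, share1

# Reconstruction: XOR the shares and reinterpret the bytes as floats.
emb = np.random.rand(128).astype(np.float32)
s0, s1 = create_xor_shares(emb)
assert np.array_equal(np.frombuffer((s0 ^ s1).tobytes(), dtype=np.float32), emb)
```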
There is logging: INFO-level logging goes to stdout and DEBUG-level logging to a file in `log/`.
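
For reference, a minimal sketch of such a dual-handler setup in Python (the logger name, format, and file path here are illustrative, not the project's exact configuration):

```python
import logging
import os
import sys

os.makedirs("log", exist_ok=True)
logger = logging.getLogger("pffrocd")
logger.setLevel(logging.DEBUG)

# INFO and above go to stdout...
to_stdout = logging.StreamHandler(sys.stdout)
to_stdout.setLevel(logging.INFO)

# ...while the full DEBUG trace goes to a file in log/.
to_file = logging.FileHandler("log/run.log")
to_file.setLevel(logging.DEBUG)

for handler in (to_stdout, to_file):
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
```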
The results are saved in a `.csv` file as a pandas DataFrame in the `dfs/` folder. For the format of the saved data, see `pffrocd.columns` in `pyscripts/pffrocd.py`. The Jupyter notebooks in `plotting/` and `results/` also give an overview of how to visualize the data.
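
To load the results for your own analysis, something like the following is enough (the file name is a placeholder):

```python
import pandas as pd

# Each test run appends rows to a CSV in dfs/; the exact column set is
# given by pffrocd.columns in pyscripts/pffrocd.py.
df = pd.read_csv("dfs/<results_file>.csv")
print(df.columns.tolist())
print(df.head())
```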
The testing flow is as follows:
Master
- Reads config file
- Tests bandwidth between server and client (by remotely executing iperf3 tests on both devices; see the bandwidth sketch after this list)
- Prepares the database images (choosing appropriate people from the database and their images)
- Sets image x of person p as the reference image (the image stored at the Service Provider)
- Sends share of x to client and server
- Runs tests for other images of p, namely for each image i:
- Makes the server extract the embedding of i (see the embedding sketch after this list)
- Sends embeddings to client and server
- Runs SFE on client and server
- If indicated, reruns SFE, this time gathering energy data from PowerTOP
- Saves results
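
The bandwidth test can be reproduced by hand. Below is a minimal sketch of the idea in Python, not the project's exact code: the hostnames and usernames are placeholders, and passwordless SSH is assumed to be set up already. iperf3's `-1` flag makes the server handle a single test and exit, and `-J` requests JSON output.

```python
import json
import subprocess
import time

# Start a one-off iperf3 server on the server device (exits after one test).
subprocess.Popen(["ssh", "user@server", "iperf3", "-s", "-1"])
time.sleep(2)  # give the remote server a moment to start listening

# Run the client side against it and parse the JSON report.
result = subprocess.run(
    ["ssh", "user@client", "iperf3", "-c", "server", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
print(report["end"]["sum_received"]["bits_per_second"])
```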
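
Embedding extraction relies on deepface with the SFace model. A standalone sketch (the image path is a placeholder; recent deepface versions return a list with one dict per detected face):

```python
from deepface import DeepFace

# Extract the SFace embedding of a single image.
result = DeepFace.represent(img_path="lfw/<person>/<image>.jpg", model_name="SFace")
embedding = result[0]["embedding"]  # SFace produces a 128-dimensional vector
print(len(embedding))
```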
For all three devices
- Install required packages:
```
sudo apt update && sudo apt install time python3 python3-venv iperf3 g++ make cmake libgmp-dev libssl-dev libboost-all-dev ffmpeg libsm6 libxext6 git powertop -y
```
- Generate SSH keys and add them as deploy keys to the git repo (to be able to clone the repo)
```
ssh-keygen
```
- Clone the repo and cd into it
```
git clone git@github.com:Musialke/pffrocd.git
cd pffrocd
```
For server and client:
- Create the ABY build directory
```
mkdir ABY/build/ && cd ABY/build/
```
- Use CMake to configure the build (example applications are enabled by default):
```
cmake ..
```
- Call `make` in the build directory. You can find the built executables and libraries in the `bin/` and `lib/` directories, respectively.
```
make
```
- To be able to run a process with higher priority, modify limits.conf as explained here: https://unix.stackexchange.com/a/358332
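
For example, adding a line like the following to /etc/security/limits.conf (with `pi` as a placeholder username) lets that user raise process priority up to nice -20:

```
# <domain> <type> <item> <value>
pi - nice -20
```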
- Calibrate PowerTOP to get power estimate readings:
```
sudo powertop --calibrate
```
This takes a while; it turns peripherals on and off and reboots the system.
ADDITIONALLY for master and server:
Since the server and master need to extract embeddings, they need the database of pictures and Python.
- Change the directory back to the repo root folder and unpack the picture database:
```
cat lfw.tgz.parta* | tar -xzv
```
- Create a new virtual environment, activate it, and install the required packages:
```
python3 -m venv env
. env/bin/activate
pip install -vr requirements.txt
```
- Copy the SFace weights where deepface can find them:
```
mkdir -p ~/.deepface/weights/ && cp face_recognition_sface_2021dec.onnx ~/.deepface/weights/
```
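
As a quick sanity check that deepface picks the weights up, you can build the model once inside the virtual environment; this loads the `.onnx` file copied above and raises an error if it is missing or unreadable:

```python
from deepface import DeepFace

# Loads ~/.deepface/weights/face_recognition_sface_2021dec.onnx
DeepFace.build_model("SFace")
```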
ADDITIONALLY for master:
You need to specify config options, and the master needs to be able to SSH into the server and client.
- Rename the `config.ini.example` file to `config.ini` and modify it accordingly
- Copy the SSH keys to the server and client using ssh-copy-id:
```
ssh-copy-id user@ip_address
```
All done! You can now run the main script in the background on the master machine:
```
nohup python3 pyscripts/master.py </dev/null >/dev/null 2>&1 &
```
Follow the logs to know what stage the program is at:
```
tail -f log/<logfile>
```
The logs are saved in the `log/` directory and the test results are appended to a CSV file in `dfs/` after each run.
Troubleshooting

```
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
```
Fix:
```
sudo apt update && sudo apt install ffmpeg libsm6 libxext6 -y
```
```
cv2.error: OpenCV(4.7.0) /io/opencv/modules/dnn/src/onnx/onnx_importer.cpp:275: error: (-210:Unsupported format or combination of formats) Failed to parse ONNX model: /home/dietpi/.deepface/weights/face_recognition_sface_2021dec.onnx in function 'ONNXImporter'
```
The SFace weights file is missing or broken and needs to be put in place. Fix:
```
mkdir -p ~/.deepface/weights/ && cp face_recognition_sface_2021dec.onnx ~/.deepface/weights/
```