

pffrocd

Privacy-Friendly Face Recognition On Constrained Devices

Test explanation

Devices needed

Three entities are involved in the testing: a master device, a server (simulating a drone), and a client (simulating a mobile device). All three should be on the same network so that the master can SSH into the server and client and run SFE on them.


The master device does not need to be a very powerful machine, as it only orchestrates the operations; its specifications do not influence the test results. A Raspberry Pi 2 performed just fine.

I was running:

  • master: RPi 3 Model B (aarch64), DietPi v8.20.1, Debian GNU/Linux 12 (bookworm)

  • client and server: RPi 3 Model B+, Raspberry Pi OS (Debian GNU/Linux 11 bullseye), kernel Linux 6.1.21-v8+, arm64

Running tests

First, follow the setup guide below. Then execute the main script pyscripts/master.py on the master device. It handles all communication with the server and client.

The master device:

  • reads the config file
  • prepares images from the image database
  • creates face embedding shares and sends them to the server and client (see the sketch after this list)
  • orchestrates SFE execution between the server and client
  • saves the results
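
Conceptually, the sharing step splits each embedding so that neither device alone learns it. Below is a minimal sketch of additive secret sharing over a NumPy float vector; it is illustrative only, and the actual share format is defined in pyscripts/pffrocd.py and may differ:

import numpy as np

def create_shares(embedding: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    # One party's share is a random mask...
    share1 = np.random.uniform(-1.0, 1.0, size=embedding.shape)
    # ...the other's is the masked embedding, so share0 + share1 == embedding
    share0 = embedding - share1
    return share0, share1

Neither share reveals anything about the embedding on its own; the vector only becomes usable when both shares are combined inside the SFE computation.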

Logging goes to two places: INFO-level messages to stdout and DEBUG-level messages to a file in log/.
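
This split can be reproduced with Python's standard logging module; a minimal sketch, with the logger name and log file name as illustrative placeholders:

import logging
import sys

logger = logging.getLogger("pffrocd")  # illustrative logger name
logger.setLevel(logging.DEBUG)

console = logging.StreamHandler(sys.stdout)  # INFO and above go to stdout
console.setLevel(logging.INFO)
logger.addHandler(console)

logfile = logging.FileHandler("log/run.log")  # DEBUG and above go to a file
logfile.setLevel(logging.DEBUG)
logger.addHandler(logfile)

logger.info("shown on stdout and written to the file")
logger.debug("written to the file only")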

The results are saved as a pandas DataFrame in a .csv file in the dfs/ folder. For the format of the saved data, see pffrocd.columns in pyscripts/pffrocd.py. The Jupyter notebooks in plotting/ and results/ show how to visualize the data.
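
To inspect saved results, the CSV files read straight back into pandas; a short sketch (the file name is hypothetical):

import pandas as pd

df = pd.read_csv("dfs/results.csv")  # hypothetical file name
print(df.columns.tolist())           # column layout, defined by pffrocd.columns
print(df.describe())                 # quick summary of the numeric results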

Testing flow

The testing flow is as follows:

Master

  1. Reads the config file
  2. Tests the bandwidth between server and client (by remotely executing iperf3 tests on both devices)
  3. Prepares the database images (chooses appropriate people from the DB and their images)
  4. Sets image x of person p as the reference image (the image stored at the Service Provider)
  5. Sends the shares of x to the client and server
  6. Runs tests for the other images of p; for each image i it:
      • makes the server extract the embedding of i
      • sends the embeddings to the client and server
      • runs SFE on the client and server (the sketch after this list shows the plaintext equivalent of what is computed)
      • if indicated, reruns SFE, this time gathering energy data from powertop
  7. Saves the results
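
In the clear, the value the client and server jointly compute under SFE is a similarity score between the reference embedding and the freshly extracted one. A plaintext sketch, assuming cosine similarity as the metric (the threshold and the 128-dimensional stand-in vectors are illustrative; the actual circuit is defined by the ABY code in this repo):

import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Stand-ins for real SFace embeddings
ref_embedding = np.random.rand(128)
new_embedding = np.random.rand(128)

# Under SFE this is evaluated on the recombined shares, so neither party
# ever sees the other's embedding in the clear.
score = cosine_similarity(ref_embedding, new_embedding)
print(score > 0.5)  # illustrative decision threshold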


Setup Guide:

For all three devices:

  1. Install the required packages:
sudo apt update && sudo apt install time python3 python3-venv iperf3 g++ make cmake libgmp-dev libssl-dev libboost-all-dev ffmpeg libsm6 libxext6 git powertop -y
  2. Generate SSH keys and add them as deploy keys to the git repo (so that you can clone it):
ssh-keygen
  3. Clone the repo and cd into it:
git clone [email protected]:Musialke/pffrocd.git
cd pffrocd

For server and client:

  1. Create the ABY build directory:
mkdir ABY/build/ && cd ABY/build/
  2. Use CMake to configure the build (example applications are on by default):
cmake ..
  3. Call make in the build directory. You can find the built executables and libraries in the bin/ and lib/ directories, respectively:
make
  4. To be able to run a process with higher priority, modify limits.conf as explained here: https://unix.stackexchange.com/a/358332

  5. Calibrate powertop to get power estimate readings:

sudo powertop --calibrate

This takes a while, turns peripherals on and off, and reboots the system.
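
Besides calibration, powertop can write its measurements to a CSV report, which is one way the energy data mentioned in the testing flow can be collected; a minimal sketch invoked from Python (the report file name is illustrative):

import subprocess

# Sample power estimates for 10 seconds and write them to a CSV report
subprocess.run(["sudo", "powertop", "--csv=power_report.csv", "--time=10"], check=True)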

ADDITIONALLY for master and server:

Since the master and server need to extract embeddings, they need the picture database and Python.

  1. Change directory back to the repo root folder and unpack the picture database:
cat lfw.tgz.parta* | tar -xzv
  2. Create a new virtual environment, activate it, and install the required packages:
python3 -m venv env
. env/bin/activate
pip install -vr requirements.txt
  3. Copy the SFace weights to where deepface can find them:
mkdir -p ~/.deepface/weights/ && cp face_recognition_sface_2021dec.onnx ~/.deepface/weights/
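
With the weights in place, an embedding can be extracted through deepface. A minimal sketch, assuming a recent deepface release in which DeepFace.represent returns a list of dicts (the image path is illustrative):

from deepface import DeepFace

# Extract the SFace embedding of one database image
result = DeepFace.represent(img_path="lfw/Some_Person/Some_Person_0001.jpg", model_name="SFace")
embedding = result[0]["embedding"]
print(len(embedding))  # SFace produces 128-dimensional embeddings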

ADDITIONALLY for master:

You need to specify the config options, and the master needs to be able to SSH into the server and client.

  1. Rename the config.ini.example file to config.ini and modify it accordingly

  2. Copy the SSH keys to the server and client using ssh-copy-id

ssh-copy-id user@ip_address
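
Once key-based login works, the master can drive both devices non-interactively. A minimal sketch of remote execution over SSH via subprocess (host and command are illustrative; master.py's actual mechanism may differ):

import subprocess

def run_remote(host: str, command: str) -> str:
    """Run a command on a remote device over SSH and return its stdout."""
    result = subprocess.run(["ssh", host, command], capture_output=True, text=True, check=True)
    return result.stdout

print(run_remote("user@ip_address", "uname -a"))  # sanity check the connection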

All done! You can now run the main script in the background on the master machine:

nohup python3 pyscripts/master.py </dev/null >/dev/null 2>&1 &

Follow the logs to know what stage the program is at:

tail -f log/<logfile>

The logs are saved in the log/ directory and the test results are appended to a csv file in dfs/ after each run.

Possible errors and solutions:

ImportError: libGL.so.1: cannot open shared object file: No such file or directory

Fix:

sudo apt update && sudo apt install ffmpeg libsm6 libxext6 -y

cv2.error: OpenCV(4.7.0) /io/opencv/modules/dnn/src/onnx/onnx_importer.cpp:275: error: (-210:Unsupported format or combination of formats) Failed to parse ONNX model: /home/dietpi/.deepface/weights/face_recognition_sface_2021dec.onnx in function 'ONNXImporter'

This means deepface cannot find or parse the SFace weights; copy them into place. Fix:

mkdir -p ~/.deepface/weights/ && cp face_recognition_sface_2021dec.onnx ~/.deepface/weights/
