Commit

Merge branch 'Cascade'
MatteoDrago committed Sep 10, 2018
2 parents b72dcd0 + 344a5f8 commit dfadf59
Showing 111 changed files with 28,173 additions and 7,775 deletions.
7 changes: 3 additions & 4 deletions README.md
@@ -1,13 +1,12 @@
 # hda-report
 Human Data Analytics project - Matteo Drago, Riccardo Lincetto

-## Deep Learning Techniques for Gersute Recognition: Dealing with Inactivity
+## Deep Learning Techniques for Activity Recognition: Dealing with Inactivity

 To run the code download the OPPORTUNITY activity recognition dataset at:
-https://archive.ics.uci.edu/ml/datasets/opportunity+activity+recognition <br />
+https://archive.ics.uci.edu/ml/datasets/opportunity+activity+recognition
 The position of the dataset then has to be provided to the code in preprocessing phase.

 The repository is organised as follows:
-- code: this folder contains our code to perform activity recognition. There are two matlab files for preprocessing, we suggest using 'preprocessing_full.m' (otherwise the code has to be updated with the correct files location). Please note that 'file.root' variable needs to be provided the location of the dataset. Once preprocessing has been done, one can decide to run 'main.py' and 'main_multiuser.py' to get all the results at once (it takes some time, since 120 different models are trained), or to execute the code for a single configuraion in 'HAR_system.ipynb'. Then there is also a notebook with the purpose of visualising results, 'Evaluation.ipynb': to run this it is not necessary to run the complete code, because a set of results is already provided in the repository;
+- code: this folder contains our code to perform activity recognition. There are two matlab files for preprocessing, we suggest using 'preprocessing_full.m' (otherwise the code has to be updated with the correct files location). Please note that 'file.root' variable needs to be provided the location of the dataset. Once preprocessing has been done, one can decide to run 'main.py' to get all the results at once (it takes some time, since 120 different models are trained), or to execute the code for a single configuraion in 'HAR_system.ipynb'. Then there is also a notebook with the purpose of visualising results, 'Evaluation.ipynb': to run this it is not necessary to run the complete code, because a set of results is already provided in the repository;
 - presentation: this folder contains a set of slides that we used to present our project;
 - report: this folder contains our report, named 'HDA_MDRL.pdf'.
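The 'code' entry in the README above boils down to a three-step workflow: download the dataset, preprocess it in MATLAB, then train in Python. Below is a minimal sketch of that workflow, not part of the repository: only 'preprocessing_full.m', 'file.root', 'main.py', 'HAR_system.ipynb' and 'Evaluation.ipynb' come from the README; the dataset path and the wrapper script itself are illustrative assumptions.

```python
# Hypothetical helper sketching the README workflow; adapt paths to your machine.
from pathlib import Path
import subprocess

# Assumed local location of the downloaded OPPORTUNITY dataset (see the UCI link above).
dataset_root = Path("~/datasets/OpportunityUCIDataset").expanduser()
if not dataset_root.exists():
    raise SystemExit("Download the OPPORTUNITY dataset first and update this path.")

# Step 1 (MATLAB): run 'preprocessing_full.m' with 'file.root' set to dataset_root.
# Step 2 (Python): train every configuration at once (about 120 models, so this is slow) ...
subprocess.run(["python", "main.py"], cwd="code", check=True)

# ... or open 'code/HAR_system.ipynb' to train a single configuration interactively,
# and 'code/Evaluation.ipynb' to visualise the results already shipped in the repository.
```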
1,331 changes: 1,011 additions & 320 deletions code/.ipynb_checkpoints/Evaluation-checkpoint.ipynb

Large diffs are not rendered by default.

1,361 changes: 1,025 additions & 336 deletions code/Evaluation.ipynb

Large diffs are not rendered by default.

Binary file added code/__pycache__/launch.cpython-36.pyc
Binary file not shown.
Binary file modified code/__pycache__/models.cpython-36.pyc
Binary file not shown.
Binary file modified code/__pycache__/preprocessing.cpython-36.pyc
Binary file not shown.
Binary file modified code/__pycache__/utils.cpython-36.pyc
Binary file not shown.

0 comments on commit dfadf59
