To plot the results, configure the variables in `plot_attribution` of `src/audiofakedetect/integrated_gradients.py` and execute it using `scripts/attribution.py`.
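For orientation, a minimal sketch of such a configuration step; the variable names below are illustrative assumptions, not the actual identifiers in `src/audiofakedetect/integrated_gradients.py`:

```python
# Hypothetical sketch only: adjust the real variables in plot_attribution
# analogously, then execute the plotting via scripts/attribution.py.
model_path = "./log/fake_model.pt"   # trained model checkpoint to attribute (assumed name)
plot_path = "./plots/attribution/"   # output directory for the figures (assumed name)
target_label = 1                     # class of interest, e.g. 1 = fake (assumed name)
```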
#### Example Misclassifications per Model
It is possible to extract misclassified audio fakes by comparing the correct classifications of different models. To do so, set `"get_details": [True]` in your `gridsearch_config.py`; set `"only_testing": [True]` as well if you have already trained your models. With this configuration, the test loop of each individual experiment produces an output numpy file in the log directory whose file name starts with `true_ind...`. A minimal sketch of the relevant entries follows below.
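Only the `"get_details"` and `"only_testing"` keys are taken from the description above; the surrounding structure is an assumption about how `gridsearch_config.py` is laid out:

```python
# Sketch of the relevant part of gridsearch_config.py.
config = {
    "get_details": [True],   # write true_ind... numpy files during each test loop
    "only_testing": [True],  # skip training if the models are already trained
    # ... other grid-search parameters of your experiment
}
```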
To compare two experiment results (e.g. a model with dilation and one without), use the `scripts/analyze_model_diffs.py` script to extract 10 sample audios that are correctly classified by the first given model and incorrectly classified by the second. You might need to adjust the corresponding file paths in the script to point to the results of the testing process, and also specify a save path for the audios.
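Conceptually, the comparison boils down to a set difference over the stored indices. A minimal sketch, under the assumption that the `true_ind...` files hold the indices of correctly classified samples; the file names below are placeholders:

```python
import numpy as np

# Placeholder paths: point these at the true_ind... files produced by the
# testing runs of the two models you want to compare.
correct_dilated = np.load("log/true_ind_dilated.npy")
correct_plain = np.load("log/true_ind_no_dilation.npy")

# Samples the first model classified correctly but the second did not.
diff_indices = np.setdiff1d(correct_dilated, correct_plain)
print(diff_indices[:10])  # e.g. the 10 sample audios to extract
```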
In the following, we provide some exemplary audios that were correctly classified as deep fake by our DCNN and misclassified (as real) by the same model without dilation: