In the paper you mention that you validate the hyperparameters for the input preprocessing (the FGSM perturbation magnitude) and for the feature ensemble using adversarial samples (right part of Table 2 in the paper). I think this validation makes more sense than validation using OOD samples, since, as you say, such samples are often inaccessible a priori.
I cannot find the part of the code that performs this validation, and I was wondering specifically how you validate the FGSM magnitude when using adversarial samples: the in-distribution samples will also be preprocessed with FGSM in the same way as the adversarial samples, correct? If so, I guess the only difference between the in-distribution and adversarial samples is that the adversarial samples go through one extra FGSM optimization step?
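To check my understanding of the validation itself, here is a rough sketch of what I imagine the magnitude search looks like (purely my guess for illustration: `score_fn` and the candidate magnitudes below are placeholders, not taken from your code):

```python
# Pick the input-preprocessing magnitude that best separates preprocessed
# in-distribution samples from preprocessed adversarial samples.
import numpy as np
from sklearn.metrics import roc_auc_score

def validate_magnitude(score_fn, x_clean, x_adv,
                       candidates=(0.0, 0.0005, 0.001, 0.0014, 0.002, 0.005)):
    """score_fn(x, magnitude) is assumed to apply the input preprocessing with
    the given magnitude and return Mahalanobis confidence scores
    (higher = more in-distribution)."""
    best_m, best_auc = None, -1.0
    for m in candidates:
        s_clean = score_fn(x_clean, m)  # clean samples get the SAME preprocessing
        s_adv = score_fn(x_adv, m)      # adversarial samples differ only by the attack step
        labels = np.concatenate([np.ones(len(s_clean)), np.zeros(len(s_adv))])
        auc = roc_auc_score(labels, np.concatenate([s_clean, s_adv]))
        if auc > best_auc:
            best_m, best_auc = m, auc
    return best_m, best_auc
```

If that is roughly right, then the validation signal is just how well the (identically preprocessed) clean and adversarial samples separate, which is what I was trying to confirm above.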
If you could clarify or point me to the code section, that would be great.
BTW nice work!
nom changed the title from "Validation on adversarial samples" to "Validation on adversarial samples for OOD detection" on Jan 12, 2019.
I also have some questions about this experiment (the "Comparison of robustness" part of Section 3.1):
1. At this point, should we assume that the M(x) models have already been "trained", in the sense that $\mu_c$ and $\Sigma$ have already been computed for each layer using only Cifar10-Train-Clean data? (I sketch my current understanding of this and of question 2 after the list.)
2. When training the feature-ensemble weights, do you use Cifar10-Test-Clean as the positive samples and Cifar10-Test-FGSM as the negative samples? Or do you train the ensemble weights on Cifar10-Train-Clean and Cifar10-Train-FGSM data?
3. What epsilon do you use for the FGSM step?
4. How critical is the input preprocessing step to this method? Is the performance of the feature ensemble still good when we validate on FGSM samples but skip the input preprocessing step?
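For questions 1 and 2, here is a short numpy/sklearn sketch of how I currently picture the procedure; the function names, the tied-covariance estimate, and the use of LogisticRegressionCV are my own guesses for illustration, not taken from your code:

```python
# Sketch of my understanding only; per-layer feature extraction and the FGSM
# attack are assumed to happen elsewhere.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

def fit_gaussian_params(features, labels, num_classes):
    """Per-class means and one tied covariance for a single layer,
    estimated from clean in-distribution training features only."""
    mu = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    centered = features - mu[labels]               # subtract each sample's class mean
    sigma = centered.T @ centered / len(features)  # shared (tied) covariance
    return mu, np.linalg.pinv(sigma)               # precision matrix for the score

def mahalanobis_confidence(feats, mu, precision):
    """One layer's confidence score: max over classes of the negative
    Mahalanobis distance to that class mean."""
    d = feats[:, None, :] - mu[None, :, :]               # (N, C, D)
    dist = np.einsum('ncd,de,nce->nc', d, precision, d)  # squared distances, (N, C)
    return (-dist).max(axis=1)

def fit_ensemble_weights(layer_scores_clean, layer_scores_adv):
    """Learn the layer weights with logistic regression:
    clean samples labelled 1, FGSM samples labelled 0."""
    X = np.vstack([layer_scores_clean, layer_scores_adv])  # (N, num_layers)
    y = np.concatenate([np.ones(len(layer_scores_clean)),
                        np.zeros(len(layer_scores_adv))])
    return LogisticRegressionCV(cv=5).fit(X, y)
```

Is that roughly the right picture, and if so, which splits feed into the ensemble fitting?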