Validation on adversarial samples for OOD detection #1

Open
nom opened this issue Jan 12, 2019 · 1 comment

nom commented Jan 12, 2019

In the paper you mention that you validate the hyperparameters for the input preprocessing (the FGSM perturbation magnitude) and the feature ensemble using adversarial samples (right part of Table 2 in the paper). I think this validation makes more sense than validation using OOD samples, since, as you say, OOD samples are often inaccessible a priori.

I cannot seem to find the part of the code that performs this validation, and I was wondering specifically how you validate the FGSM magnitude when using adversarial samples: the in-distribution samples will also be preprocessed with FGSM-style noise in the same way as the adversarial samples, correct? In that case, I guess the only difference between the in-distribution and adversarial samples is that the adversarial samples are processed with one extra FGSM optimization step?
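
To make the question concrete, here is a rough PyTorch-style sketch of how I currently picture the two steps (my own paraphrase, not code from this repository; `model`, `mahalanobis_score`, `eps`, and `magnitude` are all placeholders):

```python
import torch
import torch.nn.functional as F

def fgsm_negative(model, x, y, eps):
    # One FGSM step that increases the classification loss; used only to build
    # the adversarial (negative) validation set.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()  # assumes inputs in [0, 1]

def preprocess(x, mahalanobis_score, magnitude):
    # Input preprocessing applied to every test sample (clean or adversarial):
    # one small step in the direction that increases the Mahalanobis confidence
    # score, as I read the paper.
    x = x.clone().detach().requires_grad_(True)
    mahalanobis_score(x).sum().backward()
    return (x + magnitude * x.grad.sign()).clamp(0, 1).detach()
```

Is this roughly what the validation code does?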

If you could clarify or point me to the code section, that would be great.

BTW nice work!

nom changed the title from "Validation on adversarial samples" to "Validation on adversarial samples for OOD detection" on Jan 12, 2019

inkawhich commented Nov 12, 2019

I also have some questions about this experiment ("Comparison of robustness" part of Section 3.1):

  1. At this point, should we assume that the $M(x)$ models have already been "trained", in the sense that we have already computed $\mu_c$ and $\Sigma$ for each layer using only Cifar10-Train-Clean data?

  2. When training the feature ensemble weights, do you use Cifar10-Test-Clean as the positive samples and Cifar10-Test-FGSM as the negative samples? Or do you train the ensemble weights using Cifar10-Train-Clean and Cifar10-Train-FGSM data? (I sketched my current understanding of 1 and 2 below, after this list.)

  3. What epsilon do you use for the FGSM step?

  4. How critical is the input preprocessing step to this method? Is the performance of the feature ensemble still pretty good when we do validation on FGSM samples but do not do the input preprocessing step?
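
For what it is worth, here is a NumPy/scikit-learn sketch of how I currently picture questions 1 and 2 (my own guess at the procedure, not code from this repository; the toy data and all names are placeholders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

def fit_gaussian(features, labels, num_classes):
    # Per-class means and one tied covariance, estimated from clean training
    # features of a single layer (features: [N, D], labels: [N]).
    means = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    centered = features - means[labels]
    precision = np.linalg.pinv(centered.T @ centered / len(features))
    return means, precision

def mahalanobis_confidence(features, means, precision):
    # Per-sample score: max over classes of the negative Mahalanobis distance.
    diffs = features[:, None, :] - means[None, :, :]                  # [N, C, D]
    dists = np.einsum('ncd,de,nce->nc', diffs, precision, diffs)
    return -dists.min(axis=1)

# Toy stand-ins for features of one layer (real code would extract them from the
# network for Cifar10-Train-Clean, Cifar10-Test-Clean and Cifar10-Test-FGSM).
rng = np.random.default_rng(0)
num_classes, dim = 10, 16
train_feats = rng.normal(size=(500, dim))
train_labels = np.arange(500) % num_classes
clean_feats = rng.normal(size=(200, dim))        # positives (in-distribution)
fgsm_feats = rng.normal(size=(200, dim)) + 0.5   # negatives (adversarial)

# Question 1: mu_c and Sigma estimated from clean training data only.
means, precision = fit_gaussian(train_feats, train_labels, num_classes)

# Question 2: fit the "feature ensemble" weights by logistic regression on the
# per-layer scores, clean = positive, FGSM = negative. With several layers,
# each column of X would be one layer's score.
pos = mahalanobis_confidence(clean_feats, means, precision)[:, None]
neg = mahalanobis_confidence(fgsm_feats, means, precision)[:, None]
X = np.concatenate([pos, neg])
y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
ensemble = LogisticRegressionCV(cv=5).fit(X, y)
```

Is that roughly the right picture?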

Thanks in advance.
