
Question: Uncertainty Types #9

Open
ayeshans opened this issue Jan 25, 2022 · 1 comment

Comments

@ayeshans

Hey @nikitadurasov! 👋

When using the Masksembles method, I want to distinguish between the different types of uncertainty (model/epistemic, label/aleatoric, and distributional/dataset shift). I noticed it wasn't explicitly stated what kind(s) of uncertainty this method estimates, but since the experiments focus on OOD detection, would it be fair to say it is mostly for dataset shift?

@ayeshans ayeshans changed the title Uncertainty Types Question: Uncertainty Types Jan 25, 2022
@nikitadurasov
Owner

Hey @ayeshans!

Yes, primarily we're benchmarking against dataset shift, image corruptions, and aleatoric uncertainty. Although we don't provide a clean way to separate the different types of uncertainty (e.g. epistemic vs. dataset shift), our model still works rather well with the standard approaches: 1) for aleatoric classification uncertainty, use the entropy of the mean predictive distribution; 2) for epistemic uncertainty, compute the Mutual Information as described in the paper. See the sketch below.
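
For example, given the stacked softmax outputs of the individual masks, a minimal sketch of this decomposition could look like the following (the function name and the `[n_masks, batch_size, n_classes]` shape are just illustrative assumptions, not part of the Masksembles API):

```python
import numpy as np

def uncertainty_decomposition(probs: np.ndarray, eps: float = 1e-12):
    """probs: per-mask softmax outputs, shape [n_masks, batch_size, n_classes]."""
    # Mean predictive distribution over the ensemble members, shape [batch, classes].
    mean_probs = probs.mean(axis=0)
    # Entropy of the mean distribution (used for aleatoric classification uncertainty).
    entropy_of_mean = -(mean_probs * np.log(mean_probs + eps)).sum(axis=-1)
    # Average entropy of the individual members' distributions.
    mean_of_entropies = -(probs * np.log(probs + eps)).sum(axis=-1).mean(axis=0)
    # Mutual Information = H(mean distribution) - mean of member entropies
    # (used as the epistemic uncertainty estimate).
    mutual_information = entropy_of_mean - mean_of_entropies
    return entropy_of_mean, mutual_information
```
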

Hope that helps! :)

Nikita
