@Red-Portal (Member) commented Dec 12, 2025

This addresses #224.

The PR drops the last batch whenever its size differs from `batchsize`. This is enforced only when estimating gradients, so the behavior of `estimate_objective` is unchanged. Dropping the last batch could introduce a slight bias into the algorithms, but it should be minimal as long as the number of batches is sufficiently large (which is the only regime where subsampling makes sense anyway).
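The batch-dropping rule described above can be sketched as follows. This is a Python illustration of the idea, not the PR's actual Julia code; the helper name `full_batches` is hypothetical.

```python
# Hypothetical sketch of the batching rule: partition the data indices
# into batches of exactly `batchsize`, discarding a trailing partial batch.
def full_batches(indices, batchsize):
    # Number of indices that fit into complete batches.
    n_full = len(indices) - len(indices) % batchsize
    return [indices[i:i + batchsize] for i in range(0, n_full, batchsize)]

# 10 data points with batchsize 3: the partial trailing batch [9] is dropped,
# so each gradient estimate is computed from a batch of exactly 3 points.
print(full_batches(list(range(10)), 3))
# → [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```

With 10 points and `batchsize = 3`, point 9 never contributes to a gradient estimate, which is the source of the slight bias mentioned above; as the number of complete batches grows, the dropped fraction (at most `batchsize - 1` points) becomes negligible.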

@github-actions (Contributor) commented:

AdvancedVI.jl documentation for PR #228 is available at:
https://TuringLang.github.io/AdvancedVI.jl/previews/PR228/
