Understanding input to mu and logsd layers #5

@nathanin

Description

Hi, thanks for the great tutorial. I have trouble understanding the math. What is the reason for passing encode3 to logsd before the nonlinearity is applied? Why not give encode3neur to both mu and logsd? I would have guessed it's a typo, but when I run the reference prototxt I can get it to converge.

[Screenshot (2017-06-09): prototxt snippet showing the inputs to the mu and logsd layers]
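For reference, here is how I understand the textbook wiring, as a small NumPy sketch. The blob names encode3 and encode3neur are taken from the prototxt; the weight matrices, dimensions, and initialization scale below are made-up placeholders.

```python
# Minimal NumPy sketch of the two encoder heads; W_mu, W_logsd and the
# dimensions are hypothetical, the blob names come from the prototxt.
import numpy as np

rng = np.random.default_rng(0)

hidden_dim, latent_dim = 64, 8
encode3 = rng.standard_normal(hidden_dim)   # InnerProduct output, pre-nonlinearity
encode3neur = np.maximum(encode3, 0.0)      # after the ReLU

# Hypothetical weights for the two heads (biases omitted for brevity).
W_mu = 0.01 * rng.standard_normal((latent_dim, hidden_dim))
W_logsd = 0.01 * rng.standard_normal((latent_dim, hidden_dim))

# Textbook wiring: both heads read the same post-nonlinearity activation.
mu = W_mu @ encode3neur
logsd = W_logsd @ encode3neur   # the reference prototxt feeds encode3 here instead

# Reparameterization: z = mu + sigma * eps with eps ~ N(0, I).
eps = rng.standard_normal(latent_dim)
z = mu + np.exp(logsd) * eps
```

As far as I can tell either wiring can train, since logsd is just a linear head and its output is unbounded either way, so maybe the choice of input mostly affects the scale the head sees at initialization? I'd like to confirm whether that's the intent.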

I have combined the VAE layers with convolution and deconvolution layers, and am having trouble training on MNIST with this new architecture (using Sigmoid neurons instead of ReLU, if that matters).
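For context, the kind of architecture I mean looks roughly like this PyTorch-style sketch (not my actual Caffe prototxt; all layer sizes and names are illustrative placeholders):

```python
# Rough sketch of a conv/deconv VAE for 28x28 MNIST with Sigmoid activations.
# Layer sizes are illustrative only; the mu / logsd heads both read the same
# post-nonlinearity feature blob, mirroring the question above.
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1),   # 28x28 -> 14x14
            nn.Sigmoid(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 14x14 -> 7x7
            nn.Sigmoid(),
            nn.Flatten(),
        )
        self.mu = nn.Linear(32 * 7 * 7, latent_dim)
        self.logsd = nn.Linear(32 * 7 * 7, latent_dim)
        self.decoder_fc = nn.Linear(latent_dim, 32 * 7 * 7)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.Sigmoid(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)                    # post-nonlinearity features
        mu, logsd = self.mu(h), self.logsd(h)  # both heads read the same blob
        z = mu + torch.exp(logsd) * torch.randn_like(mu)  # reparameterization
        out = self.decoder(self.decoder_fc(z).view(-1, 32, 7, 7))
        return out, mu, logsd

x = torch.rand(4, 1, 28, 28)
recon, mu, logsd = ConvVAE()(x)
```

My understanding is that Sigmoid units saturate more easily than ReLU, so perhaps that alone explains the slow convergence I'm seeing, but I'm not sure.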
