
Reproducing the numbers #13

Open
Laubeee opened this issue Dec 26, 2023 · 1 comment
Comments
Laubeee commented Dec 26, 2023

Hi, I am interested in reproducing the numbers you reported on NSynth. With the models from HuggingFace I get close, but not quite to what you report (0.4–0.8 lower for the models I tried: 330M, 95M-public, and data2vec). May I ask, did you use the settings in the MARBLE-Benchmark repository to achieve these numbers? (i.e. train one hidden layer of 512 units and 128 outputs, for at most 50 epochs with early stopping and LR reduction, batch size 64, 5 runs with different LRs)
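For reference, the probe described above (a single hidden layer of 512 units feeding 128 outputs, evaluated on batches of 64) can be sketched roughly as below. Only the layer sizes and batch size come from the comment; the input dimension `D_IN = 768` and all variable names are assumptions for illustration, not the MARBLE-Benchmark code itself:

```python
import numpy as np

D_IN = 768    # assumed embedding size; depends on the upstream model
HIDDEN = 512  # one hidden layer of 512 units (from the comment)
N_OUT = 128   # 128 outputs (from the comment)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.02, (D_IN, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.02, (HIDDEN, N_OUT))
b2 = np.zeros(N_OUT)

def probe(x):
    """Forward pass of the one-hidden-layer probe: ReLU then softmax."""
    h = np.maximum(x @ W1 + b1, 0.0)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

batch = rng.normal(size=(64, D_IN))  # batch size 64, as in the comment
probs = probe(batch)
print(probs.shape)  # (64, 128)
```

The remaining settings in the comment (max 50 epochs, early stopping, LR reduction, 5 runs with different learning rates) are training-loop details on top of this head, not shown here.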

@annabeth97c

Hello! I have also been struggling to replicate the performance reported on the MARBLE Benchmark, but on the MTG dataset tasks (Mood, Genre, and Instrument). I also tried to use the same setup as the MARBLE repository, which is very similar to the one described by @Laubeee.
