Conversation
Hello @levje, thank you for updating! There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2024-11-08 16:45:44 UTC
@EmmaRenauld, if you have an idea on how I could implement commit f0973ff differently, please let me know. It's a bit thrown together.
After a few rounds of cleanup, and after our discussion this Thursday about the utility of visualizing the evolution of the latent space, I stripped the code down to only plotting the latent space of the best epoch. Each time a new best epoch is reached, the latent space is plotted and saved. This simplifies the code a lot and makes it more reusable. To plot when a new best epoch is found, I figured it would be a lot cleaner to have a function that can be called from within the BestEpochMonitor, which is the newest addition among the modifications we talked about. It should also now work when no data_per_streamline is specified in the HDF5 file (there will only be one color), as well as when the bundle_index is not specified. @arnaudbore, from what I tested, Fibercup should be working fine now; let me know otherwise. Finally, to make it clear, I modified the structure of the HDF5 file to have a group '<subject-id>/target/data_per_streamline/bundle_index'. Each entry/dataset within the data_per_streamline group in the HDF5 file will be loaded into a dictionary as a numpy array and included in the returned sft.
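For illustration, the HDF5 layout described above could be read back with a small h5py helper. This is a hedged sketch only: `load_data_per_streamline` is a hypothetical name, not the actual dwi_ml loader.

```python
import h5py
import numpy as np

def load_data_per_streamline(hdf5_path, subject_id):
    """Load every dataset under '<subject_id>/target/data_per_streamline'
    into a dict of numpy arrays (e.g. {'bundle_index': array([...])}).

    Hypothetical helper illustrating the layout described above; the
    name and signature are not the actual dwi_ml API.
    """
    dps = {}
    with h5py.File(hdf5_path, "r") as f:
        group = f[subject_id]["target"]["data_per_streamline"]
        for name in group:
            # Reading with [()] materializes the dataset as a numpy array.
            dps[name] = np.asarray(group[name][()])
    return dps
```

Any dataset added under the data_per_streamline group (not just bundle_index) would then come back as one key in the returned dict, which is what lets the visualizer color by an arbitrary key.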
arnaudbore left a comment:
Not a fan of having the color class inside the latent space one; also, looking at the output, I feel the colors could be better chosen. Apart from that, LGTM!
[WIP] Add Francois code to use more diverse colors
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs, but you will still be able to re-open it. Thank you for your contributions.
Description
It was asked in #220 to be able to visualize the latent space, w.r.t. FINTA from Legarreta et al. (2021). As in the original paper, we project the latent space coming out of the auto-encoder into 2D using t-SNE, which keeps similar streamlines close together and dissimilar streamlines far apart.
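The projection step can be sketched with scikit-learn's t-SNE (an assumption for illustration; the latent vectors and array shapes below are stand-ins, not the actual model output):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for latent vectors from the auto-encoder's bottleneck
# (hypothetical shapes; the real vectors come from the trained model).
rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 32))  # 200 streamlines, 32-d latent space

# Project to 2D; t-SNE preserves local neighborhoods, so similar
# streamlines land close together and dissimilar ones far apart.
proj = TSNE(n_components=2, perplexity=30.0,
            random_state=0).fit_transform(latents)
print(proj.shape)  # (200, 2)
```

The 2D coordinates can then be scattered with one color per bundle, which is what the visualizer does with the dps_bundle_index key.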
The class latent_streamlines.py:BundlesLatentSpaceVisualizer is the bulk of the changes and was written so it can potentially be reused for other data that needs to be projected and plotted in 2D. Each time we reach an epoch whose loss is the lowest encountered so far, we plot the latent space of that epoch.
(A future PR adding hooks throughout the trainer/models, in a fashion similar to LightningAI or PyTorch.nn.Module, would add more flexibility to the library, in my opinion!)
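The "plot when a new best epoch is found" mechanism can be sketched as a monitor with registered callbacks. This is a hypothetical sketch of the idea, not the actual dwi_ml BestEpochMonitor:

```python
from typing import Callable, List, Optional

class BestEpochMonitor:
    """Tracks the lowest loss seen so far and fires registered callbacks
    whenever a new best epoch is found.

    Hypothetical sketch of the callback idea; names and signatures are
    not the actual dwi_ml implementation.
    """
    def __init__(self):
        self.best_loss: Optional[float] = None
        self.best_epoch: Optional[int] = None
        self._on_new_best: List[Callable[[int, float], None]] = []

    def register(self, hook: Callable[[int, float], None]) -> None:
        """Register a callback taking (epoch, loss)."""
        self._on_new_best.append(hook)

    def update(self, epoch: int, loss: float) -> bool:
        """Return True (and fire the hooks) if this epoch is a new best."""
        if self.best_loss is None or loss < self.best_loss:
            self.best_loss, self.best_epoch = loss, epoch
            for hook in self._on_new_best:
                hook(epoch, loss)  # e.g. plot and save the latent space
            return True
        return False
```

Registering the latent-space plotting function as one of these hooks keeps the visualization logic out of the training loop itself.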
Scripts:
- `--viz_latent_space`

Testing data and script:

```bash
ae_train_model.py \
    $experiments \
    $experiment_name \
    $o_hdf5 \
    target \
    -v INFO \
    --batch_size_training 1200 \
    --batch_size_units nb_streamlines \
    --nb_subjects_per_batch 5 \
    --learning_rate 0.001 \
    --weight_decay 0.13 \
    --optimizer Adam \
    --max_epochs 1000 \
    --max_batches_per_epoch_training 20 \
    --comet_workspace <comet_workspace> \
    --comet_project dwi_ml-ae-fibercup \
    --patience 100 \
    --viz_latent_space \
    --color_by 'dps_bundle_index' \
    --bundles_mapping <file with a mapping to bundles>
```
People this PR concerns
@arnaudbore @AntoineTheb