Could you provide a little more detailed explanation of the model's outputs? I'm not sure if I got everything right :(
- content_input*.nii.gz: the 'content' encoding of an image, not related to classification
- gen_clc*.nii.gz / gen_recon*.nii.gz: the image reconstructed in its own class (?) — these should be very similar to the input, e.g. healthy image -> reconstructed healthy image, AD image -> reconstructed AD image
- gen_fake*.nii.gz / gen_random*.nii.gz: the image translated to the other class, but using a different translation strategy
- gen_diff*.nii.gz: the feature attribution maps
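For context, here is how I am currently sanity-checking the headers of these .nii.gz volumes, in case I am misreading them. This is just a minimal stdlib sketch — the file name and the synthetic NIfTI-1 header it writes are made up for illustration; in practice `path` would point at one of the real outputs such as a gen_diff*.nii.gz file:

```python
import gzip
import os
import struct
import tempfile

# Write a minimal synthetic NIfTI-1 header (348 bytes) so this snippet is
# self-contained; the dimensions below (64x64x32, float32) are illustrative.
hdr = bytearray(348)
struct.pack_into("<i", hdr, 0, 348)                          # sizeof_hdr
struct.pack_into("<8h", hdr, 40, 3, 64, 64, 32, 1, 1, 1, 1)  # dim: 3-D volume
struct.pack_into("<h", hdr, 70, 16)                          # datatype: float32
struct.pack_into("<h", hdr, 72, 32)                          # bitpix
hdr[344:348] = b"n+1\x00"                                    # NIfTI-1 magic

path = os.path.join(tempfile.mkdtemp(), "gen_diff_example.nii.gz")
with gzip.open(path, "wb") as f:
    f.write(bytes(hdr))

# Parse the header back: enough to check what shape/type each output volume is.
with gzip.open(path, "rb") as f:
    raw = f.read(348)
size, = struct.unpack_from("<i", raw, 0)
dim = struct.unpack_from("<8h", raw, 40)
magic = raw[344:348]
shape = tuple(dim[1:1 + dim[0]])
print(size, shape, magic)  # 348 (64, 64, 32) b'n+1\x00'
```

(In my real pipeline I load the volumes with a NIfTI library rather than parsing the header by hand; this is only to show what I am comparing across the output files.)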
Another question: suppose I have a patient's image and I want to see how ICAM interprets it. It seems that I also have to give ICAM a healthy image so that it can perform the translation — is that right? Is it possible to give ICAM only the patient's image, without a control image, during testing/prediction and still get the feature map? This matters for my use case because some patients in my scenario are still cognitively normal but have subjective memory complaints, and I want to see whether they truly show imaging alterations compared to healthy volunteers.
Thanks!