
Commit 4fb6de4

Added explanatory notebook and other fixes
1 parent 823ec84 commit 4fb6de4

37 files changed: +1876 −46 lines
Documents/Elastic.Deformation.pdf (255 KB, binary file not shown)

Documents/Mac AI update.pptx (676 KB, binary file not shown)

Documents/dimensionality_reduction.md (+35)

@@ -0,0 +1,35 @@
## Dealing with massive image files

### If a single image can fit into GPU memory

- Use distributed processing to load one image on each GPU across multiple GPUs (TensorFlow supports this, at least). [link](https://www.tensorflow.org/guide/distributed_training)
- Fit an autoencoder and train on its internal representation.
  - Potentially interesting if a single image modality fits, but not all 4 at once.
  - I tried this before and it didn't take that long even with batch size = 1.
- Use early strided convolution layers to reduce dimensionality, as used in U-Net. [link](https://arxiv.org/abs/1505.04597)
- Image fusion
  - Principal component analysis (this also works for image compression if you do it differently).
  - Frequency-domain image fusion, such as various shearlet transforms (I don't understand these, but here's a paper: [link](https://journals.sagepub.com/doi/full/10.1177/1748301817741001)).
  - I guess you could probably also use an autoencoder for this.
  - This should reduce our 4-channel image (4 neuroimaging types) to fewer channels containing the same information.
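As a rough illustration of the strided-convolution idea above, here is a minimal NumPy sketch of a single stride-2 "valid" 3D convolution shrinking a volume. The volume size, averaging kernel, and stride are illustrative assumptions; a real network would learn the kernels in a framework like TensorFlow or PyTorch.

```python
import numpy as np

def strided_conv3d(volume, kernel, stride=2):
    """Single-channel 3D 'valid' convolution with a stride.
    Each output voxel summarizes one kernel-sized patch, so each
    spatial dimension shrinks by roughly the stride factor."""
    k = kernel.shape[0]
    out_shape = tuple((s - k) // stride + 1 for s in volume.shape)
    out = np.zeros(out_shape)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            for l in range(out_shape[2]):
                patch = volume[i * stride:i * stride + k,
                               j * stride:j * stride + k,
                               l * stride:l * stride + k]
                out[i, j, l] = np.sum(patch * kernel)
    return out

volume = np.random.rand(16, 16, 16)   # toy stand-in for one modality
kernel = np.ones((3, 3, 3)) / 27      # averaging kernel, for illustration
small = strided_conv3d(volume, kernel)
print(small.shape)  # (7, 7, 7)
```

Each stride-2 layer cuts the voxel count by roughly a factor of 8, which is why putting such layers early keeps the rest of the network cheap.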
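The PCA fusion idea above can be sketched in NumPy: treat each voxel as a 4-dimensional sample (one value per modality) and project onto the leading eigenvectors of the channel covariance. The random data, array shapes, and `n_keep=2` choice are assumptions of this sketch, not values from any of our pipelines.

```python
import numpy as np

def pca_fuse_channels(volume, n_keep=2):
    """Project a (C, X, Y, Z) multi-channel volume onto its n_keep
    leading channel-space principal components."""
    c = volume.shape[0]
    flat = volume.reshape(c, -1)                    # (C, n_voxels)
    flat = flat - flat.mean(axis=1, keepdims=True)  # center each channel
    cov = flat @ flat.T / flat.shape[1]             # (C, C) channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    top = eigvecs[:, ::-1][:, :n_keep]              # leading directions
    fused = top.T @ flat                            # (n_keep, n_voxels)
    return fused.reshape((n_keep,) + volume.shape[1:])

rng = np.random.default_rng(0)
multimodal = rng.random((4, 8, 8, 8))  # 4 modalities, toy 8^3 volume
fused = pca_fuse_channels(multimodal, n_keep=2)
print(fused.shape)  # (2, 8, 8, 8)
```

The fused channels come out ordered by explained variance, so dropping the trailing ones loses the least information in the least-squares sense.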
### Works even if a single image can't fit into GPU memory

- Cropping
  - This probably works better if the images are registered to approximately the same space.
- Slicing ([Cameron's review covers some of these](https://www.sciencedirect.com/science/article/pii/S187705091632587X))
  - Use 2-dimensional slices of the 3D image, each of which definitely fits in memory.
  - (Probably) train a model for each modality separately, then average them or combine them with a less GPU-intensive model.
  - (Probably) split the image into smaller 3D patches for segmentation.
- Downsampling: [this paper](https://nvlpubs.nist.gov/nistpubs/ir/2013/NIST.IR.7839.pdf) is not about neuroimaging at all, but may still offer some insights.
  - Spectral truncation
    - Compute the fast Fourier transform, reduce the sampling rate, then compute the inverse FFT.
    - I'm adding the wavelet transform here for similar reasons.
  - Average pooling (take the average of each 2x2x2 block of voxels).
  - Max pooling (take the maximum of each 2x2x2 block of voxels).
  - Decimation, or Gaussian blur followed by decimation (take every other line).
- Use a convolutional neural network that works on spectrally compressed images. [link](https://www.sciencedirect.com/science/article/abs/pii/S0925231219310148)
  - Probably really stupid:
    - Compute the FFT, a discrete cosine transform, or similar.
    - Clip the spectrum to get rid of irrelevant high-frequency noise.
    - Use a spectral convolutional neural network to compute everything in the frequency domain.
    - Transform back to the image domain.
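The spectral-truncation step (FFT → reduce sampling rate → inverse FFT) can be sketched with NumPy's FFT routines: crop the centered spectrum to discard the high frequencies and invert on the smaller grid. The even, factor-divisible shapes and the amplitude rescaling are assumptions of this sketch.

```python
import numpy as np

def fft_downsample(volume, factor=2):
    """Downsample a 3D volume by keeping only the central (low-frequency)
    block of its shifted Fourier spectrum, then inverting the FFT."""
    spec = np.fft.fftshift(np.fft.fftn(volume))          # DC at the center
    new_shape = tuple(s // factor for s in volume.shape)
    starts = [(s - n) // 2 for s, n in zip(volume.shape, new_shape)]
    crop = tuple(slice(st, st + n) for st, n in zip(starts, new_shape))
    out = np.fft.ifftn(np.fft.ifftshift(spec[crop])).real
    return out / factor**3  # rescale amplitudes for the smaller grid

vol = np.ones((8, 8, 8))       # constant toy volume: pure DC component
small = fft_downsample(vol)
print(small.shape)  # (4, 4, 4)
```

A constant volume survives exactly (it lives entirely in the DC bin); real images lose only the clipped high-frequency detail, which is the point of the truncation.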
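Average and max pooling over 2x2x2 voxel blocks reduce to a reshape plus a reduction in NumPy. This sketch assumes non-overlapping blocks and drops any ragged edge when a dimension isn't divisible by the pool size.

```python
import numpy as np

def pool3d(volume, k=2, reduce=np.mean):
    """Pool non-overlapping k x k x k voxel blocks with the given
    reduction (np.mean for average pooling, np.max for max pooling)."""
    x, y, z = (s // k for s in volume.shape)
    v = volume[:x * k, :y * k, :z * k]   # drop any ragged edge
    blocks = v.reshape(x, k, y, k, z, k) # one axis per block dimension
    return reduce(blocks, axis=(1, 3, 5))

vol = np.arange(64, dtype=float).reshape(4, 4, 4)
avg = pool3d(vol, reduce=np.mean)  # average pooling
mx = pool3d(vol, reduce=np.max)    # max pooling
print(avg.shape, mx.shape)  # (2, 2, 2) (2, 2, 2)
```

Each application shrinks the volume by a factor of k³ (8 for k=2); average pooling behaves like a crude low-pass filter, while max pooling keeps the brightest voxel per block.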

Documents/robex_tutorial.docx (42.9 KB, binary file not shown)

Examples/training_example.py (+5 −3)

@@ -5,7 +5,9 @@
 training_transform = tio.Compose([
     tio.ZNormalization(masking_method=tio.ZNormalization.mean),
-    tio.RandomBiasField(p=0.5),
+    tio.RandomNoise(p=0.5),
+    tio.RandomGamma(log_gamma=(-0.3, 0.3)),
+    tio.RandomElasticDeformation(),
     tio.CropOrPad((240, 240, 160)),
     tio.OneHot(num_classes=5),
@@ -20,7 +22,7 @@
 run_training(
     input_data_path = '../brats_new/BraTS2020_TrainingData/MICCAI_BraTS2020_TrainingData',
-    output_model_path = './Models/test_train_randbias_1e-3.pt',
+    output_model_path = './Models/test_train_many_1e-3.pt',
     training_transform = training_transform,
     validation_transform = validation_transform,
     max_epochs=10,
@@ -34,6 +36,6 @@
     precision=16,
     wandb_logging = True,
     wandb_project_name = 'macai',
-    wandb_run_name = 'randbias_1e-3',
+    wandb_run_name = 'many_1e-3',
 )
File renamed without changes.
File renamed without changes.

Notebooks/old/all_zip.ipynb (+507)

Large diffs are not rendered by default.
