Scale and voxel settings used per dataset:

```
# Parse2022:
None
# PipeForge3D - OBJ:
mesh_scale = 0.25
voxel_size = 1.0
# PipeForge3D - PCD:
points_scale = 0.25
voxel_size = 1.0
# Hospital CUP:
points_scale = 25.0
voxel_size = 1.0
```
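The scale values above are applied before voxelization. A minimal sketch of that step, assuming a NumPy point cloud (the `voxelize` function and its exact quantization are my illustration, not the repo's actual code):

```python
import numpy as np

def voxelize(points: np.ndarray, points_scale: float, voxel_size: float) -> np.ndarray:
    """Scale a point cloud, then snap each point to a voxel grid index.

    points: (N, 3) float array of XYZ coordinates.
    Returns the unique (M, 3) integer indices of occupied voxels.
    """
    scaled = points * points_scale                       # e.g. 0.25 for PipeForge3D PCD
    indices = np.floor(scaled / voxel_size).astype(int)  # quantize to the voxel grid
    return np.unique(indices, axis=0)                    # one entry per occupied voxel

# Two nearby points collapse into the same 1.0-sized voxel after 0.25x scaling:
pts = np.array([[0.0, 0.0, 0.0], [3.9, 0.0, 0.0], [8.0, 0.0, 0.0]])
occupied = voxelize(pts, points_scale=0.25, voxel_size=1.0)
```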
This repository contains all the scripts used to obtain the results reported in the paper:
*Learning Thin Structure Reconstruction from Sliding-Box 2D Projections*.
See `manual_requirements.txt` for full setup guidance.
In case you have a dataset of 3D input files with holes and matching 3D ground truth files
(for example, we used the Parse2022 data: SOTA model results as our input data, and the dataset labels as our ground truth data):

- Put the `parse2022` dataset with the `labels` and `preds` data on the path: `./data/parse2022`
  - `labels` - the 3D ground truth
  - `preds` - the 3D input
- Create a config file in `.yaml` format and put it in the `configs` folder (see example: `parse2022_SC_32.yaml`).
- Build the `dataset_2d` (and also `dataset_1d`) by running the `dataset_2d_creator` script: `dataset_2d_creator.py`
- Build the `dataset_3d` by running the `dataset_3d_creator` script: `dataset_3d_creator.py`
In case you have only a dataset of 3D files and you want to create random holes in them
(for example, we used our own PipeForge3D dataset generator to create 3D models and applied random holes to them):

- Put the `PipeForge3D` dataset with the `originals` data on the path: `./data/PipeForge3D`
  - `originals` - the raw data, before voxelization
- Run either of the following scripts, depending on your data type:
  - `generate_3d_preds_from_mesh.py` - for mesh-like data types: `_mesh.ply`, `.obj`, `.nii.gz` (annotations)
  - `generate_3d_preds_from_pcd.py` - for point-cloud-like data types: `_pcd.ply`, `.pcd`
- Repeat the steps in "How to create the dataset".
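The "random holes" idea behind the `generate_3d_preds_from_*` scripts can be sketched in pure NumPy (the hole shape, count, and radius here are illustrative assumptions, not the repo's actual parameters):

```python
import numpy as np

def punch_random_holes(volume: np.ndarray, num_holes: int, radius: int, seed: int = 0) -> np.ndarray:
    """Zero out random spherical regions of a binary occupancy grid.

    volume: (D, H, W) array with values in {0, 1}.
    Returns a copy with `num_holes` spherical holes of the given radius.
    """
    rng = np.random.default_rng(seed)
    holed = volume.copy()
    occupied = np.argwhere(volume > 0)        # candidate hole centers
    zz, yy, xx = np.indices(volume.shape)
    centers = occupied[rng.choice(len(occupied), size=num_holes, replace=False)]
    for c in centers:
        dist2 = (zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2
        holed[dist2 <= radius**2] = 0         # carve a spherical hole
    return holed

solid = np.ones((16, 16, 16), dtype=np.uint8)  # toy solid volume
holey = punch_random_holes(solid, num_holes=3, radius=2)
```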
There are 3 types of models supported in this repo:

- `1D Model` - predicts a binary `True/False` label indicating whether the given input contains interesting holes. Its purpose is to filter the samples that will be sent to the `2D Model`.
- `2D Model` - detects and fills holes in the given 2D orthographic depth projections.
- `3D Model` - detects and fills occluded holes that could not be detected by the `2D Model`.
- To train the `model_1d`, run the `main_1d` script: `main_1d.py` (currently `NOT USED` in the full pipeline)
- To train the `model_2d`, run the `main_2d` script: `main_2d.py`
- To train the `model_3d`, run the `main_3d` script: `main_3d.py` (currently `DISABLED` in the full pipeline)

Notice: all of these scripts call the generic `main_base.py` script, which dispatches to the matching training script based on the `model_type` parameter set in the relevant main script.
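The `model_type` dispatch can be pictured roughly like this (the function and dict names are illustrative assumptions; see `main_base.py` for the real logic):

```python
# Illustrative sketch of a model_type dispatch, not the repo's actual code.
def train_model_1d() -> str: return "training 1D model"
def train_model_2d() -> str: return "training 2D model"
def train_model_3d() -> str: return "training 3D model"

TRAINERS = {
    "model_1d": train_model_1d,
    "model_2d": train_model_2d,
    "model_3d": train_model_3d,
}

def main_base(model_type: str) -> str:
    """Pick the matching training routine based on model_type."""
    return TRAINERS[model_type]()

result = main_base("model_2d")
```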
- Run the `predict_pipeline` script: `predict_pipeline.py`
- [TBD] Run the `online_pipeline` script: `online_pipeline.py`
- parse2022 `labels`, `preds` -> Values: binary {0, 1}, Dim: 3
- parse2022 `preds_components` -> Values: grayscale (0-255), Dim: 3
- cropped 2d `labels`, `preds` -> Values: grayscale (0-255), Dim: 2
- cropped 2d `components` -> Values: RGB (0-255, 0-255, 0-255), Dim: 2
- cropped 3d `labels`, `preds` -> Values: binary {0, 1}, Dim: 3
- cropped 3d `components` -> Values: grayscale (0-255), Dim: 3
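These conventions can be checked with a small validator (a sketch; the `check_format` helper is mine, and "Dim" is read as the number of spatial dimensions, with RGB keeping a trailing channel axis):

```python
import numpy as np

def check_format(arr: np.ndarray, values: str, spatial_dim: int) -> bool:
    """Verify an array matches one of the data formats listed above."""
    if values == "rgb":  # RGB 2D maps carry a trailing 3-channel axis
        return (arr.ndim == spatial_dim + 1 and arr.shape[-1] == 3
                and int(arr.min()) >= 0 and int(arr.max()) <= 255)
    if arr.ndim != spatial_dim:
        return False
    if values == "binary":       # {0, 1} label / pred volumes
        return set(np.unique(arr).tolist()) <= {0, 1}
    if values == "grayscale":    # 0-255 single-channel data
        return int(arr.min()) >= 0 and int(arr.max()) <= 255
    raise ValueError(f"unknown value type: {values}")

labels_3d = np.zeros((32, 32, 32), dtype=np.uint8)
labels_3d[10:20, 10:20, 10:20] = 1
ok = check_format(labels_3d, "binary", spatial_dim=3)
```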
Given the `3d ground truth` and the `3d predicted labels`:

- (Main Core Flow) Option 1 flow:
  - Crop and project both the `3d ground truth` and the `3d predicted labels`:
    - 6 views of `2d ground truth`
    - 6 views of `2d predicted labels`
  - Use `model1` as follows (2D to 2D):
    - Train with the 6 `2d predicted labels` to repair and get the 6 `2d ground truth`
    - Predict with the 6 `2d predicted labels` to get the 6 `2d fixed labels`
  - Use `model2` as follows (6-2D to 3D):
    - Train with the 6 `2d ground truth` to reconstruct and get the `3d ground truth`
    - Predict with the 6 `2d fixed labels` to get the `3d fixed labels`
  - Use all the `3d fixed labels` to fix the `3d predicted labels`
- (Secondary Core Flow) Option 2 flow:
  - Crop and project both the `3d ground truth` and the `3d predicted labels` to `cropped`:
    - 6 views of `2d ground truth`
    - 6 views of `2d predicted labels`
  - Use `model1` as follows (2D to 2D):
    - Train with the 6 `2d predicted labels` to repair and get the 6 `2d ground truth`
    - Predict with the 6 `2d predicted labels` to get the 6 `2d fixed labels`
    - 2 approaches are available with the same data:
      - Option 1: work with batches of 1 view (size: `(b, 1, w, h)`)
      - Option 2: work with batches of 6 views (size: `(b, 6, 1, w, h)`)
  - Perform direct projection (using `logical or`) for all the data:
    - Option 1:
      - From the `2d ground truth`, get the `pre-3d ground truth`
      - From the `2d predicted labels`, get the `pre-3d predicted labels`
      - From the `2d fixed labels`, get the `pre-3d fixed labels`
    - Option 2:
      - From the `2d ground truth`, get the `pre-3d ground truth`
      - From the `2d predicted labels`, get the `pre-3d predicted labels`
      - From the `2d fixed labels`, get the `pre-3d fixed labels`
      - Merge the `3d predicted labels` and the `pre-3d ground truth` to get the `fused pre-3d ground truth`
      - Merge the `3d predicted labels` and the `pre-3d fixed labels` to get the `fused pre-3d fixed labels`
  - Use `model2` as follows (3D to 3D):
    - Option 1 (fill the whole internal space):
      - Train with the `pre-3d ground truth` to reconstruct and get the `3d ground truth`
      - Predict with the `pre-3d fixed labels` to get the `3d fixed labels`
    - Option 2 (fill only the predicted labels):
      - Train with the `fused pre-3d ground truth` to reconstruct and get the `3d ground truth`
      - Predict with the `fused pre-3d fixed labels` to get the `3d fixed labels`
  - Use all the `3d fixed labels` to fix the `3d predicted labels`
- (Direct Repair Flow) Option 3 flow:
  - Crop both the `3d ground truth` and the `3d predicted labels`:
    - `cropped 3d ground truth`
    - `cropped 3d predicted labels`
  - Use the `model` as follows (3D to 3D):
    - Train with the `cropped 3d predicted labels` to reconstruct and get the `cropped 3d ground truth`
    - Predict with the `cropped 3d predicted labels` to get the `cropped 3d fixed labels`
  - Use all the `cropped 3d fixed labels` to fix the `3d predicted labels`
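The projection and `logical or` back-projection steps shared by the flows above can be sketched with NumPy. This is a simplified binary-silhouette version over the 3 grid axes (the actual pipeline uses 6 orthographic depth views; function names and axis conventions here are my assumptions):

```python
import numpy as np

def project_views(vol: np.ndarray) -> list[np.ndarray]:
    """Binary silhouette projection of a (D, H, W) occupancy grid along each axis."""
    return [vol.any(axis=a) for a in range(3)]

def backproject_or(views: list[np.ndarray], shape: tuple[int, int, int]) -> np.ndarray:
    """Extrude each 2D view back through the volume and combine with logical or."""
    pre_3d = np.zeros(shape, dtype=bool)
    for axis, view in enumerate(views):
        # expand_dims re-inserts the projected axis; |= broadcasts the extrusion
        pre_3d |= np.expand_dims(view, axis=axis)
    return pre_3d

vol = np.zeros((8, 8, 8), dtype=bool)
vol[3:5, 3:5, 3:5] = True                 # a small occupied cube
views = project_views(vol)
pre_3d = backproject_or(views, vol.shape)
# The original occupancy is always contained in the or-extrusion:
contained = bool(((pre_3d | vol) == pre_3d).all())
```

Note that combining extrusions with `logical or` (as the flows above specify) produces a superset of the original volume, which `model2` then carves back down to the true shape.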
1. Use model 1 on the `parse_preds_mini_cropped`
2. Save the results in `parse_fixed_mini_cropped`
3. Perform a direct `logical or` on `parse_fixed_mini_cropped` to get `parse_prefixed_mini_cropped_3d`
4. Use model 2 on the `parse_prefixed_mini_cropped_3d`
5. Save the results in `parse_fixed_mini_cropped_3d`
6. Run steps 1-5 for all mini cubes and combine the results to get the final result
7. Perform cleanup on the final result (delete small connected components)
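The final cleanup step (deleting small connected components) can be sketched with `scipy.ndimage` (the `min_size` threshold and helper name are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def remove_small_components(vol: np.ndarray, min_size: int) -> np.ndarray:
    """Keep only 3D connected components with at least `min_size` voxels."""
    labeled, num = ndimage.label(vol)  # default: 6-connectivity in 3D
    sizes = ndimage.sum(vol, labeled, index=range(1, num + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
    return np.isin(labeled, keep).astype(vol.dtype)

vol = np.zeros((16, 16, 16), dtype=np.uint8)
vol[2:10, 2:10, 2:10] = 1   # large component: 512 voxels
vol[14, 14, 14] = 1         # tiny speck: 1 voxel, removed by cleanup
clean = remove_small_components(vol, min_size=10)
```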
https://github.com/dariocazzani/pytorch-AE
https://github.com/crowsonkb/vgg_loss/tree/master
https://3dthis.com/photocube.htm
See the following question on Stack Overflow.
https://github.com/networkx/grave
https://github.com/deyuan/random-graph-generator
https://github.com/mlimbuu/random-graph-generator