Conversation
manya0033 left a comment
Hey Kavita, I've gone through both notebooks and the PR. The work itself is really solid: both notebooks are well-structured with clear markdown explanations at each step, the code is clean, and the segmentation-to-classification fallback logic is a nice design choice. A few things need fixing before I can approve, though, based on the PR checklist:
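For anyone skimming this review, the fallback design choice mentioned above could look roughly like the sketch below. This is a hypothetical illustration (the function name, `masks` directory layout, and return values are my assumptions, not the notebook's actual code):

```python
from pathlib import Path

def choose_task(data_dir):
    """Hypothetical sketch of the fallback: run segmentation when
    ground-truth masks are available, otherwise fall back to binary
    crack/no-crack classification."""
    masks = Path(data_dir) / "masks"  # assumed mask directory layout
    if masks.is_dir() and any(masks.iterdir()):
        return "segmentation"
    return "classification"
```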
- PR title format - The title needs to include your team's name, the project name matching your Trello card, and the completion percentage.
- PR source - This is coming from kavita57:master (your personal fork) rather than a branch on the company repo. PRs should come from a dedicated branch created after cloning the Chameleon-company repository (e.g. Chameleon-company:kavita_crack_detection). Could you redo this from a branch on the company repo?
- Australian English - The first notebook (01_crack_segmentation_benchmark) has a few American English spellings that need updating: visualization → visualisation, optimize → optimise. The second notebook is fine.
- Dataset access - Both notebooks load data from local directories (./dataset, ./data/crack_segmentation). The checklist requires datasets to be accessed via API v2.1.
- Model weight files - yolo11n-cls.pt and yolo26n.pt are committed to the repo. Binary model weights are large files that bloat the repository. These should be excluded (add them to .gitignore) and either downloaded at runtime or documented so others know where to get them.
- Reviewer turnaround - I noticed the review was self-requested and approved within 29 minutes of the PR being opened. With 6,904 lines across two ML notebooks, it might be worth having the second reviewer do a more thorough pass to catch things like the points above.
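On the PR source point, the workflow could look something like this (the repository name is a placeholder; adjust the URL and branch name to match your Trello card):

```shell
# clone the company repo, not your personal fork
git clone https://github.com/Chameleon-company/<repo-name>.git
cd <repo-name>
git checkout -b kavita_crack_detection   # dedicated feature branch
# bring the notebook changes over, then:
git add Playground/project_6a
git commit -m "Add crack detection notebooks"
git push -u origin kavita_crack_detection
# open the PR from Chameleon-company:kavita_crack_detection
```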
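For the weight files, one option is to ignore the checkpoints and fetch them at runtime (Ultralytics typically downloads its official pretrained .pt checkpoints automatically the first time they are requested, so nothing needs to be committed). A possible .gitignore addition:

```gitignore
# Model weights are large binaries; fetch at runtime instead of committing
*.pt
```

If any weights are custom-trained rather than official checkpoints, documenting a download location in the README would cover the second half of the checklist item.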
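For the Australian English item, a small script can scan a notebook's markdown cells for the flagged spellings before pushing. This is just an illustration (the word list and function name are my own, and it only checks whole-word prefixes):

```python
import json
import re

# American spellings flagged in this review, mapped to Australian forms
AMERICAN_TO_AU = {"visualization": "visualisation", "optimize": "optimise"}

def find_american_spellings(notebook_path):
    """Return (cell_index, word) pairs for American spellings
    found in the notebook's markdown cells."""
    with open(notebook_path) as f:
        nb = json.load(f)
    hits = []
    for i, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "markdown":
            continue  # only prose cells; code identifiers are left alone
        text = "".join(cell.get("source", []))
        for word in AMERICAN_TO_AU:
            if re.search(rf"\b{word}\w*", text, re.IGNORECASE):
                hits.append((i, word))
    return hits
```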
The notebooks themselves are genuinely well-built. Once the checklist items are addressed this should be good to go. Tag me when you push the updates.
opened the thread on #1783
Summary
This PR captures the notebook-based crack-detection work in
Playground/project_6a. The project now documents two workflows: a crack segmentation benchmark that falls back to binary classification when masks are unavailable, and a YOLO11-based cracked-vs-uncracked classification pipeline built on SDNET2018.

What changed
The classification benchmark covers resnet18 and efficientnet_b0, with outputs preserved under artifacts/ and runs/.

User impact
This gives the team a reproducible training and evaluation workflow for crack detection on SDNET2018. It also preserves the benchmark outputs so results can be reviewed, compared, and rerun with the same project structure.
Testing
… best.pt file.