Add disc injection-moulding cooling simulation notebook#1279
Tuesdaythe13th wants to merge 2 commits into NVIDIA:main from
Conversation
Adds notebooks/disc_cooling_sim.ipynb with an Open-in-Colab badge, covering 2-D axisymmetric heat diffusion, Avrami crystallinity kinetics, warp-risk scoring, 2-D field visualisations, radial profile plots, and a mould-temperature parameter sweep. https://claude.ai/code/session_016zF8WWzQUxkQpC2hmiRkuB Signed-off-by: Claude <noreply@anthropic.com>
📝 Walkthrough

A new GPU-accelerated disc cooling simulation notebook built on Warp is introduced, featuring configurable geometry and material properties, explicit finite-difference heat diffusion in cylindrical coordinates, Avrami-style crystallisation updates, warp-risk scoring, and parameter-sweep visualisation capabilities.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User/Notebook
    participant Setup as Setup Phase
    participant GPU as GPU Memory
    participant Kernels as Warp Kernels
    participant Sim as ArtifexCoolingSim
    participant Viz as Visualization
    User->>Setup: Define DiscConfig, CoolingParams
    Setup->>GPU: Allocate temperature, crystallinity, warp_risk arrays
    GPU-->>Sim: Initialize arrays
    Sim->>Kernels: Call init_temperature
    Kernels->>GPU: Set initial T field
    Kernels->>GPU: Return initialized state
    loop Time-stepping loop
        Sim->>Kernels: Call step_temperature
        Kernels->>GPU: Compute heat diffusion (FD in cylindrical coords)
        Kernels->>GPU: Apply boundary conditions
        Kernels->>GPU: Update temperature field
        Sim->>Kernels: Call update_crystallinity
        Kernels->>GPU: Compute Avrami crystallisation update
        Kernels->>GPU: Update crystallinity field
        Sim->>Kernels: Call compute_warp_risk
        Kernels->>GPU: Score warp risk from gradients
        Kernels->>GPU: Update warp_risk field (mid-plane only)
    end
    Sim->>GPU: Fetch results (metrics, fields)
    GPU-->>Sim: Return results dictionary
    Sim->>Viz: Pass final fields and metrics
    Viz->>User: Display temperature, crystallinity, warp-risk plots and profiles
    User->>User: Execute parameter sweep (mould temp vs warp risk)
    User->>Viz: Generate bar plots and pass/fail analysis
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 1
🧹 Nitpick comments (5)
notebooks/disc_cooling_sim.ipynb (5)
638-660: Parameter sweep creates a new `ArtifexCoolingSim` instance per iteration. Each sweep iteration allocates new GPU arrays. For memory efficiency and speed, consider reusing the simulation instance and just re-initialising the arrays.
♻️ Suggested improvement

```diff
 T_mold_range = np.linspace(288, 353, 12)  # 15 °C – 80 °C in K
 sweep_results = []
+sim = ArtifexCoolingSim(config, device=DEVICE)  # Reuse instance
 for T_mold_K in T_mold_range:
     p = CoolingParams()
     # ... parameter setup ...
-    s = ArtifexCoolingSim(config, device=DEVICE)
-    res = s.simulate_cooling(p)
+    res = sim.simulate_cooling(p)
     sweep_results.append({...})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@notebooks/disc_cooling_sim.ipynb` around lines 638 - 660, The loop creates a new ArtifexCoolingSim on each T_mold_K which allocates GPU arrays each iteration; instead, instantiate one ArtifexCoolingSim before the loop and reuse it by reinitializing per-run state via CoolingParams (set p.T_mold, p.T_init, etc.) and calling simulate_cooling on the same simulator instance, ensuring any simulator-level buffers or CUDA arrays are reset or reallocated only when shape changes; update the code to move "s = ArtifexCoolingSim(config, device=DEVICE)" outside the for-loop and keep the existing per-iteration creation of CoolingParams and call s.simulate_cooling(p), reusing sweep_results as before.
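The reuse pattern can be sketched with a pure-Python stand-in (this `ArtifexCoolingSim` stub and its numbers are illustrative only; the real class allocates Warp arrays on the GPU in `__init__`):

```python
# Pure-Python stub illustrating the allocate-once, sweep-many pattern.
import numpy as np

class ArtifexCoolingSim:
    allocations = 0  # class-level counter standing in for GPU allocations

    def __init__(self):
        ArtifexCoolingSim.allocations += 1  # real class allocates GPU arrays here

    def simulate_cooling(self, T_mold_K):
        # Toy result: warp risk grows with mould temperature (illustrative)
        return {"T_mold": T_mold_K, "max_warp_risk": 0.05 * (T_mold_K - 288.0)}

sim = ArtifexCoolingSim()  # allocate once, outside the sweep loop
sweep_results = [sim.simulate_cooling(T) for T in np.linspace(288, 353, 12)]
print(ArtifexCoolingSim.allocations)  # 1 — a single allocation for the whole sweep
```

With twelve sweep points, moving construction out of the loop cuts GPU allocations from twelve to one while the per-iteration `CoolingParams` setup stays unchanged.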
686-689: Late import and fragile legend merging. The `Patch` import on line 686 should be at the top of the notebook with the other imports. The legend-handle concatenation assumes `ax.legend()` was previously called; consider building the complete legend handles list upfront.

📝 Suggested improvement

```diff
+# Move to imports cell (line ~78)
+from matplotlib.patches import Patch
+
 # In the sweep cell:
-# Legend patch
-from matplotlib.patches import Patch
 legend_els = [Patch(facecolor="green", label="Pass"),
               Patch(facecolor="red", label="Fail")]
-for ax in axes:
-    ax.legend(handles=ax.get_legend().legend_handles + legend_els)
+for ax in axes:
+    handles, labels = ax.get_legend_handles_labels()
+    ax.legend(handles=handles + legend_els)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@notebooks/disc_cooling_sim.ipynb` around lines 686 - 689, The Patch import and legend construction are fragile: move "from matplotlib.patches import Patch" into the notebook's top imports, then stop relying on ax.get_legend() being present; instead for each axis (axes) build the full handles list by collecting existing handles via ax.get_legend_handles_labels() (or ax.get_legend_handles_labels()[0]) and concatenating your legend_els = [Patch(facecolor="green", label="Pass"), Patch(facecolor="red", label="Fail")] before calling ax.legend(handles=full_handles) so the legend is created deterministically even if no prior legend call exists.
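A minimal, self-contained sketch of the deterministic legend pattern (the bar data here is invented; only the `Patch` and `get_legend_handles_labels` usage mirrors the suggestion):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for illustration
import matplotlib.pyplot as plt
from matplotlib.patches import Patch

fig, ax = plt.subplots()
ax.bar([0, 1], [1.0, 2.0], label="warp risk")  # illustrative data

legend_els = [Patch(facecolor="green", label="Pass"),
              Patch(facecolor="red", label="Fail")]
# Safe whether or not ax.legend() was called earlier:
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles=handles + legend_els)

labels_shown = [t.get_text() for t in ax.get_legend().get_texts()]
print(labels_shown)
```

Unlike `ax.get_legend()`, which returns `None` when no legend exists yet, `get_legend_handles_labels()` always returns the labelled artists, so the merge cannot raise an `AttributeError`.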
403-403: Hardcoded quality thresholds reduce reusability. The `is_ok` check uses hardcoded values `0.15` and `15.0`. Consider making these configurable via `CoolingParams` or as method arguments for different quality requirements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@notebooks/disc_cooling_sim.ipynb` at line 403, The quality check currently hardcodes thresholds in the expression that sets is_ok (avg_chi_groove < 0.15 and max_warp_risk < 15.0); make these thresholds configurable by adding fields to CoolingParams (e.g., max_avg_chi_groove and max_warp_risk) or by accepting them as arguments to the function/method that computes is_ok, then replace the literals with references to those fields/arguments (e.g., use cooling_params.max_avg_chi_groove and cooling_params.max_warp_risk or the passed-in parameters) so different quality requirements can be supplied without changing the code.
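A hedged sketch of what configurable thresholds could look like (shown here as a plain dataclass for illustration; the notebook's `CoolingParams` is a Warp struct, and the field names `max_avg_chi_groove` / `max_warp_risk` are assumptions, not existing API):

```python
from dataclasses import dataclass

@dataclass
class CoolingParams:
    # Hypothetical quality-gate fields replacing the hardcoded 0.15 / 15.0
    max_avg_chi_groove: float = 0.15
    max_warp_risk: float = 15.0

def quality_gate(avg_chi_groove: float, max_warp_risk: float,
                 p: CoolingParams) -> bool:
    # Same logic as the notebook's is_ok check, with configurable limits
    return (avg_chi_groove < p.max_avg_chi_groove
            and max_warp_risk < p.max_warp_risk)

print(quality_gate(0.10, 12.0, CoolingParams()))                    # True
print(quality_gate(0.10, 12.0, CoolingParams(max_warp_risk=10.0)))  # False
```

Defaults preserve the current behaviour, so existing notebook cells keep passing while stricter or looser gates become a one-line override.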
266-291: Crystallinity update formula differs from standard Avrami. Line 287: `chi = chi + params.dt * params.avrami_n * rate` doesn't match standard Avrami kinetics (which involve time explicitly). The docstring correctly notes this is a placeholder, but consider renaming `avrami_k0`/`avrami_n` to avoid confusion with the actual Avrami equation when real kinetics are implemented.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@notebooks/disc_cooling_sim.ipynb` around lines 266 - 291, The update_crystallinity kernel uses a simplified heuristic but keeps Avrami-like names that are misleading; rename the parameters avrami_k0 and avrami_n (and their uses in update_crystallinity) to explicit names like growth_prefactor and growth_exponent (or similar) in the CoolingParams dataclass, update the kernel to use those new names (e.g., rate = growth_prefactor * x * (1.0 - chi / params.chi_max) and chi += params.dt * growth_exponent * rate), and update the docstring to state these are heuristic growth prefactor/exponent rather than true Avrami kinetics so future readers won’t assume standard Avrami behavior.
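To make the distinction concrete, here is a small sketch contrasting the closed-form isothermal Avrami equation with a notebook-style incremental heuristic (all constants are illustrative, not the notebook's values):

```python
import math

def avrami_chi(t: float, k: float = 0.5, n: float = 3.0) -> float:
    # Standard isothermal Avrami: X(t) = 1 - exp(-(k*t)**n) — time appears explicitly
    return 1.0 - math.exp(-((k * t) ** n))

def heuristic_step(chi: float, dt: float, prefactor: float = 0.5,
                   exponent: float = 3.0, chi_max: float = 1.0) -> float:
    # Notebook-style update: rate saturates as chi -> chi_max,
    # but time enters only via dt (no explicit t dependence)
    rate = prefactor * (1.0 - chi / chi_max)
    return chi + dt * exponent * rate

chi = 0.0
for _ in range(1000):
    chi = heuristic_step(chi, dt=0.01)

print(round(avrami_chi(2.0), 4))  # 0.6321 — Avrami value where (k*t) = 1
print(chi < 1.0)                  # True — heuristic only asymptotically approaches chi_max
```

The heuristic produces a plain exponential saturation, whereas true Avrami kinetics give a sigmoidal curve whose shape depends on `n` through the explicit `t**n` term, which is why the review suggests renaming the parameters.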
463-467: Stability comment is slightly misleading but the implementation is correct. The comment references the 1-D stability criterion (`dt < dr²/(2α)`), but for 2-D diffusion the combined Fourier number matters. The implementation correctly uses `min(dr, dz)²` with a 0.4 safety factor, which keeps `Fo < 0.5` in the limiting direction.

📝 Suggested comment clarification

```diff
-# Fourier stability: dt < dr²/(2α) and dt < dz²/(2α)
+# Fourier stability for 2D explicit diffusion: Fo = α·dt/Δx² < 0.5
+# Using min(dr,dz)² ensures stability in the limiting direction.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@notebooks/disc_cooling_sim.ipynb` around lines 463 - 467, The comment above the stability calculation is misleadingly framed as 1D (dt < dr²/(2α)); update the comment to state the 2D diffusion stability context and clarify that you enforce the limiting direction by using dt_max = 0.4 * min(dr, dz)**2 / alpha (variables alpha, dt_max, dr, dz, config) so the Fourier number in the smallest grid spacing remains below 0.5 with a safety factor; keep the implementation (alpha = config.k/(config.rho*config.cp) and the dt_max computation) unchanged but replace the 1D formula text with a short note about using the minimum grid spacing for multi-dimensional stability.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@notebooks/disc_cooling_sim.ipynb`:
- Around line 396-403: max_delta_t currently compares Dirichlet boundary slices
T_np[:, 0] and T_np[:, -1] (both set to T_mold) so it is ~0; change the
computation in the block using T_np (and related variables) to use the first
interior cells adjacent to the boundaries (e.g., T_np[:, 1] and T_np[:, -2])
instead of indices 0 and -1 so the through-thickness thermal gradient uses
interior values; update the use of T_np, the expression computing max_delta_t,
and any dependent logic (is_ok thresholding) to reflect the new interior-indexed
gradient measurement.
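The effect can be demonstrated with a toy temperature field (the sine-based profile below is invented purely to stand in for a mid-cooling state; only the indexing mirrors the review's point):

```python
import numpy as np

T_mold = 313.0
z = np.linspace(0.0, 1.0, 8)
# Illustrative field: walls pinned at T_mold (Dirichlet), hot interior,
# slightly asymmetric through the thickness
T_np = np.tile(T_mold + 100.0 * np.sin(np.pi * z) * (1.0 + 0.2 * z), (5, 1))
T_np[:, 0] = T_np[:, -1] = T_mold  # Dirichlet boundary columns

max_delta_boundary = np.abs(T_np[:, 0] - T_np[:, -1]).max()  # always 0: both walls = T_mold
max_delta_interior = np.abs(T_np[:, 1] - T_np[:, -2]).max()  # > 0: real gradient signal
print(max_delta_boundary, max_delta_interior > 0.0)
```

Because both boundary columns are clamped to the same `T_mold`, their difference is identically zero for any field; only the first interior cells carry usable through-thickness information.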
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 43722938-c610-4e52-8b7e-eeadd8243506
📒 Files selected for processing (1)
notebooks/disc_cooling_sim.ipynb
Greptile Summary

This PR adds a new notebook (`notebooks/disc_cooling_sim.ipynb`). Key issues identified:
Confidence Score: 2/5
Important Files Changed
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Setup DiscConfig & CoolingParams] --> B[ArtifexCoolingSim.__init__\nAllocate T_a, T_b, chi_a, chi_b, warp_risk on GPU]
    B --> C[simulate_cooling\ninit_temperature kernel → T_a\ninit_scalar kernel → chi_a = 0]
    C --> D{for each timestep}
    D --> E[step_temperature\nT_a → T_b\nExplicit FD cylindrical heat eq.\nDirichlet top/bot, Neumann axis/edge]
    E --> F[swap T_a ↔ T_b]
    F --> G[update_crystallinity\nT_a, chi_a → chi_b\nAvrami kinetics in T_g < T < T_m window]
    G --> H[swap chi_a ↔ chi_b]
    H --> D
    D --> |done| I[compute_warp_risk\nT_a, chi_a → warp_risk\nOnly mid-plane threads write]
    I --> J[Copy to NumPy\nT_np, chi_np, risk_np]
    J --> K[Compute metrics\nmax_delta_t, avg_chi_groove, max_warp_risk]
    K --> L{Quality gate\nchi_groove < 0.15 AND risk < 15?}
    L --> |Pass| M[is_ok = True]
    L --> |Fail| N[is_ok = False]
    M & N --> O[Return results dict]
    O --> P[Visualise 2-D fields & radial profiles]
    O --> Q[Parameter sweep\n12× mould temperatures\nnew ArtifexCoolingSim per iteration]
```
Last reviewed commit: "Merge branch 'main' ..."
```python
# Fourier stability: dt < dr²/(2α) and dt < dz²/(2α)
alpha = config.k / (config.rho * config.cp)
dt_max = 0.4 * min(dr, dz) ** 2 / alpha
```
2D stability limit is too permissive for square-ish grids
The comment claims the condition is dt < dr²/(2α) AND dt < dz²/(2α), but satisfying each 1D criterion individually does not guarantee stability in 2D. The correct von Neumann stability condition for the 2D explicit scheme is:
α · dt · (1/dr² + 1/dz²) ≤ 0.5
which gives:
dt_max = 0.5 / (α · (1/dr² + 1/dz²))
For the default parameters (dr = 1 mm, dz = 0.06 mm), dz completely dominates so the current approximation happens to be safe. But if a user increases NZ (e.g., to NZ = 20 with NX = 20), the grid becomes roughly square (dr ≈ dz = h) and the current formula allows dt_max ≈ 0.4 h²/α, whereas the correct 2D limit is 0.25 h²/α. With the 0.9 safety factor applied afterward, the scheme would run at Fourier number ≈ 0.72, which exceeds the 0.5 stability threshold and will diverge.
```diff
-# Fourier stability: dt < dr²/(2α) and dt < dz²/(2α)
-alpha = config.k / (config.rho * config.cp)
-dt_max = 0.4 * min(dr, dz) ** 2 / alpha
+# Fourier stability (2-D): α·dt·(1/dr² + 1/dz²) ≤ 0.5
+alpha = config.k / (config.rho * config.cp)
+dt_max = 0.5 / (alpha * (1.0 / dr**2 + 1.0 / dz**2))
```
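The reviewer's argument can be checked numerically; the material constants below are rough polycarbonate-like values chosen only for illustration, not the notebook's actual configuration:

```python
# Rough polycarbonate-ish properties (assumptions): k in W/(m·K),
# rho in kg/m³, cp in J/(kg·K)
k, rho, cp = 0.2, 1200.0, 1200.0
alpha = k / (rho * cp)  # thermal diffusivity, m²/s

def dt_current(dr: float, dz: float) -> float:
    # Notebook's current formula (1-D style limit with a 0.4 factor)
    return 0.4 * min(dr, dz) ** 2 / alpha

def dt_von_neumann_2d(dr: float, dz: float) -> float:
    # Correct 2-D explicit limit: alpha * dt * (1/dr² + 1/dz²) <= 0.5
    return 0.5 / (alpha * (1.0 / dr**2 + 1.0 / dz**2))

# Anisotropic default-like grid (dr = 1 mm, dz = 0.06 mm):
# dz dominates, so the current formula happens to be conservative
print(dt_current(1e-3, 6e-5) < dt_von_neumann_2d(1e-3, 6e-5))  # True

# Square grid h = dr = dz: the current formula exceeds the 2-D limit
h = 2e-4
print(dt_current(h, h) > dt_von_neumann_2d(h, h))  # True (0.4·h²/α vs 0.25·h²/α)
```

This confirms both halves of the comment: the default anisotropic grid is safe by accident, while a square-ish grid would run above the von Neumann limit and diverge.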
Description
This PR adds a comprehensive GPU-accelerated physics simulation notebook for disc cooling after injection moulding, built with NVIDIA Warp. The notebook demonstrates:
The simulation uses two Warp structs (`DiscConfig` and `CoolingParams`) to keep kernel signatures concise, and includes an `ArtifexCoolingSim` class that manages GPU memory and orchestrates the time-stepping loop. The notebook is designed for Colab with GPU acceleration and includes stability analysis for the explicit time-stepping scheme.

Key features:
Future enhancements (documented in the notebook):
Checklist
Test plan
The notebook is self-contained and executable end-to-end in Google Colab or any Jupyter environment with Warp installed. Verification includes:
https://claude.ai/code/session_016zF8WWzQUxkQpC2hmiRkuB