
Add disc injection-moulding cooling simulation notebook #1279

Open
Tuesdaythe13th wants to merge 2 commits into NVIDIA:main from
Tuesdaythe13th:claude/disc-config-struct-ndXr3

Conversation


@Tuesdaythe13th Tuesdaythe13th commented Mar 10, 2026

Description

This PR adds a comprehensive GPU-accelerated physics simulation notebook for disc cooling after injection moulding, built with NVIDIA Warp. The notebook demonstrates:

  • 2-D axisymmetric heat diffusion in cylindrical coordinates with explicit finite-difference time stepping
  • Avrami-based crystallinity evolution for a PET + CSR + PTFE polymer blend
  • Warp-risk scoring that combines thermal gradients and crystallinity asymmetry to predict deflection risk
  • Interactive parameter exploration including a mould-temperature sweep to optimize cooling conditions
  • Comprehensive visualizations of temperature fields, crystallinity profiles, and warp-risk distributions

The simulation uses two Warp structs (DiscConfig and CoolingParams) to keep kernel signatures concise, and includes an ArtifexCoolingSim class that manages GPU memory and orchestrates the time-stepping loop. The notebook is designed for Colab with GPU acceleration and includes stability analysis for the explicit time-stepping scheme.
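As a rough sketch of how those two containers fit together, the following uses plain Python dataclasses as stand-ins for the notebook's wp.struct definitions; apart from the fields visible in this PR discussion (k, rho, cp, dt, T_mold), the field names and default values here are illustrative assumptions, not the notebook's actual contents.

```python
from dataclasses import dataclass

# Illustrative stand-ins for the notebook's wp.struct containers.
# Field names beyond k/rho/cp/dt/T_mold are assumptions for this sketch.
@dataclass
class DiscConfig:
    nr: int = 64               # radial grid points (assumed)
    nz: int = 16               # through-thickness grid points (assumed)
    radius: float = 0.06       # disc radius, m (assumed)
    thickness: float = 1.2e-3  # disc thickness, m (assumed)
    k: float = 0.24            # thermal conductivity, W/(m*K)
    rho: float = 1350.0        # density, kg/m^3
    cp: float = 1800.0         # specific heat, J/(kg*K)

@dataclass
class CoolingParams:
    T_melt: float = 553.0      # initial melt temperature, K (assumed)
    T_mold: float = 313.0      # mould-wall temperature, K
    dt: float = 1.0e-4         # explicit timestep, s

config, params = DiscConfig(), CoolingParams()
alpha = config.k / (config.rho * config.cp)  # thermal diffusivity, m^2/s
```

Grouping the grid and material constants this way keeps each kernel signature down to a couple of struct arguments, which is what the description above means by "concise".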

Key features:

  • Dirichlet boundary conditions on mould walls (top/bottom) and Neumann (zero-flux) on axis and outer radius
  • Fourier stability checking with automatic dt calculation
  • Quality gates based on thermal gradients and crystallinity thresholds
  • Parameter sweep demonstrating how mould temperature affects warp risk

Future enhancements (documented in the notebook):

  • Replace placeholder Avrami kinetics with Nakamura model fitted to DSC data
  • Add thermoelastic plate solver to convert warp-risk score to actual deflection (mm)
  • Support asymmetric mould cooling channels
  • Adaptive time-stepping

Checklist

  • I am familiar with the Contributing Guidelines.
  • New notebook includes self-contained example with clear documentation.
  • Code follows Warp conventions (kernel definitions, struct usage, device management).

Test plan

The notebook is self-contained and executable end-to-end in Google Colab or any Jupyter environment with Warp installed. Verification includes:

  • Successful kernel compilation and execution on both CPU and CUDA devices
  • Numerical stability of the explicit finite-difference scheme (Fourier number < 0.4)
  • Reasonable physical outputs (temperature decay from melt to mould temperature, crystallinity growth in the crystallization window)
  • Parameter sweep producing expected trends (higher mould temperature → lower warp risk)
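The stability item in the list above can be sketched numerically. This is a hedged illustration with placeholder material values, using the combined 2-D von Neumann criterion (which is stricter than checking each axis separately):

```python
# Placeholder PET-like material values and grid spacings for illustration.
k, rho, cp = 0.24, 1350.0, 1800.0   # W/(m*K), kg/m^3, J/(kg*K)
dr, dz = 1.0e-3, 0.06e-3            # radial / axial grid spacing, m

alpha = k / (rho * cp)              # thermal diffusivity, m^2/s

# 2-D explicit-scheme limit: alpha * dt * (1/dr^2 + 1/dz^2) <= 0.5
dt_max = 0.5 / (alpha * (1.0 / dr**2 + 1.0 / dz**2))
dt = 0.8 * dt_max                   # apply a safety factor

fourier = alpha * dt * (1.0 / dr**2 + 1.0 / dz**2)
assert fourier < 0.5, "explicit time stepping would diverge"
```

Because dz is far smaller than dr here, the 1/dz² term dominates and a per-axis check happens to be safe; on a square-ish grid the combined criterion is the one that matters.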

https://claude.ai/code/session_016zF8WWzQUxkQpC2hmiRkuB

Summary by CodeRabbit

Release Notes

  • New Features
    • Added GPU-accelerated disc cooling simulation with temperature distribution, crystallinity evolution, and warp-risk assessment.
    • Parameter sweep functionality to explore mould temperature effects on final warp risk.
    • Comprehensive visualization suite: temperature, crystallinity, and warp-risk field plots plus profile analysis.
    • Automated pass/fail evaluation for cooling scenarios.

Adds notebooks/disc_cooling_sim.ipynb with an Open-in-Colab badge,
covering 2-D axisymmetric heat diffusion, Avrami crystallinity
kinetics, warp-risk scoring, 2-D field visualisations, radial
profile plots, and a mould-temperature parameter sweep.

https://claude.ai/code/session_016zF8WWzQUxkQpC2hmiRkuB
Signed-off-by: Claude <noreply@anthropic.com>

copy-pr-bot bot commented Mar 10, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.


coderabbitai bot commented Mar 10, 2026

📝 Walkthrough

A new GPU-accelerated disc cooling simulation notebook built on Warp is introduced, featuring configurable geometry and material properties, explicit finite-difference heat diffusion in cylindrical coordinates, Avrami-style crystallisation updates, warp-risk scoring, and parameter sweep visualization capabilities.

Changes

Cohort / File(s): GPU-Accelerated Cooling Simulation — notebooks/disc_cooling_sim.ipynb
Summary: Adds the complete notebook pipeline: DiscConfig and CoolingParams data structures; Warp kernels for temperature initialization, heat diffusion with boundary conditions, crystallinity updates, and warp-risk computation; an ArtifexCoolingSim orchestration class managing GPU arrays; setup cells for geometry/material/process parameters; simulation execution; visualization of temperature, crystallinity, and warp-risk fields plus radial/axial profiles; and a parameter sweep section for mould temperature vs warp risk analysis with pass/fail coloring.

Sequence Diagram(s)

sequenceDiagram
    participant User as User/Notebook
    participant Setup as Setup Phase
    participant GPU as GPU Memory
    participant Kernels as Warp Kernels
    participant Sim as ArtifexCoolingSim
    participant Viz as Visualization

    User->>Setup: Define DiscConfig, CoolingParams
    Setup->>GPU: Allocate temperature, crystallinity, warp_risk arrays
    GPU-->>Sim: Initialize arrays
    
    Sim->>Kernels: Call init_temperature
    Kernels->>GPU: Set initial T field
    Kernels->>GPU: Return initialized state
    
    loop Time-stepping loop
        Sim->>Kernels: Call step_temperature
        Kernels->>GPU: Compute heat diffusion (FD in cylindrical coords)
        Kernels->>GPU: Apply boundary conditions
        Kernels->>GPU: Update temperature field
        
        Sim->>Kernels: Call update_crystallinity
        Kernels->>GPU: Compute Avrami crystallisation update
        Kernels->>GPU: Update crystallinity field
        
        Sim->>Kernels: Call compute_warp_risk
        Kernels->>GPU: Score warp risk from gradients
        Kernels->>GPU: Update warp_risk field (mid-plane only)
    end
    
    Sim->>GPU: Fetch results (metrics, fields)
    GPU-->>Sim: Return results dictionary
    Sim->>Viz: Pass final fields and metrics
    Viz->>User: Display temperature, crystallinity, warp-risk plots and profiles
    User->>User: Execute parameter sweep (mould temp vs warp risk)
    User->>Viz: Generate bar plots and pass/fail analysis

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks — ✅ 3 passed

  • Description Check: ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed — The pull request title 'Add disc injection-moulding cooling simulation notebook' accurately and concisely describes the main change: adding a new notebook for GPU-accelerated disc cooling simulation.
  • Docstring Coverage: ✅ Passed — No functions found in the changed files to evaluate docstring coverage; check skipped.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
notebooks/disc_cooling_sim.ipynb (5)

638-660: Parameter sweep creates new ArtifexCoolingSim instance per iteration.

Each sweep iteration allocates new GPU arrays. For memory efficiency and speed, consider reusing the simulation instance and just re-initializing the arrays.

♻️ Suggested improvement
 T_mold_range = np.linspace(288, 353, 12)  # 15 °C – 80 °C in K
 sweep_results = []
+sim = ArtifexCoolingSim(config, device=DEVICE)  # Reuse instance
 
 for T_mold_K in T_mold_range:
     p                = CoolingParams()
     # ... parameter setup ...
 
-    s   = ArtifexCoolingSim(config, device=DEVICE)
-    res = s.simulate_cooling(p)
+    res = sim.simulate_cooling(p)
     sweep_results.append({...})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@notebooks/disc_cooling_sim.ipynb` around lines 638 - 660, The loop creates a
new ArtifexCoolingSim on each T_mold_K which allocates GPU arrays each
iteration; instead, instantiate one ArtifexCoolingSim before the loop and reuse
it by reinitializing per-run state via CoolingParams (set p.T_mold, p.T_init,
etc.) and calling simulate_cooling on the same simulator instance, ensuring any
simulator-level buffers or CUDA arrays are reset or reallocated only when shape
changes; update the code to move "s = ArtifexCoolingSim(config, device=DEVICE)"
outside the for-loop and keep the existing per-iteration creation of
CoolingParams and call s.simulate_cooling(p), reusing sweep_results as before.

686-689: Late import and fragile legend merging.

The Patch import on line 686 should be at the top of the notebook with other imports. The legend handle concatenation assumes ax.legend() was previously called; consider building the complete legend handles list upfront.

📝 Suggested improvement
+# Move to imports cell (line ~78)
+from matplotlib.patches import Patch
+
 # In the sweep cell:
-# Legend patch
-from matplotlib.patches import Patch
 legend_els = [Patch(facecolor="green", label="Pass"), Patch(facecolor="red", label="Fail")]
-for ax in axes:
-    ax.legend(handles=ax.get_legend().legend_handles + legend_els)
+for ax in axes:
+    handles, labels = ax.get_legend_handles_labels()
+    ax.legend(handles=handles + legend_els)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@notebooks/disc_cooling_sim.ipynb` around lines 686 - 689, The Patch import
and legend construction are fragile: move "from matplotlib.patches import Patch"
into the notebook's top imports, then stop relying on ax.get_legend() being
present; instead for each axis (axes) build the full handles list by collecting
existing handles via ax.get_legend_handles_labels() (or
ax.get_legend_handles_labels()[0]) and concatenating your legend_els =
[Patch(facecolor="green", label="Pass"), Patch(facecolor="red", label="Fail")]
before calling ax.legend(handles=full_handles) so the legend is created
deterministically even if no prior legend call exists.

403-403: Hardcoded quality thresholds reduce reusability.

The is_ok check uses hardcoded values 0.15 and 15.0. Consider making these configurable via CoolingParams or as method arguments for different quality requirements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@notebooks/disc_cooling_sim.ipynb` at line 403, The quality check currently
hardcodes thresholds in the expression that sets is_ok (avg_chi_groove < 0.15
and max_warp_risk < 15.0); make these thresholds configurable by adding fields
to CoolingParams (e.g., max_avg_chi_groove and max_warp_risk) or by accepting
them as arguments to the function/method that computes is_ok, then replace the
literals with references to those fields/arguments (e.g., use
cooling_params.max_avg_chi_groove and cooling_params.max_warp_risk or the
passed-in parameters) so different quality requirements can be supplied without
changing the code.
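A minimal sketch of the refactor this nitpick proposes, using the reviewer's suggested field names; the surrounding structure is assumed, not taken from the notebook:

```python
from dataclasses import dataclass

# Hedged sketch: thresholds promoted from hardcoded literals to fields.
@dataclass
class QualityGates:
    max_avg_chi_groove: float = 0.15  # crystallinity threshold
    max_warp_risk: float = 15.0       # warp-risk score threshold

def evaluate(avg_chi_groove: float, max_warp_risk: float,
             gates: QualityGates = QualityGates()) -> bool:
    # Same logic as the notebook's is_ok check, now configurable.
    return (avg_chi_groove < gates.max_avg_chi_groove
            and max_warp_risk < gates.max_warp_risk)

assert evaluate(0.10, 12.0) is True
assert evaluate(0.20, 12.0) is False
```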

266-291: Crystallinity update formula differs from standard Avrami.

Line 287: chi = chi + params.dt * params.avrami_n * rate doesn't match standard Avrami kinetics (which involves time explicitly). The docstring correctly notes this is a placeholder, but consider renaming avrami_k0/avrami_n to avoid confusion with the actual Avrami equation when real kinetics are implemented.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@notebooks/disc_cooling_sim.ipynb` around lines 266 - 291, The
update_crystallinity kernel uses a simplified heuristic but keeps Avrami-like
names that are misleading; rename the parameters avrami_k0 and avrami_n (and
their uses in update_crystallinity) to explicit names like growth_prefactor and
growth_exponent (or similar) in the CoolingParams dataclass, update the kernel
to use those new names (e.g., rate = growth_prefactor * x * (1.0 - chi /
params.chi_max) and chi += params.dt * growth_exponent * rate), and update the
docstring to state these are heuristic growth prefactor/exponent rather than
true Avrami kinetics so future readers won’t assume standard Avrami behavior.

463-467: Stability comment is slightly misleading but implementation is correct.

The comment references 1D stability criteria (dt < dr²/(2α)), but for 2D diffusion the combined Fourier number matters. The implementation correctly uses min(dr, dz)² with a 0.4 safety factor, which ensures Fo < 0.5 in the limiting direction.

📝 Suggested comment clarification
-# Fourier stability: dt < dr²/(2α) and dt < dz²/(2α)
+# Fourier stability for 2D explicit diffusion: Fo = α·dt/Δx² < 0.5
+# Using min(dr,dz)² ensures stability in the limiting direction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@notebooks/disc_cooling_sim.ipynb` around lines 463 - 467, The comment above
the stability calculation is misleadingly framed as 1D (dt < dr²/(2α)); update
the comment to state the 2D diffusion stability context and clarify that you
enforce the limiting direction by using dt_max = 0.4 * min(dr, dz)**2 / alpha
(variables alpha, dt_max, dr, dz, config) so the Fourier number in the smallest
grid spacing remains below 0.5 with a safety factor; keep the implementation
(alpha = config.k/(config.rho*config.cp) and the dt_max computation) unchanged
but replace the 1D formula text with a short note about using the minimum grid
spacing for multi-dimensional stability.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@notebooks/disc_cooling_sim.ipynb`:
- Around line 396-403: max_delta_t currently compares Dirichlet boundary slices
T_np[:, 0] and T_np[:, -1] (both set to T_mold) so it is ~0; change the
computation in the block using T_np (and related variables) to use the first
interior cells adjacent to the boundaries (e.g., T_np[:, 1] and T_np[:, -2])
instead of indices 0 and -1 so the through-thickness thermal gradient uses
interior values; update the use of T_np, the expression computing max_delta_t,
and any dependent logic (is_ok thresholding) to reflect the new interior-indexed
gradient measurement.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: 43722938-c610-4e52-8b7e-eeadd8243506

📥 Commits

Reviewing files that changed from the base of the PR and between 3af8dfa and d5b8b8c.

📒 Files selected for processing (1)
  • notebooks/disc_cooling_sim.ipynb

@greptile-apps

greptile-apps bot commented Mar 10, 2026

Greptile Summary

This PR adds a new notebook (notebooks/disc_cooling_sim.ipynb) that demonstrates GPU-accelerated disc cooling simulation using NVIDIA Warp. The notebook implements 2-D axisymmetric heat diffusion in cylindrical coordinates with Avrami-based crystallinity evolution and a warp-risk scoring mechanism, wrapped in an ArtifexCoolingSim class with two Warp structs (DiscConfig and CoolingParams). It fits the existing notebooks/ examples directory and follows Warp kernel/struct conventions.

Key issues identified:

  • Incorrect 2D Fourier stability criterion (P1): The stability formula dt_max = 0.4 * min(dr, dz)² / α is derived from the 1D condition applied independently to each axis. The correct 2D von Neumann criterion is α · dt · (1/dr² + 1/dz²) ≤ 0.5, i.e., dt_max = 0.5 / (α · (1/dr² + 1/dz²)). For the default geometry (dr ≫ dz) the notebook is incidentally stable, but for a user who modifies the grid to be more square (e.g., equal NX/NZ), the scheme will diverge silently.
  • dT_thickness always zero (flagged in a prior thread): top and bot are the Dirichlet-constrained mould-wall nodes, so the metric never contributes to warp-risk.
  • Colab badge links to author's fork branch (flagged in a prior thread): should point to nvidia/warp main.
  • Unused import math (flagged in a prior thread).
  • Redundant ArtifexCoolingSim allocation on every sweep iteration (flagged in a prior thread): 60 unnecessary GPU allocations across the 12-point sweep.

Confidence Score: 2/5

  • Not safe to merge without addressing the incorrect 2D stability criterion and the previously flagged issues.
  • The simulation logic has a latent numerical correctness bug (2D Fourier stability computed using the 1D formula) that will silently produce diverging results if a user modifies the grid to have comparable dr/dz spacing. Combined with the previously flagged issues — the dT_thickness metric being permanently zero, the broken Colab badge URL pointing to a fork branch, and the redundant GPU allocations in the sweep — the notebook needs revisions before it is ready to merge as a reference example.
  • notebooks/disc_cooling_sim.ipynb — stability criterion, warp-risk kernel, and Colab badge all need fixes.

Important Files Changed

Filename Overview
notebooks/disc_cooling_sim.ipynb New notebook implementing a 2-D axisymmetric disc cooling simulation with Warp kernels; contains a latent numerical instability in the Fourier stability criterion (incorrect 2D condition) and several previously-flagged issues around the Colab badge URL, unused import, zero dT_thickness, and redundant GPU allocations in the parameter sweep.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Setup DiscConfig & CoolingParams] --> B[ArtifexCoolingSim.__init__\nAllocate T_a, T_b, chi_a, chi_b, warp_risk on GPU]
    B --> C[simulate_cooling\ninit_temperature kernel → T_a\ninit_scalar kernel → chi_a = 0]
    C --> D{for each timestep}
    D --> E[step_temperature\nT_a → T_b\nExplicit FD cylindrical heat eq.\nDirichlet top/bot, Neumann axis/edge]
    E --> F[swap T_a ↔ T_b]
    F --> G[update_crystallinity\nT_a, chi_a → chi_b\nAvrami kinetics in T_g < T < T_m window]
    G --> H[swap chi_a ↔ chi_b]
    H --> D
    D --> |done| I[compute_warp_risk\nT_a, chi_a → warp_risk\nOnly mid-plane threads write]
    I --> J[Copy to NumPy\nT_np, chi_np, risk_np]
    J --> K[Compute metrics\nmax_delta_t, avg_chi_groove, max_warp_risk]
    K --> L{Quality gate\nchi_groove < 0.15 AND risk < 15?}
    L --> |Pass| M[is_ok = True]
    L --> |Fail| N[is_ok = False]
    M & N --> O[Return results dict]
    O --> P[Visualise 2-D fields & radial profiles]
    O --> Q[Parameter sweep\n12× mould temperatures\nnew ArtifexCoolingSim per iteration]

Last reviewed commit: "Merge branch 'main' ..."

@Tuesdaythe13th Tuesdaythe13th marked this pull request as draft March 10, 2026 18:07
@Tuesdaythe13th Tuesdaythe13th marked this pull request as ready for review March 10, 2026 18:07
Author

@Tuesdaythe13th Tuesdaythe13th left a comment


check

@Tuesdaythe13th Tuesdaythe13th marked this pull request as draft March 10, 2026 18:09
@Tuesdaythe13th Tuesdaythe13th marked this pull request as ready for review March 10, 2026 18:09
Comment on lines +463 to +465
"# Fourier stability: dt < dr²/(2α) and dt < dz²/(2α)\n",
"alpha = config.k / (config.rho * config.cp)\n",
"dt_max = 0.4 * min(dr, dz) ** 2 / alpha\n",

P1 2D stability limit is too permissive for square-ish grids

The comment claims the condition is dt < dr²/(2α) AND dt < dz²/(2α), but satisfying each 1D criterion individually does not guarantee stability in 2D. The correct von Neumann stability condition for the 2D explicit scheme is:

α · dt · (1/dr² + 1/dz²) ≤ 0.5

which gives:

dt_max = 0.5 / (α · (1/dr² + 1/dz²))

For the default parameters (dr = 1 mm, dz = 0.06 mm), dz completely dominates so the current approximation happens to be safe. But if a user increases NZ (e.g., to NZ = 20 with NX = 20), the grid becomes roughly square (dr ≈ dz = h) and the current formula allows dt_max ≈ 0.4 h²/α, whereas the correct 2D limit is 0.25 h²/α. With the 0.9 safety factor applied afterward, the scheme would run at Fourier number ≈ 0.72, which exceeds the 0.5 stability threshold and will diverge.

Suggested change
"# Fourier stability: dt < dr²/(2α) and dt < dz²/(2α)\n",
"alpha = config.k / (config.rho * config.cp)\n",
"dt_max = 0.4 * min(dr, dz) ** 2 / alpha\n",
# Fourier stability (2-D): α·dt·(1/dr² + 1/dz²) ≤ 0.5
alpha = config.k / (config.rho * config.cp)
dt_max = 0.5 / (alpha * (1.0 / dr**2 + 1.0 / dz**2))
