Seauagain/GBCT

GBCT: An Output Scaling Layer Boosts Deep Neural Networks for Multiscale ODE Systems


An effective plug-and-play scaling module for solving stiff ODEs

[ArXiv] | [Quick Start]

Updates

  • [2025.12.01] GBCT code is now open-source!

Resources

The full datasets, pre-trained weights (checkpoints), and training/inference scripts used in the paper will be released here. (coming soon)

| Case | Dataset | Checkpoint |
| --- | --- | --- |
| Chemical Reaction | Training Set \| Test Set | Download |
| Nuclear Reaction | Dataset | Download |
| Robertson Problem | Dataset | Download |

Introduction

We propose the Generalized Box–Cox Transformation (GBCT), a nonlinear scaling method that alleviates multiscale stiffness by compressing multi-magnitude data. Integrated into our data-driven framework, DeePODE, GBCT shows improved performance across diverse stiff ODE and PDE benchmarks.
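
To make the idea concrete, the classical Box–Cox transform (which GBCT generalizes) compresses values spanning many orders of magnitude into a narrow range, while remaining exactly invertible so predictions can be mapped back to physical scale. A minimal Python sketch; the exponent `lam` and the function names are illustrative, and the exact GBCT parameterization is given in the paper:

```python
import math

def box_cox(x, lam):
    """Classical Box-Cox transform: T(x) = (x**lam - 1)/lam for lam != 0,
    and log(x) in the limit lam -> 0. Compresses large dynamic ranges."""
    if lam == 0.0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def box_cox_inverse(y, lam):
    """Exact inverse: maps a scaled network output back to physical units."""
    if lam == 0.0:
        return math.exp(y)
    return (lam * y + 1.0) ** (1.0 / lam)

# Species mass fractions spanning 12 orders of magnitude collapse
# into a narrow, well-conditioned range for the network to fit.
for x in (1e-8, 1e-4, 1.0, 1e4):
    y = box_cox(x, 0.1)
    assert abs(box_cox_inverse(y, 0.1) - x) / x < 1e-9
```

Because the transform is monotone and invertible, the network can be trained entirely in the compressed space without losing information at inference time.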

Key Features

  • Plug-and-Play: Can be easily integrated into deep learning-based frameworks.
  • 6x Faster Training: Achieves comparable performance to baselines with only ~1/6 of the training epochs.
  • Long-Term Stability: Effectively suppresses error accumulation in long-term time integration (up to 50,000 steps).
  • Versatility: Validated on chemical kinetics, nuclear reactions, and the corresponding reaction-diffusion benchmarks.
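
The plug-and-play property amounts to wrapping an existing regressor: train it against GBCT-scaled targets, then apply the inverse transform at inference. A hedged sketch in plain Python, where `predict_physical`, `LAM`, and the toy `identity_model` are illustrative names rather than this repository's API:

```python
import math

LAM = 0.1  # illustrative exponent; the paper's GBCT layer has its own parameterization

def gbct(x, lam=LAM):
    """Forward scaling (classical Box-Cox form, which GBCT generalizes)."""
    return math.log(x) if lam == 0.0 else (x ** lam - 1.0) / lam

def gbct_inv(y, lam=LAM):
    """Inverse scaling: restore physical magnitudes from network outputs."""
    return math.exp(y) if lam == 0.0 else (lam * y + 1.0) ** (1.0 / lam)

def predict_physical(model, state):
    """Wrap any black-box model trained on GBCT-scaled targets."""
    return [gbct_inv(y) for y in model(state)]

def identity_model(state):
    """Toy stand-in: a perfect model that returns the scaled targets."""
    return [gbct(s) for s in state]

# Round-trips values from 1e-6 to 1e4 through the scaled space.
print(predict_physical(identity_model, [1e-6, 1.0, 1e4]))
```

Only the output layer changes; the upstream network architecture and training loop are untouched, which is what makes the module easy to drop into existing frameworks.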

Major Results

We evaluated GBCTNet (a DeePODE network equipped with the GBCT output scaling layer) across six representative benchmarks, ranging from 0D stiff ODEs to 2D reacting flows.

1. Stiff Reaction Kinetics (0D)

In the DRM19 (methane reaction kinetics) and 13-isotope nuclear reaction networks, GBCTNet significantly outperforms the BCTNet baseline.

  • Error Reduction: Relative error for equilibrium temperature reduced from ~50% (Baseline) to 0.1% (GBCTNet).
  • Stability: Maintains trajectory stability over 50,000 time steps.

Figure: Long-term temperature evolution comparison.

2. Turbulent Combustion (2D)

In a 2D turbulent methane/air ignition case, GBCTNet accurately captures the flame structure and intermediate radicals.

  • Radical Prediction: The maximum relative error for unstable radicals (e.g., H) is reduced from 274% (Baseline) to 100% (GBCTNet).
  • Temperature Accuracy: Relative error at $t = 2\,\mathrm{ms}$ is 4.8%, versus 17.8% for the baseline.

Figure: 2D turbulent flame snapshots.

3. Nuclear Flame Propagation

In simulating a white dwarf's internal deflagration (nuclear flame):

  • Wave Speed: GBCTNet accurately predicts the flame propagation velocity (~200 m/s) and ignition timing.
  • Morphology: In the 2D wedge-shaped flame test, GBCTNet captures the sharp flame front with errors confined to a narrow band.

Figure: 1D unsteady nuclear flame propagation.

Training Efficiency

GBCTNet converges significantly faster. It achieves low generalization error (RMSE) in the early training stages, requiring only 1/6 of the epochs needed by the baseline to reach comparable accuracy.

Figure: Training efficiency.

Quick Start

(option 1) Install via conda

```shell
conda install pytorch
conda install --channel cantera cantera==2.6.0 -y
conda install numpy matplotlib seaborn scikit-learn pandas -y
pip install easydict scienceplots meshio -i https://pypi.tuna.tsinghua.edu.cn/simple
conda install -c conda-forge mpi4py openmpi
```

(option 2) Install via docker

```shell
docker pull ckode/deepck:1.0.0_pytorch1.12_cuda11.3
```

About

The source code for the paper "An output scaling layer boosts deep neural networks for multiscale ODE systems"
