Learning to Control PDEs with Differentiable Predictive Control and Time-Integrated Neural Operators


Dibakar Roy Sarkar (droysar1@jh.edu), Ján Drgoňa (jdrgona1@jh.edu), Somdatta Goswami (somdatta@jhu.edu)

Johns Hopkins Whiting School of Engineering, 3400 N Charles St, Baltimore, MD 21218, United States

*Advised equally.


Official implementation of "Learning to Control PDEs with Differentiable Predictive Control and Time-Integrated Neural Operators".


Abstract

We present an end-to-end learning to control framework for partial differential equations (PDEs). Our approach integrates Time-Integrated Deep Operator Networks (TI-DeepONets) as differentiable PDE surrogate models within Differentiable Predictive Control (DPC)—a self-supervised learning framework for constrained neural control policies.

The TI-DeepONet architecture learns temporal derivatives and couples them with numerical integrators, thus preserving the temporal causality of infinite-dimensional PDEs while reducing error accumulation in long-horizon predictions. Within DPC, we leverage automatic differentiation to compute policy gradients by backpropagating the expectations of optimal control loss through the learned TI-DeepONet, enabling efficient offline optimization of neural policies without the need for online optimization or supervisory controllers.

We empirically demonstrate that the proposed method learns feasible parametric policies across diverse PDE systems, including the heat equation, the nonlinear Burgers' equation, and the reaction-diffusion equation. The learned policies achieve target tracking, constraint satisfaction, and curvature minimization objectives, while generalizing across distributions of initial conditions and problem parameters. These results highlight the promise of combining operator learning with DPC for scalable, model-based self-supervised learning in PDE-constrained optimal control.


Key Contributions

  • End-to-end differentiable control. We introduce a novel learning-to-control framework that integrates Time-Integrated Deep Operator Networks (TI-DeepONets) with Differentiable Predictive Control (DPC), modeling an end-to-end differentiable closed-loop system with neural policies and learned PDE dynamics.
  • Scalable offline policy learning for PDEs. Our framework eliminates the need for online optimization or supervisory controllers, producing parametric neural policies that satisfy constraints and generalize across distributions of initial conditions and parameters.
  • Empirical validation on canonical PDEs. We demonstrate our approach on three representative systems, showing accurate target tracking, shock mitigation, and constraint satisfaction.
  • Open-source implementation. All code and trained models are made publicly available to support reproducibility and further research in differentiable control of PDEs.

Methodology

Figure (methodology schematic): Overview of the proposed framework combining TI-DeepONet with Differentiable Predictive Control (DPC).

Our framework consists of two main stages:

1. TI-DeepONet Training

  • Learn a differentiable surrogate model of the PDE dynamics
  • Predict temporal derivatives and integrate using numerical schemes (Euler/RK4)
  • Preserve temporal causality while minimizing error accumulation
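The integration step above can be sketched in JAX. This is a minimal illustration, not the paper's model: `deriv_net` is a hypothetical stand-in for a trained TI-DeepONet that maps the current state (and control) to its temporal derivative, and the toy linear dynamics are assumed purely for demonstration. The Euler and RK4 steppers and the `lax.scan` rollout mirror the scheme described above.

```python
import jax
import jax.numpy as jnp

def deriv_net(u, f):
    # Hypothetical stand-in for the trained TI-DeepONet derivative operator:
    # returns du/dt given state u and control forcing f (toy linear dynamics).
    return -0.5 * u + f

def euler_step(u, f, dt):
    # Forward Euler: u_{k+1} = u_k + dt * N_theta(u_k, f_k)
    return u + dt * deriv_net(u, f)

def rk4_step(u, f, dt):
    # Classical RK4 applied to the learned derivative
    # (control held constant over the step).
    k1 = deriv_net(u, f)
    k2 = deriv_net(u + 0.5 * dt * k1, f)
    k3 = deriv_net(u + 0.5 * dt * k2, f)
    k4 = deriv_net(u + dt * k3, f)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout(u0, controls, dt, step=rk4_step):
    # Autoregressive long-horizon prediction; lax.scan keeps the whole
    # trajectory differentiable with respect to controls and parameters.
    def body(u, f):
        u_next = step(u, f, dt)
        return u_next, u_next
    _, traj = jax.lax.scan(body, u0, controls)
    return traj

u0 = jnp.ones(64)                   # initial state on a 64-point grid
controls = jnp.zeros((10, 64))      # zero forcing over 10 steps
traj = rollout(u0, controls, dt=0.1)
print(traj.shape)  # (10, 64)
```

Because the derivative network is only evaluated inside a standard integrator, swapping Euler for RK4 changes accuracy without retraining the surrogate.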

2. DPC Policy Learning

  • Define optimal control objectives (tracking, constraints, regularization)
  • Backpropagate gradients through TI-DeepONet rollouts
  • Learn parametric neural policies via self-supervised optimization
  • Deploy learned policies in closed-loop control

Installation

Requirements

  • Python 3.8+
  • JAX 0.4.0+
  • Flax 0.6.0+
  • Optax 0.1.0+
  • PyTorch 1.10+ (for data loading utilities only)
  • NumPy
  • SciPy
  • Matplotlib
  • Pandas
  • scikit-learn
  • GPy
  • termcolor
  • Jupyter Notebook

Setup

# Clone the repository
git clone https://github.com/Centrum-IntelliPhysics/PDEControl_DPC.git
cd PDEControl_DPC

# Create virtual environment (optional but recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install jax jaxlib flax optax torch numpy scipy matplotlib pandas scikit-learn GPy termcolor jupyter

Dataset

The training and testing datasets for all PDE systems are available at this link.


Quick Start

Each PDE system has its own directory with training and control notebooks:

1. Heat Equation (Target Tracking)

cd HE_TT
jupyter notebook
  • Open Train_TI_DON_heat.ipynb to train the TI-DeepONet surrogate model
  • Open dpc_heat.ipynb to learn and evaluate control policies
  • Results are saved in result_32/ and results_dpc_run_32/

2. Burgers' Equation (Shock Control)

cd BE_shock
jupyter notebook
  • Open Train_TIL_DON_burgers.ipynb to train the TI-DeepONet surrogate model
  • Open burgers_dpc_paper.ipynb to learn and evaluate control policies
  • Results are saved in result_32_paper/ and results_dpc_32_paper/

3. Reaction-Diffusion (Target Tracking)

cd RD_TT
jupyter notebook
  • Open Train_TI_DON_RD.ipynb to train the TI-DeepONet surrogate model
  • Open dpc_RD.ipynb to learn and evaluate control policies
  • Results are saved in results2_RD_32/ and RD_results_dpc_32/

Repository Structure

PDEControl_DPC/
│
├── README.md                          # This file
├── methodology_schematic.pdf          # Methodology overview diagram
│
├── HE_TT/                            # Heat Equation experiments
│   ├── Train_TI_DON_heat.ipynb       # TI-DeepONet training
│   ├── dpc_heat.ipynb                # DPC policy learning
│   ├── result_32/                    # Surrogate model results
│   └── results_dpc_run_32/           # Control policy results
│
├── BE_shock/                         # Burgers' Equation experiments
│   ├── Train_TIL_DON_burgers.ipynb   # TI-DeepONet training
│   ├── burgers_dpc_paper.ipynb       # DPC policy learning
│   ├── result_32_paper/              # Surrogate model results
│   └── results_dpc_32_paper/         # Control policy results
│
└── RD_TT/                            # Reaction-Diffusion experiments
    ├── Train_TI_DON_RD.ipynb         # TI-DeepONet training
    ├── dpc_RD.ipynb                  # DPC policy learning
    ├── results2_RD_32/               # Surrogate model results
    └── RD_results_dpc_32/            # Control policy results

Experiments

Heat Equation (1D)

  • Task: Target state tracking with distributed control
  • Objective: Minimize tracking error
  • Results: Learned policies achieve accurate tracking across varying initial conditions

Burgers' Equation (1D, Nonlinear)

  • Task: Shock wave control with boundary actuation
  • Objective: Minimize state curvature
  • Results: Successfully controls shock formation and propagation

Reaction-Diffusion (2D)

  • Task: Pattern control in coupled PDE system
  • Objective: Target pattern tracking with spatial control
  • Results: Achieves complex pattern formation and stabilization

See individual experiment directories for detailed results and visualizations.


Citation

If you find this work useful, please cite:

@misc{sarkar2025learningcontrolpdesdifferentiable,
      title={Learning to Control PDEs with Differentiable Predictive Control and Time-Integrated Neural Operators}, 
      author={Dibakar Roy Sarkar and Ján Drgoňa and Somdatta Goswami},
      year={2025},
      eprint={2511.08992},
      archivePrefix={arXiv},
      primaryClass={cs.CE},
      url={https://arxiv.org/abs/2511.08992}, 
}
