Lifting the Structural Morphing for Wide-Angle Images Rectification: Unified Content and Boundary Modeling (ICCV 2025)
This is the official implementation of Lifting the Structural Morphing for Wide-Angle Images Rectification: Unified Content and Boundary Modeling (ICCV 2025).
The mainstream approach for correcting distortions in wide-angle images typically involves a cascading process of rectification followed by rectangling. These tasks address distorted image content and irregular boundaries separately, using two distinct pipelines. However, this independent optimization prevents the two stages from benefiting each other. It increases susceptibility to error accumulation and misaligned optimization, ultimately degrading the quality of the rectified image and the performance of downstream vision tasks. In contrast, our one-stage approach avoids unnecessary boundary losses by directly learning and morphing features from rectified-to-rectangling motion representations, demonstrating superior image fidelity and semantic preservation over two-stage methods.
- An End-to-End Framework for Wide-Angle Image Distortion Correction. We leverage TPS as a bridge to establish an end-to-end framework for the joint optimization of rectification and rectangling. This model effectively addresses the prevalent issues of error accumulation and misaligned optimization in cascaded solutions. To the best of our knowledge, this represents the first attempt at end-to-end rectification and rectangling for wide-angle images.
- A Geometry-aware Constraint. Drawing on the physical priors of wide-angle lenses, we propose to enforce curvature monotonicity derived from the characteristics of the wide-angle camera model, eliminating the error-prone distortion-parameter estimation paradigm. Our experiments demonstrate that this constraint leads to a better correction representation.
- Extensive Experimentation and Promising Performance. We evaluate our ConBo-Net on public datasets and real-world distorted images. The results demonstrate that ConBo-Net outperforms state-of-the-art baselines in terms of correction performance.
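To illustrate how TPS can act as a single motion representation bridging rectification and rectangling, the sketch below fits a standard thin-plate spline from source to target control points and warps arbitrary query points. This is a generic TPS solver for intuition only; the `tps_warp` helper and its interface are assumptions, not the paper's exact formulation.

```python
import numpy as np

def tps_warp(src_pts, dst_pts, query):
    """Warp 2D query points with a thin-plate spline fitted from
    src_pts -> dst_pts control correspondences (all arrays (N, 2))."""
    def rbf(d2):
        # TPS kernel U(r) = r^2 log r^2, defined as 0 at r = 0
        return d2 * np.log(np.where(d2 > 0, d2, 1.0))

    n = src_pts.shape[0]
    d2 = ((src_pts[:, None, :] - src_pts[None, :, :]) ** 2).sum(-1)
    K = rbf(d2)
    P = np.hstack([np.ones((n, 1)), src_pts])          # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst_pts, np.zeros((3, 2))])
    params = np.linalg.solve(A, b)                     # (n + 3, 2)

    q2 = ((query[:, None, :] - src_pts[None, :, :]) ** 2).sum(-1)
    Pq = np.hstack([np.ones((query.shape[0], 1)), query])
    return rbf(q2) @ params[:n] + Pq @ params[n:]
```

Because a single TPS parameterizes both the content motion (interior control points) and the boundary motion (border control points), supervising one set of parameters lets the two sub-tasks share gradients instead of being optimized in separate stages.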
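For the geometry-aware constraint, one minimal way to express curvature monotonicity as a loss is to penalize any decrease in radial displacement as the radius grows, reflecting the intuition that wide-angle lens distortion increases monotonically away from the image center. The `monotonicity_penalty` name and this hinge-style formulation are hypothetical, shown only to make the idea concrete.

```python
import numpy as np

def monotonicity_penalty(radii, displacement):
    """Hinge penalty on monotonicity violations: displacement sampled
    at increasing radii is expected to be non-decreasing, so any
    negative step along sorted radii contributes to the loss."""
    order = np.argsort(radii)                  # sort samples by radius
    diffs = np.diff(displacement[order])       # steps between neighbors
    return np.maximum(-diffs, 0.0).sum()       # penalize decreasing steps
```

A monotone displacement profile incurs zero penalty, while any non-monotone prediction is pushed back toward a physically plausible distortion curve without ever estimating explicit distortion parameters.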
The code has been implemented with PyTorch 1.8.1 and CUDA 10.1.
An example of installation commands is provided as follows:
```bash
# git clone this repository
git clone https://github.com/lwttttt/ConBo-Net.git
cd ConBo-Net

# create new anaconda env
conda create -n ConBo-Net python=3.6
conda activate ConBo-Net

# install python dependencies
pip install -r requirements.txt
```
- Create a dataset directory:

```bash
cd ConBo-Net
mkdir ./dataset
```

- Download the morphing image datasets of distorted images and the ground truth, then extract them to the ./dataset folder.
- Download the morphing meshes and extract them directly in the ./ConBo-Net folder.
- Run the test script (edit the parameters to your own test path):

```bash
python test.py
```
Note that the pretrained model can be downloaded from the [morphing image datasets](https://pan.baidu.com/s/1qaxO4kDI3b3-l4qJbyqJwA?pwd=cafd) link soon.
🌈 Check out more visual results and restoration interactions [here].
Run the training script (edit the parameters to your own training path):

```bash
python train_morphing_c2f.py
```
The trained checkpoints can be found in the checkpoints folder.
This project is licensed under NTU S-Lab License 1.0. Redistribution and use should follow this license.
This project is based on RecRecNet and MOWA. Thanks for their awesome work.
If you find our work useful for your research, please consider citing the paper:
@article{
}