An autonomous parking system that uses two fisheye cameras and YOLO-based perception within a ROS2 + Gazebo simulation environment.
This project implements an autonomous parking pipeline using two wide-angle (fisheye) cameras mounted on a simulated vehicle. The system detects parking spots and surrounding obstacles using YOLO object detection and segmentation models, and executes a parking maneuver autonomously.
Key highlights:
- Dual fisheye camera setup for wide field-of-view coverage
- Bird's Eye View (BEV) transformation for top-down spatial understanding
- YOLOv8-based detection and segmentation (120° camera models)
- ROS2 Humble + Gazebo simulation environment
- Pre-trained `.pt` model weights included
| Category | Tools / Libraries |
|---|---|
| Middleware | ROS 2 Humble |
| Simulation | Gazebo |
| Vision / AI | YOLOv8 (Ultralytics 8.2.69), OpenCV |
| Language | Python 3, Shell, CMake |
| Other | pyserial, pynput, transformers, HuggingFace Hub |
- Ubuntu 22.04
- ROS 2 Humble
- Python 3.10+
- Clone the repository:

  ```bash
  git clone https://github.com/yunss01/two_camera_parking.git
  cd two_camera_parking
  ```

- Run the installation script:

  ```bash
  chmod +x install.sh
  ./install.sh
  ```
This will install:
- Python dependencies: `opencv-python`, `pyserial`, `ultralytics`, `pynput`, `transformers`, `huggingface_hub`
- ROS2 packages: `gazebo-ros-pkgs`, `xacro`, `gazebo-ros`, `gazebo-msgs`, `gazebo-plugins`
- Gazebo simulation assets
- Build the ROS2 workspace:

  ```bash
  cd ~/ros2_autonomous_vehicle_simulation
  source /opt/ros/humble/setup.bash
  colcon build --symlink-install
  source ./install/local_setup.bash
  ```
- Launch the simulation:

  ```bash
  ros2 launch simulation_pkg mission_sim.launch.py
  ```

| File | Description |
|---|---|
| `best_120cam_detection_1.pt` | Object detection model for 120° fisheye camera |
| `best_120cam_segmentation.pt` | Segmentation model for 120° fisheye camera |