
Carla Autonomous Application

Repository created for Master Thesis: Autonomous Driving

SOFTWARE AND FRAMEWORKS:

IMPORTANT REQUIREMENTS:

  • Python > 3.7.x
  • TensorFlow > 2.1.x

INSTRUCTIONS:
Create the Conda environment, which contains the necessary libraries, by running the following commands:

conda env create -f environment.yml
conda activate tf_gpu

After finishing the CARLA installation, clone this repo and place it as follows:

.
├── ...
├── PythonAPI
│   ├── caa_new <<===          
│   ├── carla             
│   ├── examples                      
│   └── util                
└── ...

End-to-end Deep Learning for Autonomous Driving


OVERVIEW
This part is a supervised regression problem that relates the road images in front of the car to its steering angles. The complete pipeline includes three main phases:

  • Data collection
  • Training
  • Controlling

DATA COLLECTION
The first step is to set up a camera at the front of the vehicle to capture road images and record the steering angles at the same time. The image file name and its corresponding steering angle are treated as feature and label and stored in a CSV file.
The network used for this project is the NVIDIA model, which has been proven to work.
This approach requires a huge amount of data, which is why data augmentation is needed to generate synthetic data with meaningful labels.
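The logging step can be sketched as follows. This is a minimal illustration only: the function name `log_sample`, the file name `driving_log.csv`, and the column names are hypothetical, not the repository's actual format.

```python
import csv
import os

def log_sample(csv_path, image_name, steering_angle):
    """Append one (image file name, steering angle) pair to the driving log."""
    new_file = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["image", "steering"])  # header on first write
        writer.writerow([image_name, f"{steering_angle:.6f}"])

# Example: record two frames captured from the front camera
log_sample("driving_log.csv", "img_000001.png", 0.042)
log_sample("driving_log.csv", "img_000002.png", -0.015)
```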

Camera view of the road

Camera input of the network

TRAINING
The re-designed model is based on the work of naokishibuya. The architecture of the model is as follows:

  • Image normalization
  • Convolution: 5x5, filter: 24, strides: 2x2, activation: ELU
  • Convolution: 5x5, filter: 36, strides: 2x2, activation: ELU
  • Convolution: 5x5, filter: 48, strides: 2x2, activation: ELU
  • Convolution: 3x3, filter: 64, strides: 1x1, activation: ELU
  • Convolution: 3x3, filter: 64, strides: 1x1, activation: ELU
  • Drop out (0.5)
  • Fully connected: neurons: 100, activation: ELU
  • Fully connected: neurons: 50, activation: ELU
  • Fully connected: neurons: 10, activation: ELU
  • Fully connected: neurons: 1 (output)
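The architecture above maps directly onto Keras layers. A sketch, assuming TensorFlow 2.x and an input size of 66×200×3 as in the NVIDIA paper (the repository's actual input shape may differ):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense

def build_model(input_shape=(66, 200, 3)):
    """NVIDIA-style end-to-end steering model, following the layer list above."""
    model = Sequential([
        # Image normalization to [-0.5, 0.5]
        Lambda(lambda x: x / 255.0 - 0.5, input_shape=input_shape),
        Conv2D(24, (5, 5), strides=(2, 2), activation="elu"),
        Conv2D(36, (5, 5), strides=(2, 2), activation="elu"),
        Conv2D(48, (5, 5), strides=(2, 2), activation="elu"),
        Conv2D(64, (3, 3), activation="elu"),
        Conv2D(64, (3, 3), activation="elu"),
        Dropout(0.5),
        Flatten(),
        Dense(100, activation="elu"),
        Dense(50, activation="elu"),
        Dense(10, activation="elu"),
        Dense(1),  # steering angle output
    ])
    return model

model = build_model()
```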

DATA AUGMENTATION
During training, augmentation is applied to the images at random. The augmentation methods include:

  • random_translation Randomly translates the image and computes the new steering angle corresponding to the horizontal shift of the image.

  • random_flip Randomly flips the image and changes the sign of the steering value accordingly.

  • random_shadow Creates a random region of darkness, which imitates real-life shadows. This helps the model generalise better.

  • random_brightness Randomly adjusts the brightness of the image, imitating varying light conditions such as sunlight and street lamps.
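The flip and translation steps can be sketched as follows. This is an illustrative NumPy-only version: the wrap-around shift via `np.roll` and the 0.002 steering correction per pixel are simplifying assumptions, not necessarily what the repository implements.

```python
import numpy as np

def random_flip(image, steering, rng=np.random):
    """Flip the image horizontally half of the time and negate the steering sign."""
    if rng.rand() < 0.5:
        image = image[:, ::-1, :]  # mirror along the horizontal axis
        steering = -steering
    return image, steering

def random_translation(image, steering, range_x=100, angle_per_px=0.002, rng=np.random):
    """Shift the image horizontally and correct the steering angle for the shift."""
    trans_x = int(rng.uniform(-range_x, range_x))
    image = np.roll(image, trans_x, axis=1)  # simple wrap-around shift for illustration
    steering = steering + trans_x * angle_per_px
    return image, steering
```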

TRAINING RESULT
Training starts with the following parameters:

  • Number of samples: 12000
  • EPOCHS: 50
  • Step per epoch: 10000
  • Batch size: 40
  • Learning rate: 1.0e-4
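With those parameters, the Keras training call might look like the sketch below. The stand-in model and random arrays are placeholders, and the epoch/step counts are reduced here so the sketch runs quickly; the real run would use the NVIDIA network and the collected dataset with the parameters listed above.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.optimizers import Adam

# Stand-in model; in the project this would be the NVIDIA network described above.
model = Sequential([Flatten(input_shape=(66, 200, 3)), Dense(1)])

# MSE regression loss with the learning rate from the list above.
model.compile(loss="mse", optimizer=Adam(learning_rate=1.0e-4))

x = np.random.rand(8, 66, 200, 3).astype("float32")  # placeholder camera frames
y = np.random.rand(8, 1).astype("float32")           # placeholder steering angles
history = model.fit(x, y, batch_size=4, epochs=2, verbose=0)
```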

FILES INCLUDED - E2E

  • module_e2e.py Contains the functions used for the demonstration
  • demonstration_e2e.py Run this file to see a demonstration of the E2E approach

CREDITS

  • End-to-end Deep Learning for Self-Driving Cars in Udacity: naokishibuya
  • End-to-End Deep Learning for Self-Driving Cars: NVIDIA

Model Predictive Control


OVERVIEW
The MPC controller controls the throttle and steering of the vehicle based on a linearised model of the vehicle. The basic idea is that a reference path is provided and the goal of the controller is to find a path with the smallest cost difference compared with the reference path. This is done by generating the possible paths that result from applying throttle and steering to the linearised model, predicting the next positions of the vehicle over a certain number of time steps. This part is based on the work of AtsushiSakai.

The state vector of the vehicle model includes the x-y position, velocity, and yaw angle.

The input vector of the vehicle model includes acceleration and steering.

State matrix:

Input matrix:
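The matrices themselves were embedded as images in the original README. Assuming the linearised discrete-time kinematic bicycle model used in AtsushiSakai's PythonRobotics MPC (state $z = [x, y, v, \varphi]^T$, input $u = [a, \delta]^T$, wheelbase $L$, time step $dt$, linearisation point $\bar v, \bar\varphi, \bar\delta$), the model has the form $z_{k+1} = A z_k + B u_k + C$ with:

$$
A = \begin{bmatrix}
1 & 0 & \cos\bar\varphi\,dt & -\bar v \sin\bar\varphi\,dt \\
0 & 1 & \sin\bar\varphi\,dt & \bar v \cos\bar\varphi\,dt \\
0 & 0 & 1 & 0 \\
0 & 0 & \frac{\tan\bar\delta}{L}\,dt & 1
\end{bmatrix},
\qquad
B = \begin{bmatrix}
0 & 0 \\
0 & 0 \\
dt & 0 \\
0 & \frac{\bar v}{L\cos^2\bar\delta}\,dt
\end{bmatrix},
\qquad
C = \begin{bmatrix}
\bar v \sin\bar\varphi\,\bar\varphi\,dt \\
-\bar v \cos\bar\varphi\,\bar\varphi\,dt \\
0 \\
-\frac{\bar v \bar\delta}{L\cos^2\bar\delta}\,dt
\end{bmatrix}
$$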

RESULTS

CREDITS

GNSS for controller switching


LiDAR for obstacle detection and stop

