In this project, we will use deep neural networks and convolutional neural networks to clone driving behavior. We will train, validate and test a model using Keras. The model will output a steering angle to an autonomous vehicle.
- Required Files
- Are all required files submitted?
- Includes model.py, drive.py, model.h5, a writeup report, and video.mp4.
- Quality of Code
- [Is the code functional?](./README.md#quality-of-code-functional-code)
- The model can be used to successfully operate the simulation.
- Is the code usable and readable?
- Python generator used if needed
- Code is clearly organized
- Code is commented where needed
- Model Architecture and Training Strategy
- Has an appropriate model architecture been employed for the task?
- Neural network uses convolution layers
- Uses appropriate filter sizes
- Nonlinearity introduced using activation layers
- Data is normalized
- Has an attempt been made to reduce overfitting of the model?
- Split Train/validation/test data
- Using Dropout/other methods to reduce overfitting
- Have the model parameters been tuned appropriately?
- Learning rate parameters are chosen with explanation, or an Adam optimizer is used.
- Is the training data chosen appropriately?
- Training data chosen to keep the car on the track
- Architecture and Training Documentation
- Is the solution design documented?
- Document the approach for deriving and designing a solution (model architecture fit)
- Is the model architecture documented?
- Document the model architecture type used, its layers and sizes, using visualizations
- Is the creation of the training dataset and training process documented?
- Document how the model was trained and the characteristics of its dataset, with samples
- Simulation
- Is the car able to navigate correctly on test data?
- In the simulator, no tire may leave the drivable portion of the track surface
- Track Two
- Can the model keep the car on the road on track two as well?
- Data: Use the simulator to collect data of good driving behavior.
- Model: Design, train and validate a model that predicts a steering angle from image data.
- Test: Use the model to drive the vehicle autonomously around the first track in the simulator.
- Document: Summarize the results in a written report.
NOTE: The vehicle should remain on the road for an entire loop around the track.
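During the data-collection step, the Udacity simulator records each sample to a `driving_log.csv` (center, left, and right image paths plus steering, throttle, brake, and speed). A minimal parsing sketch using an inline sample; the file names and values here are made up for illustration:

```python
import csv
import io

# Hypothetical rows in the simulator's driving_log.csv format:
# center,left,right,steering,throttle,brake,speed
sample = """IMG/center_001.jpg,IMG/left_001.jpg,IMG/right_001.jpg,-0.05,0.9,0.0,30.1
IMG/center_002.jpg,IMG/left_002.jpg,IMG/right_002.jpg,0.12,0.9,0.0,30.0
"""

samples = []
for row in csv.reader(io.StringIO(sample)):
    center, left, right = row[0], row[1], row[2]
    steering = float(row[3])  # steering angle is the training label
    samples.append((center, left, right, steering))

print(len(samples), samples[0][3])
```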
*(Demo recordings: Track 1 | Track 2)*
- model.py (script used to create and train the model)
- drive.py (script to drive the car - this file is unchanged)
- model.h5 (a trained Keras model)
- README.md (a report writeup markdown file)
- video.mp4 (a video recording of vehicle driving autonomously around the track for at least one full lap)
The model can be used to successfully operate the simulation.
- video.mp4 (a video recording of vehicle driving autonomously around the track for at least one full lap)
- track1 (a gif of video indicating the same)
The network architecture, modified from the NVIDIA CNN, consists of 9 layers:
- including a normalization layer
- 5 convolutional layers, and
- 3 fully connected layers.
*(Architecture diagrams: NVIDIA CNN | CNN used)*
- Neural network uses convolution layers:
- Feature extraction using convolution layers
- Uses appropriate filter sizes:
- the first three convolutional layers use a 2×2 stride and a 5×5 kernel
- the next two convolutional layers are non-strided with a 3×3 kernel
- Nonlinearity using layers:
- Nine layers: normalization, 5 convolutional, and 3 fully connected.
- Data is normalized:
- The first layer is a Lambda layer, which is a convenient way to parallelize image normalization.
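The kernel and stride choices above can be sanity-checked with simple output-size arithmetic. A stdlib sketch, assuming "valid" (unpadded) convolutions and the 66×200 input from the NVIDIA paper (the simulator frames may be cropped differently):

```python
def conv_out(size, kernel, stride):
    """Output size of a 'valid' (no padding) convolution."""
    return (size - kernel) // stride + 1

h, w = 66, 200  # NVIDIA paper input size (assumption)

# three 5x5 convolutions with a 2x2 stride
for _ in range(3):
    h, w = conv_out(h, 5, 2), conv_out(w, 5, 2)

# two non-strided 3x3 convolutions
for _ in range(2):
    h, w = conv_out(h, 3, 1), conv_out(w, 3, 1)

print(h, w)  # final feature-map size before flattening
```

With this input size the stack ends at a 1×18 feature map, which is then flattened into the fully connected layers.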
*(Visualization: convolution features at layers 1, 2, and 3)*
- Split Train/validation/test data
- Train/validation/test splits have been used with test_size=0.2
- Using Dropout/other methods to reduce overfitting
- Dropout layers used @ 0.5
- The number of epochs was reduced to 3 to reduce overfitting.
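The 80/20 split can be sketched with a seeded shuffle (the actual code may use sklearn's `train_test_split` with `test_size=0.2`; this stdlib version shows the same idea):

```python
import random

def split_samples(samples, test_size=0.2, seed=42):
    """Shuffle samples and split them into train/validation sets."""
    rng = random.Random(seed)      # seeded for reproducibility
    samples = samples[:]           # copy so the caller's list is untouched
    rng.shuffle(samples)
    n_val = int(len(samples) * test_size)
    return samples[n_val:], samples[:n_val]

train, val = split_samples(list(range(100)))
print(len(train), len(val))  # 80 train / 20 validation
```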
*(Plot: mean squared error loss)*
- The Adam optimizer was used because:
- its default configuration parameters (tuned appropriately) did well
- it provides a per-parameter learning rate
- it combines the best properties of the AdaGrad and RMSProp algorithms
- it is an optimization algorithm that can handle sparse gradients on noisy problems
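The "per-parameter learning rate" point comes from Adam's moment estimates: each parameter's step is scaled by its own gradient history. A single-parameter sketch of the update rule, using the default hyperparameters from the Adam paper:

```python
import math

def adam_step(param, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter."""
    m = b1 * m + (1 - b1) * grad       # first moment (momentum-like)
    v = b2 * v + (1 - b2) * grad ** 2  # second moment (RMSProp-like)
    m_hat = m / (1 - b1 ** t)          # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=2.0, m=m, v=v, t=1)
print(p)  # one step from 1.0, roughly lr-sized regardless of gradient scale
```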
Driving data collected in both directions (forward and backward) on track one was sufficient for the model to keep the car on the track.
*(Sample images: left | center | right camera views)*
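A common way to use the left and right camera images (an assumption here, not necessarily what this model.py does) is to add a small steering correction so the side views teach the car to recenter; the 0.2 offset is an illustrative value:

```python
def camera_angles(steering, correction=0.2):
    """Steering labels for (center, left, right) camera images.
    The left camera sees the road as if the car were too far left,
    so its label steers right (+correction), and vice versa."""
    return steering, steering + correction, steering - correction

center, left, right = camera_angles(0.1)
print(center, left, right)
```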
*(Augmented samples: horizontal flip and BGR2RGB)*
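The horizontal flip and BGR→RGB conversion can be sketched with plain NumPy indexing (`cv2.flip` and `cv2.cvtColor` would do the same); note that a flipped image needs a negated steering label:

```python
import numpy as np

def augment(bgr_image, steering):
    """Horizontal flip + BGR->RGB via numpy slicing."""
    rgb = bgr_image[:, :, ::-1]  # reverse channel order: BGR -> RGB
    flipped = rgb[:, ::-1, :]    # mirror left-right
    return flipped, -steering    # mirrored road needs opposite steering

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 0, 0)  # blue pixel in BGR, at top-left
out, angle = augment(img, 0.25)
print(out[0, 1], angle)  # blue pixel mirrored to top-right, channels now RGB
```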
Architecture and Training Documentation: the creation of the training dataset and the training process are documented.
- The car is able to navigate autonomously in the simulator using the created model
- No tire left the drivable portion of the track surface (track one)
- The car did not pop up onto ledges or roll over any surfaces that would otherwise be considered unsafe (track one)
- The car is able to navigate autonomously most of the track two in the simulator using the created model
- The model seems to struggle on sharp turns on track two
- The car did better after fiddling with the speed and throttle parameters
- More data augmentation would help get better results on track two.
- SIMULATOR Udacity/self-driving-car-sim
- ENVIRONMENT Udacity/CarND-Term1-Starter-Kit
With every project exercise in deep/machine learning it becomes obvious, and is very much reiterated, that it really is all about the data.
Changing the model rarely has as much impact as changing the fundamentals of the training data.
IT'S ALL ABOUT DATA
JUNK IN -> JUNK OUT