Check the basic implementations on CIFAR-10 in the Deep Learning Lab project here
Goals:
- Implement some basic convolutional networks
- Implement different data augmentations (a minimal Keras sketch of these first two points follows this list)
- Implement the VGG model
- Wide ResNet (1 point)
- DenseNet (1 point)
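As a starting point for the first two goals, here is a minimal Keras sketch of a small CNN trained on CIFAR-10 with simple data augmentation; the layer sizes and augmentation parameters are illustrative choices, not the lab's reference solution.

```python
# Minimal CIFAR-10 CNN with simple data augmentation (illustrative hyperparameters).
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

(x_train, y_train), (x_test, y_test) = datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', padding='same', input_shape=(32, 32, 3)),
    layers.Conv2D(32, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu', padding='same'),
    layers.Conv2D(64, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Data augmentation: random shifts and horizontal flips.
datagen = ImageDataGenerator(width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)
model.fit(datagen.flow(x_train, y_train, batch_size=64),
          epochs=30, validation_data=(x_test, y_test))
```

Swapping the convolutional stack for VGG-style, Wide ResNet or DenseNet blocks reuses the same data pipeline for the remaining goals.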
Images from "Labeled Faces in the Wild" dataset (LFW) in realistic scenarios, poses and gestures. Faces are automatically detected and cropped to 100x100 pixels RGB.
Training set: 10585 images
Test set: 2648 images
Python Notebook: here
Python code: here
Goals:
- Implement a model with >98% accuracy on the test set
- Implement a model with >95% accuracy using fewer than 100K parameters (get some inspiration from the Paper; a hedged sketch follows this list)
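A hedged sketch of a parameter-efficient model for the second goal, built from separable convolutions and global average pooling; the layer widths are arbitrary and `num_classes` is a placeholder for the number of classes in this split, so check the budget with `model.summary()`.

```python
# Parameter-efficient face classifier sketch (layer widths are assumptions;
# verify the <100K budget with model.summary() for your num_classes).
from tensorflow.keras import layers, models

num_classes = 100  # hypothetical; set to the number of classes in the split

model = models.Sequential([
    layers.SeparableConv2D(32, 3, activation='relu', padding='same',
                           input_shape=(100, 100, 3)),
    layers.MaxPooling2D(),
    layers.SeparableConv2D(64, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(),
    layers.SeparableConv2D(128, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(),
    layers.SeparableConv2D(128, 3, activation='relu', padding='same'),
    layers.GlobalAveragePooling2D(),
    layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()  # check that the total parameter count stays under 100K
```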
Images of 20 different models of cars.
Training set: 791 images
Test set: 784 images
- Version 1: Two different CNNs. Python code: here
- Version 2: The same CNN (potentially a pre-trained model). Python code: here
Goals:
- Understand the above Keras implementations:
- Name the layers
- Build several models
- Understand tensor sizes
- Connect models with operations (outer product)
- Create an image generator that returns a list of tensors
- Create a data flow with multiple inputs for the model (a sketch of such a generator follows this list)
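For the last two goals, a generator can wrap a standard Keras directory iterator and yield each batch as a list of two tensors, matching a model with two image inputs. The directory path, image size and the choice to feed the same image to both branches are assumptions, not taken from the provided code.

```python
# Sketch: wrap a single-image iterator so it feeds a model with two image inputs.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def two_input_flow(directory, target_size=(224, 224), batch_size=32):
    """Yield ([batch, batch], labels): the same image batch on both inputs."""
    base = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
        directory, target_size=target_size, batch_size=batch_size,
        class_mode='categorical')
    while True:
        x, y = next(base)
        yield [x, x], y

# Hypothetical usage with a two-input model such as the bilinear sketch below:
# model.fit(two_input_flow('cars/train'), steps_per_epoch=791 // 32, epochs=10)
```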
Suggestion:
- Load a pre-trained VGG16, ResNet, ... model
- Connect this pre-trained model to itself to form a bilinear model (see the sketch after this list)
- Train with the pre-trained weights frozen first, then unfreeze them after some epochs and use a very low learning rate
- Accuracy >65% is expected
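Putting the suggestions together, here is a hedged sketch of a bilinear model built from a single shared pre-trained VGG16: the two branches are combined with an outer product summed over spatial positions, the backbone is frozen at first and later unfrozen with a very low learning rate. The 224x224 input size and the signed-sqrt/L2 normalisation step are common choices, not requirements from the lab.

```python
# Bilinear CNN sketch with a shared pre-trained VGG16 backbone (assumed choices).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

num_classes = 20  # the car dataset above has 20 models

def bilinear_pooling(feats):
    """Outer product of two feature maps summed over spatial positions,
    followed by signed square root and L2 normalisation."""
    fa, fb = feats                                   # each (batch, H, W, C)
    phi = tf.einsum('bijm,bijn->bmn', fa, fb)        # (batch, C, C)
    phi = tf.reshape(phi, (-1, phi.shape[1] * phi.shape[2]))
    phi = tf.sign(phi) * tf.sqrt(tf.abs(phi) + 1e-12)
    return tf.math.l2_normalize(phi, axis=-1)

inp_a = layers.Input(shape=(224, 224, 3))
inp_b = layers.Input(shape=(224, 224, 3))

backbone = VGG16(weights='imagenet', include_top=False)
backbone.trainable = False                           # freeze the weights first
feat_a = backbone(inp_a)
feat_b = backbone(inp_b)

bilinear = layers.Lambda(bilinear_pooling)([feat_a, feat_b])
out = layers.Dense(num_classes, activation='softmax')(bilinear)
model = models.Model([inp_a, inp_b], out)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='categorical_crossentropy', metrics=['accuracy'])
# ... train a few epochs with the frozen backbone, then:
backbone.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # very low learning rate
              loss='categorical_crossentropy', metrics=['accuracy'])
# ... continue training (e.g. with the two-input generator sketched above)
```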
Code extracted and adapted from GitHub
Goals:
- Understand the above Keras implementations:
- How to load the Inception network
- How to merge the encoder output with the Inception result (a hedged sketch follows below)
Python code: here
Need help? Read
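The adapted code is not reproduced here, so the following is only a hedged sketch of the two points above: a pre-trained Inception network from `keras.applications` is loaded as a global feature extractor, and its pooled embedding is tiled over the encoder's spatial grid and concatenated with it. The encoder layout, input size and fusion strategy are assumptions.

```python
# Sketch: load a pre-trained Inception model and fuse its embedding with an encoder output.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# Pre-trained Inception used as a frozen global feature extractor (pooled embedding).
inception = InceptionV3(weights='imagenet', include_top=False, pooling='avg')
inception.trainable = False

image_in = layers.Input(shape=(256, 256, 3))

# Toy encoder: three stride-2 convolutions down to a 32x32 feature map.
x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(image_in)
x = layers.Conv2D(128, 3, strides=2, padding='same', activation='relu')(x)
encoded = layers.Conv2D(256, 3, strides=2, padding='same', activation='relu')(x)

# Inception expects 299x299 inputs; its own preprocess_input is assumed upstream.
resized = layers.Lambda(lambda t: tf.image.resize(t, (299, 299)))(image_in)
embedding = inception(resized)                       # (batch, 2048)

# Fusion: tile the embedding over the 32x32 grid and concatenate with the encoder.
emb = layers.RepeatVector(32 * 32)(embedding)
emb = layers.Reshape((32, 32, 2048))(emb)
fused = layers.Concatenate(axis=-1)([encoded, emb])
fused = layers.Conv2D(256, 1, activation='relu')(fused)  # mix the two sources

model = models.Model(image_in, fused)
model.summary()
```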
Retina image segmentation
An example of an encoder-decoder for segmentation:
Python code: here
Exercise: implement a U-Net (a minimal sketch follows).
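A minimal U-Net sketch for the exercise, assuming 128x128 single-channel retina patches and a binary vessel mask; the depth and filter counts are illustrative.

```python
# Minimal U-Net sketch (filter counts, depth and input size are illustrative).
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

inp = layers.Input(shape=(128, 128, 1))

# Encoder: two downsampling stages whose outputs become skip connections.
c1 = conv_block(inp, 32)
p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D()(c2)

# Bottleneck.
b = conv_block(p2, 128)

# Decoder: upsample and concatenate the matching encoder features.
u2 = layers.Conv2DTranspose(64, 2, strides=2, padding='same')(b)
c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding='same')(c3)
c4 = conv_block(layers.Concatenate()([u1, c1]), 32)

out = layers.Conv2D(1, 1, activation='sigmoid')(c4)  # binary vessel mask
unet = models.Model(inp, out)
unet.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```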
You are welcome!