Humans adapt to new environments and information from only a few samples, and future deep-learning models should be able to do the same. MAML and Reptile are gradient-based meta-learning algorithms that learn a parameter initialization that is highly sensitive to changes in tasks, so that a few gradient steps yield large improvements on a new task. We examine how these sensitive parameters evolve over the course of meta-training and how the resulting initialization performs on unseen tasks.
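The core idea above can be sketched in a few lines. The following is a minimal Reptile-style meta-training loop on toy linear regression tasks, not the repository's implementation: the model, learning rates, and task distribution here are assumptions chosen for brevity. The inner loop adapts to one task with plain SGD; the outer loop nudges the shared initialization toward the adapted weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Hypothetical linear task y = slope * x + intercept (ranges assumed)
    return rng.uniform(0.1, 5.0), rng.uniform(0.0, 3.0)

def loss_grad(params, x, y):
    # Gradient of MSE for the prediction y_hat = w * x + b
    w, b = params
    err = w * x + b - y
    return np.array([np.mean(2 * err * x), np.mean(2 * err)])

def inner_adapt(params, x, y, lr=0.02, steps=5):
    # Inner loop: a few SGD steps on a single task
    p = params.copy()
    for _ in range(steps):
        p -= lr * loss_grad(p, x, y)
    return p

# Outer loop (Reptile): move the initialization toward task-adapted params
theta = np.zeros(2)
for _ in range(2000):
    slope, intercept = sample_task()
    x = rng.uniform(-5.0, 5.0, size=10)
    y = slope * x + intercept
    adapted = inner_adapt(theta, x, y)
    theta += 0.05 * (adapted - theta)  # outer step size 0.05 (assumed)
```

After meta-training, `theta` is an initialization from which a handful of inner SGD steps fit a freshly sampled task much faster than starting from scratch; full MAML differs in that it backpropagates through the inner-loop updates rather than taking this parameter-space average.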
We add an additional transformation to the original formulation of the regression task from the Model-Agnostic Meta-Learning (MAML) paper.
Each task involves regressing a periodic or polynomial function. We sample the amplitude (or, for polynomial tasks, the slope) from [0.1, 5], the phase shift from [0, 2π], and the vertical shift from [0, 3]. During training and testing we uniformly sample k input points from [-5, 5].
python meta.py --mode=maml --n_shot=10 --train_func=sin --iterations=20000 --outer_step_size=0.05 --inner_step_size=0.02 --inner_grad_steps=1 --eval_grad_steps=10 --eval_iters=5 --logdir=logs/maml --seed=1
python meta.py --mode=maml --n_shot=5 --train_func=cos --logdir=logs/maml_5
python meta.py --mode=reptile --n_shot=10 --train_func=sin --logdir=logs/reptile_20
python meta.py --mode=reptile --n_shot=20 --train_func=sin --logdir=logs/reptile_linear_20
tensorboard --logdir=logs/maml