A great and comprehensive review of meta-learning algorithms
https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html
#meta_learning #deep_learning #machine_learning
Lil'Log
Meta Learning
Self-training with Noisy Student improves ImageNet classification
New state-of-the-art semi-supervised (supervised + unsupervised) result on ImageNet
https://arxiv.org/abs/1911.04252
#machine_learning #neural_networks #meta_learning
arXiv.org
Self-training with Noisy Student improves ImageNet classification
We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which...
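The abstract above describes a teacher–student self-training loop: train a teacher on labeled data, pseudo-label the unlabeled data, train a noised student on both, then promote the student to teacher and repeat. Here is a minimal toy sketch of that loop in pure Python — the "model" is just a 1-D decision threshold and the "noise" is Gaussian input jitter standing in for the paper's RandAugment, dropout, and stochastic depth; none of this is the paper's actual implementation.

```python
import random

def fit_threshold(xs, ys):
    """Toy 1-D classifier: threshold halfway between the two class means."""
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    return (m0 + m1) / 2

def predict(threshold, xs):
    return [1 if x > threshold else 0 for x in xs]

def noisy_student(labeled_x, labeled_y, unlabeled_x, rounds=3, noise=0.1, seed=0):
    rng = random.Random(seed)
    # 1. Train the initial teacher on labeled data only.
    teacher = fit_threshold(labeled_x, labeled_y)
    for _ in range(rounds):
        # 2. Use the teacher to pseudo-label the unlabeled data.
        pseudo_y = predict(teacher, unlabeled_x)
        # 3. Train a *noised* student on labeled + pseudo-labeled data
        #    (input jitter plays the role of the paper's data/model noise).
        xs = labeled_x + [x + rng.gauss(0, noise) for x in unlabeled_x]
        ys = labeled_y + pseudo_y
        student = fit_threshold(xs, ys)
        # 4. The student becomes the teacher for the next round.
        teacher = student
    return teacher
```

For example, with labeled points near 0 (class 0) and near 2 (class 1) plus a pool of unlabeled points in between, `noisy_student` converges to a threshold near 1.0. The key design point the paper stresses is the asymmetry: the teacher is run clean when producing pseudo-labels, while the student is trained under noise.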
Meta Reinforcement Learning: An Introduction
Intro: a good meta-learning model is expected to generalize to new tasks or new environments never encountered during training. The adaptation process, essentially a mini learning session, happens at test time with limited exposure to the new configuration. Even without any explicit fine-tuning (no gradient backpropagation on trainable variables), the meta-learning model autonomously adjusts its internal hidden states to learn. Training RL algorithms can be notoriously difficult. If the meta-learning agent became smart enough that the distribution of solvable unseen tasks grew extremely broad, we would be on track towards general-purpose methods: essentially building a "brain" that solves all kinds of RL problems without much human interference or manual feature engineering. Sounds amazing, right?
Blog: https://lilianweng.github.io/lil-log/2019/06/23/meta-reinforcement-learning.html
#reinforcement_learning #meta_learning #research_paper
Lil'Log
Meta Reinforcement Learning
Meta-RL is meta-learning on reinforcement learning tasks. After being trained over a distribution of tasks, the agent is able to solve a new task by effectively developing a new RL algorithm within its internal activity dynamics. This post starts with the origin of meta-RL and…
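The "adapts via internal activity dynamics, no gradient updates at test time" idea above can be illustrated with a hand-rolled toy: an agent whose parameters are frozen and whose only test-time learning is rewriting a hidden state (per-arm reward estimates) as rewards arrive in a new multi-armed bandit. This is a hedged sketch of the concept only — in real meta-RL (e.g. RL^2), the update rule itself is an RNN learned across many tasks, not the hand-coded running average used here.

```python
import random

class BanditAgent:
    """Frozen-parameter agent that adapts to a new bandit only via hidden state."""

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.n_arms = n_arms
        self.epsilon = epsilon          # fixed "parameter": never updated at test time
        self.rng = random.Random(seed)
        self.reset_state()

    def reset_state(self):
        # Hidden state: per-arm reward estimates and pull counts.
        self.values = [0.0] * self.n_arms
        self.counts = [0] * self.n_arms

    def act(self):
        # Epsilon-greedy over the current reward estimates.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_arms)
        return max(range(self.n_arms), key=lambda a: self.values[a])

    def observe(self, arm, reward):
        # "Learning" = rewriting hidden state; no gradient touches any parameter.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def run_episode(agent, arm_probs, steps=300):
    """Drop the agent into a fresh bandit task defined by arm_probs."""
    agent.reset_state()                 # new task: wipe state, keep parameters
    env_rng = random.Random(1)
    total = 0
    for _ in range(steps):
        arm = agent.act()
        reward = 1 if env_rng.random() < arm_probs[arm] else 0
        agent.observe(arm, reward)
        total += reward
    return total, agent.values
```

Running the same frozen agent on two opposite tasks, e.g. `run_episode(agent, [0.1, 0.9])` and then `run_episode(agent, [0.9, 0.1])`, it identifies the better arm in each case purely through its hidden-state updates, which is the behavioral signature the post's intro describes.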