CACTUs: an unsupervised meta-learning algorithm that learns to learn from tasks automatically constructed out of unlabeled data. Leads to significantly more effective downstream learning & enables few-shot learning *without* labeled meta-learning datasets
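A minimal sketch of the task-construction idea, assuming k-means pseudo-labels over pre-trained embeddings (all names and parameters below are illustrative, not the paper's code):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_tasks(embeddings, n_tasks=100, n_way=5, k_shot=1, n_query=5, n_clusters=50):
    """Construct N-way K-shot tasks from unlabeled data by clustering
    the embeddings and treating cluster ids as pseudo-labels."""
    pseudo = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    # keep only clusters large enough to supply both support and query sets
    valid = [c for c in range(n_clusters) if (pseudo == c).sum() >= k_shot + n_query]
    tasks = []
    for _ in range(n_tasks):
        classes = np.random.choice(valid, size=n_way, replace=False)
        support, query = [], []
        for label, c in enumerate(classes):
            idx = np.random.permutation(np.where(pseudo == c)[0])
            support += [(i, label) for i in idx[:k_shot]]
            query += [(i, label) for i in idx[k_shot:k_shot + n_query]]
        tasks.append((support, query))
    return tasks
```

These pseudo-labeled tasks can then be fed to a standard meta-learner (e.g. MAML or ProtoNets) exactly as if they came from a labeled dataset.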
ArXiv: https://arxiv.org/abs/1810.02334
#cactus #unsupervised
Momentum Contrast for Unsupervised Visual Representation Learning
The core idea fits in one picture: a new queue mechanism stores encoded features from recent batches.
Achieves SOTA on different datasets and architectures.
One of the most popular unsupervised learning tasks is instance discrimination, which is built on a contrastive loss: features from different augmentations of the same image should be close to each other and far from features of other images.
This can be done end-to-end, but it requires a huge batch size, because quality depends on the variety of negative examples inside the batch.
Previous works relied on a memory bank, where features from previous batches are stored and used as negative (dissimilar) examples.
The idea of the paper is to replace the memory bank with a queue of recent features and to use two encoders: one is trained on the current batch, while the second, which encodes the keys in the queue, is updated as a momentum (exponential moving average) copy of the first.
So the queue acts like a crazy big batch. The result: the best scores on different downstream tasks, even better than supervised pretraining.
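A rough sketch of the two-encoder mechanism in PyTorch (shapes, names, and hyperparameters are illustrative, not the official implementation):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # the key encoder is an exponential moving average of the query
    # encoder; it is never trained by gradients
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def moco_loss(encoder_q, encoder_k, x_q, x_k, queue, t=0.07):
    """x_q, x_k: two augmentations of the same batch; queue: (K, C) past keys."""
    q = F.normalize(encoder_q(x_q), dim=1)            # queries: (N, C)
    with torch.no_grad():
        k = F.normalize(encoder_k(x_k), dim=1)        # keys:    (N, C)
    l_pos = (q * k).sum(dim=1, keepdim=True)          # positive logits: (N, 1)
    l_neg = q @ queue.t()                             # negatives against the queue: (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / t
    labels = torch.zeros(len(q), dtype=torch.long)    # positive is always at index 0
    return F.cross_entropy(logits, labels), k         # enqueue k, dequeue the oldest keys
```

After each step the fresh keys k are pushed into the queue and the oldest ones are dropped, so the pool of negatives stays large without a giant batch.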
via @arsenyinfo
paper: https://arxiv.org/pdf/1911.05722.pdf
#cv #unsupervised
Unsupervised Translation of Programming Languages
Given Python, C++, or Java source code from GitHub, the model automatically learns to translate between the three languages in a fully unsupervised way.
Again: No supervision.
The correctness is then checked by compiling and running unit tests.
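A hedged sketch of that evaluation loop, assuming a translation into Java that is compared on reference test inputs (the harness, file names, and I/O convention below are illustrative, not the paper's actual setup):

```python
import os, subprocess, tempfile

def passes_unit_tests(translated_java_src, test_cases):
    """Compile a translated Java program (assumed to define a public
    class Solution) and check that it reproduces the reference
    (stdin, stdout) pairs taken from the source-language tests."""
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "Solution.java")
        with open(path, "w") as f:
            f.write(translated_java_src)
        # a translation that does not even compile counts as wrong
        if subprocess.run(["javac", path], cwd=d).returncode != 0:
            return False
        for stdin_data, expected_stdout in test_cases:
            run = subprocess.run(["java", "-cp", d, "Solution"],
                                 input=stdin_data, capture_output=True, text=True)
            if run.returncode != 0 or run.stdout.strip() != expected_stdout.strip():
                return False
    return True
```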
ArXiv: https://arxiv.org/pdf/2006.03511.pdf
#FAIR #FacebookAI #cs #unsupervised