Video on self-supervised learning of speech representations
By Mirco Ravanelli: https://youtu.be/1zjUmY8L5TU
#deeplearning #selfsupervisedlearning #unsupervisedlearning
Toward Unsupervised Learning of Speech Representations
In this presentation, I first introduce unsupervised/self-supervised learning. Then, I describe some of my recent works that aim to learn general and robust self-supervised speech representations.
Unsupervised Learning with Graph Neural Networks
By Thomas Kipf.
Slides : http://helper.ipam.ucla.edu/publications/glws4/glws4_15546.pdf
Recording: http://www.ipam.ucla.edu/programs/workshops/workshop-iv-deep-geometric-learning-of-big-data-and-applications/?tab=schedule
#deeplearning #neuralnetworks #unsupervisedlearning #technology
Workshop IV: Deep Geometric Learning of Big Data and Applications - IPAM
COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration
Watters et al.: https://arxiv.org/abs/1905.09275
#MachineLearning #UnsupervisedLearning #ArtificialIntelligence
Best Paper Award, ICML 2019
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
Locatello et al.: https://arxiv.org/pdf/1811.12359.pdf
#deeplearning #disentangledrepresentations #unsupervisedlearning
Neurobiologists train artificial neural networks to map the brain
http://bit.do/eVNef
#cellularmorphologyneuralnetworks #unsupervisedlearning
#analyzinglargedatasets #CNN #AI
The human brain consists of about 86 billion nerve cells and about as many glial cells. In addition, there are about 100 trillion connections between the nerve cells alone. While mapping all the connections of a human brain remains out of reach, scientists have started to address the problem on a smaller scale. Through the development of serial block-face scanning electron microscopy, all cells and connections of a particular brain area can now be automatically surveyed and displayed in a three-dimensional image.
“It can take several months to survey a 0.3 mm³ piece of brain under an electron microscope. Depending on the size of the brain, this seems like a lot of time for a tiny piece. But even this contains thousands of cells. Such a data set would also require almost 100 terabytes of storage space. However, it is not the collection and storage but rather the data analysis that is the difficult part."
CS294-158 Deep Unsupervised Learning Spring 2019
Instructors: Pieter Abbeel, Peter Chen, Jonathan Ho, Aravind Srinivas - https://sites.google.com/view/berkeley-cs294-158-sp19/home
#unsupervisedlearning #machinelearning #deeplearning
CS294-158-SP19 Deep Unsupervised Learning Spring 2019
About: This course will cover two areas of deep learning in which labeled data is not required: Deep Generative Models and Self-supervised Learning. Recent advances in generative models have made it possible to realistically model high-dimensional raw data…
What Does BERT Look At? An Analysis of BERT's Attention
Clark et al.: https://arxiv.org/abs/1906.04341
Code: https://github.com/clarkkev/attention-analysis
#bert #naturallanguage #unsupervisedlearning
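Clark et al. quantify, per attention head, how much attention flows to particular tokens (e.g. [SEP], punctuation). As a minimal sketch of that statistic (not the authors' code; the toy attention matrix below is invented for illustration), the measurement on a single head reduces to averaging one column of the attention matrix:

```python
import numpy as np

def avg_attention_to_token(attn, token_idx):
    """Mean attention weight that all query positions assign to one token.

    attn: (seq_len, seq_len) array; each row is a query's attention
    distribution over the sequence and sums to 1.
    """
    return float(attn[:, token_idx].mean())

# Toy 4-token example: every query sends half its attention to
# position 3 (imagine it as the [SEP] token).
attn = np.array([
    [0.20, 0.20, 0.10, 0.5],
    [0.10, 0.30, 0.10, 0.5],
    [0.25, 0.15, 0.10, 0.5],
    [0.30, 0.10, 0.10, 0.5],
])
print(avg_attention_to_token(attn, 3))  # 0.5
```

In the paper this average is computed over a corpus and reported per head, which is how patterns like "this head mostly attends to [SEP]" are identified.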
Probing Neural Network Comprehension of Natural Language Arguments
"We are surprised to find that BERT's peak performance of 77% on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them."
Timothy Niven and Hung-Yu Kao: https://arxiv.org/abs/1907.07355
#naturallanguage #neuralnetwork #reasoning #unsupervisedlearning
"We are surprised to find that BERT's peak performance of 77% on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them."
Timothy Niven and Hung-Yu Kao: https://arxiv.org/abs/1907.07355
#naturallanguage #neuralnetwork #reasoning #unsupervisedlearning
Self-supervised Learning for Video Correspondence Flow
Zihang Lai and Weidi Xie: https://zlai0.github.io/CorrFlow/
#MachineLearning #SelfSupervisedLearning #UnsupervisedLearning