A curated list of resources dedicated to Natural Language Processing (NLP)
https://github.com/keon/awesome-nlp
#nlp #deep_learning
A must-read document for deep learning & machine learning practitioners
https://www.deeplearningbook.org/contents/guidelines.html
#deep_learning #machine_learning
A fascinating research paper at the intersection of Graph Neural Networks and Reinforcement Learning for tackling robotics challenges
https://openreview.net/pdf?id=S1sqHMZCb
#robotics #deep_learning #geometric_deep_learning
Noam Chomsky: Language, Cognition, and Deep Learning | Artificial Intelligence
Noam Chomsky is one of the greatest minds of our time and is one of the most cited scholars in history. He is a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. He has spent over 60 years at MIT and recently also joined the University of Arizona. This conversation is part of the Artificial Intelligence podcast.
https://www.youtube.com/watch?v=cMscNuSUy0I
#natural_language_processing #deep_learning
Dive into Deep Learning (D2L Book)
Dive into Deep Learning: an interactive deep learning book with code, math, and discussions, based on the NumPy interface
https://github.com/d2l-ai/d2l-en
#deep_learning
An Overview of Recent State of the Art Deep Learning Algorithms/Architectures
A lecture on the most recent research and developments in deep learning, and hopes for 2020. It is not intended as a list of SOTA benchmark results, but rather as a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.
https://www.youtube.com/watch?v=0VH1Lim8gL8&t=999s
#deep_learning #artificial_intelligence
Deep Reasoning Papers
A repository of recent papers on neural-symbolic reasoning, logical reasoning, visual reasoning, natural language reasoning, and other topics connecting deep learning and reasoning.
https://github.com/floodsung/Deep-Reasoning-Papers
#reasoning #deep_learning #artificial_intelligence
An overview of gradient descent optimization algorithms
Abstract: Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different variants of gradient descent, summarize challenges, introduce the most common optimization algorithms, review architectures in a parallel and distributed setting, and investigate additional strategies for optimizing gradient descent.
https://arxiv.org/pdf/1609.04747.pdf
#deep_learning #optimization
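As a quick companion to the survey, here is a minimal NumPy sketch of three of the update rules it covers (vanilla SGD, momentum, and Adam). The hyperparameter values are common illustrative defaults, not recommendations from the paper.

```python
import numpy as np

def sgd(theta, grad, lr=0.01):
    # Vanilla gradient descent: step against the gradient.
    return theta - lr * grad

def momentum(theta, grad, v, lr=0.01, gamma=0.9):
    # Momentum: accumulate an exponentially decaying sum of past gradients.
    v = gamma * v + lr * grad
    return theta - v, v

def adam(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: per-parameter step sizes from bias-corrected moment estimates.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # bias correction for the first moment
    v_hat = v / (1 - b2 ** t)  # bias correction for the second moment
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Example: minimize f(theta) = theta^2 with Adam.
theta, m, v = np.array([5.0]), 0.0, 0.0
for t in range(1, 501):
    theta, m, v = adam(theta, 2 * theta, m, v, t, lr=0.1)
print(theta)  # moved from 5.0 toward the minimum at 0
```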
A cool 3D representation of the structure of the BERT language model
Blog post: https://peltarion.com/knowledge-center/documentation/modeling-view/build-an-ai-model/blocks/bert-encoder
#nlp #deep_learning
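For orientation, the repeating unit such visualizations depict is the Transformer encoder block that BERT stacks. Below is a minimal PyTorch sketch of one block with BERT-base dimensions (768 hidden units, 12 heads, 3072 feed-forward units); embeddings, attention masking, and other details of the full model are omitted.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One BERT-style Transformer encoder block:
    self-attention -> add & norm -> feed-forward -> add & norm."""
    def __init__(self, d_model=768, n_heads=12, d_ff=3072, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)            # token-to-token self-attention
        x = self.norm1(x + self.drop(attn_out))     # residual + layer norm
        x = self.norm2(x + self.drop(self.ff(x)))   # feed-forward + residual
        return x

# BERT-base stacks 12 of these blocks on top of the embedding layer.
x = torch.randn(2, 16, 768)     # (batch, tokens, hidden)
print(EncoderLayer()(x).shape)  # torch.Size([2, 16, 768])
```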
Critique of Honda Prize for Dr. Hinton
Summary: Hinton has made significant contributions to artificial neural networks (NNs) and deep learning, but Honda credits him for fundamental inventions of others whom he did not cite. Science must not allow corporate PR to distort the academic record.
Sec. I: Modern backpropagation was created by Linnainmaa (1970), not by Rumelhart & Hinton & Williams (1985). Ivakhnenko's deep feedforward nets (since 1965) learned internal representations long before Hinton's shallower ones (1980s).
Sec. II: Hinton's unsupervised pre-training for deep NNs in the 2000s was conceptually a rehash of my unsupervised pre-training for deep NNs in 1991. And it was irrelevant for the deep learning revolution of the early 2010s, which was mostly based on supervised learning; twice my lab spearheaded the shift from unsupervised pre-training to pure supervised learning (1991-95 and 2006-11).
Sec. III: The first superior end-to-end neural speech recognition was based on two methods from my lab: LSTM (1990s-2005) and CTC (2006). Hinton et al. (2012) still used an old hybrid approach of the 1980s and 90s, and did not compare it to the revolutionary CTC-LSTM (which was soon on most smartphones).
Sec. IV: Our group at IDSIA had superior award-winning computer vision through deep learning (2011) before Hinton's (2012).
Sec. V: Hanson (1990) had a variant of "dropout" long before Hinton (2012).
Sec. VI: In the 2010s, most major AI-based services across the world (speech recognition, language translation, etc.) on billions of devices were mostly based on our deep learning techniques, not on Hinton's.
Repeatedly, Hinton omitted references to fundamental prior art (Sec. I & II & III & V). However, as Elvis Presley put it, "Truth is like the sun. You can shut it out for a time, but it ain't goin' away."
http://people.idsia.ch/~juergen/critique-honda-prize-hinton.html
#deep_learning
The Cost of Training NLP Models: A Concise Overview
Abstract: We review the cost of training large-scale language models, and the drivers of these costs. The intended audience includes engineers and scientists budgeting their model-training experiments, as well as non-practitioners trying to make sense of the economics of modern-day Natural Language Processing (NLP).
https://arxiv.org/abs/2004.08900
#nlp #deep_learning
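For a rough sense of the economics the paper analyzes, here is a hypothetical back-of-envelope estimate. It uses the common ~6 x parameters x tokens approximation for training FLOPs, which is a rule of thumb rather than a figure from this paper, and every hardware and price number below is made up for illustration.

```python
# Back-of-envelope training-cost estimate. All numbers are illustrative
# assumptions, not figures from the paper.

def training_cost(n_params, n_tokens, peak_flops, utilization,
                  usd_per_device_hour, n_devices):
    total_flops = 6 * n_params * n_tokens             # ~6 FLOPs per param per token
    sustained = peak_flops * utilization * n_devices  # cluster throughput, FLOP/s
    hours = total_flops / sustained / 3600            # wall-clock training time
    return hours, hours * n_devices * usd_per_device_hour

# Hypothetical run: 350M-parameter model on 300B tokens, 8 accelerators
# at 100 TFLOP/s peak, 30% utilization, $3 per device-hour.
hours, usd = training_cost(350e6, 300e9, 100e12, 0.30, 3.0, 8)
print(f"~{hours:,.0f} wall-clock hours, ~${usd:,.0f}")  # ~729 hours, ~$17,500
```

Actual costs vary widely with hardware, sequence length, and engineering efficiency; the paper itself surveys real figures and the drivers behind them.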