Andrew Ng wrote a letter about his upcoming book:
Dear Friends,
You can now download the first 12 chapters of the Machine Learning Yearning book draft. These chapters discuss how good machine learning strategy will help you, and give new guidelines for setting up your datasets and evaluation metric in the deep learning era.
You can download the text here (5.3MB): https://gallery.mailchimp.com/dc3a7ef4d750c0abfc19202a3/files/Machine_Learning_Yearning_V0.5_01.pdf
Thank you for your patience. I ended up making many revisions before feeling this was ready to send to you. Additional chapters will be coming in the next week.
I would love to hear from you. To ask questions, discuss the content, or give feedback, please post on Reddit at:
http://www.reddit.com/r/mlyearning
You can also tweet at me at https://twitter.com/AndrewYNg . I hope this book will help you build highly effective AI and machine learning systems.
Andrew
Learning Deep Neural Networks with Massive Learned Knowledge, Z. Hu, Z. Yang, R. Salakhutdinov, E. Xing
https://www.cs.cmu.edu/~zhitingh/data/emnlp16deep.pdf
#paper #dl
Spatially Adaptive Computation Time for Residual Networks
with Michael Figurnov et al.
https://arxiv.org/abs/1612.02297
#paper #dl
Gated-Attention Readers for Text Comprehension
Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, Ruslan Salakhutdinov
Paper: https://arxiv.org/abs/1606.01549v1
Code: https://github.com/bdhingra/ga-reader
#nlp #dl
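The core idea of gated attention is to modulate each document token representation by an attended query summary via elementwise multiplication rather than addition or concatenation. A minimal sketch of one such layer (dimensions and the plain dot-product attention here are illustrative; see the paper and code above for the exact multi-hop architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(5, 8))   # document token vectors (5 tokens, dim 8)
Q = rng.normal(size=(3, 8))   # query token vectors (3 tokens, dim 8)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

alpha = softmax(D @ Q.T, axis=1)   # (5, 3): each doc token attends over the query
q_tilde = alpha @ Q                # (5, 8): per-token query summaries
X = D * q_tilde                    # the "gate": elementwise product
```

The elementwise product acts as a fine-grained filter, letting the query amplify or suppress individual feature dimensions of every document token.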
DeepLearning ru:
Clockwork Convnets for Video Semantic Segmentation.
Adaptive video processing by incorporating data-driven clocks.
We define a novel family of "clockwork" convnets driven by fixed or adaptive clock signals that schedule the processing of different layers at different update rates according to their semantic stability. We design a pipeline schedule to reduce latency for real-time recognition and a fixed-rate schedule to reduce overall computation. Finally, we extend clockwork scheduling to adaptive video processing by incorporating data-driven clocks that can be tuned on unlabeled video.
https://arxiv.org/pdf/1608.03609v1.pdf
https://github.com/shelhamer/clockwork-fcn
http://www.gitxiv.com/posts/89zR7ATtd729JEJAg/clockwork-convnets-for-video-semantic-segmentation
#dl #CV #Caffe #video #Segmentation
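The fixed-rate schedule from the abstract can be sketched in a few lines: each layer group gets a clock rate, recomputes only on frames where its clock fires, and otherwise serves a cached (slightly stale) output. The layer names and rates below are hypothetical stand-ins, not the authors' Caffe pipeline:

```python
def make_layer(name):
    # Stand-in for a convnet stage: here it just tags its input.
    return lambda x: f"{name}({x})"

layers = [("shallow", make_layer("shallow"), 1),   # update every frame
          ("middle",  make_layer("middle"),  2),   # every 2nd frame
          ("deep",    make_layer("deep"),    4)]   # every 4th frame

cache = {}

def process_frame(t, frame):
    x = frame
    for name, fn, rate in layers:
        if t % rate == 0:          # clock fires: recompute this stage
            cache[name] = fn(x)
        x = cache[name]            # otherwise reuse the cached output
    return x

outputs = [process_frame(t, f"frame{t}") for t in range(4)]
```

Because deeper, semantically stable layers fire less often, most frames only pay for the shallow stages; the adaptive variant in the paper replaces the fixed rates with data-driven clocks.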
Segmentation is about to get a little hype next week due to the release of fabby and magic apps for changing photo/video backgrounds.
NIPS, the huge conference held in Barcelona, has recently finished. There is a nice post from an attendee, so if you missed the event, it is worth a look: "NIPS 2016 experience and highlights".
https://medium.com/@libfun/nips-2016-experience-and-highlights-104e19e4ac95#.dr7xzcqzw
#nips2016 #conference #deeplearning #nips
Inverse Compositional Spatial Transformer Networks
In this paper, we establish a theoretical connection between the classical Lucas & Kanade (LK) algorithm and the emerging topic of Spatial Transformer Networks (STNs). STNs are of interest to the vision and learning communities due to their natural ability to combine alignment and classification within the same theoretical framework. Inspired by the Inverse Compositional (IC) variant of the LK algorithm, we present Inverse Compositional Spatial Transformer Networks (IC-STNs). We demonstrate that IC-STNs can achieve better performance than conventional STNs with less model capacity; in particular, we show superior performance in pure image alignment tasks as well as joint alignment/classification problems on real-world problems.
https://arxiv.org/abs/1612.03897
#arxiv #dl #cv
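The inverse-compositional idea behind IC-STNs is that, instead of predicting the full warp in one shot, the network predicts small updates that are composed into an accumulated warp over several steps. A toy sketch with pure 2D translations as the warp family (the paper uses richer warps and a learned geometric predictor; `predict_update` below is a hand-made stand-in):

```python
import numpy as np

target = np.array([2.0, -1.0])       # "true" alignment offset

def warp(image_offset, p):
    return image_offset - p          # translating by p moves content by -p

def predict_update(misalignment):
    return 0.5 * misalignment        # stand-in for the learned predictor

p = np.zeros(2)                      # accumulated warp parameters
for _ in range(10):
    residual = warp(target, p)       # remaining misalignment after warping
    p = p + predict_update(residual) # compose the predicted update into p
```

Even with a weak predictor that only corrects half of the residual per step, the composed parameters converge to the true warp, which is why recurrent composition can beat a single larger predictor at equal capacity.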
Torch, TF, Lasagne code for audio style transfer.
http://dmitryulyanov.github.io/audio-texture-synthesis-and-style-transfer/
#dl #audio #styletransfer #torch #tf #lasagne
Three Models for Anomaly Detection: Pros and Cons.
A nice introduction to anomaly detection.
https://blogs.technet.microsoft.com/uktechnet/2016/12/13/three-models-for-anomaly-detection-pros-and-cons/
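One of the simplest anomaly-detection models worth knowing as a baseline: fit a Gaussian to normal data and flag points whose z-score exceeds a threshold. (Illustrative sketch; the post compares its own three models, not necessarily this one.)

```python
import numpy as np

rng = np.random.default_rng(1)
normal_data = rng.normal(loc=10.0, scale=2.0, size=1000)  # "normal" behavior

mu, sigma = normal_data.mean(), normal_data.std()

def is_anomaly(x, threshold=3.0):
    # Flag points more than `threshold` standard deviations from the mean.
    return abs(x - mu) / sigma > threshold

flags = [is_anomaly(x) for x in (10.5, 25.0, -4.0)]
```

The single-Gaussian assumption is also the main weakness of such a model: multimodal or heavy-tailed normal behavior produces false alarms, which is what motivates the richer models discussed in the post.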
Video of sampling chains for Plug&Play Generative networks:
https://www.youtube.com/watch?list=PL5278ezwmoxQEuFSbNzTMxM7McMIqx_HS&v=ePUlJMtclcY
Link: http://www.evolvingai.org/ppgn
Paper: http://www.evolvingai.org/files/nguyen2016ppgn__v1.pdf
#dl #cv #generativenetworks
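The sampling chains in the video come from an iterative update of roughly the form x ← x + ε₁·∇log p(x) + ε₂·∇log p(y|x) + noise, where the prior gradient keeps samples on the image manifold and the classifier gradient pushes them toward the target class. A toy sketch with hand-made stand-in gradients (not real network gradients):

```python
import numpy as np

rng = np.random.default_rng(2)

def grad_log_prior(x):
    # Stand-in prior gradient: pull toward 0, as if p(x) were standard normal.
    return -x

def grad_log_class(x, target=3.0):
    # Stand-in condition gradient: push toward the "class evidence" at target.
    return target - x

x = rng.normal(size=4)
eps1, eps2, eps3 = 0.1, 0.1, 0.01
chain = [x.copy()]
for _ in range(200):
    x = x + eps1 * grad_log_prior(x) + eps2 * grad_log_class(x) \
          + eps3 * rng.normal(size=4)
    chain.append(x.copy())
```

The chain settles near the compromise between the two gradient terms while the noise keeps it exploring, which is what produces the smooth wandering between classes seen in the video.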
And a couple more #nips2016 links:
50 things I learned at NIPS 2016
https://blog.ought.com/nips-2016-875bb8fadb8c#.rirzzwi8h
NIPS 2016: cake, Rocket AI, GANs and the style transfer debate
https://medium.com/@elluba/nips-2016-cake-rocket-ai-gans-and-the-style-transfer-debate-708c46438053#.7fwmiolh1
#nips2016 #nips #dl #conference
Where to start with Data Science
There is no way to be taught to be a data scientist, but you can learn to become one yourself. There is no single right way, but there is a path adopted by a number of data scientists, and it goes through online courses (MOOCs). Following the suggested order is not required, but it might be helpful.
Best resources to study Data Science /Machine Learning
1. Andrew Ng’s Machine Learning (https://www.coursera.org/learn/machine-learning).
2. Geoffrey Hinton’s Neural Networks for Machine Learning (https://www.coursera.org/learn/neural-networks).
3. Probabilistic Graphical Models specialisation on Coursera from Stanford (https://www.coursera.org/specializations/probabilistic-graphical-models).
4. Learning from data by Caltech (https://work.caltech.edu/telecourse.html).
5. CS229 from Stanford by Andrew Ng (http://cs229.stanford.edu/materials.html)
6. CS224d: Deep Learning for Natural Language Processing from Stanford (http://cs224d.stanford.edu/syllabus.html).
7. CS231n: Convolutional Neural Networks for Visual Recognition from Stanford (http://cs231n.stanford.edu/syllabus.html).
8. Deep Learning Book by Ian Goodfellow and Yoshua Bengio and Aaron Courville (http://www.deeplearningbook.org/).
9. Machine Learning Yearning by Andrew Ng (http://www.mlyearning.org/).
#books #wheretostart #mooc
Where to start with Data Science (list of ML / DL online courses and useful resources):
http://telegra.ph/Where-to-start-with-Data-Science-01-02
GAN architectures can produce extremely lifelike generated pictures.
https://medium.com/@Moscow25/gans-will-change-the-world-7ed6ae8515ca#.pskqd0wjw
There is a new $1MM competition on Kaggle to use ML / AI to diagnose lung cancer from CT scans.
Not only is it a great breakthrough for Kaggle (the first competition with such a huge prize fund), it is also a breakthrough for science, since top researchers and engineers from around the world will compete to crowdsource and ease lung cancer diagnostics.
Competition is available at: https://www.kaggle.com/c/data-science-bowl-2017
#kaggle #segmentation #deeplearning #cv
A trained agent was able to beat some humans at one-on-one no-limit Texas Hold'em. It is not "beating humans" yet.
https://www.technologyreview.com/s/603342/poker-is-the-latest-game-to-fold-against-artificial-intelligence/