Inverse Compositional Spatial Transformer Networks
In this paper, we establish a theoretical connection between the classical Lucas & Kanade (LK) algorithm and the emerging topic of Spatial Transformer Networks (STNs). STNs are of interest to the vision and learning communities due to their natural ability to combine alignment and classification within the same theoretical framework. Inspired by the Inverse Compositional (IC) variant of the LK algorithm, we present Inverse Compositional Spatial Transformer Networks (IC-STNs). We demonstrate that IC-STNs can achieve better performance than conventional STNs with less model capacity; in particular, we show superior performance in pure image alignment tasks as well as joint alignment/classification problems on real-world problems.
https://arxiv.org/abs/1612.03897
#arxiv #dl #cv
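A minimal sketch of the inverse compositional idea the paper builds on (my illustration, not the authors' code): in IC-LK, the incremental warp is estimated on the template, then inverted and composed with the current warp, so the expensive linearization is computed only once. With affine warps stored as 3x3 homogeneous matrices:

```python
# Toy illustration of the inverse compositional (IC) warp update:
#   W(x; p) <- W(x; p) o W(x; dp)^(-1)
# Warps are affine, stored as 3x3 homogeneous matrices (pure Python).

def matmul3(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def affine_inverse(w):
    """Invert a 3x3 affine warp [[a, b, tx], [c, d, ty], [0, 0, 1]]."""
    a, b, tx = w[0]
    c, d, ty = w[1]
    det = a * d - b * c
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    return [[ia, ib, -(ia * tx + ib * ty)],
            [ic, id_, -(ic * tx + id_ * ty)],
            [0.0, 0.0, 1.0]]

def ic_update(current, increment):
    """Compose the current warp with the *inverted* incremental warp."""
    return matmul3(current, affine_inverse(increment))

# Example: current warp translates by (2, 1); the solver estimates a
# small corrective translation of (0.5, -0.2) on the template side.
W = [[1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]]
dW = [[1.0, 0.0, 0.5], [0.0, 1.0, -0.2], [0.0, 0.0, 1.0]]
W_new = ic_update(W, dW)
print(W_new[0][2], W_new[1][2])  # translation becomes (1.5, 1.2)
```

IC-STNs unroll this recurrent compose-and-correct step inside the network instead of predicting one warp in a single shot.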
Torch, TF, Lasagne code for audio style transfer.
http://dmitryulyanov.github.io/audio-texture-synthesis-and-style-transfer/
#dl #audio #styletransfer #torch #tf #lasagne
Dmitry Ulyanov
Audio texture synthesis and style transfer
by Dmitry Ulyanov and Vadim Lebedev We present an extension of texture synthesis and style transfer method of Leon Gatys et al. for audio. We have developed the same code for three frameworks (well, it is cold in Moscow), choose your favorite: Torch TensorFlow…
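The core of the Gatys et al. method (which this post extends to audio) is matching Gram matrices of feature activations. A framework-free toy sketch of the style loss, assuming `features` is a (channels x time) activation map (names are illustrative, not the authors' API):

```python
# Toy sketch of the Gram-matrix style loss from Gatys et al.,
# here on 1-D (audio-like) feature maps: channels x time (pure Python).

def gram(features):
    """Gram matrix G[i][j] = sum_t F[i][t] * F[j][t], normalized by length."""
    n_ch, n_t = len(features), len(features[0])
    return [[sum(features[i][t] * features[j][t] for t in range(n_t)) / n_t
             for j in range(n_ch)]
            for i in range(n_ch)]

def style_loss(f_generated, f_style):
    """Squared Frobenius distance between the two Gram matrices."""
    g1, g2 = gram(f_generated), gram(f_style)
    return sum((g1[i][j] - g2[i][j]) ** 2
               for i in range(len(g1)) for j in range(len(g1)))

# Two tiny 2-channel "activation maps" over 4 time steps.
style = [[1.0, 0.0, 1.0, 0.0],
         [0.0, 1.0, 0.0, 1.0]]
generated = [[1.0, 0.0, 1.0, 0.0],
             [0.0, 1.0, 0.0, 1.0]]
print(style_loss(generated, style))  # 0.0 for identical statistics
```

In the real implementations this loss is backpropagated through a (possibly random) convolutional network to optimize the generated spectrogram.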
Three Models for Anomaly Detection: Pros and Cons.
A nice intro to anomaly detection.
https://blogs.technet.microsoft.com/uktechnet/2016/12/13/three-models-for-anomaly-detection-pros-and-cons/
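One of the simplest models in this family is a Gaussian/z-score detector: flag points that lie too many standard deviations from the mean. A minimal stdlib-only sketch (my illustration, not code from the post):

```python
# Minimal z-score anomaly detector: flag points whose distance from the
# mean exceeds `threshold` standard deviations (stdlib only).
import statistics

def find_anomalies(values, threshold=3.0):
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [x for x in values if abs(x - mean) > threshold * stdev]

data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 25.0]  # 25.0 is the outlier
print(find_anomalies(data, threshold=2.0))  # -> [25.0]
```

Its pros and cons are exactly the kind discussed in the post: trivially fast and interpretable, but it assumes a unimodal, roughly Gaussian distribution.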
Video of sampling chains for Plug&Play Generative networks:
https://www.youtube.com/watch?list=PL5278ezwmoxQEuFSbNzTMxM7McMIqx_HS&v=ePUlJMtclcY
Link: http://www.evolvingai.org/ppgn
Paper: http://www.evolvingai.org/files/nguyen2016ppgn__v1.pdf
#dl #cv #generativenetworks
YouTube
PPGN: Sampling between 10 classes
Sampling chain from Plug and Play Generative Networks between 10 different classes. The video shows 1 sample per frame with no frames filtered out.
Paper: Nguyen A, Yosinski J, Bengio Y, Dosovitskiy A, Clune J (2016). Plug & Play Generative Networks: Conditional…
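The sampling rule behind these chains combines a prior gradient, a condition (classifier) gradient, and noise. A toy 1-D illustration of that update, with quadratic log-densities standing in for the real networks (illustrative only, not the paper's code):

```python
# Toy 1-D version of the PPGN-style sampling update:
#   x <- x + eps1 * d/dx log p(x) + eps2 * d/dx log p(c|x) + noise
# Quadratic log-densities stand in for the real prior and classifier.
import random

def sample_chain(x0, steps, eps1=0.05, eps2=0.05, noise=0.0, seed=0):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        grad_prior = -x         # log p(x) ~ -x^2/2: prior mode at 0
        grad_cond = 3.0 - x     # log p(c|x) ~ -(x-3)^2/2: class mode at 3
        x += eps1 * grad_prior + eps2 * grad_cond + noise * rng.gauss(0, 1)
        yield x

# With no noise, the chain settles between the prior mode (0) and the
# class mode (3); switching the class target is what moves the samples
# between classes, as in the video above.
xs = list(sample_chain(0.0, steps=200))
print(round(xs[-1], 3))  # close to 1.5 when eps1 == eps2
```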
And a couple more #nips2016 links:
50 things I learned at NIPS 2016
https://blog.ought.com/nips-2016-875bb8fadb8c#.rirzzwi8h
NIPS 2016: cake, Rocket AI, GANs and the style transfer debate
https://medium.com/@elluba/nips-2016-cake-rocket-ai-gans-and-the-style-transfer-debate-708c46438053#.7fwmiolh1
#nips2016 #nips #dl #conference
Ought
50 things I learned at NIPS 2016
I learned many things about AI and ML at NIPS. Here are a few that are particularly suited to being communicated in a few sentences.
Where to start with Data Science
There is no way to be taught to become a data scientist, but you can learn to become one yourself. There is no single right path, but there is a path adopted by many data scientists, and it runs through online courses (MOOCs). Following the suggested order is not required, but it might be helpful.
Best resources to study Data Science / Machine Learning
1. Andrew Ng’s Machine Learning (https://www.coursera.org/learn/machine-learning).
2. Geoffrey Hinton’s Neural Networks for Machine Learning (https://www.coursera.org/learn/neural-networks).
3. Probabilistic Graphical Models specialisation on Coursera from Stanford (https://www.coursera.org/specializations/probabilistic-graphical-models).
4. Learning from data by Caltech (https://work.caltech.edu/telecourse.html).
5. CS229 from Stanford by Andrew Ng (http://cs229.stanford.edu/materials.html)
6. CS224d: Deep Learning for Natural Language Processing from Stanford (http://cs224d.stanford.edu/syllabus.html).
7. CS231n: Convolutional Neural Networks for Visual Recognition from Stanford (http://cs231n.stanford.edu/syllabus.html).
8. Deep Learning Book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (http://www.deeplearningbook.org/).
9. Machine Learning Yearning by Andrew Ng (http://www.mlyearning.org/).
#books #wheretostart #mooc
Where to start with Data Science (list of ML / DL online courses and useful resources):
http://telegra.ph/Where-to-start-with-Data-Science-01-02
This GAN architecture produces extremely lifelike generated pictures.
https://medium.com/@Moscow25/gans-will-change-the-world-7ed6ae8515ca#.pskqd0wjw
Medium
GANs will change the world
It’s New Year’s 2017, so time to make predictions. Portfolio diversification has never been me, so I’ll make just one.
There is a new $1MM competition on Kaggle to use ML / AI to diagnose lung cancer from CT scans.
Not only is it a great breakthrough for Kaggle (the first competition with such a huge prize fund), it is also a breakthrough for science, since top researchers and engineers from around the world will compete, effectively crowdsourcing and easing lung cancer diagnostics.
Competition is available at: https://www.kaggle.com/c/data-science-bowl-2017
#kaggle #segmentation #deeplearning #cv
Kaggle
Data Science Bowl 2017
Can you improve lung cancer detection?
A trained agent was able to beat some humans at one-on-one no-limit Texas Hold'em. It is not "beating humans" in general yet.
https://www.technologyreview.com/s/603342/poker-is-the-latest-game-to-fold-against-artificial-intelligence/
MIT Technology Review
Poker Is the Latest Game to Fold Against Artificial Intelligence
Two research groups have developed poker-playing AI programs that show how computers can out-hustle the best humans.
New release in PyTorch: «GPU Tensors, Dynamic Neural Networks and deep Python integration. Hello world!»
http://pytorch.org
Deep Learning Pipeline for Alzheimer’s Disease Prediction
https://devblogs.nvidia.com/parallelforall/nvidia-digits-alzheimers-disease-prediction/
#deeplearning #digits
NVIDIA Developer Blog
NVIDIA DIGITS Assists Alzheimer’s Disease Prediction | NVIDIA Developer Blog
Using NVIDIA DIGITS to train a Convolutional Neural Network model to predict Alzheimer’s disease from resting-state functional MRI (rs-fMRI) data.
Image-to-Image Translation in Tensorflow
http://affinelayer.com/pix2pix/index.html
#deeplearning #tf #dl
Today Kaggle announced the launch of Two Sigma's new recruiting competition. In this competition, participants are invited to explore detailed NYC rental listing data from Two Sigma's competition co-sponsor, RentHop, to ease the often hectic process of finding the perfect home.
#kaggle
If you have any news worth spreading, please message @opendatasciencebot and send it a link with a quick description.
Simple/limited/incomplete benchmark for scalability, speed and accuracy of machine learning libraries for classification.
https://github.com/szilard/benchm-ml
#github #opensource #worthspreading
GitHub
GitHub - szilard/benchm-ml: A minimal benchmark for scalability, speed and accuracy of commonly used open source implementations…
A minimal benchmark for scalability, speed and accuracy of commonly used open source implementations (R packages, Python scikit-learn, H2O, xgboost, Spark MLlib etc.) of the top machine learning al...
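The idea such benchmarks implement is simple: fit each library on the same data, time the fit, and score accuracy on held-out data. A stdlib-only skeleton of that loop, with a trivial nearest-centroid "model" standing in for the real libraries (all names are illustrative, not from the repo):

```python
# Skeleton of a speed/accuracy benchmark loop (stdlib only).
# A trivial 1-D nearest-centroid classifier stands in for real libraries.
import time

class NearestCentroid:
    def fit(self, X, y):
        sums, counts = {}, {}
        for xi, yi in zip(X, y):
            sums[yi] = sums.get(yi, 0.0) + xi
            counts[yi] = counts.get(yi, 0) + 1
        self.centroids = {c: sums[c] / counts[c] for c in sums}
        return self

    def predict(self, X):
        return [min(self.centroids, key=lambda c: abs(x - self.centroids[c]))
                for x in X]

def benchmark(models, X_train, y_train, X_test, y_test):
    """Return {name: (fit_time_seconds, test_accuracy)} for each model."""
    results = {}
    for name, model in models.items():
        t0 = time.perf_counter()
        model.fit(X_train, y_train)
        fit_time = time.perf_counter() - t0
        preds = model.predict(X_test)
        acc = sum(p == t for p, t in zip(preds, y_test)) / len(y_test)
        results[name] = (fit_time, acc)
    return results

X_train, y_train = [0.1, 0.2, 0.9, 1.1], [0, 0, 1, 1]
X_test, y_test = [0.0, 1.0], [0, 1]
for name, (t, acc) in benchmark({"centroid": NearestCentroid()},
                                X_train, y_train, X_test, y_test).items():
    print(f"{name}: fit {t:.6f}s, accuracy {acc:.2f}")
```

The repo does the same thing at scale: the same datasets at growing sizes, wall-clock training time, and AUC, across R packages, scikit-learn, H2O, xgboost, Spark MLlib, and others.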