"Stacked Approximated Regression Machine: A Simple Deep Learning Approach"
https://arxiv.org/abs/1608.04062
Great article, you should definitely check it out.
Flexible Image Tagging with Fast0Tag
https://gab41.lab41.org/flexible-image-tagging-with-fast0tag-681c6283c9b7#.bp7v0qdn7
One of the goals of multimodal embedding is to make it easier to expand machine learning models into new contexts. Deep architectures are…
And DeepMind's breakthrough in audio generation: it is able to generate human-like speech.
https://deepmind.com/blog/wavenet-generative-model-raw-audio/
WaveNet: A generative model for raw audio
This post presents WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the...
If you know any fellow data scientists or machine learning folks, please spread the word and forward this message to them. We work hard to bring you the best news, so we appreciate your attention.
How to write your first spam-filter yourself
https://blog.cambridgecoding.com/2016/01/25/implementing-your-own-spam-filter/
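The classic approach such tutorials take is a Naive Bayes classifier over word counts. Here is a minimal sketch of that idea (the tutorial's actual code and data may differ; the function names and toy data below are ours):

```python
# Minimal Naive Bayes spam-filter sketch with Laplace smoothing.
from collections import Counter
import math

def train(messages):
    """messages: list of (text, label) pairs, label is 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # log prior + sum of smoothed log likelihoods
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

data = [("win money now", "spam"), ("cheap money offer", "spam"),
        ("meeting at noon", "ham"), ("lunch at noon tomorrow", "ham")]
counts, totals = train(data)
print(classify("win cheap money", counts, totals))  # 'spam' on this toy data
```

Laplace smoothing (the `+ 1`) keeps unseen words from zeroing out a class's probability.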
Two papers about using video games to train computer vision models.
And who says that video games are useless? 😏
https://arxiv.org/abs/1608.01745
https://arxiv.org/pdf/1608.01745v2.pdf
Language Model on One Billion Word Benchmark
In this release, we open source a model trained on the One Billion Word Benchmark (http://arxiv.org/abs/1312.3005), a large language corpus in English which was released in 2013. This dataset contains about one billion words, and has a vocabulary size of about 800K words. It contains mostly news data. Since sentences in the training set are shuffled, models can ignore the context and focus on sentence level language modeling.
In the original release and subsequent work, people have used the same test set to evaluate models trained on this dataset, making it a standard benchmark for language modeling. Recently, we wrote an article (http://arxiv.org/abs/1602.02410) describing a model that combines a character-level CNN, a large and deep LSTM, and a specific Softmax architecture, which allowed us to train the best model on this dataset thus far, almost halving the best perplexity previously obtained by others.
Link to the repo: https://github.com/tensorflow/models/tree/master/lm_1b
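Perplexity, the metric being halved here, is just the exponentiated average negative log-probability the model assigns to each word (lower is better). A minimal sketch (the helper name is ours, not from the repo):

```python
import math

def perplexity(log_probs):
    """log_probs: natural-log probabilities a model assigns to each word.
    Perplexity = exp(-(1/N) * sum(log p_i))."""
    n = len(log_probs)
    return math.exp(-sum(log_probs) / n)

# A model that assigns probability 0.25 to every word has perplexity 4:
print(perplexity([math.log(0.25)] * 10))  # 4.0
```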
Generative Visual Manipulation on the Natural Image Manifold
For more details, please visit the project webpage:
https://people.eecs.berkeley.edu/~junyanz/projects/gvm/
"Generative Visual Manipulation on the Natural Image Manifold", Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman and Alexei A. Efros. In European Conference on Computer Vision (ECCV). 2016.
(via Deep Learning community on vk.com)
Convolutional Recurrent Neural Networks for Music Classification
https://keunwoochoi.wordpress.com/2016/09/15/paper-is-out-convolutional-recurrent-neural-networks-for-music-classification/
(again, via Deep Learning community on vk.com).
You can also use our @opendatasciencebot to submit an interesting link or to contact the channel administration.
Fully-Convolutional Siamese Networks for Object Tracking
A new state of the art for real-time tracking at 50-100 fps. It can be used to track arbitrary objects in video.
http://www.gitxiv.com/posts/TvEcWEJabGu7pEHEa/fully-convolutional-siamese-networks-for-object-tracking
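The core idea is to embed both the exemplar (target patch) and the search image with the same convolutional network, then cross-correlate the two feature maps to get a response map whose peak locates the target. A NumPy sketch of just the cross-correlation step (the real method uses learned deep features; the feature maps below are toy stand-ins):

```python
import numpy as np

def response_map(exemplar_feat, search_feat):
    """Slide the exemplar feature map over the search feature map and
    record the similarity (cross-correlation) at each offset."""
    eh, ew = exemplar_feat.shape
    sh, sw = search_feat.shape
    out = np.zeros((sh - eh + 1, sw - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(exemplar_feat * search_feat[i:i+eh, j:j+ew])
    return out

search = np.zeros((8, 8))
search[3:5, 3:5] = 1.0      # toy "object" located at offset (3, 3)
exemplar = np.ones((2, 2))  # toy exemplar matching the object
resp = response_map(exemplar, search)
print(np.unravel_index(resp.argmax(), resp.shape))  # (3, 3)
```

Because the whole operation is convolutional, one forward pass scores every candidate location at once, which is what makes the 50-100 fps speeds possible.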
Stanford University report on how life will be different with AI by 2030.
Spoiler: no skynet just yet.
https://ai100.stanford.edu/2016-report
There is an #opensource repository for automatic image captioning in #tensorflow.
As the article reports, researchers have managed to significantly improve recognition quality.
https://research.googleblog.com/2016/09/show-and-tell-image-captioning-open.html
#deeplearning
Show and Tell: image captioning open sourced in TensorFlow
European companies involved with machine learning.
https://medium.com/project-juno/european-machine-intelligence-landscape-43a22b44e961
European Machine Intelligence Landscape
We @ProjectJunoAI are big fans of landscapes. That’s why we’ve created a machine intelligence landscape focused entirely on Europe [1].
Why the top-down approach to machine learning is wrong (or right).
http://machinelearningmastery.com/deep-learning-for-developers/
What You Know About Deep Learning Is A Lie - Machine Learning Mastery
Getting started in deep learning is a struggle.
It's a struggle because deep learning is taught by academics, for academics.
If you're a developer (or practitioner), you're different.
You want results.
The way practitioners learn new technologies is…
Google released a new ImageNet-style dataset, but for video.
YouTube8M is a large-scale labeled video dataset that consists of 8 million YouTube video IDs and associated labels from a diverse vocabulary of 4800 visual entities. It also comes with precomputed state-of-the-art vision features from billions of frames, which fit on a single hard disk. This makes it possible to train video models from hundreds of thousands of video hours in less than a day on 1 GPU!
https://research.google.com/youtube8m/
http://arxiv.org/pdf/1609.08675v1.pdf
Top use cases for machine learning, from Forbes.
http://www.forbes.com/sites/bernardmarr/2016/09/30/what-are-the-top-10-use-cases-for-machine-learning-and-ai/#58cabbb010cf
The Top 10 AI And Machine Learning Use Cases Everyone Should Know About
The implications of this are wide and varied, and data scientists are coming up with new use cases for machine learning every day, but these are some of the top, most interesting use cases currently being explored.