A visual introduction to machine learning.
It is an interactive website that would be really useful to beginners: a clear visual explanation of how decision trees work. It shows how one can go from statistical evaluation of the data to building a decision tree.
Link: http://www.r2d3.us/visual-intro-to-machine-learning-part-1/?utm_source=telegram&utm_medium=opendatascience
#decisiontrees #beginner #novice #firststep #howitworks
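The core mechanic the article animates, splitting the data on simple feature thresholds, can be sketched in a few lines. The numbers below are invented for illustration; the article's running example distinguishes San Francisco homes from New York homes by features such as elevation and price.

```python
# Toy version of the article's idea: classify homes as San Francisco (1)
# or New York (0) by splitting on a single feature threshold.
homes = [  # (elevation_m, price_per_sqft, is_sf)
    (10, 1500, 0), (15, 1700, 0), (20, 1300, 0),
    (70, 900, 1), (80, 1100, 1), (12, 800, 1),
]

def best_split(data, feature):
    """Try every value of one feature as a threshold ("predict SF when
    value >= t") and return the (threshold, accuracy) pair that fits best.
    A real tree also tries the other direction and recurses on each side."""
    best = (None, 0.0)
    for t in sorted({row[feature] for row in data}):
        correct = sum((row[feature] >= t) == bool(row[2]) for row in data)
        acc = correct / len(data)
        if acc > best[1]:
            best = (t, acc)
    return best

print(best_split(homes, 0))  # elevation separates the two cities best here
print(best_split(homes, 1))
```

With this toy data, a single elevation threshold already classifies most homes correctly, which is exactly the kind of split the article visualizes.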
Model Tuning and the Bias-Variance Tradeoff
(part II of Visual Introduction to Machine Learning by r2d3)
The bias-variance tradeoff arises because you have to find the optimal balance between a model being too simple and too complex. Overly complex models tend to overfit: they become too adapted to the training data, so results on the test data (new, unknown to the model) become less accurate. The article explains how this actually works, using the example from the previous part.
http://www.r2d3.us/visual-intro-to-machine-learning-part-2/?utm_source=telegram&utm_medium=opendatascience
#decisiontrees #beginner #novice #firststep #howitworks
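The tradeoff is easy to see by fitting models of increasing capacity to noisy data. A tiny sketch (polynomials stand in for trees of increasing depth; the function, noise level, and split are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying function.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, size=x.size)

train, test = np.arange(0, 20, 2), np.arange(1, 20, 2)  # interleaved split

def mse(degree):
    """Fit a polynomial of the given degree on train; report both errors."""
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x)
    return (float(np.mean((pred[train] - y[train]) ** 2)),
            float(np.mean((pred[test] - y[test]) ** 2)))

for d in (1, 3, 7):
    tr, te = mse(d)
    print(f"degree {d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

Training error always drops as capacity grows, but test error is what matters: the degree-1 model underfits both sets, while high-degree fits chase the noise.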
A post on estimating the likelihood that an ad will be closed because the item was actually sold, which was the goal of Avito’s #Kaggle contest. The author built a #neuralnetwork solution and finished in 18th place.
https://towardsdatascience.com/kaggle-avito-demand-challenge-18th-place-solution-neural-network-ac19efd6e183
Practical Advice for Building Deep Neural Networks
Some practical tips for training deep neural networks, based on experience (rooted mainly in TensorFlow). Some of the suggestions may seem obvious, but they weren’t at some point. Others may not apply to your particular task, or might even be bad advice: use discretion!
https://pcc.cs.byu.edu/2017/10/02/practical-advice-for-building-deep-neural-networks/
#neuralnetworks #dl #tensorflow
On the Adam optimizer convergence
An investigation of the convergence of popular optimization algorithms such as Adam and RMSProp, proposing new variants of these methods (the best known is AMSGrad) which provably converge to the optimal solution in convex settings. Published as “On the Convergence of Adam and Beyond”.
Link: https://openreview.net/forum?id=ryQu7f-RZ
PDF: https://openreview.net/pdf?id=ryQu7f-RZ
#iclr2018 #neuralnetworks #optimizers
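The paper's fix, AMSGrad, keeps a running maximum of the second-moment estimate, so the effective per-coordinate step size can never increase, the property Adam can violate. A minimal numpy sketch (bias correction omitted for brevity; hyperparameters are the usual defaults):

```python
import numpy as np

def amsgrad_step(theta, grad, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update (bias correction omitted for brevity)."""
    m, v, v_hat = state
    m = b1 * m + (1 - b1) * grad          # first moment, as in Adam
    v = b2 * v + (1 - b2) * grad ** 2     # second moment, as in Adam
    v_hat = np.maximum(v_hat, v)          # the AMSGrad change: running max
    theta = theta - lr * m / (np.sqrt(v_hat) + eps)
    return theta, (m, v, v_hat)

# Minimize f(theta) = theta^2; the running max keeps the effective
# step size non-increasing, which plain Adam does not guarantee.
theta, state = 5.0, (0.0, 0.0, 0.0)
for _ in range(2000):
    theta, state = amsgrad_step(theta, 2.0 * theta, state)
print(theta)  # close to the minimum at 0
```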
Pitfalls of Batch Norm in TensorFlow and Sanity Checks for Training Networks
More practical advice on #tensorflow training, with source code and reference links to look up: the moving mean and variance update in batch norm, sharing batch norm parameters, and its different behaviour at train and test time.
https://medium.com/@theshank/pitfalls-of-batch-norm-in-tensorflow-and-sanity-checks-for-training-networks-e86c207548c8
#beginner #novice #dl #tutorial
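The central caveat, batch statistics at train time versus moving averages at test time, is framework-agnostic. A minimal numpy sketch of the mechanism (this is not the TensorFlow API; the class and names are illustrative):

```python
import numpy as np

class BatchNorm1D:
    """Minimal batch norm over the batch axis, tracking moving statistics."""
    def __init__(self, dim, momentum=0.99, eps=1e-5):
        self.gamma, self.beta = np.ones(dim), np.zeros(dim)
        self.moving_mean, self.moving_var = np.zeros(dim), np.ones(dim)
        self.momentum, self.eps = momentum, eps

    def __call__(self, x, training):
        if training:
            mean, var = x.mean(axis=0), x.var(axis=0)
            # The moving-average update that TF1 makes you wire up
            # explicitly (via the UPDATE_OPS collection), easy to forget:
            mom = self.momentum
            self.moving_mean = mom * self.moving_mean + (1 - mom) * mean
            self.moving_var = mom * self.moving_var + (1 - mom) * var
        else:  # inference: use accumulated statistics, not batch ones
            mean, var = self.moving_mean, self.moving_var
        return self.gamma * (x - mean) / np.sqrt(var + self.eps) + self.beta

bn = BatchNorm1D(2)
x = np.array([[1.0, 10.0], [3.0, 30.0]])
out_train = bn(x, training=True)   # normalized with this batch's stats
out_eval = bn(x, training=False)   # uses the barely-updated moving stats
```

If the moving-average update never runs, train and eval outputs silently diverge, which is exactly the pitfall the post describes.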
An adversarial attack is a type of input, or a mask applied to the input, of a machine learning model that makes it produce a wrong result. It is a way to cheat the output, to ‘fool’ the algorithm.
«Attacking Machine Learning with Adversarial Examples» at Open AI blog covers the basics and provides some examples.
Open AI blog article: https://blog.openai.com/adversarial-example-research/
#adversarialattack #openai #novice #beginner
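One concrete attack the post discusses is the fast gradient sign method (FGSM): nudge every input dimension by a small ε in the direction that increases the loss. A toy sketch on a fixed logistic-regression “model” (the weights and the input below are made up):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": logistic regression with fixed, made-up weights.
w, b = np.array([2.0, -3.0, 1.5]), 0.5

def predict(x):
    return sigmoid(x @ w + b)   # P(class = 1)

x = np.array([0.5, -0.2, 0.1])  # confidently classified as class 1
y = 1.0

# Gradient of the cross-entropy loss with respect to the INPUT.
grad_x = (predict(x) - y) * w

eps = 0.3
x_adv = x + eps * np.sign(grad_x)   # FGSM: step eps along the gradient sign

print(predict(x), predict(x_adv))  # confidence drops sharply after the attack
```

The perturbation is bounded per dimension, yet it is aligned with the model's weights, which is why such small changes move the output so much.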
A new attack on neural networks can alter the network's very purpose.
A surprising adversarial attack, whereby a single perturbation applied to all input images can "reprogram" a poorly-defended neural network to change its task entirely, e.g. turning an ImageNet classifier into a network that counts squares.
Arxiv: https://arxiv.org/pdf/1806.11146.pdf
#Goodfellow #gbrain #adversarialattack
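The mechanics can be sketched: the attack learns a single “program” perturbation, and each small image from the new task is embedded into it before being fed to the unchanged network. In the paper the program is optimized; below it is random, and all shapes and names are illustrative:

```python
import numpy as np

big, small = 224, 28   # e.g. an ImageNet-sized input vs. a small task image
rng = np.random.default_rng(0)

# The single learned perturbation ("program"); random here, for shape only.
program = rng.normal(size=(big, big, 3))

def reprogram_input(task_img):
    """Embed a small target-task image at the center of the adversarial
    program; the resulting canvas is what the victim network sees."""
    canvas = np.tanh(program)            # keep pixel values in [-1, 1]
    pad = (big - small) // 2
    canvas[pad:pad + small, pad:pad + small] = task_img[..., None]
    return canvas

x = reprogram_input(np.ones((small, small)))   # one task image, in [-1, 1]
```

The network's weights never change; only this fixed border perturbation (plus a relabeling of the output classes) repurposes it.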
The most common pitfalls you can encounter when training a neural network.
http://telegra.ph/Most-common-neural-network-mistakes-07-01
#beginner #novice #dl #tutorial
Most common neural network mistakes:
- You didn't try to overfit a single batch first
- You forgot to toggle train/eval mode for the net
- You forgot to .zero_grad() (in PyTorch) before .backward()
- You passed softmaxed outputs to a loss that expects raw logits
- You didn't use `bias=False` for your…
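The first check on the list, overfitting a single batch, can be sketched framework-agnostically. Here a toy linear model stands in for the network; in practice you would run your real model on one fixed batch of real data:

```python
import numpy as np

# Sanity check: a working training loop should drive the loss on ONE
# fixed batch close to zero. If it can't, something upstream is broken.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))              # one fixed batch
y = X @ np.array([1.0, -2.0, 0.5])       # perfectly learnable target

w = np.zeros(3)
for _ in range(500):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)   # MSE gradient
    w -= 0.1 * grad                            # plain gradient descent

loss = float(np.mean((X @ w - y) ** 2))
print(loss)  # a healthy training loop drives this to ~0
```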
This is a day to remembered. #OpenAI 's team of five neural networks, OpenAI Five, has started to defeat amateur human teams (including a semi-pro team) at Dota 2: https://blog.openai.com/openai-five/ It is important, because Dota2 is a way more complicated…
DeepMind announced that its agent beat human performance in Quake III CTF (Capture the Flag).
https://deepmind.com/blog/capture-the-flag/
#rl #quake3 #deepmind
ModaNet: A Large-Scale Street Fashion Dataset with Polygon Annotations
The latest segmentation and detection approaches (DeepLabV3+, Faster R-CNN) applied to street fashion images. The arXiv paper contains information about both the network and the dataset.
Arxiv link: https://arxiv.org/abs/1807.01394
Paperdoll dataset: http://vision.is.tohoku.ac.jp/~kyamagu/research/paperdoll/
#segmentation #dataset #fashion #sv
Hey, our fellow colleagues at the OpenDataScience community are labeling a meme dataset. You can help them with the markup just by viewing memes in this bot: @MemezoidBot
#DataSet #labeling
Neural scene representation and rendering
In June, #DeepMind introduced the Generative Query Network (#GQN), a framework in which machines learn to perceive their surroundings by training only on data they obtain themselves as they move around scenes.
Link: https://deepmind.com/blog/neural-scene-representation-and-rendering/
#DeepMind new release: Neural Processes (#NPs), which generalise #GQN’s training regime to other few-shot prediction tasks such as regression and classification.
Arxiv 1: https://arxiv.org/abs/1807.01622
Arxiv 2: https://arxiv.org/abs/1807.01613
#ICML2018
How #Netflix uses data science to take shows from script to screen (Netflix Los Angeles, June 2017):
Video: https://www.youtube.com/watch?v=qXo9jTxfqJ8&feature=youtu.be
Github: http://netflix.github.io
A really great video showing a practical approach, with some focus on human interaction and on integrating data insights into the product.
#youtube #netflixresearch
Udacity has published a GitHub repo for its Deep Reinforcement Learning Nanodegree program
Repo: https://github.com/udacity/deep-reinforcement-learning
Nanodegree: https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893
#dl #udacity #mooc #course #github #rl
Deep Learning for Matching in Search and Recommendation
PDF: http://www.comp.nus.edu.sg/~xiangnan/sigir18-deep.pdf
#sigir2018 #Tutorial
Glow by #OpenAI: Better Reversible Generative Models
Project link: https://blog.openai.com/glow/
Video: https://d4mucfpksywv.cloudfront.net/research-covers/glow/videos/both_loop_new.mp4