Generalization in Deep Networks: The Role of Distance from Initialization
Why it is important to take initialization into account when explaining generalization.
ArXiV: https://arxiv.org/abs/1901.01672
#DL #NN
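The paper's point is that the distance a network's weights travel from their random initialization, rather than the raw weight norm, is what tracks generalization. A minimal sketch of that quantity (illustrative only; the function name and flat-list parameter representation are my own):

```python
import math

def distance_from_init(weights, init_weights):
    """Euclidean distance ||w - w0|| between current parameters
    and the values they were initialized with."""
    assert len(weights) == len(init_weights)
    return math.sqrt(sum((w - w0) ** 2 for w, w0 in zip(weights, init_weights)))

# A network can have a large weight norm yet sit close to its
# initialization; the paper ties generalization to the latter.
w0 = [0.8, -0.6, 0.4]          # (random) initialization
w_trained = [0.9, -0.5, 0.4]   # weights after training
print(distance_from_init(w_trained, w0))
```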
Reproducibility tool for #Jupyter Notebooks
Link: https://mybinder.org
#DS #github #reproducibleresearch
POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer
POET generates its own increasingly complex, diverse training environments & solves them. It automatically creates learning curricula & training data, & potentially innovates endlessly.
Link: https://eng.uber.com/poet-open-ended-deep-learning/
#RL #Uber
Super-resolution GANs for improving the texture resolution of old games.
It is what it is: a #GAN that enhances textures in old games, making them look better.
ArXiV: https://arxiv.org/abs/1809.00219
Link: https://www.gamespot.com/forums/pc-mac-linux-society-1000004/esrgan-is-pretty-damn-amazing-trying-max-payne-wit-33449670/
#gaming #superresolution
Scikit-learn drops support for Python 2.7 with a new PR.
It means scikit-learn master now requires Python >= 3.5.
https://github.com/scikit-learn/scikit-learn/pull/12639
#scikitlearn
DeepTraffic: a new RL competition hosted by #MIT
Link: https://selfdrivingcars.mit.edu/deeptraffic/
Github: https://github.com/lexfridman/deeptraffic
#RL #selfdrivingcar
A visual exploration of Gaussian Processes: beautiful interactive plots and a brief tutorial to make GPs more approachable
Link: https://www.jgoertler.com/visual-exploration-gaussian-processes/
#Statistics #GP #GaussianProcesses
Evaluating gambles using dynamics
Gambles are random variables that model possible changes in wealth. Instead of transforming money into utility as classic decision theory does, the paper evaluates gambles by the growth of wealth over time.
Link: https://aip.scitation.org/doi/10.1063/1.4940236
#Statistics #Gambling
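The paper's core point can be illustrated with a simple multiplicative coin flip (my own example in the spirit of the paper, not taken from it): a gamble can look attractive in expectation yet shrink the wealth of any individual who plays it repeatedly.

```python
import random

def simulate_wealth(rounds, up=1.5, down=0.6, seed=0):
    """Play a multiplicative gamble: each round, wealth is multiplied
    by `up` or `down` with equal probability."""
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(rounds):
        wealth *= up if rng.random() < 0.5 else down
    return wealth

# Ensemble average of the per-round multiplier: 0.5*1.5 + 0.5*0.6 = 1.05 > 1.
expected_multiplier = 0.5 * 1.5 + 0.5 * 0.6
# Time-average growth factor: sqrt(1.5 * 0.6) = sqrt(0.9) < 1, so a single
# player's wealth decays almost surely over many rounds.
time_average_factor = (1.5 * 0.6) ** 0.5
print(expected_multiplier, time_average_factor, simulate_wealth(5000))
```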
How Uber predicts prices
Engineering Uncertainty Estimation in Neural Networks for Time Series Prediction at Uber
Link: https://eng.uber.com/neural-networks-uncertainty-estimation/
#RNN #LSTM #Uber
Plug-and-play differential privacy for your tensorflow code
#GoogleAI has just released a new library for training machine learning models with (differential) privacy for training data.
Where you would write tf.train.GradientDescentOptimizer, just swap in DPGradientDescentOptimizer instead.
Tutorial: https://github.com/tensorflow/privacy/blob/master/tutorials/mnist_dpsgd_tutorial.py
Link: https://github.com/tensorflow/privacy
#Privacy #tensorflow
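Conceptually, the DP optimizer changes each update in two ways: per-example gradients are clipped to a maximum L2 norm, and Gaussian noise is added before averaging. A pure-Python sketch of that idea (not the library's actual API; the name and signature are my own):

```python
import math
import random

def dp_sgd_step(per_example_grads, weights, l2_clip, noise_std, lr, seed=0):
    """One differentially private SGD step, conceptually:
    1. clip each example's gradient to L2 norm <= l2_clip,
    2. sum the clipped gradients and add Gaussian noise,
    3. average and take an ordinary gradient step."""
    rng = random.Random(seed)
    dim = len(weights)
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, l2_clip / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += g[i] * scale
    n = len(per_example_grads)
    noisy_avg = [(summed[i] + rng.gauss(0.0, noise_std * l2_clip)) / n
                 for i in range(dim)]
    return [w - lr * g for w, g in zip(weights, noisy_avg)]

# With noise_std=0 and a generous clip this reduces to plain mini-batch SGD.
print(dp_sgd_step([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0],
                  l2_clip=10.0, noise_std=0.0, lr=1.0))  # [-0.5, -0.5]
```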
Desnapify
A logical follow-up of the #pix2pix project by Isola et al.; based on the Keras implementation by Thibault de Boissiere, it removes those cat/dog faces from #Snapchat photos.
Github: https://github.com/ipsingh06/ml-desnapify
Mentioned #Keras repo: https://github.com/tdeboissiere/DeepLearningImplementations/tree/master/pix2pix
#DL
How I used NLP (Spacy) to screen Data Science Resumes
An example of how #notAIyet can be used to ease day-to-day work.
Link: https://towardsdatascience.com/do-the-keywords-in-your-resume-aptly-represent-what-type-of-data-scientist-you-are-59134105ba0d
#NLP #HR #DL
AutoML: Automating the design of machine learning models for autonomous driving
Link: https://medium.com/waymo/automl-automating-the-design-of-machine-learning-models-for-autonomous-driving-141a5583ec2a
#Waymo #automl #DL #selfdriving #Google
Valuing Life as an Asset, as a Statistic and at Gunpoint
Ever wondered how much your life is worth? This article evaluates life as an asset. That is extremely useful for insurance companies and as a metric for calculating compensation after tragic events, but it is also key to understanding how valuable (or not) a life is.
Math is beautiful.
Link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3156911
#math #life #insurance #statistics
Learning from Dialogue after Deployment: Feed Yourself, Chatbot!
From abstract: The self-feeding chatbot, a dialogue agent with the ability to extract new training examples from the conversations it participates in.
This is an article about a chatbot capable of true online learning. There is also a VentureBeat article on the subject, covering the perspective: «Facebook and Stanford researchers design a chatbot that learns from its mistakes».
Venturebeat: https://venturebeat.com/2019/01/17/facebook-and-stanford-researchers-design-a-chatbot-that-learns-from-its-mistakes/
ArXiV: https://arxiv.org/abs/1901.05415
#NLP #chatbot #facebook #Stanford
Interesting note on weight decay vs L2 regularization
In short, there was a difference when moving from Caffe (which implements weight decay) to Keras (which implements L2). That led to different results with the same net architecture and the same set of hyperparameters.
Link: https://bbabenko.github.io/weight-decay/
#DL #nn #hyperopt #hyperparams
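For vanilla SGD the two updates are actually identical for matching coefficients, which is easy to check; the divergence the note describes appears once momentum, adaptive scaling, or framework-specific learning-rate coupling enters. A quick sketch of the SGD case (my own illustrative code):

```python
def sgd_l2_update(w, g, lr, lam):
    """SGD where the L2 penalty's gradient (lam * w) is folded
    into the loss gradient, as Keras does."""
    return [wi - lr * (gi + lam * wi) for wi, gi in zip(w, g)]

def sgd_weight_decay_update(w, g, lr, lam):
    """SGD with weight decay applied directly to the weights,
    as Caffe does."""
    return [(1 - lr * lam) * wi - lr * gi for wi, gi in zip(w, g)]

w = [1.0, -2.0, 0.5]
g = [0.1, 0.3, -0.2]
a = sgd_l2_update(w, g, lr=0.1, lam=0.01)
b = sgd_weight_decay_update(w, g, lr=0.1, lam=0.01)
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))  # identical for vanilla SGD
```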
IQ is largely a pseudoscientific swindle
A note by Nassim Taleb on how IQ works. He argues that high IQ is not well correlated with wealth or overall cognitive performance.
Link: https://medium.com/incerto/iq-is-largely-a-pseudoscientific-swindle-f131c101ba39
#statistics #iq #fallacy
Understanding Convolutional Neural Networks through Visualizations in PyTorch
An explanation of how a #CNN works.
Link: https://towardsdatascience.com/understanding-convolutional-neural-networks-through-visualizations-in-pytorch-b5444de08b91
#PyTorch #nn #DL
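As a refresher on the basic operation those visualizations build on (illustrative code, not from the article): a convolutional layer slides a small kernel over the input and takes elementwise dot products.

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (really cross-correlation, as in most
    deep learning frameworks): slide the kernel over the image and
    take elementwise dot products."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A simple edge kernel responds only where pixel values change horizontally.
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge = [[1, -1], [1, -1]]
print(conv2d_valid(img, edge))  # [[0, -2, 0], [0, -2, 0], [0, -2, 0]]
```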