Photo to anime portrait
U-GAT-IT — Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation.
Link: https://github.com/taki0112/UGATIT
#Tensorflow #GAN #CV #DL #anime
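The adaptive layer-instance normalization (AdaLIN) in the title is the interesting bit: a learned ratio blends instance-norm and layer-norm statistics. A minimal PyTorch sketch of the idea (not the authors' TensorFlow code):

```python
import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    """Sketch of U-GAT-IT's Adaptive Layer-Instance Normalization.

    A learnable ratio `rho` in [0, 1] mixes instance-normalized and
    layer-normalized activations; `gamma` and `beta` are predicted from
    the generator's attention features in the paper.
    """
    def __init__(self, num_channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, num_channels, 1, 1), 0.9))

    def forward(self, x, gamma, beta):
        # Instance norm: statistics per sample and per channel.
        in_mean = x.mean(dim=(2, 3), keepdim=True)
        in_var = x.var(dim=(2, 3), keepdim=True)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        # Layer norm: statistics per sample over channels and pixels.
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)
        ln_var = x.var(dim=(1, 2, 3), keepdim=True)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        # Blend the two normalizations, then apply the adaptive affine.
        rho = self.rho.clamp(0.0, 1.0)
        out = rho * x_in + (1.0 - rho) * x_ln
        b, c = x.shape[:2]
        return out * gamma.view(b, c, 1, 1) + beta.view(b, c, 1, 1)
```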
Unified rational protein engineering with sequence-only deep representation learning
UniRep is a sequence-only deep representation of proteins that can predict, among other properties, whether an amino-acid sequence yields a stable protein. In industry, that’s vital for determining the production yields, reaction rates, and shelf life of protein-based products.
Link: https://www.biorxiv.org/content/10.1101/589333v1.full
#biolearning #rnn #Harvard #sequence #protein
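Under the hood UniRep is a recurrent model (an mLSTM in the paper) trained on raw sequences; averaging its hidden states gives a fixed-length vector for downstream property prediction. A toy PyTorch sketch with a plain LSTM standing in for the mLSTM (sizes and vocabulary are illustrative):

```python
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
AA_TO_IDX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

class ToyUniRep(nn.Module):
    """Embed residues, run an RNN over the sequence, mean-pool the hidden
    states into a fixed-length representation (the paper uses an mLSTM)."""
    def __init__(self, embed_dim=10, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(len(AMINO_ACIDS), embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, sequence):
        idx = torch.tensor([[AA_TO_IDX[aa] for aa in sequence]])
        states, _ = self.rnn(self.embed(idx))   # (1, length, hidden_dim)
        return states.mean(dim=1).squeeze(0)    # fixed-length vector

rep = ToyUniRep()("MKTAYIAKQR")  # feed this into a stability regressor
```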
A Guide for Making Black Box Models Explainable.
One of the biggest challenges is making ML models interpretable (explainable to humans, preferably non-experts). It matters not only in credit scoring, to rule out racial or any other bias, or in news promotion and display (the Cambridge Analytica case), but also for debugging and further progress in model training.
Link: https://christophm.github.io/interpretable-ml-book/
#guide #interpretablelearning #IL
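For a taste of the book's model-agnostic chapter, here is permutation feature importance in a few lines: shuffle one feature and measure how much the test score drops (a sketch with scikit-learn on a stock dataset):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

baseline = model.score(X_te, y_te)
rng = np.random.default_rng(0)
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    rng.shuffle(X_perm[:, j])  # break the feature's link to the target
    print(f"feature {j}: importance ~ {baseline - model.score(X_perm, y_te):.4f}")
```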
Forwarded from Machinelearning
🔥 New Releases: PyTorch 1.2, torchtext 0.4, torchaudio 0.3, and torchvision 0.4
https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/
https://github.com/pytorch/pytorch/releases
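Among the 1.2 highlights is a standard torch.nn.Transformer module; a quick shape-only smoke test:

```python
import torch
import torch.nn as nn

# nn.Transformer is new in PyTorch 1.2; tensors are sequence-first (S, N, E).
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6)
src = torch.rand(10, 32, 512)  # (source length, batch, features)
tgt = torch.rand(20, 32, 512)  # (target length, batch, features)
out = model(src, tgt)
print(out.shape)               # torch.Size([20, 32, 512])
```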
Forwarded from Находки в опенсорсе (Finds in open source)
Simple real-time visualisation of the execution of a #python program: https://github.com/alexmojaki/heartrate
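Per the README, switching it on is a one-liner; the traced file then gets a live browser view of per-line hit counts (the fib function below is just a demo workload):

```python
import heartrate

heartrate.trace(browser=True)  # opens a live view of this file's execution

def fib(n):
    # A demo workload: the live view shows hit counts per line.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(20)
```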
Forwarded from Karim Iskakov's channel (Vladimir Ivashkin)
A T-shirt that injects junk data into surveillance systems: a stylish tool for peaceful protest against state tracking of people.
🔎 adversarialfashion.com
📉 @loss_function_porn
Community Day @ MLSS 2019
MLSS Community Day is a free one-day event for everyone interested in Machine Learning.
Speakers from premier Machine Learning institutions such as the University of Oxford, University College London, and the Max Planck Institute, as well as from renowned companies, will cover the latest advances in applications for healthcare, telecommunications, NLP, finance, and quantum computing.
When & Where: August 31, Skoltech, Moscow
Link: https://mlss2019.skoltech.ru/community-day
#MLSS #MLSS2019 #Skolkovo
DeepMind's Behaviour Suite for Reinforcement Learning
DeepMind released Behaviour Suite for Reinforcement Learning, or ‘bsuite’ – a collection of carefully designed experiments that investigate core capabilities of RL agents.
bsuite was built to do two things:
1. Offer clear, informative, and scalable experiments that capture key issues in RL
2. Study agent behaviour through performance on shared benchmarks
GitHub: https://github.com/deepmind/bsuite
Paper: https://arxiv.org/abs/1908.03568v1
Google colab: https://colab.research.google.com/drive/1rU20zJ281sZuMD1DHbsODFr1DbASL0RH
#RL #DeepMind #Bsuite
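Environments follow the dm_env interface, so a random-agent loop over one experiment looks roughly like this (loader names are from memory of the README, so treat them as an assumption):

```python
import numpy as np
import bsuite

# Load one experiment by id; 'catch/0' is one of the shipped configurations.
env = bsuite.load_from_id('catch/0')
num_actions = env.action_spec().num_values

# Standard dm_env loop with a random policy as a baseline.
timestep = env.reset()
episode_return = 0.0
while not timestep.last():
    action = np.random.randint(num_actions)
    timestep = env.step(action)
    episode_return += timestep.reward
print('random-agent return:', episode_return)
```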
Neural Text d̶e̶Generation with Unlikelihood Training
Introducing a new objective, unlikelihood training, which forces the model to assign lower probability to unlikely generations, improving the overall quality of generated text.
Link: https://arxiv.org/pdf/1908.04319.pdf
#NLU #NLP #textgeneration
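Concretely, on top of the usual MLE term the paper penalizes a set of negative candidates C_t at each step (e.g. tokens repeated from the context) with -Σ_{c∈C_t} log(1 - p(c | x_<t)). A hedged PyTorch sketch of the token-level loss:

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, target, neg_candidates, alpha=1.0):
    """Sketch of the token-level objective (not the authors' code).

    logits:         (T, V) model outputs for one sequence
    target:         (T,)   gold next tokens
    neg_candidates: (T, V) 0/1 mask of tokens to push down per step,
                    e.g. tokens already generated in the context
    """
    log_probs = F.log_softmax(logits, dim=-1)
    mle = F.nll_loss(log_probs, target)          # usual likelihood term
    probs = log_probs.exp()
    # Unlikelihood term: -sum over candidates of log(1 - p(c)).
    one_minus = torch.clamp(1.0 - probs, min=1e-5)
    ul = -(neg_candidates * one_minus.log()).sum(dim=-1).mean()
    return mle + alpha * ul
```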
ODS breakfast in Paris! See you this Saturday at 10:30 at Malongo Café, 50 Rue Saint-André des Arts.
🥇Parameter optimization in neural networks.
Play with three interactive visualizations and develop your intuition for optimizing model parameters.
Link: https://www.deeplearning.ai/ai-notes/optimization/
#interactive #demo #optimization #parameteroptimization #novice #entrylevel #beginner #goldcontent #nn #neuralnetwork
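If you prefer code to sliders, here is the same intuition in a few lines: vanilla gradient descent versus momentum on a 1-D quadratic (illustrative, not taken from the site):

```python
# Minimize f(w) = w^2 with vanilla gradient descent and with momentum.
def grad(w):
    return 2.0 * w

w_gd, w_mom, velocity = 5.0, 5.0, 0.0
lr, beta = 0.1, 0.9
for _ in range(100):
    w_gd -= lr * grad(w_gd)                   # step along the gradient
    velocity = beta * velocity + grad(w_mom)  # momentum smooths the steps
    w_mom -= lr * velocity
print(w_gd, w_mom)  # both approach the minimum at w = 0
```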
The HSIC Bottleneck: Deep Learning without Back-Propagation
An alternative to conventional backpropagation that has a number of distinct advantages.
Link: https://arxiv.org/abs/1908.01580
#nn #backpropagation #DL #theory
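The paper swaps the backpropagated loss for a dependence-based objective: roughly, train each layer to have high HSIC with the labels and low HSIC with the inputs. A NumPy sketch of the standard biased HSIC estimator with Gaussian kernels:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Pairwise squared distances -> Gaussian kernel matrix.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased HSIC estimator: tr(K H L H) / (m - 1)^2."""
    m = X.shape[0]
    K, L = rbf_kernel(X, sigma), rbf_kernel(Y, sigma)
    H = np.eye(m) - np.ones((m, m)) / m  # centering matrix
    return np.trace(K @ H @ L @ H) / (m - 1) ** 2

x = np.random.randn(200, 1)
# Dependent pairs score higher than independent ones.
print(hsic(x, x ** 2), hsic(x, np.random.randn(200, 1)))
```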
If you happen to be in Moscow in the next couple of weeks, we invite you to take part in Moscow Data Science Major on August 31st at the Mail.ru Group office!
It’s like OpenDataScience’s Data Fest, but a mini version (in duration, not in content density). It’s like the 1st of October, but on the 31st of August.
MDSM gathers researchers, engineers, and developers working on Data Science and Machine Learning:
- Top speakers and talks, zero bullshit
- Lots of new insights, skills, and know-how
- Best networking with the community
Link: https://datafest.ru/major/
Registration link: https://corp.mail.ru/ru/press/events/mdsm_aug19/
GPT-2: 6-Month Follow-Up
#OpenAI released the 774 million parameter #GPT2 language model.
Link: https://openai.com/blog/gpt-2-6-month-follow-up/
#NLU #NLP
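One way to poke at the released weights is Hugging Face's transformers library, where the 774M checkpoint is exposed as "gpt2-large" (a third-party loader, not OpenAI's own code):

```python
# "gpt2-large" is the 774M checkpoint on the Hugging Face hub.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

input_ids = tokenizer.encode("Machine learning is", return_tensors="pt")
output = model.generate(input_ids, max_length=30, do_sample=True, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```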
Applying machine learning optimization methods to the production of a quantum gas
#DeepMind developed machine learning techniques to optimise the production of a Bose-Einstein condensate, a quantum-mechanical state of matter that can be used to test predictions of theories of many-body physics.
ArXiV: https://arxiv.org/abs/1908.08495
#Physics #DL #BEC
Testing Robustness Against Unforeseen Adversaries
OpenAI developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training. The method yields a new metric, #UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks.
Link: https://openai.com/blog/testing-robustness/
ArXiV: https://arxiv.org/abs/1908.08016
Code: https://github.com/ddkang/advex-uar
#GAN #Adversarial #OpenAI
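The evaluation recipe, stripped down: attack the model at several calibrated distortion sizes and aggregate accuracy. A simplified sketch with one-step FGSM standing in for the paper's suite of unforeseen attacks (the linked repo implements the full UAR protocol):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step L-infinity attack; inputs assumed scaled to [0, 1]."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def accuracy_under_attack(model, loader, eps_values):
    """Accuracy at several distortion sizes; UAR summarizes such numbers
    against a calibrated set of attacks unseen during training."""
    results = {}
    for eps in eps_values:
        correct = total = 0
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps)
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        results[eps] = correct / total
    return results
```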
OpenGPT-2: We Replicated GPT-2 Because You Can Too
An article about replicating the famous #GPT2. The project trained a 1.5B-parameter «OpenGPT-2» model on OpenWebTextCorpus, a 38GB dataset similar to the original, and showed results comparable to the original GPT-2 on various benchmarks.
Link: https://medium.com/@vanya_cohen/opengpt-2-we-replicated-gpt-2-because-you-can-too-45e34e6d36dc
Google colab: https://colab.research.google.com/drive/1esbpDOorf7DQJV8GXWON24c-EQrSKOit
OpenWebCorpus: https://skylion007.github.io/OpenWebTextCorpus/
#NLU #NLP
Open-sourcing hyperparameter autotuning for fastText
Facebook AI researchers are releasing a new feature for the fastText library which provides hyperparameter autotuning for more efficient text classifiers.
Link: https://ai.facebook.com/blog/fasttext-blog-post-open-source-in-brief/
#FacebookAI #Facebook #FastText #NLU #NLP
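Usage is a couple of extra arguments to the supervised trainer, if memory of the docs serves (file paths here are placeholders):

```python
import fasttext

# Autotune searches lr, n-grams, dimensions, etc. against the validation
# set for a fixed time budget.
model = fasttext.train_supervised(
    input="train.txt",                  # "__label__X some text" per line
    autotuneValidationFile="valid.txt",
    autotuneDuration=600,               # seconds to spend on the search
)
print(model.test("valid.txt"))          # (N, precision@1, recall@1)
```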
The infinite gift
An interesting object where the side of the nth box is 1/√n: as n→+∞, the gift has infinite surface area and length but finite volume!
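The punchline is a p-series check (Σ n^(-p) converges iff p > 1): each box has side n^(-1/2), so

```latex
\text{length} = \sum_{n\ge 1} n^{-1/2} = \infty, \qquad
\text{side area} \approx 4 \sum_{n\ge 1} \left(n^{-1/2}\right)^{2}
                = 4 \sum_{n\ge 1} \frac{1}{n} = \infty, \qquad
\text{volume} = \sum_{n\ge 1} \left(n^{-1/2}\right)^{3}
              = \zeta\!\left(\tfrac{3}{2}\right) \approx 2.612 < \infty.
```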
Exploring Weight Agnostic Neural Networks
Exploration of agents that can already perform well in their environment without the need to learn weight parameters.
Link: https://ai.googleblog.com
Code: https://github.com/google/brain-tokyo-workshop/tree/master/WANNRelease
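The core trick: don't train weights at all; evaluate a candidate topology with one shared weight swept over a handful of values and score the architecture by its average performance. A toy sketch of that evaluation (the actual release evolves topologies; the interfaces here are made up):

```python
import numpy as np

def policy(obs, w):
    """A fixed toy topology in which EVERY connection shares one weight w."""
    hidden = np.tanh(w * obs)           # hidden layer, shared weight
    return np.tanh(w * hidden.sum())    # single output unit

def weight_agnostic_fitness(obs_batch, target):
    """Score the architecture by its mean performance over several shared
    weight values, so no individual weight is ever trained."""
    scores = []
    for w in (-2.0, -1.0, -0.5, 0.5, 1.0, 2.0):
        preds = np.array([policy(o, w) for o in obs_batch])
        scores.append(-np.mean((preds - target) ** 2))  # negative MSE
    return float(np.mean(scores))

obs = np.random.randn(32, 4)
print(weight_agnostic_fitness(obs, target=np.sign(obs.sum(axis=1))))
```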