Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and the applications of the former. To reach the editors, contact: @malev
加ε…₯钑道
OpenAI releases an MMO.

Spoiler: it is not an MMORPG. It is a Massively Multiagent Game environment for reinforcement learning agents. It will allow developing something that is to #trueAI what an amoeba is to a human. And it's live now.

Link: https://blog.openai.com/neural-mmo/
Github: https://github.com/openai/neural-mmo
3DClient github: https://github.com/jsuarez5341/neural-mmo-client

#OpenAI
Testing Robustness Against Unforeseen Adversaries

OpenAI developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training. The method yields a new metric, #UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks.
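
A back-of-the-envelope sketch of how such a metric can be computed (our reading of the paper, with our own function and variable names; roughly, the evaluated model's adversarial accuracies over calibrated distortion sizes, normalized by those of a model adversarially trained against that same attack):

```python
# Hedged sketch of UAR as we read arxiv.org/abs/1908.08016:
# UAR(A, M) = 100 * sum_k Acc(A, eps_k, M) / sum_k Acc(A, eps_k, M_A),
# where eps_k are calibrated distortion sizes for attack A and M_A is
# a model adversarially trained against A (the normalizer).

def uar(eval_accs, adv_trained_accs):
    """eval_accs[k]: accuracy of the evaluated model under attack A at
    calibrated size eps_k; adv_trained_accs[k]: same for the A-trained model."""
    assert len(eval_accs) == len(adv_trained_accs)
    return 100.0 * sum(eval_accs) / sum(adv_trained_accs)

# Toy usage with made-up accuracies at six calibrated distortion sizes:
print(uar([0.90, 0.80, 0.60, 0.40, 0.20, 0.10],
          [0.95, 0.90, 0.80, 0.70, 0.60, 0.50]))  # => ~67.4
```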

Link: https://openai.com/blog/testing-robustness/
ArXiV: https://arxiv.org/abs/1908.08016
Code: https://github.com/ddkang/advex-uar

#GAN #Adversarial #OpenAI
πŸŽ“ Reinforcement Learning Course from OpenAI

Reinforcement Learning is becoming a significant part of the data scientist's toolbox.
OpenAI created and published one of the best courses on #RL, with the algorithm implementations written in #Tensorflow.
But if you are more comfortable with #PyTorch, we have found a #PyTorch implementation of these algorithms.
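
To get a taste of what the course covers, here is a minimal, self-contained PyTorch sketch of the simplest algorithm in the family, vanilla policy gradient (REINFORCE), on a toy two-armed bandit we define inline – an illustration of the idea, not code from Spinning Up or firedup:

```python
import torch

# Toy environment (ours, for illustration): a 2-armed bandit where
# arm 1 pays +1 with prob 0.8 and arm 0 pays +1 with prob 0.2.
def pull(arm):
    p = 0.8 if arm == 1 else 0.2
    return 1.0 if torch.rand(()).item() < p else 0.0

logits = torch.zeros(2, requires_grad=True)   # policy parameters
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(500):
    dist = torch.distributions.Categorical(logits=logits)
    arm = dist.sample()
    reward = pull(arm.item())
    # REINFORCE: maximize E[log pi(a) * R] <=> minimize -log pi(a) * R
    loss = -dist.log_prob(arm) * reward
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.softmax(logits, dim=0))           # should strongly prefer arm 1
```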

OpenAI Course: https://spinningup.openai.com/en/latest/
Tensorflow Code: https://github.com/openai/spinningup
PyTorch Code: https://github.com/kashif/firedup

#MOOC #edu #course #OpenAI
DEEP DOUBLE DESCENT
where bigger models and more data hurt

it's a really cool & interesting piece of research showing that performance first improves, then gets worse, and then improves again as model size, data size, or training time increases. this effect can often be avoided through careful regularization (a toy sketch after the list below illustrates the model-size peak).

some conclusions from the research:
– there is a regime where bigger models are worse
– there is a regime where more samples hurt
– there is a regime where training longer reverses overfitting
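
model-wise double descent is easy to reproduce in a toy setting. a self-contained numpy sketch (our illustration, not the paper's code): min-norm least squares on random nonlinear features, where test error typically spikes near the interpolation threshold (features ≈ training samples) and then falls again as the model grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.5 * rng.normal(size=n)   # noisy linear target
    return X, y

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)
V = rng.normal(size=(d, 400))                   # random projection directions

for k in [10, 50, 90, 100, 110, 200, 400]:      # "model size" = n features
    Ftr = np.tanh(Xtr @ V[:, :k])               # random nonlinear features
    Fte = np.tanh(Xte @ V[:, :k])
    w = np.linalg.pinv(Ftr) @ ytr               # min-norm least-squares fit
    mse = np.mean((Fte @ w - yte) ** 2)
    print(f"features={k:4d}  test MSE={mse:8.3f}")
# test MSE typically peaks around features ≈ n_train, then decreases again
```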

blog post: https://openai.com/blog/deep-double-descent/
paper: https://arxiv.org/abs/1912.02292

#deep #train #size #openai
β€‹β€‹πŸŽ™πŸŽΆImproved audio generative model from OpenAI

Wow! OpenAI just released Jukebox – a neural net and service that generates music from a genre, an artist name, and some lyrics that you supply. It can even generate singing, which sounds like it is playing from a corrupted magnetic compact cassette.

Some of the samples sound like they come straight from hell: an agonizing Michael Jackson, for example, or a creepy Eminem or Celine Dion.

#OpenAI 's approach is to use 3 levels of quantized variational autoencoders (VQ-VAE-2) to learn discrete representations of audio, compressing it by 8x, 32x, and 128x, with a spectral loss to reconstruct spectrograms. After that, sparse transformers conditioned on the lyrics generate new patterns over the discrete codes, which are upsampled to the higher-rate levels and decoded back into a song.
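
For a feel of the quantized-autoencoder building block, here is a minimal PyTorch sketch of a single VQ bottleneck with 8x temporal compression and a straight-through estimator. This is a schematic illustration of the mechanism only – not Jukebox's actual code; all names and sizes are our own choices:

```python
import torch
import torch.nn as nn

class TinyVQAE(nn.Module):
    """One-level VQ autoencoder: 8x temporal compression of raw audio."""
    def __init__(self, codebook_size=512, dim=64):
        super().__init__()
        # three stride-2 convs => 8x downsampling, mirrored in the decoder
        self.enc = nn.Sequential(
            nn.Conv1d(1, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(dim, dim, 4, stride=2, padding=1))
        self.codebook = nn.Embedding(codebook_size, dim)
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(dim, 1, 4, stride=2, padding=1))

    def forward(self, x):                     # x: (batch, 1, time)
        z = self.enc(x).transpose(1, 2)       # (batch, time/8, dim)
        # nearest codebook entry for each latent vector -> discrete tokens
        flat = z.reshape(-1, z.size(-1))
        codes = torch.cdist(flat, self.codebook.weight).argmin(-1).view(z.shape[:2])
        zq = self.codebook(codes)
        zq = z + (zq - z).detach()            # straight-through gradient trick
        # (a real VQ-VAE adds codebook and commitment losses, omitted here)
        return self.dec(zq.transpose(1, 2)), codes

x = torch.randn(2, 1, 8192)                   # two fake audio clips
recon, codes = TinyVQAE()(x)
print(recon.shape, codes.shape)               # (2, 1, 8192), (2, 1024)
```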

The net even learns to generate solo parts during a track.

explore some creepy songs: https://jukebox.openai.com/
code: https://github.com/openai/jukebox/
paper: https://cdn.openai.com/papers/jukebox.pdf
blog: https://openai.com/blog/jukebox/

#openAI #music #sound #cool #fan #creepy #vae #audiolearning #soundlearning