Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and applications of the former. To reach the editors, contact: @malev
加ε…₯钑道
πŸ”₯Singing voice conversion system developed at FAIR-Tel Aviv.

This can transform someone's singing voice into someone else's voice.

YouTube: https://www.youtube.com/watch?v=IEpkGenLnjw
Link: https://venturebeat.com/2019/04/16/facebooks-ai-can-convert-one-singers-voice-into-another/
ArXiV: https://arxiv.org/abs/1904.06590

#voiceconversion #audiolearning #DL #Facebook
TransGaGa: Geometry-Aware Unsupervised Image-to-Image Translation

Paper: https://arxiv.org/pdf/1904.09571v1.pdf

#GAN #cv #dl
A Recipe for Training Neural Networks by Andrej Karpathy

New article written by Andrej Karpathy distilling a bunch of useful heuristics for training neural nets. The post is full of real-world knowledge and how-to details that are not taught in books and often take endless hours to learn the hard way.

Link: https://karpathy.github.io/2019/04/25/recipe/

#tipsandtricks #karpathy #tutorial #nn #ml #dl
ODE DL paper with overview

This paper received a Best Paper award at #NeurIPS2018. Main idea: define a deep residual network as a continuously evolving system — instead of updating the hidden units layer by layer, define their derivative with respect to depth.
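
The connection can be sketched in a few lines (a hypothetical toy, not the paper's code; `f` here is a fixed function standing in for the learned dynamics): a residual update h_{k+1} = h_k + f(h_k, t_k)·dt is exactly Euler integration of dh/dt = f(h, t), so stacking layers becomes integrating over continuous "depth".

```python
import numpy as np

def f(h, t):
    # Toy dynamics standing in for a learned layer f(h, t; theta)
    return np.tanh(h) * 0.1

def euler_integrate(h0, t0=0.0, t1=1.0, steps=100):
    # Euler's method: each step plays the role of one residual block
    h = np.array(h0, dtype=float)
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        h = h + f(h, t) * dt   # h_{k+1} = h_k + f(h_k, t_k) * dt
        t += dt
    return h

h_final = euler_integrate([1.0, -0.5])
```

In the paper this fixed-step loop is replaced by an adaptive black-box ODE solver (see the torchdiffeq repo below), and gradients flow through it via the adjoint method rather than by backpropagating through every step.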

ArXiV: https://arxiv.org/pdf/1806.07366.pdf
GitHub: https://github.com/rtqichen/torchdiffeq
Overview: https://rkevingibson.github.io/blog/neural-networks-as-ordinary-differential-equations/

#ODE #DL #NeurIPS
Great visualization of DBSCAN

DBSCAN β€” is fast and rather reliable #clustering algorithm. It can outperform classical K-means in some cases and icredibly useful in some cases. This interactive demo helps to understand how algorithm really works.

Link: https://www.naftaliharris.com/blog/visualizing-dbscan-clustering/

#ML #dbscan
πŸ“ΉVideo about best chalk for the blackboard.

This is the story of chalk. Not just any chalk, but a Japanese brand called Hagoromo, which mathematician Satyan Devadoss dubbed "the Michael Jordan of chalk, the Rolls Royce of chalk." Then the company decided to stop making chalk. So mathematicians began hoarding it.

YouTube: https://www.youtube.com/watch?v=PhNUjg9X4g8
Real-Time Patch-Based Stylization of Portraits Using Generative Adversarial Network

Face photo stylization from the #Snap research team. A rather fast solution, with a demo available.

Demo: http://facestyle.org/#
Paper: https://dcgi.fel.cvut.cz/home/sykorad/Futschik19-NPAR.pdf
YouTube: https://www.youtube.com/watch?v=G3nwTSd3_XA

#GAN #DL #Styletransfer
Speech synthesis from neural decoding of spoken sentences

Researchers tapped the brains of five epilepsy patients who had been implanted with electrodes to map the source of seizures, according to a paper published by #Nature. During a lull in the procedure, they had the patients read English-language texts aloud. They recorded the fluctuating voltage as the brain controlled the muscles involved in speaking. Later, they fed the voltage measurements into a synthesizer.

Nature: https://www.nature.com/articles/s41586-019-1119-1
Paper: https://www.biorxiv.org/content/biorxiv/early/2018/11/29/481267.full.pdf
YouTube: https://www.youtube.com/watch?v=kbX9FLJ6WKw

#DeepDiveWeekly #DL #speech #audiolearning
Unsupervised community detection with modularity-based attention model

Searching for communities on graphs is hard: there is no clear loss, and the labels are (usually) discrete. What the authors do: use a soft log-likelihood approximation of modularity, plus a few tricks and GNNs, to try to match the classical SOTA.
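
For context, this is the hard, non-differentiable objective that the soft approximation targets — Newman modularity Q = (1/2m) Σ_ij (A_ij − k_i·k_j/2m)·δ(c_i, c_j). A toy illustration (my own example, not from the paper; simple undirected graph, no self-loops):

```python
def modularity(edges, communities):
    # Newman modularity Q for a simple undirected graph,
    # given an edge list and a node -> community-id mapping.
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    adj = set()
    for u, v in edges:
        adj.add((u, v))
        adj.add((v, u))
    q = 0.0
    for i in deg:
        for j in deg:
            if communities[i] == communities[j]:
                a_ij = 1.0 if (i, j) in adj else 0.0
                q += a_ij - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single edge: the obvious split scores well,
# a scrambled assignment scores badly.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = modularity(edges, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1})
bad = modularity(edges, {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1})
```

Maximizing Q over discrete assignments is what makes the problem combinatorial; the paper's attention model relaxes exactly this.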

Paper: https://rlgm.github.io/papers/37.pdf

#ICLR2019 #GNN #GraphLearning
πŸ‘1
The lottery ticket hypothesis: finding sparse, trainable neural networks

Best Paper award at #ICLR2019. Main idea: dense, randomly-initialized networks contain sparse subnetworks that, when trained in isolation, reach test accuracy comparable to the original network — allowing the original network to be compressed to around 10% of its original size.
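
The "winning tickets" are found by magnitude pruning; a minimal sketch of the one-shot pruning step (my own illustration, not the paper's code — the paper prunes iteratively and, crucially, resets the surviving weights to their *original* initialization before retraining):

```python
import numpy as np

def magnitude_mask(weights, keep_frac=0.1):
    """Binary mask keeping only the top keep_frac fraction of weights by magnitude."""
    flat = np.abs(weights).ravel()
    k = max(1, int(round(keep_frac * flat.size)))
    threshold = np.sort(flat)[-k]                    # k-th largest magnitude
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))                      # stand-in for a trained layer
mask = magnitude_mask(w, keep_frac=0.1)              # keep 10% of the weights
sparsity = 1.0 - mask.mean()                         # fraction of weights removed
```

Training then proceeds on `w * mask` with the mask frozen; the hypothesis is that this 10% subnetwork, started from its original init, matches the dense network's accuracy.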

Paper: https://arxiv.org/pdf/1803.03635.pdf

#nn #research