Neural Networks | Нейронные сети
11.7K subscribers
763 photos
169 videos
170 files
9.42K links
Everything about machine learning

For all questions: @notxxx1

🎥 TF Machine Learning for Programmers (TensorFlow @ O’Reilly AI Conference, San Francisco '18)
👁 1 view · 2516 sec.
In this talk, Laurence Moroney from Google talked about Machine Learning, AI, Deep Learning and more, and how they fit into the programmer's toolkit. He introduced what it's all about, cutting through the hype to show the opportunities available in Machine Learning. He also introduced TensorFlow, a framework designed to make Machine Learning easy and accessible, and showed how intelligent apps that use ML can run in a variety of places, including mobile, web and IoT.



More TensorFlow videos
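To give a flavor of the talk's "ML for programmers" framing, here is a minimal Keras sketch; the toy task (learning y = 2x - 1 with a single neuron) is illustrative and assumed here, not taken verbatim from the talk recording.

import numpy as np
from tensorflow import keras

# One dense neuron: the model has to discover the weight (2) and bias (-1).
model = keras.Sequential([keras.Input(shape=(1,)), keras.layers.Dense(units=1)])
model.compile(optimizer="sgd", loss="mean_squared_error")

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]).reshape(-1, 1)

model.fit(xs, ys, epochs=500, verbose=0)
print(model.predict(np.array([[10.0]])))  # close to 19.0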
Facebook has created and open-sourced Nevergrad, a Python3 library that aims to make gradient-free optimization easier.

Link: https://code.fb.com/ai-research/nevergrad/
Github: https://github.com/facebookresearch/nevergrad

🔗 Nevergrad: An open source tool for derivative-free optimization - Facebook Code
We are open-sourcing Nevergrad, a Python3 library that makes it easier to perform gradient-free optimizations used in many machine learning tasks.
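For a sense of the API, here is a minimal sketch based on the library's README-style example (the toy objective and the OnePlusOne optimizer choice are illustrative; the exact interface has evolved across Nevergrad releases, so check the current docs).

import nevergrad as ng

# Toy objective: squared distance to the point (0.5, 0.5).
def square(x):
    return float(((x - 0.5) ** 2).sum())

# Derivative-free optimizer: only function evaluations are used, no gradients.
optimizer = ng.optimizers.OnePlusOne(parametrization=2, budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value)  # should be close to [0.5, 0.5]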
​How to Reduce the Variance of Deep Learning Models in Keras Using Model Averaging Ensembles

https://machinelearningmastery.com/model-averaging-ensemble-for-deep-learning-neural-networks/

🔗 How to Reduce the Variance of Deep Learning Models in Keras Using Model Averaging Ensembles
Deep learning neural network models are highly flexible nonlinear algorithms capable of learning a near infinite number of mapping functions. A frustration with this flexibility is the high variance in a final model. The same neural network model trained on the same dataset may find one of many different possible “good enough” solutions each time …
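The core of the technique fits in a few lines of Keras. The sketch below is a minimal illustration of model averaging, not code from the article; the tiny architecture, the make_model and ensemble_predict helpers, and the commented-out data names (X_train, y_train, X_test) are assumptions.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_model(input_dim, n_classes):
    # A deliberately small classifier; each training run lands in a different
    # "good enough" solution because of random initialization and data shuffling.
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def ensemble_predict(members, X):
    # Average the predicted class probabilities over all members,
    # then take the argmax as the ensemble prediction.
    probs = np.mean([m.predict(X, verbose=0) for m in members], axis=0)
    return np.argmax(probs, axis=1)

# Train several identically configured models and average their predictions:
# members = [make_model(X_train.shape[1], 3) for _ in range(5)]
# for m in members:
#     m.fit(X_train, y_train, epochs=50, verbose=0)
# y_pred = ensemble_predict(members, X_test)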
Reproducing high-quality singing voice with state-of-the-art AI technology.

An advance in singing voice synthesis. This opens a path toward more interesting collaborations and synthetic-celebrity projects.

P.S. Hatsune Miku will still remain popular for her particular qualities, but now there is more room for competitors.

Link: https://www.techno-speech.com/news-20181214a-en

#SOTA #Voice #Synthesis

🔗 Reproducing high-quality singing voice
​The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning)

🔗 The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning)
The year 2018 has been an inflection point for machine learning models handling text (or more accurately, Natural Language Processing or NLP for short). Our conceptual understanding of how best to represent words and sentences in a way that best captures underlying meanings and relationships is rapidly evolving. Moreover, the NLP community has been putting forward incredibly powerful components that you can freely download and use in your own models and pipelines (It’s been referred to as NLP’s ImageNet moment, referencing how years ago similar developments accelerated the development of machine learning in Computer Vision tasks).
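As a concrete illustration of "download a pretrained component and plug it into your own pipeline", here is a short sketch that pulls contextual BERT features with the Hugging Face transformers library (the library post-dates the article and is not mentioned in it; it is used here only as an assumed convenience).

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("NLP had its ImageNet moment in 2018.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token, ready to feed into a small
# task-specific classifier on top.
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)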
Intel's office in Nizhny Novgorod, among other things, develops computer vision algorithms based on deep neural networks. Many of our algorithms are published in the Open Model Zoo repository. Training the models requires a large amount of labeled data. In theory there are many ways to prepare it, but specialized software speeds the process up many times over. So, to improve the efficiency and quality of annotation, we developed our own tool: the Computer Vision Annotation Tool (CVAT).

🔗 Computer Vision Annotation Tool: a universal approach to data annotation
Intel's office in Nizhny Novgorod, among other things, develops computer vision algorithms based on deep neural networks. Training the models...