Kaggle YouTube-8M 2: video classification
🎥 Kaggle YouTube-8M 2: video classification — Gleb Sterkin, Vladimir Aliev
👁 1 view ⏳ 1272 sec.
Gleb Sterkin and Vladimir Aliev, together with their team, took 4th place in the Kaggle "The 2nd YouTube-8M Video Understanding Challenge". As in the previous year, the task was to classify a large volume of videos, but this time with a constraint on model size. In the video the participants describe the models they used, their multi-level approach combining neural networks and gradient boosting, and compare the different approaches.
Slides: https://gh.mltrainings.ru/presentations/SterkinAliev_KaggleYT8M2_2018.pdf
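The talk itself does not spell out the stacking details in text (see the slides above), but a minimal sketch of what a two-level "neural network + gradient boosting" stack can look like is below; the data, network size, and single binary label are invented placeholders, not the team's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical data: 1000 "videos" with 128-d features and one binary label.
# (The real challenge uses frame/video-level embeddings and thousands of labels.)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

# Level 1: out-of-fold probabilities from a small neural network, so that the
# level-2 model never sees predictions made on rows the network trained on.
oof = np.zeros(len(X))
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    nn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    nn.fit(X[train_idx], y[train_idx])
    oof[val_idx] = nn.predict_proba(X[val_idx])[:, 1]

# Level 2: gradient boosting on the raw features plus the level-1 prediction.
X_stacked = np.hstack([X, oof[:, None]])
gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(X_stacked, y)
print("level-2 training accuracy:", gbm.score(X_stacked, y))
```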
Machine Learning Cheatsheet.
Brief visual explanations of machine learning concepts with diagrams, code examples and links to resources for learning more.
🔗 Machine Learning Cheatsheet — ML Cheatsheet documentation
Kubernetes for Beginners
🎥 Kubernetes for Beginners
👁 1 view ⏳ 665 sec.
Kubernetes is one of the highest-velocity open source projects in history. It's a tool that enables developers to manage 'containerized' apps in the cloud easily. In this tutorial video, I'll deploy an image classifier app built in Python to the cloud using Kubernetes. It's a 3-step process, and along the way I'll explain key concepts surrounding Docker, Google Cloud, and scalability. Enjoy!
Code for this video:
https://github.com/llSourcell/kubernetes
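For reference, a rough sketch of the "create a deployment and expose it" step using the official kubernetes Python client; the image name, labels, port, and replica count are hypothetical placeholders, and the repository linked above is the authoritative version of the app.

```python
from kubernetes import client, config

# Assumes kubectl is already pointed at the target cluster (e.g. a GKE cluster);
# the image, labels, port, and replica count below are illustrative only.
config.load_kube_config()

container = client.V1Container(
    name="classifier",
    image="gcr.io/my-project/image-classifier:latest",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=5000)],
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="classifier"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # scaling out is a matter of raising this number
        selector=client.V1LabelSelector(match_labels={"app": "classifier"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "classifier"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# Expose the pods behind a load-balanced service so the classifier gets a public IP.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="classifier"),
    spec=client.V1ServiceSpec(
        selector={"app": "classifier"},
        ports=[client.V1ServicePort(port=80, target_port=5000)],
        type="LoadBalancer",
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```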
🎥 The process of solving deep learning problems
👁 3 views ⏳ 683 sec.
A generalized workflow for solving a deep learning problem (a minimal code sketch follows the list):
1. Define the problem and assemble a dataset.
2. Choose a measure of success.
3. Choose an evaluation protocol.
4. Prepare the data.
5. Develop a model that does better than a baseline.
6. Scale up: develop a model that overfits, and find the boundary between underfitting and overfitting.
7. Regularize the model and tune the hyperparameters.
Source: http://datascientist.one/sxema-resheniya-zadach-deep-learning/
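A compressed Keras sketch of steps 5–7 of this workflow (baseline model, deliberately over-capacity model, regularized model); the data, layer sizes, and dropout rate are placeholders, not recommendations from the source.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data standing in for steps 1-4 (dataset already defined and prepared).
rng = np.random.default_rng(0)
x_train = rng.normal(size=(2000, 20)).astype("float32")
y_train = rng.integers(0, 2, 2000).astype("float32")
x_val = rng.normal(size=(500, 20)).astype("float32")
y_val = rng.integers(0, 2, 500).astype("float32")

def make_model(hidden, dropout=0.0):
    model = keras.Sequential([
        keras.Input(shape=(20,)),
        *[layers.Dense(units, activation="relu") for units in hidden],
        layers.Dropout(dropout),
        layers.Dense(1, activation="sigmoid"),
    ])
    # Step 2: accuracy plays the role of the chosen measure of success here.
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Step 5: a small model that should beat a trivial always-majority-class baseline.
baseline = make_model(hidden=[16])
# Step 6: scale up until the model clearly overfits the training set.
overfit = make_model(hidden=[256, 256, 256])
# Step 7: add regularization (dropout here) and tune hyperparameters on the validation set.
regularized = make_model(hidden=[256, 256], dropout=0.5)

# Step 3: a simple hold-out validation set serves as the evaluation protocol.
for name, model in [("baseline", baseline), ("overfit", overfit), ("regularized", regularized)]:
    history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20, verbose=0)
    print(name, "best val accuracy:", max(history.history["val_accuracy"]))
```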
AWS re:Invent 2018: Machine Learning for the Enterprise, Sony Interactive Entertainment (ENT232-R1)
🔗 AWS re:Invent 2018: Machine Learning for the Enterprise, Sony Interactive Entertainment (ENT232-R1)
Machine learning is powering innovation across industries, including media & entertainment, healthcare, finance, and many more. In this session, representati...
How Machine Learning can contribute to the day to day work
🔗 How Machine Learning can contribute to the day to day work
In this session you will: - Discover what Machine Learning really is through specific examples from real life - Understand other options to automate operatio...
A Gentle Introduction to the Rectified Linear Activation Function for Deep Learning Neural Networks
🔗 A Gentle Introduction to the Rectified Linear Activation Function for Deep Learning Neural Networks
In a neural network, the activation function is responsible for transforming the summed weighted input from the node into the activation of the node or output for that input. The rectified linear activation function is a piecewise linear function that will output the input directly if it is positive; otherwise, it will output zero. It has …
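In other words, the function is simply f(x) = max(0, x); a minimal NumPy illustration:

```python
import numpy as np

def relu(x):
    # Pass positive inputs through unchanged; clamp everything else to zero.
    return np.maximum(0.0, x)

x = np.array([-3.0, -0.5, 0.0, 2.0, 7.0])
print(relu(x))                   # [0. 0. 0. 2. 7.]
# The derivative is 1 for positive inputs and 0 otherwise, which is why ReLU
# keeps gradients from vanishing for units that are active.
print((x > 0).astype(float))     # [0. 0. 0. 1. 1.]
```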
Understanding Generative Adversarial Networks (GANs)
🔗 Understanding Generative Adversarial Networks (GANs)
Building, step by step, the reasoning that leads to GANs.
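The core idea the article builds up to is a two-player game between a generator and a discriminator. Below is a heavily condensed PyTorch sketch of that game on 1-D toy data; the network sizes, learning rates, and target distribution are invented for illustration.

```python
import torch
from torch import nn

# Toy 1-D data: the "real" distribution is N(4, 1); the generator maps noise to samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0
    fake = G(torch.randn(64, 8))

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator, i.e. push D(fake) towards 1.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# Should drift towards ~4, the mean of the real distribution.
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```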
A programmer from Krasnoyarsk has built a mobile app based on neural networks and is expanding into the markets of Russia, the USA and Germany. NGS.RU asked the entrepreneur about the specifics of running an IT business abroad, the job market in the United States, and ways of monetizing ideas. https://ngs24.ru/news/more/65698631/?from=window_2
🔗 How a programmer from Krasnoyarsk built an app for fashion lovers and opened an office in the USA
One day, Krasnoyarsk programmer Andrey Korkhov traveled to Silicon Valley in the USA and became fascinated with artificial intelligence there.
Free dataset for cardio rhythm classification
https://irhythm.github.io/cardiol_test_set/
Optimize Data Science Models with Feature Engineering
https://towardsdatascience.com/optimize-data-science-models-with-feature-engineering-cluster-analysis-metrics-development-and-4be15489667a?source=collection_home---4------2---------------------
Cluster Analysis, Metrics Development, and PCA with Baby Names Data
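As a rough illustration of the pipeline the article walks through (standardize, reduce with PCA, cluster, then score the clustering), here is a small scikit-learn sketch on synthetic data; it does not use the baby-names dataset or the article's own metrics.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Placeholder feature matrix standing in for the engineered features:
# three blobs of 100 samples each in 10 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(100, 10)) for c in (0, 3, 6)])

# Standardize, reduce to 2 principal components, then cluster.
X_std = StandardScaler().fit_transform(X)
X_2d = PCA(n_components=2).fit_transform(X_std)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)

# A simple metric for judging the clustering, standing in for the article's
# custom metrics development step.
print("silhouette score:", silhouette_score(X_2d, labels))
```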
https://habr.com/company/plarium/blog/435534/
Data Science: books for beginners
#machinelearning #neuralnets #deeplearning #машинноеобучение
Our Telegram channel - https://yangx.top/ai_machinelearning_big_data
Generalization in Deep Networks: The Role of Distance from Initialization
Why it's important to take into account the initialization to explain generalization.
ArXiV: https://arxiv.org/abs/1901.01672
#DL #NN
🔗 Generalization in Deep Networks: The Role of Distance from Initialization
Why does training deep neural networks using stochastic gradient descent (SGD) result in a generalization error that does not worsen with the number of parameters in the network? To answer this question, we advocate a notion of effective model capacity that is dependent on a given random initialization of the network and not just the training algorithm and the data distribution. We provide empirical evidence demonstrating that the model capacity of SGD-trained deep networks is in fact restricted through implicit regularization of the ℓ2 distance from the initialization. We also provide theoretical arguments that further highlight the need for initialization-dependent notions of model capacity. We leave as open questions how and why distance from initialization is regularized, and whether it is sufficient to explain generalization.
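The quantity the paper argues is implicitly regularized is easy to monitor in practice. A toy PyTorch sketch that tracks the ℓ2 distance between the current weights and their random initialization during training (the model and data here are placeholders, not the paper's experimental setup):

```python
import copy
import torch
from torch import nn

# Small regression model trained with plain SGD on random data.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 1))
init = copy.deepcopy(model.state_dict())  # snapshot of the random initialization
opt = torch.optim.SGD(model.parameters(), lr=0.1)

X, y = torch.randn(512, 20), torch.randn(512, 1)

def distance_from_init(model, init):
    # l2 norm of the difference between current and initial weights, all layers concatenated.
    sq = sum(((p - init[name]) ** 2).sum() for name, p in model.named_parameters())
    return sq.sqrt().item()

for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

print("||w - w_0||_2 after training:", distance_from_init(model, init))
```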
🎥 Lecture 14 | Fundamentals of Mathematical Statistics | Mikhail Lifshits | Lektorium
👁 1 view ⏳ 4967 sec.
Lecture 14 | Course: Fundamentals of Mathematical Statistics | Lecturer: Mikhail Lifshits | Organizer: P. L. Chebyshev Mathematical Laboratory, St. Petersburg State University
Watch this video on Lektorium: https://www.lektorium.tv/node/33800
Other lectures from the course "Fundamentals of Mathematical Statistics" are available at: https://www.lektorium.tv/node/33005
Subscribe to the channel: https://www.lektorium.tv/ZJA
Follow the news:
https://vk.com/openlektorium
https://www.facebook.com/openlektorium
Machine Learning Engineer Roles And Responsibilities | ML Engineer Skills | ML Training | Edureka
🔗 Machine Learning Engineer Roles And Responsibilities | ML Engineer Skills | ML Training | Edureka
( Machine Learning Engineer Masters Program: https://www.edureka.co/masters-program/machine-learning-engineer-training ) This video will provide you with det...
machine learning tutorial for beginners
🔗 machine learning tutorial for beginners
Machine learning tutorial for beginners using ML.NET and C#. For more such videos visit http://www.questpond.com
Reproducibility tool for #Jupyter Notebooks
Link: https://mybinder.org
#DS #github #reproducibleresearch
🔗 Binder (beta)
POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer
POET generates its own increasingly complex, diverse training environments and solves them. It automatically creates learning curricula and training data, and can potentially innovate endlessly.
Link: https://eng.uber.com/poet-open-ended-deep-learning/
#RL #Uber
🔗 POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer
Uber AI Labs introduces the Paired Open-Ended Trailblazer (POET), an algorithm that leverages open-endedness to push the bounds of ML.
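To make the paired open-ended loop concrete, here is a toy, runnable sketch in which an "environment" is just a target difficulty value and an "agent" is a scalar skill level; the real algorithm uses parameterized 2-D terrains and evolution strategies, so everything below is a simplification for illustration, not Uber's implementation.

```python
import random

def evaluate(agent, env):
    return -abs(agent - env)              # higher is better

def optimize(agent, env, lr=0.3):
    return agent + lr * (env - agent)     # one local optimization step on the paired env

def mutate(env):
    return env + random.uniform(0.5, 1.5) # child environments get harder

def poet(iterations=200, max_pairs=8, min_score=-2.0):
    random.seed(0)
    pairs = [(1.0, 0.0)]                  # (environment, agent) pairs
    for it in range(iterations):
        # 1. Periodically spawn harder environments from the current ones,
        #    keeping only those that pass a minimal-criterion check
        #    (not trivially easy, not hopeless for the parent agent).
        if it % 20 == 0:
            parent_env, parent_agent = random.choice(pairs)
            child = mutate(parent_env)
            if evaluate(parent_agent, child) > min_score:
                pairs.append((child, parent_agent))
                pairs = pairs[-max_pairs:]  # bound the population
        # 2. Each agent optimizes against its own paired environment.
        pairs = [(env, optimize(agent, env)) for env, agent in pairs]
        # 3. Transfer: adopt any agent from the population that scores better
        #    on this environment than the current partner.
        agents = [a for _, a in pairs]
        pairs = [(env, max(agents, key=lambda a: evaluate(a, env))) for env, _ in pairs]
    return pairs

for env, agent in poet():
    print(f"environment difficulty {env:.2f} -> agent skill {agent:.2f}")
```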
Scikit-learn drops support for Python 2.7 with a new PR.
It means scikit-learn master now requires Python >= 3.5.
https://github.com/scikit-learn/scikit-learn/pull/12639
#scikitlearn
🔗 MRG Drop legacy python / remove six dependencies by amueller · Pull Request #12639 · scikit-learn/scikit-learn
Tries to drop legacy python (2.7) and remove six everywhere.
Super-resolution GANs for improving the texture resolution of old games.
It is what it is: a #GAN used to enhance textures in old games, making them look better.
ArXiV: https://arxiv.org/abs/1809.00219
Link: https://www.gamespot.com/forums/pc-mac-linux-society-1000004/esrgan-is-pretty-damn-amazing-trying-max-payne-wit-33449670/
#gaming #superresolution
🔗 ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks
The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic an
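One of the three changes the abstract lists is the relativistic discriminator, which scores whether a real image looks more realistic than the average fake one rather than making an absolute real/fake call. A small PyTorch sketch of that loss on dummy logits (the generator and discriminator networks themselves are omitted):

```python
import torch
import torch.nn.functional as F

# Relativistic average GAN losses as described in the abstract: d_real / d_fake
# are raw discriminator logits for a batch of real and generated images.
def relativistic_d_loss(d_real, d_fake):
    real_vs_fake = d_real - d_fake.mean()
    fake_vs_real = d_fake - d_real.mean()
    return (F.binary_cross_entropy_with_logits(real_vs_fake, torch.ones_like(d_real))
            + F.binary_cross_entropy_with_logits(fake_vs_real, torch.zeros_like(d_fake))) / 2

def relativistic_g_loss(d_real, d_fake):
    # The generator gets the symmetric objective: make fakes look more real than reals.
    real_vs_fake = d_real - d_fake.mean()
    fake_vs_real = d_fake - d_real.mean()
    return (F.binary_cross_entropy_with_logits(real_vs_fake, torch.zeros_like(d_real))
            + F.binary_cross_entropy_with_logits(fake_vs_real, torch.ones_like(d_fake))) / 2

# Example with dummy logits for a batch of 4 images.
d_real, d_fake = torch.randn(4, 1), torch.randn(4, 1)
print(relativistic_d_loss(d_real, d_fake).item(), relativistic_g_loss(d_real, d_fake).item())
```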