🎥 Lecture 8 part 2: Deep Neural Networks
👁 1 view ⏳ 2762 sec.
This is Lecture 8, part 2, of the KTH EP3260 Fundamentals of Machine Learning over Networks (MLoNs) course. This lecture reviews the fundamentals and recent advances of deep neural networks (DNNs). In particular, it covers their non-convex optimization landscape, various algorithms to address it, backpropagation, preconditioning the optimization landscape, adaptive step sizes (including Adam, RMSprop, and AdaGrad), and batch normalization. It then addresses learning and inference over networks, where we ma…
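The adaptive step-size methods named above all rescale the gradient per parameter using running statistics. As a reference point (not the lecture's own code), here is a minimal NumPy sketch of one Adam update with the usual default hyper-parameters:

```python
import numpy as np

# Minimal sketch of one Adam step: exponential moving averages of the
# gradient and its square, bias correction, then a per-parameter scaled step.
# Hyper-parameter defaults (lr, beta1, beta2, eps) follow the Adam paper.
def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad**2        # second-moment estimate
    m_hat = m / (1 - beta1**t)                   # bias correction, t starts at 1
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # adaptive per-parameter step
    return w, m, v
```

RMSprop is the same update without the first-moment average and bias correction; AdaGrad accumulates the squared gradients instead of averaging them.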
Putting ML in production I: using Apache Kafka in Python.
Using a message broker to productionise algorithms in real time
Our Telegram channel - https://tele.click/ai_machinelearning_big_data
https://towardsdatascience.com/putting-ml-in-production-i-using-apache-kafka-in-python-ce06b3a395c8?source=collection_home---4------1---------------------
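The pattern the article describes is a consumer/producer loop: read raw events from one topic, score them with a model, and publish predictions to another. A minimal sketch with the kafka-python package follows; the broker address, topic names, and the score() stand-in for a trained model are assumptions for illustration, not taken from the article.

```python
import json
from kafka import KafkaConsumer, KafkaProducer  # assumes the kafka-python package

# Consume raw events, score them, publish predictions to another topic.
# Broker address, topic names, and score() are hypothetical placeholders.
consumer = KafkaConsumer(
    "app_messages",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def score(features):
    # stand-in for a trained model's predict() call
    return sum(v for k, v in features.items() if k != "id")

for message in consumer:  # blocks, handling events as they arrive
    features = message.value
    producer.send("app_predictions", {"id": features.get("id"), "score": score(features)})
```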
What to expect from Reinforcement Learning?
And what’s beyond Imitation Learning
Our Telegram channel - https://tele.click/ai_machinelearning_big_data
https://towardsdatascience.com/what-to-expect-from-reinforcement-learning-a22e8c16f40c?source=topic_page---------2------------------1
🎥 'How neural networks learn' - Part III: The learning dynamics behind generalization and overfitting
👁 1 view ⏳ 1355 sec.
In this third episode on "How neural nets learn" I dive into a bunch of academic research that tries to explain why neural networks generalize as well as they do. We first look at the remarkable capability of DNNs to simply memorize huge amounts of (random) data. We then see how this picture is more subtle when training on real data, and finally dive into some beautiful analysis from the viewpoint of information theory.
Main papers discussed in this video:
First paper on Memorization in DNNs: https://arxiv
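The memorization result mentioned above can be reproduced in a few lines: give a moderately sized network purely random labels and it will still drive training accuracy toward 100%. A hedged Keras sketch in that spirit (data sizes and architecture are arbitrary choices, not those of the papers discussed):

```python
import numpy as np
import tensorflow as tf

# Random-label memorization experiment: there is nothing to generalize from,
# yet an over-parameterized network can still fit the training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32)).astype("float32")
y = rng.integers(0, 10, size=1000)  # labels are pure noise

model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
history = model.fit(X, y, epochs=200, batch_size=64, verbose=0)
print("final training accuracy:", history.history["accuracy"][-1])  # typically approaches 1.0
```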
🎥 Linear Regression with TF Keras
👁 1 view ⏳ 937 sec.
https://www.youtube.com/watch?v=oGuCxVyEhiA
In this video we learn how to perform linear regression with Keras in TensorFlow.
Keras is TensorFlow's high level API for building deep learning models.
Email: [email protected]
Website: https://www.poincaregroup.com
LinkedIn: https://www.linkedin.com/in/carlos-lara-1055a16b/
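A linear regression in Keras is just a single Dense unit trained with mean squared error. A self-contained sketch on synthetic data (the video's own data and hyper-parameters are not known, so these are placeholders):

```python
import numpy as np
import tensorflow as tf

# Fit y = 3x + 2 (plus noise) with one Dense(1) layer: one weight, one bias.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 1)).astype("float32")
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=0.1, size=200).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(X, y, epochs=100, verbose=0)

w, b = model.layers[0].get_weights()
print(f"learned slope ~ {w[0][0]:.2f}, intercept ~ {b[0]:.2f}")  # should be close to 3 and 2
```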
It’s Only Natural: An Excessively Deep Dive Into Natural Gradient Optimization
This post gives an intuitive explanation of an approach called Natural Gradient, an elegant way to dynamically adjust gradient step size.
https://towardsdatascience.com/its-only-natural-an-excessively-deep-dive-into-natural-gradient-optimization-75d464b89dbb?source=collection_home---4------0---------------------
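The core idea is to precondition the ordinary gradient with the inverse Fisher information matrix, so the step adapts to the curvature of the model's distribution space rather than of raw parameter space. A toy NumPy sketch using a damped empirical Fisher (illustrative only, not the article's code):

```python
import numpy as np

# One natural-gradient step: solve F * d = g and move along d.
# `per_example_grads` is a (batch, n_params) array of per-example gradients;
# all names and values here are illustrative.
def natural_gradient_step(params, per_example_grads, lr=0.1, damping=1e-3):
    g = per_example_grads.mean(axis=0)                        # ordinary gradient
    fisher = per_example_grads.T @ per_example_grads / len(per_example_grads)
    fisher += damping * np.eye(len(params))                   # damping keeps F invertible
    nat_grad = np.linalg.solve(fisher, g)                     # F^{-1} g
    return params - lr * nat_grad
```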
🎥 Week 4 CS294-158 Deep Unsupervised Learning (2/20/19)
👁 1 view ⏳ 8989 sec.
UC Berkeley CS294-158 Deep Unsupervised Learning (Spring 2019)
Instructors: Pieter Abbeel, Xi (Peter) Chen, Jonathan Ho, Aravind Srinivas
https://sites.google.com/view/berkeley-cs294-158-sp19/home
Week 4 Lecture Contents:
- Latent Variable Models (ctd)
- Bits-Back Coding
MIT 6.050J Information and Entropy, Spring 2008
🔗 MIT 6.050J Information and Entropy, Spring 2008 - YouTube
Instructors: Paul Penfield, Seth Lloyd This course explores the ultimate limits to communication and computation, with an emphasis on the physical nature of ...
🎥 Unit 1: Bits and Codes, Lecture 2 | MIT 6.050J Information and Entropy, Spring 2008
👁 2 views ⏳ 6200 sec.
🎥 Unit 2: Compression, Lecture 1 | MIT 6.050J Information and Entropy, Spring 2008
👁 1 view ⏳ 4769 sec.
🎥 Unit 3: Noise and Errors, Lecture 2 | MIT 6.050J Information and Entropy, Spring 2008
👁 1 view ⏳ 7009 sec.
🎥 Unit 4: Probability, Lecture 1 | MIT 6.050J Information and Entropy, Spring 2008
👁 1 view ⏳ 6730 sec.
🎥 Unit 4: Probability, Lecture 2 | MIT 6.050J Information and Entropy, Spring 2008
👁 1 view ⏳ 6605 sec.
🎥 Unit 5: Communications, Lecture 1 | MIT 6.050J Information and Entropy, Spring 2008
👁 1 view ⏳ 6577 sec.
🎥 Unit 5: Communications, Lecture 2 | MIT 6.050J Information and Entropy, Spring 2008
👁 1 view ⏳ 6459 sec.
🎥 Unit 6: Processes, Lecture 1 | MIT 6.050J Information and Entropy, Spring 2008
👁 1 view ⏳ 6419 sec.
* Note: Due to technical difficulties, not all the lectures for this course are available.
Instructors: Paul Penfield, Seth Lloyd
See the complete course at: http://ocw.mit.edu/6-050js08
License: Creative Commons BY-NC-SA
More information at http://ocw.mit.edu/terms
More courses at http://ocw.mit.edu
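For a quick feel of the course's central quantity, here is a generic Shannon-entropy calculation in Python (not taken from the course materials):

```python
import math
from collections import Counter

# Shannon entropy (in bits per symbol) of a symbol stream.
def entropy_bits(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy_bits("AAAB"))      # ~0.81 bits/symbol (skewed distribution)
print(entropy_bits("ABCDABCD"))  # 2.0 bits/symbol (4 equally likely symbols)
```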
5 Tools To Evaluate Machine Learning Model Fairness and Bias.
Introducing Some Tools To Easily Evaluate and Audit Machine Learning Models For Fairness and Bias
Our Telegram channel - https://tele.click/ai_machinelearning_big_data
https://medium.com/@will.badr/5-tools-to-evaluate-machine-learning-model-fairness-and-bias-66cb78920878?source=topic_page---------0------------------1
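Tools like these boil down to computing group-wise metrics from a model's predictions. As an illustration of the kind of quantity they report, here is a hand-rolled demographic parity difference on made-up data (the specific tools in the article are not used here):

```python
import numpy as np

# Demographic parity difference: gap between the highest and lowest
# positive-prediction rate across groups (0 means equal rates).
def demographic_parity_difference(y_pred, group):
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])               # model decisions (toy data)
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))        # 0.75 - 0.25 = 0.5
```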
PCI Express vs. Thunderbolt - how much GPU performance you lose when running the card in an eGPU
https://egpu.io/forums/mac-setup/pcie-slot-dgpu-vs-thunderbolt-3-egpu-internal-display-test/
🔗 eGPU Performance Loss - PCI Express vs. Thunderbolt
This is the question many users want answered: how much performance will my video card lose if I put it in an eGPU with Thunderbolt 1,...
Knitting and Recommendations
How Computers Think: Part Three
https://towardsdatascience.com/knitting-and-recommendations-b9d178a86c97?source=collection_home---4------1---------------------
Jupyter Superpower — Interactive Visualization Combo with Python
https://towardsdatascience.com/jupyter-superpower-interactive-visualization-combo-with-python-ffc0adb37b7b?source=collection_home---4------5---------------------
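One common combo for interactive visualization in Jupyter is ipywidgets driving a matplotlib plot; whether the article uses exactly these libraries is an assumption. A minimal sketch to run in a notebook cell:

```python
# Interactive sine-wave plot: sliders for frequency and amplitude.
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact

x = np.linspace(0, 2 * np.pi, 400)

@interact(freq=(1, 10, 1), amp=(0.1, 2.0, 0.1))
def plot_wave(freq=1, amp=1.0):
    plt.plot(x, amp * np.sin(freq * x))
    plt.ylim(-2, 2)
    plt.show()
```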
Using RNNs for Machine Translation
An introduction to RNN and LSTM networks, as well as their applications.
Our Telegram channel - https://tele.click/ai_machinelearning_big_data
https://towardsdatascience.com/using-rnns-for-machine-translation-11ddded78ddf?source=collection_home---4------4---------------------
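A typical RNN machine-translation setup is an LSTM encoder-decoder: the encoder compresses the source sentence into its final states, which initialize a decoder that generates the target sentence. A compact Keras skeleton of that architecture (vocabulary sizes and dimensions are placeholders; the article's own example may differ):

```python
import tensorflow as tf

src_vocab, tgt_vocab, embed_dim, units = 8000, 8000, 128, 256

# Encoder: reads the source token ids and summarizes them in its final states.
enc_inputs = tf.keras.Input(shape=(None,), dtype="int32")
enc_emb = tf.keras.layers.Embedding(src_vocab, embed_dim)(enc_inputs)
_, state_h, state_c = tf.keras.layers.LSTM(units, return_state=True)(enc_emb)

# Decoder: generates target tokens, initialized with the encoder states.
dec_inputs = tf.keras.Input(shape=(None,), dtype="int32")
dec_emb = tf.keras.layers.Embedding(tgt_vocab, embed_dim)(dec_inputs)
dec_out, _, _ = tf.keras.layers.LSTM(units, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
dec_logits = tf.keras.layers.Dense(tgt_vocab, activation="softmax")(dec_out)

model = tf.keras.Model([enc_inputs, dec_inputs], dec_logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```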
Data Science Code Refactoring Example
Our Telegram channel - https://tele.click/ai_machinelearning_big_data
When learning to code for data science, we don't usually consider modifying our code to gain a particular performance benefit. We code to transform our data, produce visualizations, and construct our ML models. But if our code is going to be used in a dashboard or app, we have to consider whether it is optimal. In this code example, we will make a small modification to an ecdf function for speed.
https://towardsdatascience.com/data-science-code-refactoring-example-14c3ec858e0c
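The flavor of refactor described, replacing Python-level loops with vectorized NumPy, can be illustrated with an ECDF function; the article's exact before/after code is not reproduced here, this is a sketch of the idea:

```python
import numpy as np

def ecdf_slow(data):
    """Empirical CDF via an explicit Python loop."""
    x = sorted(data)
    y = [(i + 1) / len(data) for i in range(len(data))]
    return x, y

def ecdf_fast(data):
    """Same result, vectorized with NumPy for speed."""
    x = np.sort(data)
    y = np.arange(1, len(data) + 1) / len(data)
    return x, y

data = np.random.default_rng(0).normal(size=10_000)
x1, y1 = ecdf_slow(data)
x2, y2 = ecdf_fast(data)
assert np.allclose(x1, x2) and np.allclose(y1, y2)  # identical output, less Python overhead
```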
Building support for pollution-free cities: an Open Data workflow
https://towardsdatascience.com/building-support-for-pollution-free-cities-an-open-data-workflow-888096797cc9?source=collection_home---4------0---------------------
🔗 Building support for pollution-free cities: an Open Data workflow
Air pollution is one of the great killers of our age, causing 6.4 million deaths in 2015 according to a Lancet study — compared with 0.7…
🎥 Alexander Radionov | Gorod IT 2018 | Storekeeper – finding duplicate products
👁 1 view ⏳ 1698 sec.
Machine Learning track
🎥 Practical Deep Learning - Part 1
👁 1 view ⏳ 15180 sec.
In this first part of the Practical Deep Learning course you will learn about the most cutting-edge deep learning technology. Along the way you will be exposed to the following (a minimal transfer-learning sketch follows after this list):
- Image classification using transfer learning
- How to set hyper-parameters and the learning rate
- Practical deep learning applications
- Image collection
- - Parallel downloading
- - Creating a validation set, and
- - Data cleaning, using the model to help us find data problems.
- Image segmentation
- Fine tuning
- Natural language processing
- Tabular data
- Co…
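As promised above, a minimal transfer-learning sketch. The course itself is most likely taught with a different library, so this Keras version only illustrates the idea: reuse a pretrained backbone, freeze it, and train a small classification head (num_classes and the backbone choice are placeholders).

```python
import tensorflow as tf

# Transfer learning: pretrained ResNet50 features + a small trainable head.
num_classes = 5
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets not shown here
```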
🎥 Unsupervised Learning
👁 1 view ⏳ 647 sec.
https://www.youtube.com/watch?v=8dqdDEyzkFA
Unsupervised learning is the most exciting subfield of machine learning! Finding structure in unstructured data automatically sounds like a dream come true: no need to have labels! In this video, I'll demonstrate two types of unsupervised learning techniques: k-means clustering and principal component analysis. We'll use these techniques on neural data from a patient suffering from seizures to see if we can locate the part of their brain in need of surgery to save their life. You'll laugh, you'll cry, but mo…
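The two techniques demonstrated, k-means clustering and principal component analysis, are a few lines in scikit-learn. A sketch on synthetic data (the video's neural recordings are not available here):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Three blobs in 10-D: compress with PCA, then cluster with k-means.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 10)) for c in (-2, 0, 2)])

pca = PCA(n_components=2)            # project 10-D data down to 2-D
X_2d = pca.fit_transform(X)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X_2d)    # assign each point to one of 3 clusters

print("explained variance ratio:", pca.explained_variance_ratio_)
print("cluster sizes:", np.bincount(labels))
```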
🎥 Machine Learning: fundamentals and practical experience
👁 1 view ⏳ 2244 sec.
Machine Learning: fundamentals and practical experience
IT-Trends Conference 2019, Kherson
Key points:
- What connects Machine Learning, ducks, and the beauty industry
- Theoretical foundations of Machine Learning: approaches and methods
- What problems can be solved with Machine Learning
Speaker: Pavel Knorr, Team Lead and Architect, Logicify.
More than eight years in IT. Started as a full-stack developer; for the last four years has been responsible for designing the core architecture and for finding and implementing technical solutions. On severa…