#TSNE #CUDA implementation, up to 1200x faster than Sklearn
Don't waste your time, use #GPU-Accelerated t-SNE
Paper: https://arxiv.org/pdf/1807.11824.pdf
Code: https://github.com/CannyLab/tsne-cuda
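The Python bindings are meant to be a near drop-in replacement for sklearn.manifold.TSNE. A minimal sketch, assuming the tsnecuda package from the repo above is installed and a CUDA GPU is available:

import numpy as np
from tsnecuda import TSNE  # GPU-accelerated t-SNE from CannyLab/tsne-cuda

# Toy high-dimensional data; in practice this would be your feature matrix
X = np.random.rand(10_000, 512).astype(np.float32)

# Mirrors sklearn's TSNE interface
X_2d = TSNE(n_components=2, perplexity=30, learning_rate=200).fit_transform(X)
print(X_2d.shape)  # (10000, 2)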
GPU cooling tool
This script lets you set a custom GPU fan curve on a headless Linux server.
If you want to install multiple GPUs in a single machine, you have to use blower-style GPUs, or the hot exhaust builds up in your case. Blower-style GPUs can get very loud, so to avoid annoying customers Nvidia artificially limits their fans to ~50% duty. At 50% duty and under a heavy workload, blower-style GPUs heat up to 85°C or so and throttle themselves.
Now, if you're on Windows, Nvidia happily lets you override that limit by setting a custom fan curve. If you're on Linux, though, you need to use nvidia-settings, which - as of Sept 2019 - requires a display attached to each GPU you want to set the fan for. This is a pain to set up, as is checking the GPU temp every few seconds and adjusting the fan speed.
This script does all that for you.
Code: https://github.com/andyljones/coolgpus
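To give a feel for what's being automated, here is a minimal illustrative sketch (not the coolgpus code itself, and it only prints targets): it polls temperatures via nvidia-smi and maps them onto a hypothetical fan curve. Actually applying the duty cycle is the part that needs nvidia-settings with a display attached to each GPU, which is exactly what the script handles for you.

import subprocess
import time

# Hypothetical fan curve: (temperature in C, fan duty in %)
CURVE = [(30, 30), (60, 55), (80, 99)]

def gpu_temps():
    # One integer temperature per GPU, one per line
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"]
    )
    return [int(t) for t in out.decode().split()]

def target_duty(temp):
    # Piecewise-linear interpolation over the curve
    if temp <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp <= t1:
            return d0 + (d1 - d0) * (temp - t0) / (t1 - t0)
    return CURVE[-1][1]

while True:
    for i, temp in enumerate(gpu_temps()):
        # coolgpus would now push this duty to the GPU via nvidia-settings
        print(f"GPU {i}: {temp}C -> {target_duty(temp):.0f}% fan")
    time.sleep(5)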
#hardware #gpu
Nvidia announced the new RTX 3090
The RTX 3090 is roughly 2 times more powerful than the 2080.
There is probably no point in getting the 3080, because its RAM is only 10 GB.
But what really matters is how it was presented. A purely technological product aimed mostly at professionals, tech-heads and gamers was presented with absolute brilliance. That is much more exciting than the release itself.
YouTube: https://www.youtube.com/watch?v=E98hC9e__Xs
#Nvidia #GPU #techstack
Forwarded from Machinelearning
We all love scikit-learn for its simplicity and power. But what if your models take too long to train on large data? 🤔 NVIDIA offers a solution!
You take your usual scikit-learn script, add just two lines at the top, and it starts running 10, 50, even 100+ times faster on an NVIDIA GPU!
✨ How does it work?
NVIDIA's cuml library contains GPU-optimized versions of many machine learning algorithms. With a single call to cuml.patch.apply() you "patch" your installed scikit-learn right in memory. Now, when you call, for example, KNeighborsClassifier or PCA from sklearn, the work is dispatched to the GPU.
Key advantages:
Just 2 lines: import cuml.patch and cuml.patch.apply().
A top tool for anyone who works with scikit-learn on computationally heavy tasks and has an NVIDIA GPU.
👇 How to use it:
Install RAPIDS cuml (preferably via conda, see the RAPIDS website):
conda install -c rapidsai -c conda-forge -c nvidia cuml rapids-build-backend
Add at the top of your script:
import cuml.patch
cuml.patch.apply()
Use scikit-learn as usual (see the sketch below)!
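A minimal end-to-end sketch, following the snippet in this post. The cuml.patch module path is taken verbatim from the post and may be named differently in your RAPIDS/cuML release, so treat it as an assumption rather than a guaranteed API:

import cuml.patch          # module path as given in the post
cuml.patch.apply()         # patch the installed scikit-learn in memory

# From here on, ordinary scikit-learn code is dispatched to the GPU
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=100_000, n_features=50, random_state=0)
clf = KNeighborsClassifier(n_neighbors=15).fit(X, y)
print(clf.score(X, y))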
Try it and feel the difference! 😉
▪ Blog post
▪ Colab
▪ GitHub
▪ Speeding up Pandas
@ai_machinelearning_big_data
#python #datascience #machinelearning #scikitlearn #rapids #cuml #gpu #nvidia #ускорение #машинноеобучение #анализданных