Neural Networks | Нейронные сети
All about machine learning

For all questions: @notxxx1

​Facebook has released #PyText, a new framework built on top of #PyTorch.

This framework is designed to make it easier for developers to build #NLP models; a quick sketch of the kind of model it targets follows at the end of this post.

Link: https://code.fb.com/ai-research/pytext-open-source-nlp-framework/

🔗 Open-sourcing PyText for faster NLP development
We are open-sourcing PyText, a framework for natural language processing. PyText is built on PyTorch and it makes it faster and easier to build deep learning models for NLP.
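A minimal sketch, in plain PyTorch rather than the PyText API, of the kind of CNN document classifier PyText ships as a built-in model; every class, name, and size below is our own illustration, not part of PyText:

```python
# Sketch of a small CNN text classifier in plain PyTorch (illustrative;
# not the PyText API). One 1-D convolution per kernel size, max-pooled
# over time, then a linear classifier over the concatenated features.
import torch
import torch.nn as nn

class TinyDocCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, num_classes=2, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, 32, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(32 * len(kernel_sizes), num_classes)

    def forward(self, token_ids):            # (batch, seq_len)
        x = self.embedding(token_ids)        # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                # (batch, embed_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))  # (batch, num_classes)

model = TinyDocCNN(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (8, 40)))  # 8 docs, 40 tokens each
print(logits.shape)                                # torch.Size([8, 2])
```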
​PyText

- PyText https://github.com/facebookresearch/pytext from Facebook:
- TL;DR: FastText meets PyTorch;
- Very similar to AllenNLP in nature;
- Useful if you can afford to write modules for their framework to solve 100 near-identical tasks (e.g., Facebook supporting 200 languages);
- On its own, it seems too high-maintenance to use.

I will not be using it.

#nlp
#deep_learning

🔗 facebookresearch/pytext
A natural language modeling framework based on PyTorch - facebookresearch/pytext
​Learning from Dialogue after Deployment: Feed Yourself, Chatbot!

From the abstract: the self-feeding chatbot is a dialogue agent with the ability to extract new training examples from the conversations it participates in.

This paper describes a chatbot capable of true online learning; a minimal sketch of that loop is given at the end of this post. There is also a VentureBeat article covering it: «Facebook and Stanford researchers design a chatbot that learns from its mistakes».

VentureBeat: https://venturebeat.com/2019/01/17/facebook-and-stanford-researchers-design-a-chatbot-that-learns-from-its-mistakes/
arXiv: https://arxiv.org/abs/1901.05415

#NLP #chatbot #facebook #Stanford

🔗 Facebook and Stanford researchers design a chatbot that learns from its mistakes
Chatbots rarely make great conversationalists. With the exception of perhaps Microsoft’s Xiaoice in China, which has about 40 million users and averages 23 back-and-forth exchanges, and Alibaba’s Dian Xiaomi, an automated sales agent that serves nearly 3.5 million customers a day, most can’t hold humans’ attention for much longer than 15 minutes. But that’s not tempering bot adoption any — in fact…
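A minimal sketch of the self-feeding idea as the abstract describes it: when the user appears satisfied, the bot's own reply is kept as a new dialogue example; otherwise the bot asks for feedback and keeps that as the target instead. The bot, the satisfaction scorer, and the threshold below are all stand-ins, not the authors' code:

```python
# Illustrative sketch of the self-feeding loop (not the paper's code);
# the bot, the satisfaction model, and the threshold are all stand-ins.
import random

class StubBot:
    def respond(self, context):
        return "Sure, happy to help with that."
    def ask(self, question):
        print(question)
        return "You should have asked me to clarify."  # simulated human feedback

class StubSatisfaction:
    def score(self, context, reply):
        return random.random()  # stand-in for a learned satisfaction classifier

def self_feeding_turn(bot, satisfaction, history, user_msg, new_examples, threshold=0.5):
    context = history + [user_msg]
    reply = bot.respond(context)
    if satisfaction.score(context, reply) >= threshold:
        # Satisfied user: the bot's own reply becomes a new dialogue example.
        new_examples.append(("dialogue", context, reply))
    else:
        # Unsatisfied user: request feedback and harvest that as the target.
        feedback = bot.ask("Oops! What should I have said instead?")
        new_examples.append(("feedback", context, feedback))
    return reply

examples = []
self_feeding_turn(StubBot(), StubSatisfaction(), [], "Book me a table for two.", examples)
print(examples)
```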
​Project: DeepNLP course
Link: https://github.com/DanAnastasyev/DeepNLP-Course
Description:
A deep learning for NLP crash course at ABBYY. Topics include sentiment analysis, word embeddings, CNNs, seq2seq with attention (a minimal sketch follows below), and much more. Enjoy!
#ML #DL #NLP #python #abbyy #opensource

🔗 DanAnastasyev/DeepNLP-Course
Deep NLP Course. Contribute to DanAnastasyev/DeepNLP-Course development by creating an account on GitHub.
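Of the course topics, seq2seq attention fits in a few lines; here is a minimal scaled dot-product attention sketch in PyTorch (our illustration, not code from the course):

```python
# Minimal scaled dot-product attention over encoder states, as used in
# seq2seq models; an illustrative sketch, not code from the course.
import torch
import torch.nn.functional as F

def attend(decoder_state, encoder_states):
    """decoder_state: (batch, dim); encoder_states: (batch, src_len, dim)."""
    dim = decoder_state.size(-1)
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)
    weights = F.softmax(scores / dim ** 0.5, dim=1)        # (batch, src_len)
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)
    return context, weights                                # (batch, dim), (batch, src_len)

ctx, w = attend(torch.randn(2, 8), torch.randn(2, 5, 8))
print(ctx.shape, w.shape)  # torch.Size([2, 8]) torch.Size([2, 5])
```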
🎥 A Practical Introduction to Productionizing NLP Models - Brendon Villalobos - @bkvillalobos
It's exciting to create Deep Learning models that can interpret natural language, but resource-greedy NLP models can bottleneck performance in your app. This talk is a practical introduction to productionizing NLP models from training through deployment, with tips to avoid common pitfalls.

#NLP #deeplearning #techconference
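One common productionizing step in the PyTorch world, offered here as our own illustration rather than material from the talk, is exporting a trained model to TorchScript so it can be served outside a Python training environment:

```python
# Exporting a PyTorch model to TorchScript for deployment; an illustrative
# example (toy model), not material from the talk itself.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Embedding(1000, 32), nn.Flatten(), nn.Linear(32 * 16, 2))
model.eval()

example = torch.randint(0, 1000, (1, 16))   # one sequence of 16 token ids
scripted = torch.jit.trace(model, example)  # freeze the computation graph
scripted.save("classifier.pt")              # loadable from C++ or Python servers

restored = torch.jit.load("classifier.pt")
print(restored(example).shape)              # torch.Size([1, 2])
```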
​The Illustrated GPT-2 (Visualizing Transformer Language Models)

https://jalammar.github.io/illustrated-gpt2/
#ArtificialIntelligence #NLP #UnsupervisedLearning

🔗 The Illustrated GPT-2 (Visualizing Transformer Language Models)
Discussions: Hacker News (64 points, 3 comments), Reddit r/MachineLearning (219 points, 18 comments). This year, we saw a dazzling application of machine learning. The OpenAI GPT-2 exhibited an impressive ability to write coherent and passionate essays that exceed what we anticipated current language models were able to produce. GPT-2 wasn’t a particularly novel architecture: its architecture is very similar to the decoder-only transformer. GPT-2 was, however, a very large transformer-based language model trained on a massive dataset. In this post, we’ll look at the architecture that enabled the model to produce its results. We will go into the depths of its self-attention layer. And then we’ll look at applications for the decoder-only transformer beyond language modeling. My goal here is also to supplement my earlier post, The Illustrated Transformer, with more visuals explaining the inner workings of transformers, and how they’ve evolved since the original paper. My hope is that this visual language will make it easier to explain later Transformer-based models as their inner workings continue to evolve.
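Since the article is about the decoder-only transformer, here is a minimal single-head masked (causal) self-attention sketch, the mechanism the post visualizes; all sizes and names below are our own, not GPT-2's:

```python
# Minimal single-head causal self-attention, the core operation of a
# decoder-only transformer like GPT-2. Illustrative sizes and names.
import torch
import torch.nn.functional as F

def causal_self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, dim); w_q/w_k/w_v: (dim, dim) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / k.size(-1) ** 0.5                  # (seq_len, seq_len)
    # Causal mask: position i may only attend to positions <= i.
    mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v                  # (seq_len, dim)

dim = 16
x = torch.randn(10, dim)                                  # 10 token embeddings
out = causal_self_attention(x, *(torch.randn(dim, dim) for _ in range(3)))
print(out.shape)                                          # torch.Size([10, 16])
```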
​As it turns out, Wang Ling was way ahead of the curve on NLP's muppet craze (see his slides from LxMLS '16 and the Oxford #NLP course '17 below).

https://github.com/oxford-cs-deepnlp-2017/lectures

🔗 oxford-cs-deepnlp-2017/lectures
Oxford Deep NLP 2017 course. Contribute to oxford-cs-deepnlp-2017/lectures development by creating an account on GitHub.
​SpeechBrain
A PyTorch-based Speech Toolkit

Video by Mirco Ravanelli: https://youtube.com/watch?v=XETiKbN9ojE

Website: https://speechbrain.github.io

#speechbrain #NLP #DeepLearning

🔗 The SpeechBrain Project
SpeechBrain is an open-source and all-in-one speech toolkit relying on PyTorch. The goal is to create a single, flexible, and user-friendly toolkit that can be used to easily develop state-of-the-art speech technologies
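At the time of this post SpeechBrain had only been announced. Later releases shipped a pretrained-model interface that can be used roughly as below; the model id and calls are taken from those later versions and should be treated as assumptions here:

```python
# Sketch using the pretrained-model interface shipped in later SpeechBrain
# releases; the model id and API are assumptions based on those releases,
# not on the project as announced in this post.
from speechbrain.pretrained import EncoderDecoderASR

asr = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-rnnlm-librispeech",  # pretrained LibriSpeech model
    savedir="pretrained_asr",                          # local cache directory
)
print(asr.transcribe_file("example.wav"))              # path to a 16 kHz WAV file
```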