Forwarded from Tensorflow(@CVision) (Alireza Akhavan)
Better Language Models and Their Implications
A new, extremely powerful language model whose full version has not been released, out of fear of misuse!
#gpt2
OpenAI has a new language model, so powerful they aren't releasing the full version! I know, it sounds like some bad sci fi.
The results are absolutely stunning. Prepare to be flooded in a year or so with high-quality text hallucinated by neural nets.
It's another transformer-based model, yet again cementing the dominance of transformer architectures in NLP. It has 1.5 billion parameters.
Paper: https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf
Code: https://github.com/openai/gpt-2
Blog: https://blog.openai.com/better-language-models/
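For anyone who wants to try it, here is a minimal sketch of sampling text from one of the smaller, publicly released GPT-2 checkpoints, using the Hugging Face transformers library as an assumption (the official openai/gpt-2 repo ships its own TensorFlow sampling scripts):
```python
# Minimal sketch: sample from a released (small) GPT-2 checkpoint with the
# Hugging Face `transformers` library. This is NOT the official openai/gpt-2
# code; the model name "gpt2" refers to the 124M-parameter public release.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "OpenAI has a new language model"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k / nucleus sampling, the usual way GPT-2 demos generate text
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```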
—————————————-
Related news and tweets:
🖊Tweet by Andrej Karpathy - the danger of more writing online ...
🖊Poetry generation with the GPT-2 model
🖊OpenAI's AI can write fake news
🙏Thanks to: @vahidreza01
#deeplearning #artificialintelligence #machinelearning #nlp
A handful of podcasts, labs, projects, and groups involved in both Neuroscience and Artificial Intelligence:
NeuroAILab: Aims to "reverse engineer" the algorithms of the brain, both to learn about how our minds work and to build more effective artificial intelligence systems.
Learning in Neural Circuits (LiNC) Laboratory: Studies general principles of learning and memory in neural networks, with the ultimate goal of understanding how real and artificial brains can optimize behaviour.
Human Brain Project: The Human Brain Project (HBP) is building a research infrastructure to help advance neuroscience, medicine and computing. It is one of four FET (Future and Emerging Technology) Flagships, the largest scientific projects ever funded by the European Union.
Center for Brains, Minds and Machines: Understanding how the brain produces intelligent behavior and how we may be able to replicate intelligence in machines is arguably one of the greatest challenges in science and technology. This group brings together computer scientists, cognitive scientists, and neuroscientists to create a new field—the Science and Engineering of Intelligence.
Center for Theoretical Neuroscience: Aims to establish, through the quality of the Center's research, the excellence of its trainees, and the impact of its visitor, dissemination, and outreach programs, a new cooperative paradigm that will move neuroscience to unprecedented levels of discovery and understanding. They describe their environment as one of the most exciting and interactive anywhere for bringing theoretical approaches to neuroscience.
Unsupervised Thinking: a podcast about neuroscience, artificial intelligence and science more broadly
#NeuroScience #MachineLearning
Neural Architecture Search without Training
Abstract: The time and effort involved in hand-designing deep neural networks is immense. This has prompted the development of Neural Architecture Search (NAS) techniques to automate this design. However, NAS algorithms tend to be extremely slow and expensive; they need to train vast numbers of candidate networks to inform the search process. This could be remedied if we could infer a network's trained accuracy from its initial state. In this work, we examine how the linear maps induced by data points correlate for untrained network architectures in the NAS-Bench-201 search space, and motivate how this can be used to give a measure of modelling flexibility which is highly indicative of a network's trained performance. We incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU.
Explanatory Video: https://www.youtube.com/watch?v=a6v92P0EbJc
GitHub Repo: https://github.com/BayesWatch/nas-without-training
Paper: https://arxiv.org/abs/2006.04647
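A rough sketch of the scoring idea, assuming a PyTorch network and a small batch of inputs (the exact kernel and scoring formula in the paper and the BayesWatch repo differ in detail; this Jacobian-correlation variant is only illustrative):
```python
# Illustrative sketch (not the official implementation): score an untrained
# network by how correlated the per-example input-output Jacobians (the local
# linear maps induced by the data points) are. More "spread out" Jacobians
# suggest a more flexible model that tends to reach higher trained accuracy.
import torch

def untrained_score(network, x):
    x = x.clone().requires_grad_(True)
    y = network(x)                              # forward pass at initialization
    y.backward(torch.ones_like(y))              # per-example gradient w.r.t. input
    jacob = x.grad.reshape(x.size(0), -1)

    # Correlation matrix between the per-example Jacobians
    jacob = jacob - jacob.mean(dim=1, keepdim=True)
    jacob = jacob / (jacob.norm(dim=1, keepdim=True) + 1e-8)
    corr = jacob @ jacob.t()

    # Eigenvalue-based score: penalise near-duplicate (highly correlated) maps
    v = torch.linalg.eigvalsh(corr)
    return -torch.sum(torch.log(v + 1e-5) + 1.0 / (v + 1e-5)).item()
```
In a search, one would compute this score for many randomly sampled NAS-Bench-201 architectures at initialization and keep the highest-scoring one, with no training involved.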
#deep_learning #neural_architecture_search