François Chollet: Keras, Deep Learning, and the Progress of AI | Artificial Intelligence Podcast
François Chollet is the creator of Keras, an open source deep learning library designed to enable fast, user-friendly experimentation with deep neural networks. It serves as an interface to several deep learning backends, the most popular of which is TensorFlow, and it has since been integrated into TensorFlow's main codebase. Beyond creating an exceptionally useful and popular library, François is a world-class AI researcher and software engineer at Google, and an outspoken, at times controversial, voice in the AI world, especially on ideas about the future of artificial intelligence. This conversation is part of the Artificial Intelligence podcast.
https://www.youtube.com/watch?v=Bo8MY4JpiXE&t=173s
#machine_learning #artificial_intelligence #podcast
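Since the conversation centers on Keras, here is a minimal sketch of the kind of quick experimentation the library enables (the layer sizes and placeholder data are arbitrary assumptions, not from the episode):

# Minimal Keras workflow: define, compile, and fit a model in a few lines.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data, just to show the fit() interface.
x = np.random.rand(256, 32).astype("float32")
y = np.random.randint(0, 10, size=(256,))
model.fit(x, y, epochs=2, batch_size=32)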
Michio Kaku: Future of Humans, Aliens, Space Travel & Physics | Artificial Intelligence (AI) Podcast
Michio Kaku is a theoretical physicist, futurist, and professor at the City College of New York. He is the author of many fascinating books on the nature of our reality and the future of our civilization. This conversation is part of the Artificial Intelligence podcast.
https://www.youtube.com/watch?v=kD5yc1LQrpQ
#artificial_intelligence #physics #cosmology
Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education | Lex Fridman Podcast #59
A wide-ranging discussion with Sebastian Thrun covering flying cars, autonomous vehicles, and education.
https://www.youtube.com/watch?v=ZPPAOakITeQ
#self_driving_cars #education #artificial_intelligence #machine_learning
An Overview of Recent State of the Art Deep Learning Algorithms/Architectures
A lecture on the most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.
https://www.youtube.com/watch?v=0VH1Lim8gL8&t=999s
#deep_learning #artificial_intelligence
Dopamine and temporal difference learning: A fruitful relationship between neuroscience and AI
https://deepmind.com/blog/article/Dopamine-and-temporal-difference-learning-A-fruitful-relationship-between-neuroscience-and-AI
#reinforcement_learning #machine_learning #neuroscience #artificial_intelligence
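For readers unfamiliar with the temporal-difference learning discussed in the article, here is a minimal TD(0) value-update sketch (illustrative only; the names are my own, not from the article). The reward-prediction error delta is the quantity the article relates to dopamine signals:

# One TD(0) step: move V(state) toward reward + gamma * V(next_state).
alpha = 0.1   # learning rate
gamma = 0.9   # discount factor
values = {}   # state -> estimated value

def td_update(state, reward, next_state):
    v = values.get(state, 0.0)
    v_next = values.get(next_state, 0.0)
    delta = reward + gamma * v_next - v   # reward-prediction error
    values[state] = v + alpha * delta
    return delta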
Book: The SOAR Cognitive Architecture
Introduction: In development for thirty years, Soar is a general cognitive architecture that integrates knowledge-intensive reasoning, reactive execution, hierarchical reasoning, planning, and learning from experience, with the goal of creating a general computational system that has the same cognitive abilities as humans. In contrast, most AI systems are designed to solve only one type of problem, such as playing chess, searching the Internet, or scheduling aircraft departures. Soar is both a software system for agent development and a theory of what computational structures are necessary to support human-level agents. Over the years, both software system and theory have evolved. This book offers the definitive presentation of Soar from theoretical and practical perspectives, providing comprehensive descriptions of fundamental aspects and new components. The current version of Soar features major extensions, adding reinforcement learning, semantic memory, episodic memory, mental imagery, and an appraisal-based model of emotion. This book describes details of Soar's component memories and processes and offers demonstrations of individual components, components working in combination, and real-world applications. Beyond these functional considerations, the book also proposes requirements for general cognitive architectures and explicitly evaluates how well Soar meets those requirements.
https://dl.acm.org/doi/book/10.5555/2222503
#cognitive_science #neuroscience #reinforcement_learning #artificial_intelligence
Deep Reasoning Papers
A repository of recent papers on neural-symbolic reasoning, logical reasoning, visual reasoning, natural language reasoning, and other topics connecting deep learning and reasoning.
https://github.com/floodsung/Deep-Reasoning-Papers
#reasoning #deep_learning #artificial_intelligence
A Collection of Definitions of Intelligence
https://arxiv.org/pdf/0706.3639.pdf
#artificial_intelligence
What's Wrong with Artificial Intelligence: From the perspective of Prof. Richard Sutton
I hold that AI has gone astray by neglecting its essential objective --- the turning over of responsibility for the decision-making and organization of the AI system to the AI system itself. It has become an accepted, indeed lauded, form of success in the field to exhibit a complex system that works well primarily because of some insight the designers have had into solving a particular problem. This is part of an anti-theoretic, or "engineering stance", that considers itself open to any way of solving a problem. But whatever the merits of this approach as engineering, it is not really addressing the objective of AI. For AI it is not enough merely to achieve a better system; it matters how the system was made. The reason it matters can ultimately be considered a practical one, one of scaling. An AI system too reliant on manual tuning, for example, will not be able to scale past what can be held in the heads of a few programmers. This, it seems to me, is essentially the situation we are in today in AI. Our AI systems are limited because we have failed to turn over responsibility for them to them.
Please forgive me for this which must seem a rather broad and vague criticism of AI. One way to proceed would be to detail the criticism with regard to more specific subfields or subparts of AI. But rather than narrowing the scope, let us first try to go the other way. Let us try to talk in general about the longer-term goals of AI which we can share and agree on. In broadest outlines, I think we all envision systems which can ultimately incorporate large amounts of world knowledge. This means knowing things like how to move around, what a bagel looks like, that people have feet, etc. And knowing these things just means that they can be combined flexibly, in a variety of combinations, to achieve whatever are the goals of the AI. If hungry, for example, perhaps the AI can combine its bagel recognizer with its movement knowledge, in some sense, so as to approach and consume the bagel. This is a cartoon view of AI -- as knowledge plus its flexible combination -- but it suffices as a good place to start. Note that it already places us beyond the goals of a pure performance system. We seek knowledge that can be used flexibly, i.e., in several different ways, and at least somewhat independently of its expected initial use.
With respect to this cartoon view of AI, my concern is simply with ensuring the correctness of the AI's knowledge. There is a lot of knowledge, and inevitably some of it will be incorrect. Who is responsible for maintaining correctness, people or the machine? I think we would all agree that, as much as possible, we would like the AI system to somehow maintain its own knowledge, thus relieving us of a major burden. But it is hard to see how this might be done; easier to simply fix the knowledge ourselves. This is where we are today.
Date: November 12, 2001
http://incompleteideas.net/IncIdeas/WrongWithAI.html
#artificial_intelligence
AlphaGo - The Movie | Full Documentary
Summary: With more board configurations than there are atoms in the universe, the ancient Chinese game of Go has long been considered a grand challenge for artificial intelligence. On March 9, 2016, the worlds of Go and artificial intelligence collided in South Korea for an extraordinary best-of-five-game competition, coined The DeepMind Challenge Match. Hundreds of millions of people around the world watched as a legendary Go master took on an unproven AI challenger for the first time in history.
https://www.youtube.com/watch?v=WXuK6gekU1Y
#artificial_intelligence #reinforcement_learning #deep_learning
Universal Intelligence: A Definition of Machine Intelligence
Abstract: A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
https://arxiv.org/pdf/0712.3329.pdf
#artificial_intelligence
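The measure the abstract refers to weights an agent's expected performance over all computable environments, with simpler environments counting more. In LaTeX notation, as given in the paper:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Here E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected total reward of agent \pi interacting with \mu. The 2^{-K(\mu)} weighting encodes an Occam's razor bias toward simple environments.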
An Approximation of the Universal Intelligence Measure
Abstract: The Universal Intelligence Measure is a recently proposed formal definition of intelligence. It is mathematically specified, extremely general, and captures the essence of many informal definitions of intelligence. It is based on Hutter’s Universal Artificial Intelligence theory, an extension of Ray Solomonoff’s pioneering work on universal induction. Since the Universal Intelligence Measure is only asymptotically computable, building a practical intelligence test from it is not straightforward. This paper studies the practical issues involved in developing a real-world UIM-based performance metric. Based on our investigation, we develop a prototype implementation which we use to evaluate a number of different artificial agents.
https://arxiv.org/pdf/1109.5951v2.pdf
#artificial_intelligence
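Because the measure is only asymptotically computable, a practical test has to approximate it. As a rough illustration of one possible approach (a hedged sketch, not the paper's actual benchmark; sample_environment and run_agent are hypothetical placeholders):

# Monte Carlo estimate in the spirit of the universal intelligence measure:
# sample environment programs, weight each by 2^(-program length), and
# average the agent's observed reward. Under uniform sampling from a finite
# program set, the result is proportional to the weighted-sum measure.
def approximate_uim(agent, sample_environment, run_agent, n_samples=1000):
    total = 0.0
    for _ in range(n_samples):
        env, program_length = sample_environment()   # hypothetical environment sampler
        weight = 2.0 ** (-program_length)            # Occam bias: short programs weigh more
        total += weight * run_agent(agent, env)      # hypothetical rollout returning total reward
    return total / n_samples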