On Artificial Intelligence
If you want to know more about science, especially Artificial Intelligence, this is the right place for you.
Admin Contact:
@Oriea
A handful of podcasts, labs, projects, and groups that involve both neuroscience and artificial intelligence:
NeuroAILab: Aims to "reverse engineer" the algorithms of the brain, both to learn about how our minds work and to build more effective artificial intelligence systems.
Learning in Neural Circuits (LiNC) Laboratory: Studies general principles of learning and memory in neural networks, with the ultimate goal of understanding how real and artificial brains can optimize behaviour.
Human Brain Project: The Human Brain Project (HBP) is building a research infrastructure to help advance neuroscience, medicine and computing. It is one of four FET (Future and Emerging Technology) Flagships, the largest scientific projects ever funded by the European Union.
Center for Brains, Minds and Machines: Understanding how the brain produces intelligent behavior and how we may be able to replicate intelligence in machines is arguably one of the greatest challenges in science and technology. This group brings together computer scientists, cognitive scientists, and neuroscientists to create a new field—the Science and Engineering of Intelligence.
Center for Theoretical Neuroscience: Aims to establish, through the quality of the Center's research, the excellence of its trainees, and the impact of its visitor, dissemination, and outreach programs, a new cooperative paradigm that will move neuroscience to unprecedented levels of discovery and understanding. The Center describes itself as having one of the most exciting and interactive environments anywhere for bringing theoretical approaches to neuroscience.
Unsupervised Thinking: A podcast about neuroscience, artificial intelligence, and science more broadly.
#neuroscience #machine_learning
The Roles of Supervised Machine Learning in Systems Neuroscience
Over the last several years, the use of machine learning (ML) in neuroscience has been rapidly increasing. Here, we review ML’s contributions, both realized and potential, across several areas of systems neuroscience. We describe four primary roles of ML within neuroscience: 1) creating solutions to engineering problems, 2) identifying predictive variables, 3) setting benchmarks for simple models of the brain, and 4) serving itself as a model for the brain. The breadth and ease of its applicability suggest that machine learning should be in the toolbox of most systems neuroscientists.
https://arxiv.org/ftp/arxiv/papers/1805/1805.08239.pdf
#neuroscience #machine_learning
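As a concrete illustration of role 2 above (identifying predictive variables), here is a minimal "neural decoding" sketch: a supervised model that predicts a stimulus variable from a firing rate. The single-neuron tuning curve, noise level, and closed-form least-squares fit are all illustrative assumptions, not material from the review.

```python
import random

# Synthetic data: one neuron whose firing rate depends roughly
# linearly on a 1-D stimulus, plus Gaussian noise (an assumption
# made up for this sketch).
random.seed(1)
stimuli = [random.uniform(0, 10) for _ in range(200)]
rates = [2.0 * s + 5.0 + random.gauss(0, 1.0) for s in stimuli]

# Ordinary least squares in closed form: stimulus ~ a * rate + b.
n = len(rates)
mean_r = sum(rates) / n
mean_s = sum(stimuli) / n
a = sum((r - mean_r) * (s - mean_s) for r, s in zip(rates, stimuli)) / \
    sum((r - mean_r) ** 2 for r in rates)
b = mean_s - a * mean_r

# Decode the stimulus from a new observation: a rate of 25 Hz.
decoded = a * 25.0 + b
print(round(decoded, 1))
```

In practice this is done with multi-neuron recordings and off-the-shelf ML libraries; the one-neuron closed-form fit just shows the decoding idea in a self-contained way.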
Book: The SOAR Cognitive Architecture

Introduction:
In development for thirty years, Soar is a general cognitive architecture that integrates knowledge-intensive reasoning, reactive execution, hierarchical reasoning, planning, and learning from experience, with the goal of creating a general computational system that has the same cognitive abilities as humans. In contrast, most AI systems are designed to solve only one type of problem, such as playing chess, searching the Internet, or scheduling aircraft departures. Soar is both a software system for agent development and a theory of what computational structures are necessary to support human-level agents. Over the years, both the software system and the theory have evolved. This book offers the definitive presentation of Soar from theoretical and practical perspectives, providing comprehensive descriptions of fundamental aspects and new components. The current version of Soar features major extensions, adding reinforcement learning, semantic memory, episodic memory, mental imagery, and an appraisal-based model of emotion. This book describes details of Soar's component memories and processes and offers demonstrations of individual components, components working in combination, and real-world applications. Beyond these functional considerations, the book also proposes requirements for general cognitive architectures and explicitly evaluates how well Soar meets those requirements.

https://dl.acm.org/doi/book/10.5555/2222503
#cognitive_science #neuroscience #reinforcement_learning #artificial_intelligence
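To make the "condition-action rule" idea behind such architectures concrete, here is a toy production system, loosely in the spirit of Soar's decision cycle. The rules and the working-memory format are invented for illustration and greatly simplify what Soar actually does.

```python
# Working memory: a set of attribute-value facts about the situation.
working_memory = {"hungry": True, "has_food": False}

# Each production rule: (name, condition on working memory, action).
rules = [
    ("get-food", lambda wm: wm["hungry"] and not wm["has_food"],
     lambda wm: wm.update(has_food=True)),
    ("eat", lambda wm: wm["hungry"] and wm["has_food"],
     lambda wm: wm.update(hungry=False, has_food=False)),
]

fired = []
# Decision cycle: fire the first rule whose condition matches the
# current working memory, repeat until no rule applies (quiescence).
while True:
    for name, cond, act in rules:
        if cond(working_memory):
            act(working_memory)
            fired.append(name)
            break
    else:
        break

print(fired, working_memory)
```

Real Soar adds, among other things, preference-based conflict resolution, subgoaling when no rule applies, and the learning mechanisms (chunking, reinforcement learning, episodic and semantic memory) described in the book.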
Towards Biologically Plausible Deep Learning

Abstract:
Neuroscientists have long criticized deep learning algorithms as incompatible with current knowledge of neurobiology. We explore more biologically plausible versions of deep representation learning, focusing here mostly on unsupervised learning but developing a learning mechanism that could account for supervised, unsupervised and reinforcement learning. The starting point is that the basic learning rule believed to govern synaptic weight updates (Spike-Timing-Dependent Plasticity) arises out of a simple update rule that makes a lot of sense from a machine learning point of view and can be interpreted as gradient descent on some objective function so long as the neuronal dynamics push firing rates towards better values of the objective function (be it supervised, unsupervised, or reward-driven). The second main idea is that this corresponds to a form of the variational EM algorithm, i.e., with approximate rather than exact posteriors, implemented by neural dynamics. Another contribution of this paper is that the gradients required for updating the hidden states in the above variational interpretation can be estimated using an approximation that only requires propagating activations forward and backward, with pairs of layers learning to form a denoising auto-encoder. Finally, we extend the theory about the probabilistic interpretation of auto-encoders to justify improved sampling schemes based on the generative interpretation of denoising auto-encoders, and we validate all these ideas on generative learning tasks.

https://arxiv.org/abs/1502.04156
#deep_learning #neuroscience
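The "simple update rule" the abstract refers to can be sketched in a few lines: the synaptic change is proportional to the presynaptic rate times the rate of change of postsynaptic activity. The numbers and the learning rate below are illustrative assumptions, not the paper's experiments.

```python
LR = 0.5  # learning rate (arbitrary for this sketch)

def weight_update(pre_rate, post_before, post_after, dt=1.0):
    """Delta-w proportional to presynaptic rate times d(post)/dt."""
    d_post = (post_after - post_before) / dt
    return LR * pre_rate * d_post

# If postsynaptic activity rises while the presynaptic neuron is
# firing (pre-before-post pairing), the synapse potentiates; if it
# falls, the synapse depresses -- the qualitative STDP signature.
ltp = weight_update(pre_rate=1.0, post_before=0.2, post_after=0.8)
ltd = weight_update(pre_rate=1.0, post_before=0.8, post_after=0.2)
print(ltp, ltd)
```

The paper's argument is that when the neural dynamics move activity toward better values of some objective, an update of this form behaves like stochastic gradient descent on that objective.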
Unsupervised learning models of primary cortical receptive fields and receptive field plasticity

Abstract:
The efficient coding hypothesis holds that neural receptive fields are adapted to the statistics of the environment, but is agnostic to the timescale of this adaptation, which occurs on both evolutionary and developmental timescales. In this work we focus on that component of adaptation which occurs during an organism's lifetime, and show that a number of unsupervised feature learning algorithms can account for features of normal receptive field properties across multiple primary sensory cortices. Furthermore, we show that the same algorithms account for altered receptive field properties in response to experimentally altered environmental statistics. Based on these modeling results we propose these models as phenomenological models of receptive field plasticity during an organism's lifetime. Finally, due to the success of the same models in multiple sensory areas, we suggest that these algorithms may provide a constructive realization of the theory, first proposed by Mountcastle (1978), that a qualitatively similar learning algorithm acts throughout primary sensory cortices.

https://papers.nips.cc/paper/4331-unsupervised-learning-models-of-primary-cortical-receptive-fields-and-receptive-field-plasticity
#neural_network #neuroscience
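A minimal member of the family of unsupervised rules this paper studies is Hebbian learning with Oja's normalization, which adapts a "receptive field" to the dominant correlation in its inputs. The 2-D stimuli below are a made-up stand-in for sensory input patches; real experiments use natural images or sounds.

```python
import random

random.seed(0)

def oja_step(w, x, lr=0.01):
    """One Oja update: Hebbian growth with implicit weight normalization."""
    y = sum(wi * xi for wi, xi in zip(w, x))          # neuron's response
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

# An "environment" whose variance is concentrated along the (1, 1)
# direction, i.e. the two inputs are strongly correlated.
def stimulus():
    s = random.gauss(0.0, 1.0)   # shared (correlated) component
    n = random.gauss(0.0, 0.1)   # small independent component
    return [s + n, s - n]

w = [1.0, 0.0]  # initial receptive field
for _ in range(5000):
    w = oja_step(w, stimulus())

# The learned weight vector aligns with the dominant input correlation:
# the receptive field adapts to the statistics of the environment.
norm = sum(wi * wi for wi in w) ** 0.5
print([round(wi / norm, 2) for wi in w])
```

Here the weights converge near (0.71, 0.71), the principal direction of the input statistics; changing the stimulus statistics changes the learned receptive field, which is the plasticity phenomenon the paper models.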
Reinforcement learning in the brain

Abstract:
A wealth of research focuses on the decision-making processes that animals and humans employ when selecting actions in the face of reward and punishment. Initially such work stemmed from psychological investigations of conditioned behavior, and explanations of these in terms of computational models. Increasingly, analysis at the computational level has drawn on ideas from reinforcement learning, which provide a normative framework within which decision-making can be analyzed. More recently, the fruits of these extensive lines of research have made contact with investigations into the neural basis of decision making. Converging evidence now links reinforcement learning to specific neural substrates, assigning them precise computational roles. Specifically, electrophysiological recordings in behaving animals and functional imaging of human decision-making have revealed in the brain the existence of a key reinforcement learning signal, the temporal difference reward prediction error. Here, we first introduce the formal reinforcement learning framework. We then review the multiple lines of evidence linking reinforcement learning to the function of dopaminergic neurons in the mammalian midbrain and to more recent data from human imaging experiments. We further extend the discussion to aspects of learning not associated with phasic dopamine signals, such as learning of goal-directed responding that may not be dopamine-dependent, and learning about the vigor (or rate) with which actions should be performed that has been linked to tonic aspects of dopaminergic signaling. We end with a brief discussion of some of the limitations of the reinforcement learning framework, highlighting questions for future research.

https://psycnet.apa.org/record/2009-07078-003
#reinforcement_learning #neuroscience
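The temporal difference reward prediction error the abstract links to phasic dopamine activity is easy to write down. Here is a toy TD(0) sketch; the two-state "cue then reward" task and the parameter values are illustrative assumptions.

```python
GAMMA = 0.9   # discount factor
ALPHA = 0.1   # learning rate

def td_update(V, state, next_state, reward):
    """One TD(0) update; returns the prediction error delta."""
    delta = reward + GAMMA * V[next_state] - V[state]
    V[state] += ALPHA * delta
    return delta

# Episodes: a cue state reliably precedes a reward state, then the
# episode ends (terminal value 0).
V = {"cue": 0.0, "reward": 0.0, "end": 0.0}
for _ in range(200):
    td_update(V, "cue", "reward", reward=0.0)   # cue, no reward yet
    td_update(V, "reward", "end", reward=1.0)   # reward delivered

print(round(V["cue"], 2), round(V["reward"], 2))
```

After learning, the cue itself carries predicted value (V["cue"] approaches GAMMA * 1), so the prediction error at reward time shrinks toward zero, mirroring the well-known shift of dopamine responses from reward delivery to the predictive cue.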
From CAPTCHA to Commonsense: How Brain Can Teach Us About Artificial Intelligence

Abstract: Despite the recent progress in AI powered by deep learning in solving narrow tasks, we are not close to human intelligence in its flexibility, versatility, and efficiency. Efficient learning and effective generalization come from inductive biases, and building Artificial General Intelligence (AGI) is an exercise in finding the right set of inductive biases that make fast learning possible while being general enough to be widely applicable in tasks that humans excel at. To make progress in AGI, we argue that we can look at the human brain for such inductive biases and principles of generalization. To that effect, we propose a strategy to gain insights from the brain by simultaneously looking at the world it acts upon and the computational framework to support efficient learning and generalization. We present a neuroscience-inspired generative model of vision as a case study for such an approach and discuss some open problems about the path to AGI.

URL: https://www.frontiersin.org/articles/10.3389/fncom.2020.554097/full
#neuroscience #artificial_general_intelligence