On Artificial Intelligence
If you want to know more about science, especially Artificial Intelligence, this is the right place for you.
Admin Contact:
@Oriea
François Chollet is the creator of Keras, an open-source deep learning library designed to enable fast, user-friendly experimentation with deep neural networks. It serves as an interface to several deep learning libraries, the most popular of which is TensorFlow, and it was integrated into TensorFlow's main codebase a while back. Aside from creating an exceptionally useful and popular library, François is also a world-class AI researcher and software engineer at Google, and is definitely an outspoken, if not controversial, personality in the AI world, especially in the realm of ideas around the future of artificial intelligence. This conversation is part of the Artificial Intelligence podcast.
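
As a taste of that user-friendliness, here is a minimal Keras sketch of a small classifier; the 784-dimensional input and the layer sizes are illustrative assumptions, not anything specific from the episode:

from tensorflow import keras
from tensorflow.keras import layers

# Define a small fully connected classifier in a few declarative lines.
model = keras.Sequential([
    keras.Input(shape=(784,)),               # e.g. flattened 28x28 images
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10-class output
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training is then a single call, e.g.:
# model.fit(x_train, y_train, epochs=5, batch_size=32)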
https://www.youtube.com/watch?v=Bo8MY4JpiXE&t=173s
#machine_learning #artificial_intelligence #podcast
Lecture slides for the Signal Processing for Machine Learning course at Stanford University
http://web.stanford.edu/class/ee269/slides.html
#mathematics #machine_learning
Solving Rubik’s Cube with a Robot Hand

This is fascinating; make sure you read it.

Summary: The OpenAI team trained a pair of neural networks to solve the Rubik's Cube with a human-like robot hand. The neural networks are trained entirely in simulation, using the same reinforcement learning code as OpenAI Five, paired with a new technique called Automatic Domain Randomization (ADR). The system can handle situations it never saw during training, such as being prodded by a stuffed giraffe. This shows that reinforcement learning isn't just a tool for virtual tasks, but can solve physical-world problems requiring unprecedented dexterity.
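
For intuition, here is a toy Python sketch of the ADR idea: each simulation parameter is sampled from a range, and a range widens whenever the policy succeeds near its boundaries, so the training distribution grows harder over time. The parameter names, step size, and threshold are invented for illustration; this is not OpenAI's actual code or values.

import random

# Current sampling ranges for randomized physics parameters (assumed names).
ranges = {"cube_friction": [0.9, 1.1], "cube_mass": [0.95, 1.05]}
EXPAND_STEP = 0.05        # how much to widen a range (assumed)
SUCCESS_THRESHOLD = 0.8   # boundary success rate needed to widen (assumed)

def sample_env_params():
    """Draw one randomized simulated environment from the current ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def update_range(name, boundary_success_rate):
    """Widen a parameter's range once the policy copes with its extremes."""
    if boundary_success_rate > SUCCESS_THRESHOLD:
        ranges[name][0] -= EXPAND_STEP
        ranges[name][1] += EXPAND_STEP

# Outline of the loop: sample an environment, run a simulated episode,
# then feed the policy's boundary performance back into the ranges.
params = sample_env_params()
# success_rate = run_episode(policy, params)   # hypothetical simulator call
# update_range("cube_friction", success_rate)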

https://openai.com/blog/solving-rubiks-cube/
#reinforcement_learning #machine_learning #robotics
A must-read document for deep learning & machine learning practitioners

https://www.deeplearningbook.org/contents/guidelines.html
#deep_learning #machine_learning
How Relevant is the Turing Test in the Age of Sophisbots?

Popular culture has contemplated societies of thinking machines for generations, envisioning futures from utopian to dystopian. These futures are, arguably, here now: we find ourselves at the doorstep of technology that can at least simulate the appearance of thinking, acting, and feeling. The real question is: now what?

https://arxiv.org/pdf/1909.00056.pdf
#machine_learning #technology #ethics
Crafting Papers on Machine Learning

This paper provides some useful hints and advice for preparing machine learning papers. Note, however, that it is not meant to cover all types of papers.

https://icml.cc/Conferences/2002/craft.html
#machine_learning #writing
Model-based evolutionary algorithms: a short survey

Abstract: Evolutionary algorithms (EAs) are a family of nature-inspired algorithms widely used for solving complex optimization problems. Since the operators (e.g. crossover, mutation, selection) in most traditional EAs are developed on the basis of fixed heuristic rules or strategies, they are unable to learn the structures or properties of the problems to be optimized. To equip EAs with learning abilities, various model-based evolutionary algorithms (MBEAs) have recently been proposed. This survey briefly reviews some representative MBEAs by considering three different motivations for using models. First, the most common motivation for using models is to estimate the distribution of the candidate solutions. Second, in evolutionary multi-objective optimization, one motivation is to build inverse models from the objective space to the decision space. Third, when solving computationally expensive problems, models can be used as surrogates for the fitness functions. Based on the review, some further discussions are also given.
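
To make the first motivation concrete, here is a minimal sketch of an estimation-of-distribution algorithm: instead of crossover and mutation, a univariate Gaussian model is re-estimated from the best candidates each generation and then sampled. The sphere objective and all hyperparameters are toy choices, not taken from the survey.

import numpy as np

def sphere(x):
    """Toy objective to minimize: squared Euclidean norm of each row."""
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(0)
dim, pop_size, n_elite = 10, 100, 20
mean, std = np.zeros(dim), np.ones(dim)   # initial Gaussian model

for generation in range(50):
    pop = rng.normal(mean, std, size=(pop_size, dim))   # sample candidates
    elite = pop[np.argsort(sphere(pop))[:n_elite]]      # keep the best
    mean = elite.mean(axis=0)                           # re-fit the model
    std = elite.std(axis=0) + 1e-8                      # avoid collapse to zero

print("best fitness:", sphere(mean[None, :])[0])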

https://link.springer.com/article/10.1007/s40747-018-0080-1
#evolutionary_algorithm #machine_learning
At the Interface of Algebra and Statistics

Abstract: This thesis takes inspiration from quantum physics to investigate mathematical structure that lies at the interface of algebra and statistics. The starting point is a passage from classical probability theory to quantum probability theory. The quantum version of a probability distribution is a density operator, the quantum version of marginalizing is an operation called the partial trace, and the quantum version of a marginal probability distribution is a reduced density operator. Every joint probability distribution on a finite set can be modeled as a rank one density operator. By applying the partial trace, we obtain reduced density operators whose diagonals recover classical marginal probabilities. In general, these reduced densities will have rank higher than one, and their eigenvalues and eigenvectors will contain extra information that encodes subsystem interactions governed by statistics. We decode this information, show it is akin to conditional probability, and then investigate the extent to which the eigenvectors capture "concepts" inherent in the original joint distribution. The theory is then illustrated with an experiment that exploits these ideas. Turning to a more theoretical application, we also discuss a preliminary framework for modeling entailment and concept hierarchy in natural language, namely, by representing expressions in the language as densities. Finally, initial inspiration for this thesis comes from formal concept analysis, which finds many striking parallels with linear algebra. The parallels are not coincidental, and a common blueprint is found in category theory. We close with an exposition on free (co)completions and how the free-forgetful adjunctions in which they arise strongly suggest that in certain categorical contexts, the "fixed points" of a morphism with its adjoint encode interesting information.
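
The opening construction is easy to verify numerically. Below is a minimal NumPy sketch (the 2x3 joint distribution is made up for illustration) that builds the rank-one density operator from a joint distribution, takes the partial trace over the second subsystem, and checks that the diagonal of the reduced density recovers the classical marginal while its rank exceeds one:

import numpy as np

# Made-up joint distribution p(x, y); rows index x, columns index y.
p = np.array([[0.1, 0.2, 0.1],
              [0.3, 0.2, 0.1]])
psi = np.sqrt(p)                          # unit vector with psi[x, y]^2 = p(x, y)

rho = np.outer(psi.ravel(), psi.ravel())  # rank-one density operator |psi><psi|
assert np.isclose(np.trace(rho), 1.0)     # valid density: trace one

# Partial trace over the y subsystem; for real psi this is psi @ psi.T.
rho_x = psi @ psi.T

print(np.diag(rho_x))                 # recovers the marginal p(x) = [0.4, 0.6]
print(np.linalg.matrix_rank(rho_x))   # rank 2: extra subsystem information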

Introductory Video: https://youtu.be/wiadG3ywJIs

Thesis: https://arxiv.org/abs/2004.05631

#statistics #machine_learning #algebra #quantum_physics