The power of deeper networks for expressing natural functions
David Rolnick & Max Tegmark: https://arxiv.org/abs/1705.05502
#DeepLearning #MachineLearning #NeuralComputing
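A hedged aside (an illustrative identity, not a restatement of the paper's theorems): one way to see why depth helps with products of inputs is that multiplication reduces to squaring,
\[ xy \;=\; \frac{(x+y)^2 - (x-y)^2}{4}, \]
so a small unit approximating $t \mapsto t^2$ yields the product of two inputs, and a binary tree of such units of depth $\lceil \log_2 n \rceil$ builds $x_1 x_2 \cdots x_n$ from only $O(n)$ units; the paper quantifies how much worse a single hidden layer fares by comparison.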
Evolved Art with Transparent, Overlapping, and Geometric Shapes
Berg et al.: https://arxiv.org/abs/1904.06110
#NeuralComputing #EvolutionaryComputing #ArtificialIntelligence
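A minimal hill-climbing sketch in the spirit of evolving transparent, overlapping shapes to match a target image. This is not the authors' code; the shape encoding, mutation operator, and all parameters below are illustrative assumptions.

import numpy as np

H, W, N_SHAPES, STEPS = 64, 64, 50, 2000
rng = np.random.default_rng(0)
target = rng.random((H, W, 3))          # stand-in for a real target image in [0, 1]

def random_shape():
    # (cx, cy, radius, r, g, b, alpha)
    return rng.random(7) * np.array([W, H, W / 4, 1, 1, 1, 1])

def render(shapes):
    yy, xx = np.mgrid[0:H, 0:W]
    img = np.ones((H, W, 3))            # white canvas
    for cx, cy, rad, r, g, b, a in shapes:
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= rad ** 2
        img[mask] = (1 - a) * img[mask] + a * np.array([r, g, b])  # alpha blending
    return img

def loss(shapes):
    return np.mean((render(shapes) - target) ** 2)

genome = [random_shape() for _ in range(N_SHAPES)]
best = loss(genome)
for _ in range(STEPS):
    child = [s.copy() for s in genome]
    child[rng.integers(N_SHAPES)] = random_shape()   # mutate one shape
    child_loss = loss(child)
    if child_loss < best:                            # keep only improvements
        genome, best = child, child_loss
print("final MSE:", best)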
A Mean Field Theory of Batch Normalization
Yang et al.: https://arxiv.org/abs/1902.08129
#ArtificialIntelligence #NeuralComputing #NeuralNetworks #MachineLearning #DynamicalSystems
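For reference, here is the object the mean-field analysis studies: a deep stack of linear layers with batch normalization at every layer. A generic numpy sketch, not the paper's machinery; width, depth, and initialization below are illustrative.

import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # normalize each feature over the batch dimension, then scale and shift
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 32))            # batch of 128, width 32
for _ in range(10):                           # ten linear + batch-norm + tanh layers
    W = rng.standard_normal((32, 32)) / np.sqrt(32)
    x = np.tanh(batch_norm(x @ W))
print(x.mean(), x.std())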
"Cellular automata as convolutional neural networks"
By William Gilpin: https://arxiv.org/abs/1809.02942
#CellularAutomata #NeuralNetworks #NeuralComputing #EvolutionaryComputing #ComputationalPhysics
Deep learning techniques have recently demonstrated broad success in predicting complex dynamical systems ranging from turbulence to human speech, motivating broader questions about how neural...
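A hedged illustration of the premise that cellular-automaton update rules fit the convolution-plus-pointwise-nonlinearity template: Conway's Game of Life written as one 3x3 convolution followed by an elementwise rule. A sketch using scipy, not the author's code.

import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])  # counts the 8 neighbors, exactly like a 3x3 conv filter

def life_step(grid):
    # one Game of Life update: convolution to count neighbors, then a pointwise rule
    neighbors = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(32, 32))
for _ in range(10):
    grid = life_step(grid)
print(grid.sum(), "live cells after 10 steps")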
Multi-Sample Dropout for Accelerated Training and Better Generalization
Hiroshi Inoue: https://arxiv.org/abs/1905.09788
#ArtificialIntelligence #NeuralComputing #MachineLearning
Dropout is a simple but efficient regularization technique for achieving better generalization of deep neural networks (DNNs); hence it is widely used in tasks based on DNNs. During training,...
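A minimal sketch of the idea (not the paper's code): reuse one set of features, draw several dropout masks over them, and average the resulting losses so only the cheap head is repeated. The class name, the choice of 8 samples, and the shared classifier layer are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSampleDropoutHead(nn.Module):
    def __init__(self, in_features, num_classes, num_samples=8, p=0.5):
        super().__init__()
        self.dropout = nn.Dropout(p)
        self.fc = nn.Linear(in_features, num_classes)   # shared across dropout samples
        self.num_samples = num_samples

    def forward(self, features, targets=None):
        logits_list = [self.fc(self.dropout(features)) for _ in range(self.num_samples)]
        if targets is None:                              # at inference, average the logits
            return torch.stack(logits_list).mean(dim=0)
        # during training, average the per-sample losses
        losses = [F.cross_entropy(logits, targets) for logits in logits_list]
        return torch.stack(losses).mean()

# toy usage
head = MultiSampleDropoutHead(in_features=128, num_classes=10)
feats = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
loss = head(feats, labels)
loss.backward()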
Sparse Networks from Scratch: Faster Training without Losing Performance
Tim Dettmers and Luke Zettlemoyer: https://arxiv.org/abs/1907.04840
Paper: https://arxiv.org/abs/1907.04840
Blog post: https://timdettmers.com/2019/07/11/sparse-networks-from-scratch/
Code: https://github.com/TimDettmers/sparse_learning
#MachineLearning #NeuralComputing #EvolutionaryComputing
We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance...
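A toy single-tensor sketch of the prune-and-regrow step behind dynamic sparse training: drop the smallest-magnitude active weights and regrow the same number of connections where the momentum magnitude is largest. This is an illustrative simplification, not the paper's full sparse momentum algorithm, which also redistributes parameters across layers.

import torch

def prune_and_regrow(weight, momentum, mask, prune_rate=0.2):
    active = mask.bool()
    n_prune = int(prune_rate * active.sum().item())
    if n_prune == 0:
        return mask
    # prune: smallest |w| among currently active weights
    w_mag = weight.detach().abs().masked_fill(~active, float("inf"))
    prune_idx = torch.topk(w_mag.flatten(), n_prune, largest=False).indices
    # regrow: largest |momentum| among currently inactive weights
    m_mag = momentum.abs().masked_fill(active, float("-inf"))
    grow_idx = torch.topk(m_mag.flatten(), n_prune, largest=True).indices
    mask.view(-1)[prune_idx] = 0.0
    mask.view(-1)[grow_idx] = 1.0
    weight.data.view(-1)[grow_idx] = 0.0    # regrown connections start at zero
    return mask

# toy usage: a ~90% sparse weight tensor whose connectivity is updated once
w = torch.randn(256, 256, requires_grad=True)
mom = torch.randn_like(w)                    # stand-in for the optimizer's momentum buffer
mask = (torch.rand_like(w) < 0.1).float()
mask = prune_and_regrow(w, mom, mask)
print("density after update:", mask.mean().item())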
A Fine-Grained Spectral Perspective on Neural Networks
Greg Yang and Hadi Salman: https://arxiv.org/abs/1907.10599
Compute eigenvalues: https://github.com/thegregyang/NNspectra
#MachineLearning #NeuralComputing #EvolutionaryComputing
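The linked repo computes eigenvalues of the kernels the paper studies; as a generic point of reference (not the repo's method), here is one way to look at the spectrum of an empirical kernel built from a randomly initialized network's features. Input choice, width, and architecture below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, d, width = 64, 7, 4096

X = rng.choice([-1.0, 1.0], size=(n, d))          # random points on the boolean cube

# features of a random one-hidden-layer ReLU network (a Monte Carlo stand-in for a kernel)
W = rng.standard_normal((d, width)) / np.sqrt(d)
features = np.maximum(X @ W, 0.0)
K = features @ features.T / width                  # empirical kernel Gram matrix

eigvals = np.linalg.eigvalsh(K)                     # symmetric matrix, so eigvalsh
print(np.sort(eigvals)[::-1][:10])                  # ten largest eigenvalues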
One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
Morcos et al.: https://arxiv.org/abs/1906.02773
#ArtificialIntelligence #MachineLearning #NeuralComputing
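For context on what a "lottery ticket initialization" is, here is a generic sketch of the original procedure (train, prune by magnitude, rewind surviving weights to their initialization, retrain with the mask fixed), not this paper's cross-dataset transfer experiments. Function names, the keep fraction, and pruning biases along with weights are illustrative assumptions.

import copy
import torch
import torch.nn as nn

def magnitude_masks(model, keep_fraction=0.2):
    # one-shot global magnitude pruning: keep the largest-magnitude parameters
    all_w = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(all_w, 1.0 - keep_fraction)
    return [(p.detach().abs() >= threshold).float() for p in model.parameters()]

def rewind_to_ticket(model, init_state, masks):
    # reset weights to their original initialization and zero out pruned entries
    model.load_state_dict(init_state)
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p.mul_(m)

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init_state = copy.deepcopy(model.state_dict())
# ... train `model` normally here ...
masks = magnitude_masks(model, keep_fraction=0.2)
rewind_to_ticket(model, init_state, masks)
# ... retrain; the masks must be re-applied after each optimizer step to stay sparse ...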