Theoretical concerns in machine learning

19 items

Why does machine learning work so well, and what are the theoretical constraints on what it can learn?

A Universal Law of Robustness via Isoperimetry (2021, arXiv)

Training Neural Networks is ∃R-complete (2021, arXiv)

Understanding deep learning requires rethinking generalization (2016, ICLR)

The Lack of A Priori Distinctions Between Learning Algorithms (1996, Neural Computation)

Deep Learning in Neural Networks: An Overview (2014, Neural Networks)

On the Number of Linear Regions of Deep Neural Networks (2014, NIPS)

An exact mapping between the Variational Renormalization Group and Deep Learning (2018)

Deep learning via Hessian-free optimization (2010, ICML)

Why Does Deep and Cheap Learning Work So Well? (2016, arXiv)

Efficient BackProp (2012)

Neural Networks and the Bias/Variance Dilemma (1992, Neural Computation)

A Theoretically Grounded Application of Dropout in Recurrent Neural Networks (2015, NIPS)

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (2018, ICLR)

What's hidden in the hidden layers? (1989)

The Description Length of Deep Learning models (2018, NeurIPS)

Provable Bounds for Learning Some Deep Representations (2013, ICML)

Bottom-up Deep Learning using the Hebbian Principle (2016)

Group theoretical methods in machine learning (2008, book)

Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design (2017, ICLR)