Gitta Kutyniok: Optimal Approximation with Sparsely Connected Deep Neural Networks

Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven, and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, typically defined on a manifold, which for instance encodes a classification task. In this talk, we will be concerned with the question of how well such a function can be approximated by a neural network with sparse connectivity. Sparsely connected networks are particularly desirable due to their low memory requirements. We will derive a fundamental lower bound on the sparsity of a neural network for a given approximation accuracy. By explicitly constructing neural networks based on certain representation systems, so-called $\alpha$-shearlets, we will then demonstrate that this lower bound can in fact be attained, i.e., in this sense there exist memory-optimal deep neural networks. Finally, we present numerical experiments, which, surprisingly, show that the standard backpropagation algorithm already generates deep neural networks obeying these optimal approximation rates.
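As a rough illustration of the sparsity-accuracy trade-off mentioned above (a schematic sketch, not stated in the abstract itself; the precise hypotheses, constants, and logarithmic factors are part of the talk's underlying results): writing $M(\Phi_\varepsilon)$ for the number of nonzero weights of a network $\Phi_\varepsilon$ and $\gamma^{\ast}(\mathcal{C})$ for the optimal approximation exponent of the function class $\mathcal{C}$, a lower bound of this type asserts that, up to logarithmic factors,
\[
\sup_{f \in \mathcal{C}} \| f - \Phi_\varepsilon \|_{L^2} \le \varepsilon
\quad \Longrightarrow \quad
M(\Phi_\varepsilon) \gtrsim \varepsilon^{-1/\gamma^{\ast}(\mathcal{C})},
\]
and the explicit $\alpha$-shearlet-based constructions are designed to attain this rate, i.e., $M(\Phi_\varepsilon) = O\big(\varepsilon^{-1/\gamma^{\ast}(\mathcal{C})}\big)$ up to log factors, which is the sense in which the resulting networks are memory-optimal.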