Deep Approximation via Deep Learning
The primary task in many applications is to approximate or estimate a function from samples drawn from a probability distribution on the input space. Deep approximation approximates a function by composing many layers of simple functions, which can be viewed as a series of nested feature extractors. The key idea of a deep learning network is to convert these layers of compositions into layers of tunable parameters that are adjusted through a learning process so that the network achieves a good approximation with respect to the input data. In this talk, we shall discuss the mathematical theory behind this new approach and the approximation rate of deep networks; we will also show how this approach differs from classical approximation theory, and how the new theory can be used to understand and design deep learning networks.
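As a rough sketch of this compositional form (the notation here is assumed for illustration and is not taken from the abstract): writing $T_\ell(x) = W_\ell x + b_\ell$ for a tunable affine map and $\sigma$ for a fixed simple nonlinearity, a depth-$L$ network may be written as
\[
  f(x) \;\approx\; T_L \circ \sigma \circ T_{L-1} \circ \cdots \circ \sigma \circ T_1(x),
\]
and the learning process adjusts the parameters $(W_\ell, b_\ell)$ so that this composition fits the sampled data.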