Deep neural networks are effective at learning high-dimensional Banach-valued functions from limited data
Recently, the problem of recovering high-dimensional functions taking values in abstract spaces has received increasing attention, largely due to its relevance to applications in computational science and engineering, e.g., computational uncertainty quantification (UQ). In UQ, such problems are often posed in terms of parameterized partial differential equations (PDEs) whose solutions take values in Hilbert or Banach spaces. Over the last decade, impressive results have been achieved on such problems using Deep Learning (DL) techniques, i.e., machine learning based on training Deep Neural Networks (DNNs). In this work, we focus on the approximation of high-dimensional smooth functions that are Banach-valued, i.e., taking values in a reflexive, but typically infinite-dimensional, Banach space. This problem is inherently difficult due to the high cost of obtaining samples, each of which typically involves an expensive black-box PDE solve, and the high dimensionality of the problem. Our novel approach is fully algorithmic, combining DL with Compressed Sensing (CS) techniques, orthogonal polynomials, and finite element discretizations. We also present a full theoretical analysis with explicit guarantees on the error and sample complexity; in particular, our results for DNN approximation give a clear accounting of all sources of error. In addition, we provide numerical experiments showing that DNNs can outperform state-of-the-art methods across different architectures and training procedures. Finally, we establish a general result for both Hilbert- and Banach-valued functions and conclude with its main application to a specific class of parametric PDEs.
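To make the setup concrete, the following is a minimal sketch (in PyTorch, not taken from the paper) of the general idea: a fully connected DNN is trained on a small number of samples to learn the map from parameters y in [-1,1]^d to the coefficients of a discretized, vector-valued quantity of interest. The stand-in solver `coeffs` and all dimensions and hyperparameters below are hypothetical placeholders; in practice, each sample would come from an expensive black-box PDE solver.

```python
# Minimal illustrative sketch, not the authors' method or code.
import torch
import torch.nn as nn

d, K = 4, 32   # parametric dimension d, discretization size K (assumed)
m = 200        # number of (expensive) training samples (assumed)

def coeffs(y):
    # Hypothetical stand-in for a black-box solver returning K
    # discretization coefficients (e.g., finite element or polynomial
    # coefficients) of a smooth function of the parameters y.
    k = torch.arange(1, K + 1, dtype=y.dtype)
    return torch.exp(-y.sum(-1, keepdim=True)) / k

ys = 2 * torch.rand(m, d) - 1   # sample points y_i ~ Uniform([-1, 1]^d)
cs = coeffs(ys)                 # corresponding coefficient vectors

# Fully connected DNN mapping parameters to coefficient vectors.
model = nn.Sequential(nn.Linear(d, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, K))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(ys), cs)
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.3e}")
```

The trained network can then be evaluated at new parameter values, with the predicted coefficients reassembled into an element of the discretized function space.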