Tight upper bounds for expressivity of one-dimensional deep ReLU networks
In this talk we are concerned with the expressive power of deep ReLU neural networks, with particular focus on the one-dimensional setting. We consider a fully connected feed-forward ReLU network with $L \ge 2$ hidden layers of variable width $n_{\ell}$, $\ell = 1, \ldots, L$, and with one-dimensional input and output, $n_{0} = n_{L+1} = 1$.
In this talk we provide a constructive method to transform the deep ReLU model into a free-knot linear spline model $f(t) = (a_{0}t + b_{0}) + \sum_{j=1}^{N} \alpha_{j} \, \sigma(t - x_{j})$, where $\sigma(t) := \max\{ t, 0\}$. Thereby we answer the following questions: What is the largest number of breakpoints achievable by a deep ReLU-NN model function? How can one construct a deep ReLU-NN (with $L \ge 2$) with the largest possible number of prescribed breakpoints? How can one construct a deep ReLU-NN that exactly represents an arbitrary fixed free-knot spline $f$ with $N$ breakpoints? Can we construct deep ReLU-NNs with $N$ prescribed breakpoints, or even a prescribed spline function $f$ as above, where $N$ exceeds the number of free parameters of the ReLU-NN? Our results improve upon recent results of Daubechies et al. (2022) in this regard.
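For orientation, the free-knot spline form above is realized directly by a one-hidden-layer ReLU network with one unit per breakpoint (hidden unit $j$ computes $\sigma(t - x_j)$); the deep constructions discussed in the talk go beyond this shallow correspondence. The following minimal sketch, with arbitrarily chosen illustrative parameters (the names `free_knot_spline` and `shallow_relu_net` are not from the talk), checks this elementary equivalence numerically:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def free_knot_spline(t, a0, b0, alpha, x):
    """Evaluate f(t) = (a0*t + b0) + sum_j alpha_j * sigma(t - x_j)."""
    return a0 * t + b0 + np.sum(alpha[:, None] * relu(t[None, :] - x[:, None]), axis=0)

def shallow_relu_net(t, a0, b0, alpha, x):
    """One-hidden-layer ReLU network realizing the same function:
    hidden unit j computes sigma(t - x_j); the output layer applies the
    weights alpha and adds the affine part a0*t + b0."""
    hidden = relu(t[None, :] - x[:, None])   # shape (N, len(t))
    return alpha @ hidden + a0 * t + b0

# Illustrative parameters only: N = 5 breakpoints with random slope changes.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, size=5))
alpha = rng.normal(size=5)
a0, b0 = 0.3, -0.1

t = np.linspace(-2.0, 2.0, 401)
assert np.allclose(free_knot_spline(t, a0, b0, alpha, x),
                   shallow_relu_net(t, a0, b0, alpha, x))
```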
These results have been obtained jointly with Yurii Kolomoitsev and Yannick Riebe (University of Goettingen).