Tight upper bounds for expressivity of one-dimensional deep ReLU networks
In this talk we are concerned with the expressive power of deep ReLU neural networks, in particular in the one-dimensional setting. We consider a fully connected feed-forward ReLU network with $L \ge 2$ hidden layers of variable widths $n_\ell$, $\ell = 1, \dots, L$, and with input and output dimensions $n_0 = n_{L+1} = 1$.
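For concreteness (a standard formulation, not spelled out in the abstract): such a network realizes a function $f = A_{L+1} \circ \sigma \circ A_L \circ \cdots \circ \sigma \circ A_1$, where each $A_\ell(x) = W_\ell x + b_\ell$ is affine with $W_\ell \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$ and $b_\ell \in \mathbb{R}^{n_\ell}$, and $\sigma$ acts componentwise; the weights $W_\ell$ and biases $b_\ell$ are the free parameters referred to in the questions below.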
In this talk we will provide a constructive method to transform the deep ReLU model into a free-knot linear spline model $f(t) = (a_0 t + b_0) + \sum_{j=1}^{N} \alpha_j \sigma(t - x_j)$, where $\sigma(t) := \max\{t, 0\}$. Thereby we will answer the following questions: What is the largest number of breakpoints achievable by a deep ReLU-NN model function? How can one construct a deep ReLU-NN (with $L \ge 2$) with the largest possible number of prescribed breakpoints? How can one construct a deep ReLU-NN that exactly represents an arbitrary fixed free-knot spline $f$ with $N$ breakpoints? Can we construct deep ReLU-NNs with $N$ prescribed breakpoints, or even with a prescribed spline function $f$ as above, where $N$ exceeds the number of free parameters of the ReLU-NN? Our results improve upon recent results of Daubechies et al. (2022) in this regard.
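As a minimal numerical illustration of this spline correspondence (our sketch, not part of the talk; the widths and random seed are arbitrary choices), one can evaluate a small one-dimensional deep ReLU network on a fine grid and count the points where its slope changes; these are exactly the knots $x_j$ of the equivalent free-knot spline:

import numpy as np

rng = np.random.default_rng(0)

def relu_net(t, weights, biases):
    """Evaluate a fully connected feed-forward ReLU network at inputs t."""
    x = t.reshape(1, -1)                    # input dimension n_0 = 1
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(W @ x + b, 0.0)      # hidden layers with ReLU
    W, b = weights[-1], biases[-1]
    return (W @ x + b).ravel()              # affine output layer, n_{L+1} = 1

# L = 3 hidden layers of widths n_1 = 4, n_2 = 3, n_3 = 4 (arbitrary)
widths = [1, 4, 3, 4, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(widths[:-1], widths[1:])]
biases = [rng.standard_normal((m, 1)) for m in widths[1:]]

t = np.linspace(-5, 5, 200001)
y = relu_net(t, weights, biases)

# Finite-difference slopes; a breakpoint is where the slope jumps
# (the tolerance is a heuristic for this illustration).
slopes = np.diff(y) / np.diff(t)
breaks = np.count_nonzero(np.abs(np.diff(slopes)) > 1e-6)
print("detected breakpoints:", breaks)

Comparing the detected breakpoint count with the number of free parameters of the network is precisely the comparison raised in the last question above.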
These results have been obtained jointly with Yurii Kolomoitsev and Yannick Riebe (University of Goettingen).