Stability and Generalization of Graph Neural Networks
Graph neural networks have seen a steep rise in popularity since their introduction as generalizations of convolutional neural networks to graph-structured data, leading to many practical applications with commercial impact. In machine learning settings where the dataset consists of many different graphs, the trained neural network should generalize to graphs outside the training set. In this work, we study the generalization capabilities of graph neural networks. To model the data, we assume that graphs of different classes are sampled from different random graph models. Based on this data distribution, we derive non-asymptotic bounds on the generalization gap between the empirical and statistical loss, which decrease to zero as the graphs become larger. Our generalization bounds depend on the regularity of the network and of the random graph models, but not directly on the number of parameters in the network, nor on the training procedure. These results can be treated as guarantees that the network will always have some generalization capability, regardless of how its weights are chosen. Hence, this innate property of graph neural networks explains one aspect of their success in learning tasks such as graph classification and regression.
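To make the central quantity concrete, the following is a minimal sketch and not the paper's exact formulation; the symbols $h$, $\ell$, $m$, and $\mathcal{D}$ are illustrative assumptions. Given a graph neural network $h$, a loss function $\ell$, and a training set of graph-label pairs $(G_1, y_1), \ldots, (G_m, y_m)$ drawn i.i.d. from a distribution $\mathcal{D}$ induced by the random graph models, the generalization gap referred to above is
\[
\operatorname{gen}(h) \;=\; \left|\, \mathbb{E}_{(G,y)\sim\mathcal{D}}\big[\ell\big(h(G),y\big)\big] \;-\; \frac{1}{m}\sum_{i=1}^{m} \ell\big(h(G_i),y_i\big) \,\right|,
\]
where the first term is the statistical loss and the second the empirical loss. Under the abstract's claim, this gap is bounded by terms that shrink as the sampled graphs grow, with constants governed by the regularity of the network and of the random graph models rather than by the parameter count or the training procedure.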