PAC-Bayes Generalisation Bounds for Dynamical Systems Including Stable RNNs
- Source :
- AAAI, vol. 38, no. 11, pp. 11901-11909, Mar. 2024
- Publication Year :
- 2023
Abstract
- In this paper, we derive a PAC-Bayes bound on the generalisation gap in a supervised time-series setting for a special class of discrete-time non-linear dynamical systems. This class includes stable recurrent neural networks (RNNs), and the motivation for this work was its application to RNNs. In order to achieve the results, we impose some stability constraints on the allowed models. Here, stability is understood in the sense of dynamical systems. For RNNs, these stability conditions can be expressed in terms of conditions on the weights. We assume the processes involved are essentially bounded and the loss functions are Lipschitz. The proposed bound on the generalisation gap depends on the mixing coefficient of the data distribution and the essential supremum of the data. Furthermore, the bound converges to zero as the dataset size increases. In this paper, we 1) formalize the learning problem, 2) derive a PAC-Bayesian error bound for such systems, 3) discuss various consequences of this error bound, and 4) show an illustrative example, with discussions on computing the proposed bound. Unlike other available bounds, the derived bound holds for non-i.i.d. data (time-series) and it does not grow with the number of steps of the RNN.
Comment: Accepted to the AAAI 2024 conference
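The abstract notes that, for RNNs, the required stability conditions can be expressed as conditions on the weights. The paper's exact condition is not stated in this record; a common sufficient condition of this kind, sketched below as an illustration only, is that the recurrent weight matrix has spectral norm below 1 when the activation is 1-Lipschitz (e.g. tanh), so that the state-transition map is a contraction. The function name `is_stable` and the parameterisation are assumptions for the sketch, not the authors' formulation.

```python
import numpy as np

def is_stable(W, lipschitz_sigma=1.0):
    """Illustrative contraction check for an RNN state update
    x_{t+1} = sigma(W x_t + U u_t) with L-Lipschitz activation sigma:
    a sufficient condition for stability is ||W||_2 * L < 1."""
    spectral_norm = np.linalg.norm(W, 2)  # largest singular value of W
    return spectral_norm * lipschitz_sigma < 1.0

# Rescale a random recurrent weight matrix so its spectral norm is 0.9,
# which satisfies the contraction condition by construction.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W_scaled = 0.9 * W / np.linalg.norm(W, 2)

print(is_stable(W_scaled))        # satisfies the condition
print(is_stable(2.0 * np.eye(8))) # spectral norm 2, violates it
```

In practice such a constraint could be enforced during training by projecting or rescaling the recurrent weights after each update; whether that matches the paper's setting is not determinable from this record.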
- Subjects :
- Computer Science - Machine Learning
- Statistics - Machine Learning
Details
- Database :
- arXiv
- Journal :
- AAAI, vol. 38, no. 11, pp. 11901-11909, Mar. 2024
- Publication Type :
- Report
- Accession number :
- edsarx.2312.09793
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1609/aaai.v38i11.29076