Exploring Model Complexity in Machine Learned Potentials for Simulated Properties
- Authors
- Rohskopf, Andrew; Goff, James; Sema, Dionysios; Gordiz, Kiarash; Nguyen, Ngoc Cuong; Henry, Asegun; Thompson, Aidan P.; and Wood, Mitchell A.
- Subjects
- Condensed Matter - Materials Science, Materials Science (cond-mat.mtrl-sci), FOS: Physical sciences, Computational Physics (physics.comp-ph), Physics - Computational Physics
- Abstract
- Machine learning (ML) enables the development of interatomic potentials that promise the accuracy of first-principles methods while retaining the low cost and parallel efficiency of empirical potentials. ML potentials traditionally use atom-centered descriptors as inputs, but different models, such as linear regression and neural networks, can map these descriptors to atomic energies and forces. This raises the question: how much does accuracy improve with model complexity, irrespective of the choice of descriptors? We curate three datasets to investigate this question in terms of ab initio energy and force errors: (1) solid and liquid silicon, (2) gallium nitride, and (3) the superionic conductor LGPS. We further investigate how these errors affect properties simulated with these models and verify whether the improvement in fitting errors corresponds to a measurable improvement in property prediction. Since linear and nonlinear regression models have different advantages and disadvantages, the results presented herein help researchers choose models for their particular applications. By assessing different models, we observe correlations between fitting-quantity (e.g., atomic force) errors and simulated-property errors with respect to ab initio values. Such observations can be repeated by other researchers to determine the level of accuracy, and hence model complexity, needed for their particular systems of interest.
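As a rough illustration of the modeling choice discussed in the abstract, the sketch below (not code from the paper; all names, array shapes, and the synthetic data are assumptions) contrasts the two kinds of models it compares: a linear fit and a small neural network, each mapping per-atom descriptor vectors to atomic energies that sum to a total energy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: atom-centered descriptors D and reference total energies E.
n_structs, n_atoms, n_desc = 200, 8, 16
D = rng.normal(size=(n_structs, n_atoms, n_desc))   # descriptors per atom
E = rng.normal(size=n_structs)                      # reference total energies

# Linear model: E_total = sum_i (w . d_i) + b.
# Because the model is linear in w, summing descriptors over atoms reduces
# the fit to ordinary least squares on per-structure descriptor sums.
X = D.sum(axis=1)                                   # (n_structs, n_desc)
A = np.hstack([X, np.ones((n_structs, 1))])         # append bias column
coef, *_ = np.linalg.lstsq(A, E, rcond=None)
E_lin = A @ coef

# Nonlinear model: E_total = sum_i f_theta(d_i), with f_theta a small MLP.
# Weights are left random here purely to show the functional form; a real
# fit would train theta against reference energies and forces.
W1, b1 = rng.normal(scale=0.1, size=(n_desc, 32)), np.zeros(32)
W2, b2 = rng.normal(scale=0.1, size=(32, 1)), np.zeros(1)

def total_energy_nn(d):                             # d: (n_atoms, n_desc)
    return float((np.tanh(d @ W1 + b1) @ W2 + b2).sum())

E_nn = np.array([total_energy_nn(D[s]) for s in range(n_structs)])

print("linear model RMSE   :", np.sqrt(np.mean((E_lin - E) ** 2)))
print("untrained MLP RMSE  :", np.sqrt(np.mean((E_nn - E) ** 2)))
```

The only structural difference between the two is the per-atom mapping (a dot product versus a nonlinear network); how much that extra flexibility actually buys in energy, force, and property errors is the question the paper investigates.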
- Published
- 2023