1. On Deep-Learning-Based Closures for Algebraic Surrogate Models of Turbulent Flows
- Authors
Eiximeno, Benet, Sanchís-Agudo, Marcial, Miró, Arnau, Rodríguez, Ivette, Vinuesa, Ricardo, and Lehmkuhl, Oriol
- Subjects
Physics - Fluid Dynamics
- Abstract
A deep-learning-based closure model is introduced to address the energy loss in low-dimensional surrogate models based on proper-orthogonal-decomposition (POD) modes. Using a transformer-encoder block with an easy-attention mechanism, the model predicts the spatial probability density function of the fluctuations not captured by the truncated POD modes. The methodology is demonstrated on the wake of the Windsor body at yaw angles of 2.5°, 5°, 7.5°, 10°, and 12.5°, with 7.5° used as the test case. Key coherent modes are identified by clustering them according to their dominant-frequency dynamics, applying Hotelling's T² statistic to the spectral properties of the temporal coefficients. These coherent modes account for nearly 60% of the total energy while comprising less than 10% of all modes. A common POD basis is created by concatenating the coherent modes from the training angles and orthonormalizing the set, reducing the number of basis vectors from 142 to 90 without loss of information. Transformers with attention layers of different sizes (64, 128, and 256) are trained to model the missing fluctuations. Larger attention sizes consistently improve predictions on the training set, but the transformer with an attention size of 256 overshoots the predicted fluctuations in the test case, whose fluctuations are of lower intensity than those in the training cases. Adding the predicted fluctuations closes the energy gap between the reconstruction and the original flow field, improving the prediction of the energy, the root-mean-square velocity fluctuations, and the instantaneous flow fields. The deepest architecture reduces the mean energy error from 37% to 12% and decreases the Kullback-Leibler (KL) divergence of the velocity distributions from KL = 0.2 to below KL = 0.026.
- Published
2024
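
As an illustration of two steps described in the abstract, the sketch below shows one way the concatenation and orthonormalization of coherent POD modes into a common basis, and the KL-divergence comparison of velocity distributions, could be implemented. The function names, the SVD-based orthonormalization, and the histogram-based KL estimate are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def common_pod_basis(mode_sets, tol=1e-10):
    """Build a common POD basis from coherent modes of several training cases.

    mode_sets : list of arrays of shape (n_points, n_modes_i), the coherent
                spatial modes retained for each training yaw angle (assumed layout).
    tol       : relative singular-value cutoff used to discard directions that
                are (nearly) linearly dependent across cases.
    """
    stacked = np.hstack(mode_sets)              # concatenate all coherent modes
    u, s, _ = np.linalg.svd(stacked, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))          # drop redundant directions
    return u[:, :rank]                          # orthonormal common basis

def kl_divergence(u_reference, u_model, bins=64):
    """Discrete KL divergence D_KL(reference || model) between velocity histograms."""
    lo = min(u_reference.min(), u_model.min())
    hi = max(u_reference.max(), u_model.max())
    p, edges = np.histogram(u_reference, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(u_model, bins=edges, density=True)
    w = np.diff(edges)
    p, q = p * w, q * w                         # densities -> bin probabilities
    mask = (p > 0) & (q > 0)                    # avoid log(0)
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

In this sketch, the reduction reported in the abstract would correspond to 142 concatenated coherent modes yielding a rank of 90 after the singular-value cutoff.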