Revisiting Tensor Basis Neural Networks for Reynolds stress modeling: application to plane channel and square duct flows
- Authors
Cai, Jiayi; Angeli, Pierre-Emmanuel; Martinez, Jean-Marc; Damblin, Guillaume; Lucor, Didier
- Subjects
Physics - Fluid Dynamics; Physics - Computational Physics; Physics - Data Analysis, Statistics and Probability
- Abstract
Several Tensor Basis Neural Network (TBNN) frameworks aimed at enhancing Reynolds-Averaged Navier-Stokes (RANS) turbulence modeling have recently been proposed in the literature as data-driven constitutive models for systems with known invariance properties. However, persistent ambiguities remain regarding the physical adequacy of applying the General Eddy Viscosity Model (GEVM). This work investigates this aspect at an a priori stage, seeking better predictions of the Reynolds stress anisotropy tensor while preserving Galilean and rotational invariance. In particular, we propose a general framework providing optimal tensor basis models for two types of canonical flows: Plane Channel Flow (PCF) and Square Duct Flow (SDF). Subsequently, deep neural networks based on these optimal models are trained using state-of-the-art strategies to achieve a balanced and physically sound prediction of the full anisotropy tensor. A priori results obtained by the proposed framework are in very good agreement with the reference DNS data. Notably, our shallow network with three layers provides accurate predictions of the anisotropy tensor for PCF at unobserved friction Reynolds numbers, in both interpolation and extrapolation scenarios. Learning the SDF case is more challenging because of its physical nature and the lack of training data across flow regimes. We propose to alleviate this problem using Transfer Learning (TL). To generalize more efficiently to an unseen intermediate $\mathrm{Re}_\tau$ regime, we leverage prior knowledge acquired from training on a larger and broader dataset. Our results indicate the potential of the developed network model and demonstrate the feasibility and efficiency of the TL process in terms of training data size and training time. Based on these results, we believe that integrating these neural networks into an adapted in-house RANS solver holds significant promise.
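As background for the terms used above, TBNN-type models build on the GEVM, which expands the Reynolds stress anisotropy tensor on an integrity basis of the normalized mean strain-rate and rotation-rate tensors $S$ and $W$:

$$ b_{ij} \;=\; \frac{\overline{u'_i u'_j}}{2k} - \frac{\delta_{ij}}{3} \;=\; \sum_{n=1}^{10} g^{(n)}(\lambda_1,\dots,\lambda_5)\, T^{(n)}_{ij}, $$

where the $T^{(n)}$ are the basis tensors (e.g. $T^{(1)} = S$, $T^{(2)} = SW - WS$, $T^{(3)} = S^2 - \tfrac{1}{3}\mathrm{tr}(S^2)\,I$, ...) and $\lambda_1,\dots,\lambda_5$ are their independent invariants; the network learns the scalar coefficients $g^{(n)}$ as functions of these invariants. The reduced, flow-specific bases described as optimal for PCF and SDF in this work may differ from the full ten-term expansion shown here.

The sketch below illustrates this architecture in generic form only; it is a minimal, hypothetical PyTorch implementation, not the authors' code, and the layer sizes are placeholders.

```python
# Minimal TBNN sketch (hypothetical): an MLP maps the five invariants to the
# coefficients g^(n), which weight the precomputed tensor basis T^(n) to yield
# the predicted anisotropy tensor b. Not the authors' implementation.
import torch
import torch.nn as nn

class TBNN(nn.Module):
    def __init__(self, n_invariants=5, n_tensors=10, hidden=30):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_invariants, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_tensors),
        )

    def forward(self, invariants, tensor_basis):
        # invariants: (batch, 5); tensor_basis: (batch, 10, 3, 3)
        g = self.mlp(invariants)  # coefficients g^(n), shape (batch, 10)
        # Contract coefficients with the basis to get b, shape (batch, 3, 3)
        return torch.einsum('bn,bnij->bij', g, tensor_basis)
```

For the transfer-learning step mentioned in the abstract, a common recipe (again an assumption, not necessarily the paper's exact procedure) is to pretrain such a network on the larger dataset, freeze the early layers, and fine-tune only the final layer(s) on the small dataset at the new $\mathrm{Re}_\tau$.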
- Published
2024