Toward Turbulence Closure Modeling with Data–Driven Techniques
Abstract
Recent advances in machine learning (ML) algorithms, together with the availability of direct numerical simulation data, have generated a surge of interest in data-driven turbulence modeling. The idea behind such models is to replace one or more components of a classical closure model with an implicit function obtained through a trained ML procedure. However, because the properties of these learned functions are often poorly understood, the resulting set of modeled equations can be internally inconsistent. More research is therefore needed into the physical underpinnings of data-driven turbulence models to ensure their generalizability to unseen test flows. This work addresses the issue of ML closure generalizability by: (i) identifying key challenges in applying ML techniques to two-equation turbulence models and proposing means of mitigating inconsistencies and improving compatibility between the different physical elements of the modeled system; (ii) investigating the optimal choice of ML model hyperparameters and the degrees of freedom required for accurate approximation; and (iii) improving the incorporation of unsteady datasets for complex turbulent flows with large-scale instabilities and coherent structures. Three studies, one per objective, are performed, leading to physics-dictated guidance for the development of ML-enhanced turbulence closure models.
In the first study, a novel procedure based on fixed-point analysis is introduced to ensure that the overall set of equations in data-driven turbulence modeling forms a self-consistent dynamical system. Three elements are proposed to secure the physical underpinnings of ML turbulence closures: (i) characteristic physical features and constraints that all closure models, physics-based (PB) and ML alike, must strive to satisfy; (ii) an ML training scheme that infuses and preserves selected PB constraints; and (iii) a physics-guided formulation of the ML loss (objective) function to optimize model predictions. First, key closure constraints dictated by the model system dynamics are derived. Then, a closed-loop training procedure for enforcing the constraints in a self-consistent manner is proposed. Finally, the simple test case of turbulent channel flow is used to highlight the deficiencies of current ML methods and to demonstrate the improvements stemming from the proposed mitigation measures.
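The kind of fixed-point structure a closure model must respect can be illustrated with a classical textbook example (not the dissertation's actual model system): the standard k-epsilon equations in homogeneous shear evolve toward an equilibrium production-to-dissipation ratio P/eps = (C_e2 - 1)/(C_e1 - 1), fixed entirely by the model coefficients. A minimal numerical sketch:

```python
import numpy as np

# Toy fixed-point analysis of the standard k-epsilon model in homogeneous
# shear flow (illustrative only; coefficients are the textbook values).
C_mu, C_e1, C_e2 = 0.09, 1.44, 1.92
S = 3.0                 # constant mean shear rate (arbitrary choice)
k, eps = 1.0, 1.0       # arbitrary initial condition
dt = 1e-4
for _ in range(200_000):                  # integrate to t = 20
    P = C_mu * k**2 / eps * S**2          # production, P = nu_t * S^2
    dk = (P - eps) * dt
    de = (C_e1 * P - C_e2 * eps) * eps / k * dt
    k, eps = k + dk, de + eps

ratio = (C_mu * k**2 / eps * S**2) / eps  # P/eps at final time
print(ratio)    # relaxes to (C_e2 - 1)/(C_e1 - 1) ~ 2.09 regardless of start
```

Any ML surrogate inserted into such a system should preserve fixed points of this type, which is the motivation for deriving closure constraints from the model system dynamics before training.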
Generalizing an ML-assisted turbulence model to unseen flows faces many challenges due to flow-dependent nonlinearity and bifurcations of the constitutive relations. Further, there is little consensus and a great deal of uncertainty regarding the choice of neural network (NN) hyperparameters and training techniques, yet these choices can significantly affect the predictive capability and generalizability of ML turbulence models. A second study is therefore performed to understand the optimal choice of hyperparameters, training-process elements (type of loss function), and the number of neurons that deep neural networks (DNNs) require for a sufficiently accurate approximation at the Reynolds-averaged Navier–Stokes (RANS) closure modeling level. Standard fully connected NNs are trained in a supervised manner, and their approximation capabilities are systematically investigated by considering the effects of (i) the intrinsic complexity of the solution manifold; (ii) the sampling procedure (interpolation vs. extrapolation); and (iii) the optimization procedure. It is shown that even for a simple proxy-physics system, NN-model performance can be inadequate. Further, the challenges to generalizability arising from nonlinearity are identified and distinguished from those arising from bifurcation.
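The interpolation-versus-extrapolation distinction central to this study can be sketched with a tiny fully connected network fit to a generic smooth nonlinear target (an illustrative stand-in, not the dissertation's proxy-physics system): accuracy inside the sampled range says little about behavior outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative smooth nonlinear target (stand-in for a closure functional).
f = lambda x: x * np.tanh(3.0 * x)

# Training samples drawn from [-1, 1] only.
x_tr = rng.uniform(-1.0, 1.0, (256, 1))
y_tr = f(x_tr)

# One-hidden-layer tanh network trained by plain full-batch gradient descent.
H = 32
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(x_tr @ W1 + b1)
    err = (h @ W2 + b2) - y_tr            # residual; dL/dpred for 0.5*MSE
    gW2 = h.T @ err / len(x_tr); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)      # backprop through tanh
    gW1 = x_tr.T @ dh / len(x_tr); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def mlp(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

x_in = np.linspace(-1.0, 1.0, 100).reshape(-1, 1)   # inside training range
x_ex = np.linspace(1.5, 2.5, 100).reshape(-1, 1)    # outside training range
e_in = np.abs(mlp(x_in) - f(x_in)).mean()
e_ex = np.abs(mlp(x_ex) - f(x_ex)).mean()
print(e_in, e_ex)   # extrapolation error dominates interpolation error
```

The saturating hidden units cap the network's output beyond the sampled range, so the extrapolation error grows with distance from the data, which is one reason flow-dependent nonlinearity undermines generalizability.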
The third study proposes a sub-filter stress neural network for scale-resolving simulations (SRS) of complex turbulent flows with large-scale instabilities and coherent structures. The SRS method chosen is the partially averaged Navier–Stokes (PANS) approach, which is well suited to such flows because it resolves the unsteady and coherent scales of motion. The development of the new model rests on three main features. First, the consistency between the local flow field and the local turbulent scales in high-fidelity turbulent datasets is improved, which is important for an accurate representation of the flow physics. Second, an unsteady low-fidelity dataset is reconstructed based on the energy content of the resolved scales at different filter sizes. Finally, parametric ML PANS closure functionals are developed for different choices of turbulent scales and degrees of resolution. It is demonstrated that the NN for the (suitably normalized) subgrid stress constitutive relation is insensitive to the cut-off between resolved and unresolved flow fields, so long as the coherent structures are fully resolved.
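The dependence of the sub-filter stress on the cut-off can be illustrated in one dimension with a generic box filter (a simplified sketch; PANS itself filters in a statistical rather than spatial sense): the unresolved stress tau = <u u> - <u><u> grows as the filter width sweeps past the fine scales.

```python
import numpy as np

# 1-D sketch of a sub-filter stress under a box filter of width w
# (generic illustration, not the PANS formulation).
def box_filter(u, w):
    return np.convolve(u, np.ones(w) / w, mode="same")

def subfilter_stress(u, w):
    return box_filter(u * u, w) - box_filter(u, w) ** 2

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
# Synthetic field: one energetic "coherent" mode plus fine-scale content.
u = np.sin(x) + 0.2 * np.sin(40.0 * x + 1.0) + 0.05 * rng.standard_normal(x.size)

for w in (8, 32, 128):
    tau = subfilter_stress(u, w)
    print(w, tau.mean())   # mean unresolved energy grows with filter width
```

The raw stress is cut-off dependent by construction; the study's finding is that a suitably normalized constitutive relation learned by the NN becomes insensitive to this cut-off once the coherent (here, the long-wavelength) content is fully resolved.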
Citation
Taghizadeh, Salar (2023). Toward Turbulence Closure Modeling with Data–Driven Techniques. Doctoral dissertation, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/199191.