Dynamic Mode Decomposition as a Tool for Reducing Computational Time in Numerical Simulation of Complex Flows
The length of simulation time for unsteady computational fluid dynamics (CFD) is often chosen by intuition or best practice, as there is no established convergence criterion to guarantee that sufficient flow time has elapsed to resolve all spatio-temporal content. These simulations are therefore often run longer than necessary, which increases the run time and computational resources required. This thesis first introduces an algorithm, based on dynamic mode decomposition (DMD), that determines when continuing an unsteady CFD computation no longer yields additional spatio-temporal information. The algorithm is demonstrated on an analytical dataset and two CFD test cases: fully turbulent flow over a cylinder and flow over a moving rotor with a stationary downstream duct. It identifies the point at which the spatio-temporal content of the flow stops changing meaningfully earlier than conventional methods: 44% earlier for the analytical flow data and 8% earlier for the flow over a cylinder. However, this algorithm is itself expensive on the large datasets typical of practical CFD applications. This thesis then introduces an updated algorithm that is more robust and prevents the accumulation of unnecessary time steps, which would otherwise increase the computational cost of applying the convergence algorithm. Finally, a downsampling algorithm is introduced that reduces the number of spatial points analyzed by 80-85% while retaining a convergence location similar to that of the full-data calculation. This drastically reduces the computational cost of the convergence algorithm and makes it useful in practical CFD applications.
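As background for the abstract above, the sketch below shows the standard "exact DMD" computation that such a convergence algorithm builds on: snapshots of the flow field are arranged as columns of a matrix, a low-rank linear operator is fit between successive snapshots, and its eigenvalues characterize the temporal content of the flow. This is a generic illustration of DMD itself, not the thesis's specific convergence, robustness, or downsampling algorithms; the function name, the synthetic traveling-wave data, and the rank truncation are illustrative choices.

```python
import numpy as np

def exact_dmd(snapshots, rank=None):
    """Exact DMD on a snapshot matrix whose columns are flow states
    at uniformly spaced time steps. Generic background sketch, not
    the thesis's algorithm."""
    X1 = snapshots[:, :-1]   # states at steps 0 .. m-1
    X2 = snapshots[:, 1:]    # states at steps 1 .. m
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if rank is not None:     # optional truncation for noisy data
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank, :]
    # Low-dimensional operator approximating the map X2 = A X1
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W  # DMD modes
    return eigvals, modes

# Synthetic traveling wave sin(pi*x + 2t): exactly linear dynamics,
# so DMD recovers a neutrally stable complex-conjugate eigenvalue pair.
t = np.linspace(0.0, 4.0 * np.pi, 100)
x = np.linspace(0.0, 1.0, 50)[:, None]
data = np.sin(np.pi * x + 2.0 * t)
eigvals, modes = exact_dmd(data, rank=2)
```

For a noise-free oscillatory signal like this, the recovered eigenvalues lie on the unit circle (no growth or decay), which is one diagnostic a DMD-based convergence check can monitor over time.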