AIM Seminar: Robert Krasny, Mathematics, University of Michigan

Two topics in computational fluid dynamics

1. The Lamb dipole is a steadily propagating solution of the inviscid fluid equations, with opposite-signed vorticity confined to a circular disk. We compare finite-difference solutions of the Navier-Stokes equation (NSE) and the linear diffusion equation (LDE) using the Lamb dipole as the initial condition. We find some expected and some unexpected results; among the latter, the maximum core vorticity decreases at the same rate for the NSE and LDE, while at higher Reynolds numbers convection enhances the viscous cancellation of opposite-signed vorticity.
(This is joint work with Ling Xu.)
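For concreteness, the Lamb (Lamb-Chaplygin) dipole used as the initial condition above has a standard closed form built from Bessel functions. The sketch below evaluates that vorticity field; the function name and parameter defaults are illustrative, not taken from the talk, and the overall sign depends on the chosen propagation direction.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def lamb_dipole_vorticity(x, y, U=1.0, a=1.0):
    """Vorticity of a Lamb-Chaplygin dipole of radius a, speed U, at the origin.

    Inside the disk r < a:
        omega = (2*U*k / J0(k*a)) * J1(k*r) * sin(theta),
    where k*a is the first positive zero of J1, so omega vanishes
    continuously at r = a; omega = 0 outside the disk.
    The sign convention depends on the propagation direction.
    """
    k = jn_zeros(1, 1)[0] / a          # k*a = j_{1,1} ~ 3.8317
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    return np.where(r < a,
                    2.0 * U * k / j0(k * a) * j1(k * r) * np.sin(theta),
                    0.0)
```

The sin(theta) factor produces the two opposite-signed lobes of vorticity inside the disk mentioned in the abstract.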

2. We discuss a new implementation of the vortex method for the incompressible Euler equations. The vorticity is carried by Lagrangian particles and the velocity is recovered by a regularized Biot-Savart integral. The new implementation employs remeshing and adaptive refinement to resolve small-scale features in the vorticity, together with a treecode for efficiency. The method is demonstrated for vortex dynamics on a rotating sphere (with Peter Bosler) and the axisymmetrization of an elliptical vortex (with Ling Xu).
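As a much-simplified illustration of a regularized Biot-Savart evaluation, here is a direct O(N^2) sum for planar vortex blobs. The delta-smoothing is the standard blob regularization of the point-vortex kernel; in practice a treecode replaces this direct sum for efficiency, and this planar sketch omits the rotating-sphere geometry discussed in the talk. Function name and defaults are illustrative.

```python
import numpy as np

def biot_savart_velocity(x, y, gamma, delta=0.1):
    """Regularized 2D Biot-Savart sum over N vortex particles.

    Particle j with circulation gamma[j] induces at point i:
        u_i += -gamma[j]*(y_i - y_j) / (2*pi*(r_ij^2 + delta^2))
        v_i +=  gamma[j]*(x_i - x_j) / (2*pi*(r_ij^2 + delta^2))
    The delta^2 term smooths the point-vortex singularity; the i = j
    self-term contributes zero because the numerator vanishes.
    """
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    denom = 2.0 * np.pi * (dx**2 + dy**2 + delta**2)
    u = -(dy / denom) @ gamma
    v = (dx / denom) @ gamma
    return u, v
```

With delta set to 0 this reduces to the singular point-vortex sum; choosing delta relative to the interparticle spacing is what controls the regularization.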

AIM Seminar: Alex Gorodetsky, Aerospace Engineering, University of Michigan

Low-rank tensor approaches for adaptive function approximation: algorithms and examples

In this talk, we present an adaptive method for approximating high-dimensional low-rank functions. Exploiting low-rank structure has been shown to help numerical algorithms and computations scale to higher dimensions by mitigating the curse of dimensionality. The method we describe extends the tensor-train cross approximation algorithm to the continuous case of multivariate functions and enables both global and local adaptivity. Our approach relies on a new adaptive algorithm for computing the CUR/skeleton decomposition of bivariate functions, which we then extend to the multidimensional case of the function-train decomposition. We demonstrate the benefits of our approach compared with the standard methodology, which computes low-rank approximations by decomposing the coefficients of tensor-product basis functions. We finish by demonstrating a wide range of applications, including machine learning, uncertainty quantification, stochastic optimal control, and Bayesian filtering.
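The CUR/skeleton idea for a bivariate function can be sketched with a plain cross approximation on a sampled grid: greedily pick pivot entries of the residual and deflate by rank-one updates. This full-pivoting version is for clarity only and is not the speaker's algorithm, which works adaptively in the continuous setting.

```python
import numpy as np

def cross_approximation(f, xs, ys, rank, tol=1e-10):
    """Greedy cross (skeleton) approximation of f sampled on grids xs, ys.

    Returns factors C (n x r) and R (r x m) with f(xs, ys) ~ C @ R,
    built from at most `rank` sampled columns and rows. Full pivoting
    is used for simplicity (practical codes use partial pivoting to
    avoid forming the whole matrix).
    """
    A = f(xs[:, None], ys[None, :]).astype(float)
    E = A.copy()                                  # running residual
    C = np.zeros((len(xs), 0))
    R = np.zeros((0, len(ys)))
    for _ in range(rank):
        i, j = np.unravel_index(np.argmax(np.abs(E)), E.shape)
        piv = E[i, j]
        if abs(piv) < tol:                        # residual negligible: done
            break
        c = E[:, j:j+1] / piv                     # scaled sampled column
        r = E[i:i+1, :]                           # sampled row
        C = np.hstack([C, c])
        R = np.vstack([R, r])
        E = E - c @ r                             # rank-one deflation
    return C, R
```

A function such as sin(x + y) = sin(x)cos(y) + cos(x)sin(y) has exact rank 2, so the greedy loop terminates after two pivots regardless of the requested rank.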