Venue: 1008 FXB
Bio: Ali Yilmaz is a Professor of Electrical and Computer Engineering and a core faculty member at the Institute for Computational Engineering and Sciences at the University of Texas at Austin. Dr. Yilmaz received the Ph.D. degree in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 2005. He spent 2005 to 2006 as a post-doctoral research associate with the Center for Computational Electromagnetics at the University of Illinois; in 2006, he joined the faculty of The University of Texas at Austin. His research interests include computational electromagnetics (particularly fast frequency- and time-domain integral equation solvers), parallel algorithms, antenna and scattering analysis, bioelectromagnetics, geoelectromagnetics, and electronic packages. He has authored or co-authored over 170 papers in refereed journals and international conferences on these topics.
Abstract: Increasing the fidelity of the electromagnetic models generally increases the predictive power of the analyses based on the models. It also generally increases the results’ sensitivity to model features/parameters as well as the difficulty of constructing the models, accurately solving the governing equations, and interpreting the resulting data. Therefore, one should base the analysis on the lowest-fidelity model one can get away with or, equivalently, the highest-fidelity model one can afford. The sweet spot for the tradeoff, “the appropriate model”, has changed over time in part because past successes in simulation-based science and engineering have increased expectations/requirements from electromagnetic analysis and in part because tremendous improvements in computing infrastructure and advances in computational methods have increased the affordability of complex analysis. Finding the appropriate model requires understanding both the benefits and the costs of analysis when a lower- or higher-fidelity model is used; neither side of the ledger, however, is known beforehand (unless one is repeating previously established analyses). A possible approach to revealing these unknowns is to construct models by gradually increasing their fidelity, performing analysis at each fidelity level, and comparing the analysis results and costs to those from the previous steps. I will show examples of this “analysis-driven modeling” in bioelectromagnetics (using the AustinMan and AustinWoman human body models) and signal integrity (using an electronic package example) by employing parallel algorithms and advanced integral-equation solvers on leading-edge supercomputers.
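The “analysis-driven modeling” loop described above can be summarized schematically. The Python sketch below is only an illustration under assumed placeholders: build_model, solve, and result_change are hypothetical stand-ins for the actual model-construction and integral-equation solution steps, and the stopping rules (a result-change tolerance and a cost budget) are illustrative choices, not those used in the talk.

```python
# Schematic sketch of analysis-driven modeling: raise the model fidelity step by
# step, analyze at each level, and compare the results and costs with the
# previous level to judge which fidelity is "appropriate".
# All functions below are hypothetical placeholders, not part of any real solver.

def build_model(fidelity):
    """Construct an electromagnetic model at the given fidelity level (placeholder)."""
    return {"fidelity": fidelity}

def solve(model):
    """Run the analysis; return (result, cost). Dummy numbers stand in for a real solve."""
    result = 1.0 - 1.0 / (model["fidelity"] + 1)   # pretend the quantity of interest converges
    cost = 10.0 ** model["fidelity"]               # pretend the cost grows rapidly with fidelity
    return result, cost

def result_change(new, old):
    """Relative change between successive results (placeholder benefit metric)."""
    return abs(new - old) / max(abs(old), 1e-12)

def analysis_driven_modeling(max_fidelity=5, tolerance=0.01, budget=1.0e4):
    prev_result = None
    for fidelity in range(1, max_fidelity + 1):
        result, cost = solve(build_model(fidelity))
        print(f"fidelity {fidelity}: result = {result:.4f}, cost = {cost:g}")
        if cost > budget:
            print("stopping: this fidelity level already exceeds the affordable cost")
            break
        if prev_result is not None and result_change(result, prev_result) < tolerance:
            print(f"fidelity {fidelity} looks appropriate: refining further changes the result by < {tolerance:.0%}")
            break
        prev_result = result

if __name__ == "__main__":
    analysis_driven_modeling()
```

In a real workflow, of course, each fidelity step can be orders of magnitude more expensive than the last and the comparison requires judgment about which quantities of interest matter; that tradeoff is precisely what the talk examines.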
The examples will highlight many of the challenges arising from this approach to modeling. An important one is that “the appropriate method” of analysis generally depends on the model, e.g., a method can outperform alternatives for low-fidelity models but underperform them for high-fidelity ones; indeed, inappropriate (but convenient) methods can not only inflate the cost side of the ledger but also deflate the benefit side, leading to misjudgment of the appropriate model fidelity. Thus, not surprisingly, the development of appropriate electromagnetic models is tightly linked to the development of appropriate computational methods (aka “if all you have is a hammer, everything looks like a nail”). Unfortunately, evaluating computational methods to find the appropriate one for a given model is surprisingly difficult, even for unbiased experts, because a method’s performance depends not just on the model but also on the computer, the software realization of the method, and the users/developers of the software. On the one hand, theoretical comparisons (e.g., of asymptotic complexities, error convergence rates, parallel scalability limits) are often incapable of factoring in the large impact of software and hardware infrastructure on the realized/observed performance of a computational method—a problem that has worsened since the traditional Dennard scaling of clock frequencies ended in the last decade. On the other hand, empirical comparisons are beset by the same problems that physical measurements face (including irreproducible and uncertain results), require many (potentially low-efficiency) computations, and suffer from the large number of alternative methods. I will discuss whether benchmark suites can improve the judicious use of computational methods for electromagnetic analysis and what the necessary ingredients for such benchmarks are.
Prof. Yilmaz is being hosted by Prof. Michielssen (EECS). If you would like to meet with him during his visit, please send an email to [email protected]. If you are an MICDE student or an EECS graduate student and would like to join Prof. Yilmaz for lunch, please RSVP here by October 8th.