Task-Based Parallelism: Portability, Scalability and Performance in the Exascale and GPU Era
by Dr. Berzins
Hybrid: Zoom & B240/R4301
Description: Task-based data-flow computing has been around for almost sixty years, during which time its adaptive execution approach has proven successful both for automated scalability and for achieving a high percentage of peak performance on appropriate applications. Recent task-based approaches, developed over a number of years in joint collaborative work between the University of Utah, Argonne National Laboratory, Oak Ridge National Laboratory, and the National Institute of Standards and Technology (NIST), have been shown to deliver this scalability on Exascale machines. For example, the Uintah code was able to run at almost the full scale of Aurora, with a grid that strong-scaled up to 10,240 nodes and 122,880 Intel Ponte Vecchio Xe-Stacks. At the same time, the NIST Hedgehog code has been extended to reach 80%-85% of peak performance for appropriate applications on the National Science Foundation's Vista machine at the Texas Advanced Computing Center. This joint work will be described and lessons learned discussed. Finally, the future use of such approaches for the hybrid architectures proposed for the second half of this decade will be considered.
Bio: Dr. Berzins arrived at the Scientific Computing and Imaging Institute (SCI) at the University of Utah from the University of Leeds in the UK, where he was Professor of Scientific Computing and Research Dean for Engineering. He has worked in the fields of mathematical software, numerical analysis, and parallel computing for engineering applications such as computational fluid dynamics, combustion, atmospheric modeling, and lubrication modeling. Much of his current research involves the use of the Uintah computational framework. More recently, he has worked to extend the NIST Hedgehog code using Uintah ideas while retaining its high peak performance.