Parallel Scaling of the Regionally-Implicit Discontinuous Galerkin Method with Quasi-Quadrature-Free Matrix Assembly

01/04/2021
by Pierson T. Guthrey, et al.

In this work we investigate the parallel scalability of the numerical method developed in Guthrey and Rossmanith [The regionally implicit discontinuous Galerkin method: Improving the stability of DG-FEM, SIAM J. Numer. Anal. (2019)]. We develop an implementation of the regionally-implicit discontinuous Galerkin (RIDG) method in DoGPack, an open-source C++ software package for discontinuous Galerkin methods. Specifically, we develop and test a hybrid OpenMP and MPI parallelized implementation of DoGPack with the goal of exploring the efficiency and scalability of RIDG in comparison to the popular strong stability-preserving Runge-Kutta discontinuous Galerkin (SSP-RKDG) method. We demonstrate that RIDG methods are able to hide the communication latency associated with distributed-memory parallelism, because almost all of the work in the method is localized to each element, producing a localized prediction for each region. We demonstrate the enhanced efficiency and scalability of the RIDG method, compare it to SSP-RKDG methods, and show its extensibility to very high-order schemes. The two-dimensional scaling study is performed on machines at the Institute for Cyber-Enabled Research at Michigan State University, using up to 1440 total cores on Intel Xeon Gold 6148 CPUs at 2.40 GHz. The three-dimensional scaling study is performed on Livermore Computing clusters at Lawrence Livermore National Laboratory, using up to 28672 total cores on Intel Xeon CLX-8276L CPUs with Omni-Path interconnects.
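To illustrate the latency-hiding idea mentioned in the abstract, the sketch below shows a generic overlap pattern for a hybrid MPI/OpenMP DG time step: nonblocking ghost-cell exchange is posted first, the element-local prediction work is threaded with OpenMP while messages are in flight, and only the correction step waits on the communication. This is a minimal illustration under our own assumptions, not DoGPack's actual code or the paper's implementation; the names predict_element, correct_element, and the data layout are hypothetical placeholders.

```cpp
// Hedged sketch of communication/computation overlap for a hybrid MPI+OpenMP
// DG step. NOT DoGPack code; all kernel names and data layouts are invented.
#include <mpi.h>
#include <omp.h>
#include <vector>

// Hypothetical element-local kernels; in a real solver these would hold the
// per-region implicit prediction solve and the DG flux correction.
void predict_element(std::vector<double>& q, int e) { (void)q; (void)e; }
void correct_element(std::vector<double>& q,
                     const std::vector<double>& ghost, int e) { (void)q; (void)ghost; (void)e; }

void advance_one_step(std::vector<double>& q,          // local solution coefficients
                      std::vector<double>& ghost_send, // boundary data for neighbors
                      std::vector<double>& ghost_recv, // incoming neighbor data
                      int left, int right,             // neighbor ranks (1D decomposition)
                      int num_local_elems)
{
    MPI_Request reqs[4];
    const int half = static_cast<int>(ghost_recv.size() / 2);

    // 1. Post nonblocking ghost exchange with the two neighboring ranks.
    MPI_Irecv(ghost_recv.data(),        half, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(ghost_recv.data() + half, half, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(ghost_send.data(),        half, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(ghost_send.data() + half, half, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[3]);

    // 2. Element-local prediction: the dominant cost in an RIDG-style method,
    //    overlapped with the in-flight communication and threaded with OpenMP.
    #pragma omp parallel for schedule(static)
    for (int e = 0; e < num_local_elems; ++e) {
        predict_element(q, e);
    }

    // 3. Only the correction step needs neighbor data, so wait here.
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

    #pragma omp parallel for schedule(static)
    for (int e = 0; e < num_local_elems; ++e) {
        correct_element(q, ghost_recv, e);
    }
}
```

The design point this pattern captures is the one the abstract makes: when the per-element prediction work is large relative to the ghost-data volume, the message transfer completes while the threads are busy, so the distributed-memory cost is effectively hidden.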
