About CESAR

CESAR Goals and Objectives

The Center for Exascale Simulation for Advanced Reactors (CESAR) focuses its co-design process on three classes of algorithms critical to the field of nuclear energy: deterministic neutron transport for applications with controlled heat generation; stochastic (Monte Carlo) neutron transport in the same regime; and incompressible computational fluid dynamics with heat transfer. Ultimately, CESAR aims to develop a coupled, next-generation nuclear reactor core simulation tool (TRIDENT) capable of efficient execution on exascale computing platforms, and appropriate for the design, licensing, and safety analysis of next-generation nuclear technology. In the co-design process, however, the main focus of CESAR is not the code itself, but the ongoing two-way collaboration with computer architects to both influence and adapt these algorithms to innovations in computing architectures that lie on the trajectory toward exascale computing.

TRIDENT will use as its starting point three codes with demonstrated performance at the petascale, whose algorithms are well understood in terms of modern processor architectures, inter-node scalability, I/O, and analytics requirements. This type of algorithmic maturity is considered essential to meeting the goals of the project, viz. to make progress in the face of likely dramatic changes in next-generation computing architectures and programming environments. These three foundational CESAR codes are UNIC, Nek, and OpenMC. The MOAB framework will serve as the starting point for the inter-code data transfer needed in coupled analysis.

As part of a co-design effort, TRIDENT will be developed through an innovative design process with an immediate focus on algorithmic prototyping, kernel extraction, and application modeling for testing/evaluation on candidate exascale platforms (see below). However, application codes have a broad range of regimes in which they can be executed; when extracting mini-apps and algorithmic kernels, it is essential to have metrics for isolating the parameter regime of interest (e.g., in terms of energy groups, tally regions, spatial resolution, etc.).

CESAR is thus based on canonical application drivers that will initially guide the development of mini-apps and algorithmic kernels:

  1. Modeling a complete reactor vessel with coupled, high-fidelity models for neutronics and thermal-hydraulics, especially for systems in natural convection conditions;
  2. Modeling coupled fuel depletion/neutronics at high spatial detail with quantified uncertainties.

Existing industry tools cannot resolve the detailed physics couplings needed to simulate these phenomena, and the use of reduced-order, empirically based models results in significant economic impact on reactor design, licensing, and operation.


The CESAR Team

The Argonne-led CESAR team is assembled to enable an end-to-end co-design process for exascale platforms while ensuring the impact of TRIDENT within the nuclear engineering community. To accomplish this diverse set of goals, the CESAR team brings together 1) top experts in algorithm development on advanced HPC architectures such as GPGPUs (ORNL, Rice University), 2) experts in performance modeling, performance analysis, and programming models (ANL, PNNL, LLNL), 3) exascale hardware designers (IBM), 4) industrial designers of both proven and innovative reactor concepts (GA, AREVA, TerraPower), and 5) the leading developers of high-end reactor research codes (Argonne, Texas A&M, MIT). All participants are critical to the CESAR process and ultimate goal: to develop and test innovative algorithmic and hardware tradeoffs in concert while developing an exascale-enabled reactor analysis tool.

TRIDENT Co-Design Strategy

TRIDENT development will be carried out as part of a highly innovative cycle that tightly couples algorithmic development, performance modeling, hardware simulation, and hardware design. The various forms of neutron transport and thermal hydraulics within CESAR are representative of more fundamental algorithm classes that exercise vastly different aspects of computer architectures. These include, for example:

  1. Branch-heavy codes characterized by poor cache behavior and bound by large sets of read-only data, but with tremendous inherent concurrency and likely good fault-tolerance characteristics;
  2. Unstructured-mesh PDE solvers with nearest-neighbor and global communication phases, good cache performance derived from small matrix-matrix products, and overall performance bounded by global communication (allreduce latency);
  3. Ray propagation methods (solution of ODEs along independent trajectories), characterized by more complex communication patterns but good cache performance for the computation of exponentials.
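As an illustration of the second class, the sketch below shows the tensor-product form typical of spectral-element operators, in which a small one-dimensional operator matrix is applied along one coordinate of an element so that the work reduces to dense small matrix-matrix products. This is a minimal, illustrative sketch only; the order P1, the function name apply_Dx, and the array layout are assumptions made for the example and are not taken from Nek.

    /* Apply a small (P1 x P1) operator D along the first coordinate of
     * one P1^3 spectral element.  Illustrative sketch only. */
    #define P1 8   /* points per direction (polynomial order + 1); assumed */

    /* u and du hold one element's nodal values, stored as u[k][j][i];
     * D is the small 1-D operator (e.g., a derivative matrix). */
    void apply_Dx(const double D[P1][P1],
                  const double u[P1][P1][P1],
                  double du[P1][P1][P1])
    {
        for (int k = 0; k < P1; ++k)
            for (int j = 0; j < P1; ++j)
                for (int i = 0; i < P1; ++i) {
                    double s = 0.0;
                    for (int m = 0; m < P1; ++m)
                        s += D[i][m] * u[k][j][m];   /* small dense mat-mat product */
                    du[k][j][i] = s;
                }
    }

In a full solver, a per-element kernel of this kind sits inside an element loop whose nearest-neighbor gather-scatter exchange and iterative-solver allreduce supply the communication phases noted above.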

Initially, working with computer architects via simpler mini-apps and kernels, the focus of CESAR is to answer a set of questions about the CESAR codes vis-à-vis the different options being pursued for next-generation architectures. For example, we expect to address questions such as: What memory hierarchy (cache sizes, direct-mapped or associative, ROM, etc.) is optimal for achieving good performance for each algorithm? What instruction support would be required for a thread to effectively decompose the problem on SIMD architectures? What network speeds and latencies are required to achieve exascale performance? It is likely that significant algorithmic restructuring will be required to achieve exascale levels of performance, and that ultimately CESAR application needs will drive key decisions in both the software and, potentially, the hardware of next-generation machines.
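To make the network question concrete, the following is a minimal sketch of the kind of back-of-the-envelope model used to relate allreduce latency to machine scale. The per-message latency alpha, the per-byte cost beta, and the rank counts are illustrative placeholders rather than parameters of any target machine, and the log-tree formula is only one of several standard allreduce cost models.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative placeholders only -- not measured machine parameters. */
        const double alpha  = 1.0e-6;   /* per-message latency, seconds         */
        const double beta   = 1.0e-10;  /* transfer time per byte, seconds/byte */
        const double nbytes = 8.0;      /* one double reduced per rank          */

        for (long p = 1024; p <= 1048576; p *= 4) {
            /* Log-tree estimate for an allreduce of n bytes on P ranks:
             * T ~ log2(P) * (alpha + n * beta). */
            double t = log2((double)p) * (alpha + nbytes * beta);
            printf("P = %8ld ranks: allreduce ~ %.2f microseconds\n", p, t * 1.0e6);
        }
        return 0;
    }

Such a model is crude, but it makes the question quantitative: for a given latency and scale, it estimates how much of a solver iteration's time budget the global reduction alone would consume.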