Dr. Douglass E. Post
Chief Scientist
United States Department of Defense High Performance Computing Modernization Program
"The Opportunities and Challenges of Multi-Scale Computational Science and Engineering—A Pragmatist’s View"
The next generation of computers will offer society unprecedented opportunities to solve problems of strategic importance in basic and applied science and engineering. Within ten years we will have access not only to multi-petaflop supercomputers, but also to multi-teraflop desktops and small clusters. Multi-scale applications present some of the greatest opportunities as well as some of the greatest challenges.
Developing multi-scale applications that exploit this exponential growth in computing power is the key challenge. Each application area generally has unique requirements, and developing each major application has taken large teams (10 to 30 staff) many years (10 or more). It is more difficult today: massive parallelization has produced highly complex computer platforms and has increased the challenge of developing applications. The challenges include integrating the effects of many complex, strongly interacting scientific phenomena across many orders of magnitude of time and distance scales; managing and coordinating multi-institutional and multi-disciplinary project teams; organizing the code development process while maintaining adequate flexibility and agility; meeting the requirements of both the laws of nature and groups of production users; ensuring adequate verification and validation; and developing and using software development tools for these complex platforms. Unfortunately, while there is much support for developing more powerful computers, there is little (or none) for addressing the code development challenges.
Multi-scale problems typically have distance (and time) scales with a dynamic range of a factor of 10^5 or more. For problems where increased spatial resolution is desirable, adaptive mesh refinement is often used. To obtain increased stability with large time steps, implicit techniques are often useful. However, implicit techniques generally require communication across the whole mesh at each time step, which is undesirable for massively parallel computers with limited bandwidth and high memory latency. Another approach is to express the small-scale physics with an approximate but rapid treatment that captures the main features of interest of the small-scale phenomena. Other approaches include identifying the key physics and "averaging" out the inessential features. Many successful approaches capture the "emergent" features of the small-scale effects (e.g., using hydrodynamics instead of molecular dynamics to calculate water flow in a river). Many multi-scale problems are NP-hard, meaning that a brute-force technique requires a level of computer power that grows exponentially with the size of the problem. In those cases, the use of emergent principles and focusing the calculation on the key element of interest is essential. There are few, if any, general methods for solving multi-scale problems when increased resolution doesn't work. Most successful techniques build on a deep understanding of the physics of the problem to capture the essential features in an economical algorithm.
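The stability trade-off mentioned above can be seen on a minimal model problem (a sketch, not from the talk): for dy/dt = -λy with a fast rate λ, explicit Euler is stable only when the time step satisfies h < 2/λ, while implicit Euler is stable for any h > 0, at the cost of a solve per step (which, on a distributed mesh, is where the global communication comes from). All numbers here are illustrative.

```python
# Model stiff equation dy/dt = -lam * y, exact solution y(t) = exp(-lam * t).
lam = 1000.0   # fast-scale rate (hypothetical)
h = 0.01       # time step, 5x the explicit stability limit 2/lam = 0.002

def explicit_euler(y, steps):
    # y_{n+1} = y_n + h * (-lam * y_n) = (1 - lam*h) * y_n;
    # diverges when |1 - lam*h| > 1, i.e. when h > 2/lam.
    for _ in range(steps):
        y = (1.0 - lam * h) * y
    return y

def implicit_euler(y, steps):
    # Solve y_{n+1} = y_n + h * (-lam * y_{n+1})  =>  y_{n+1} = y_n / (1 + lam*h);
    # stable for any h > 0, but each step requires a solve -- on a parallel
    # mesh this is the step that needs communication across the whole mesh.
    for _ in range(steps):
        y = y / (1.0 + lam * h)
    return y

print(abs(explicit_euler(1.0, 50)))  # grows without bound
print(abs(implicit_euler(1.0, 50)))  # decays toward the true solution, 0
```

The same dilemma drives the alternatives in the paragraph above: rather than paying for an implicit global solve, one replaces the fast small-scale dynamics with an averaged or emergent model so that the remaining equations are no longer stiff.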
BIO:
Douglass Post has been developing and applying large-scale multi-physics simulations and leading technical projects since 1967. He is the Chief Scientist of the DoD High Performance Computing Modernization Program, manager of the CREATE Program, and a member of the senior technical staff of the Carnegie Mellon University Software Engineering Institute. He is an Associate Editor-in-Chief of the joint AIP/IEEE publication "Computing in Science and Engineering". Doug received a Ph.D. in Physics from Stanford University in 1975. He led the tokamak modeling group at the Princeton University Plasma Physics Laboratory from 1975 to 1993 and served as head of the International Thermonuclear Experimental Reactor (ITER) Physics Project Unit (1988-1990) and head of the ITER Joint Central Team In-Vessel Physics Group (1993-1998). More recently, he was the A-X Associate Division Leader for Simulation at Lawrence Livermore National Laboratory (1998-2000) and the Deputy X-Division Leader for Simulation at Los Alamos National Laboratory (2001-2003). He has published over 120 refereed papers, 100 conference papers, and 10 book chapters on computational, experimental, and theoretical physics, with over 5200 citations. He is a Fellow of the American Physical Society, the American Nuclear Society, and the Institute of Electrical and Electronics Engineers.