CSE Distinguished Lecture Series in High Performance Computing
Department of Computer Science,
University of Illinois at Urbana-Champaign
Petascale and Multicore Programming Models: What Is Needed
The almost simultaneous emergence of multicore chips and petascale computers presents multidimensional challenges and opportunities for parallel programming. What kinds of programming models will prevail? What are some of the required and desired characteristics of such models? I will attempt to answer these questions. My answers are based in part on my experience with several applications, ranging from quantum chemistry and biomolecular simulations to the simulation of solid propellant rockets and computational astronomy.
First, the models need to be independent of the number of processors, allowing programmers to over-decompose the computation into logical pieces. Such models, including the 15-year-old Charm++, enable intelligent runtime optimizations. More importantly, they promote compositionality.
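To make the over-decomposition idea concrete, here is a minimal, illustrative C++ sketch (an analogy, not Charm++ itself); the Chunk type, the chunk count, and the small thread-pool scheduler are hypothetical stand-ins for the logical pieces and the adaptive runtime that maps them onto however many processors are available.

    // Illustrative sketch of over-decomposition: the program is written in
    // terms of many logical "chunks", and a tiny runtime maps them onto the
    // available hardware threads. Hypothetical names; not the Charm++ API.
    #include <algorithm>
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    struct Chunk {                 // one logical piece of the computation
        int id = 0;
        double partial = 0.0;
        void work() {              // hypothetical per-chunk kernel
            for (int i = 0; i < 1000; ++i)
                partial += id * 1e-3 + i * 1e-6;
        }
    };

    int main() {
        const int num_chunks = 256;   // many more chunks than physical cores
        const unsigned num_threads =
            std::max(1u, std::thread::hardware_concurrency());

        std::vector<Chunk> chunks(num_chunks);
        for (int i = 0; i < num_chunks; ++i) chunks[i].id = i;

        std::atomic<int> next{0};     // shared index: a trivial work queue
        auto worker = [&] {
            for (int c; (c = next.fetch_add(1)) < num_chunks; )
                chunks[c].work();     // the "runtime" decides the mapping
        };

        std::vector<std::thread> pool;
        for (unsigned t = 0; t < num_threads; ++t) pool.emplace_back(worker);
        for (auto& t : pool) t.join();

        double total = 0.0;
        for (const auto& c : chunks) total += c.partial;
        std::printf("chunks=%d threads=%u total=%f\n",
                    num_chunks, num_threads, total);
    }

Because the program is expressed in terms of chunks rather than processors, the same code runs unchanged on 4 or 400 threads, which is the property that lets a runtime system migrate and balance work behind the scenes.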
Second, building on this compositionality, one needs a collection of parallel programming languages/models, each incomplete by itself but capable of interoperating with the others. Third, many parallel applications can be "covered" by simple, deterministic mini-languages, which lead to programs that are easy to reason about and debug. These should be used in conjunction with more complex but complete languages.
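As a rough illustration of what a deterministic mini-language buys the programmer, the sketch below restricts the user to a pure per-element function combined by integer addition; map_reduce and its blocking scheme are hypothetical, but because the combine step is associative and there is no shared mutable state, the answer is the same no matter how the work is split across threads.

    // Sketch of a restricted, deterministic map-reduce interface.
    // Hypothetical names; shown only to illustrate determinism by construction.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    std::int64_t map_reduce(const std::vector<std::int64_t>& in,
                            std::int64_t (*f)(std::int64_t)) {
        const unsigned nthreads =
            std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::int64_t> partial(nthreads, 0);
        std::vector<std::thread> pool;
        const std::size_t block = (in.size() + nthreads - 1) / nthreads;

        for (unsigned t = 0; t < nthreads; ++t)
            pool.emplace_back([&, t] {
                const std::size_t lo = t * block;
                const std::size_t hi = std::min(in.size(), lo + block);
                for (std::size_t i = lo; i < hi; ++i)
                    partial[t] += f(in[i]);   // pure function, no shared state
            });
        for (auto& th : pool) th.join();

        // Integer addition is associative, so any split gives the same result.
        return std::accumulate(partial.begin(), partial.end(), std::int64_t{0});
    }

    int main() {
        std::vector<std::int64_t> v(1000000);
        std::iota(v.begin(), v.end(), 1);
        std::printf("%lld\n", (long long)
                    map_reduce(v, [](std::int64_t x) { return x * x; }));
    }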
Also, domain-specific frameworks and libraries, which encapsulate expertise and facilitate reuse of commonly needed functionality, should be developed and used whenever feasible. I will illustrate these answers with examples drawn from our CSE applications and from some relatively new programming notations we have been developing.
BIO:
Professor Laxmikant Kale has been working on various aspects of parallel computing, with a focus on enhancing performance and productivity via adaptive runtime systems, and with the belief that only interdisciplinary research involving multiple CSE and other applications can bring well-honed abstractions back into Computer Science that will have a long-term impact on the state of the art. His collaborations include the widely used, Gordon Bell Award-winning (SC2002) biomolecular simulation program NAMD, as well as work on computational cosmology, quantum chemistry, rocket simulation, space-time meshes, and other unstructured mesh applications. He takes pride in his group's success in distributing and supporting software embodying his research ideas, including Charm++, Adaptive MPI, and the ParFUM framework.