Data dependencies have strong connections with parallelism. The fundamental observation, going back (at least) 30 years, is that two code blocks with no (transitive) data dependencies between them can be executed in parallel, resulting in the same final state as running them sequentially. This observation has been the basis and precondition for sophisticated research on parallelizing compilers for many years. Unfortunately, this precondition is only rarely met: the candidate code blocks are often dependent, and even when they are not, the compiler's (static) dependence analysis is typically too conservative to prove independence, failing due to spurious dependencies.
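To make the observation concrete, here is a minimal, self-contained sketch (the class and method names are illustrative, not from the talk): blockA and blockB write to disjoint arrays, so running them on separate threads yields the same final state as running them one after the other.

```java
import java.util.concurrent.*;

// Illustrative sketch: blockA and blockB touch disjoint data, so they
// have no data dependencies and can safely run in parallel.
public class IndependentBlocks {
    static int[] a = new int[1000];
    static int[] b = new int[1000];

    static void blockA() { for (int i = 0; i < a.length; i++) a[i] = i * 2; }
    static void blockB() { for (int i = 0; i < b.length; i++) b[i] = i + 7; }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<?> fa = pool.submit(IndependentBlocks::blockA);
        Future<?> fb = pool.submit(IndependentBlocks::blockB);
        fa.get(); fb.get();                      // join both blocks
        pool.shutdown();
        System.out.println(a[10] + " " + b[10]); // 20 17, same as sequential
    }
}
```

If blockB instead read or wrote the array a, the two blocks would share a data dependence and this transformation would no longer be sound, which is precisely what a parallelizing compiler must prove before reordering.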
Tripp will propose a new view of program dependencies, utilizing accurate -- yet potentially partial -- dependence information to tune/specialize a baseline synchronization algorithm while preserving its correctness (i.e., its serializability guarantees). This can be done in more than one way, including (i) building specialized, client-specific conflict-detection oracles, (ii) synthesizing concurrency monitors that predict the available parallelism per input data and/or computation phase, and (iii) finding the true, semantic dependencies that limit parallelism. He will survey several techniques along these lines, all of which make safe use of dynamic (rather than static) dependence information, backed by user-provided data abstractions, to achieve precise dependence analysis.
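As a rough illustration of point (i), the sketch below shows one way a client-specific conflict-detection oracle could be structured; all names and the read/write-set representation are assumptions for the example, not details from the talk. Two tasks are reported as conflicting only if their dynamically observed access sets actually overlap, which is where dynamic information can be less conservative than a static analysis.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: per-task logs of dynamically observed accesses.
class AccessLog {
    final Set<Object> reads = new HashSet<>();
    final Set<Object> writes = new HashSet<>();
    void onRead(Object location)  { reads.add(location); }
    void onWrite(Object location) { writes.add(location); }
}

class ConflictOracle {
    /** True iff the tasks exhibit a write-write or read-write overlap. */
    static boolean conflict(AccessLog t1, AccessLog t2) {
        return overlaps(t1.writes, t2.writes)
            || overlaps(t1.writes, t2.reads)
            || overlaps(t2.writes, t1.reads);
    }

    private static boolean overlaps(Set<Object> x, Set<Object> y) {
        for (Object o : x) if (y.contains(o)) return true;
        return false;
    }
}
```

A user-provided data abstraction could refine this further, e.g. by logging semantic operations (such as set insertions that commute) instead of raw memory locations, so that fewer spurious conflicts are reported.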