Making Effective Use of (Partial) Data Dependencies for Parallelization


Event Details
  • Date/Time: Thursday, January 10, 2013 - Friday, January 11, 2013, 1:00 pm - 1:59 pm
  • Location: KACB 1116W
  • Fee(s): N/A
Contact

Dr. Mayur Naik mayur.naik@cc.gatech.edu 404-385-4746

Summaries

Summary Sentence: Parallelism Talk - Omer Tripp, Tel-Aviv University

Full Summary: Omer Tripp is a graduate student at Tel-Aviv University, soon to complete his studies under the supervision of Prof. Mooly Sagiv. Omer has also been working for IBM for the last five years, and was recently named an IBM Master Inventor for his extensive and prolific innovation and mentoring work. Omer's research work -- published at leading conferences and journals including POPL, PLDI, OOPSLA and TOSEM -- has focused on two main areas: (i) program analysis for security and language-based security, and (ii) automatic and interactive software parallelization.

(http://www.cs.tau.ac.il/~omertrip/)

Data dependencies have strong connections with parallelism. The fundamental observation, dating back at least 30 years, is that two code blocks with no (transitive) data dependencies between them can be executed in parallel, resulting in the same final state as running them sequentially. This has been the basis and precondition for sophisticated research on parallelizing compilers for many years. Unfortunately, this precondition is met only in rare cases: the candidate code blocks are often dependent, and even when they are not, the compiler's (static) dependence analysis is typically too conservative to prove independence, failing due to spurious dependencies.
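As a minimal illustration of this observation (the functions and state below are hypothetical, not taken from the talk): two code blocks that touch disjoint data have no dependencies between them, so running them concurrently reaches the same final state as running them one after the other.

```python
from threading import Thread

# Two "code blocks" with disjoint read/write sets: block_a touches
# only "x", block_b touches only "y", so there are no data
# dependencies between them.
def block_a(state):
    state["x"] = sum(range(100))       # reads/writes only "x"

def block_b(state):
    state["y"] = 2 * max(range(100))   # reads/writes only "y"

def run_sequential():
    state = {}
    block_a(state)
    block_b(state)
    return state

def run_parallel():
    # Independence guarantees the parallel schedule cannot produce a
    # different final state than the sequential one.
    state = {}
    ta = Thread(target=block_a, args=(state,))
    tb = Thread(target=block_b, args=(state,))
    ta.start(); tb.start()
    ta.join(); tb.join()
    return state
```

A static analysis that cannot prove the two footprints disjoint would report a spurious dependence here and forgo the parallel schedule, which is exactly the conservatism the paragraph above describes.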
Tripp will propose a new view of program dependencies, utilizing accurate -- yet potentially partial -- dependence information to tune and specialize a baseline synchronization algorithm while preserving its correctness (i.e., its serializability guarantees). This can be done in more than one way, including (i) building specialized, client-specific conflict-detection oracles, (ii) synthesizing concurrency monitors that predict the available parallelism per input data and/or computation phase, and (iii) finding true, semantic dependencies that limit parallelism. He will survey several techniques for leveraging dependence information along these lines; these techniques make safe use of dynamic (rather than static) dependencies, backed by user-provided data abstractions, for precise dependence analysis.
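One way to read point (i) is as a dynamic commutativity check: each operation reports an abstract read/write footprint (the user-provided data abstraction), and the oracle flags a conflict only when footprints actually overlap at run time. The sketch below is a hypothetical illustration of that idea, not Tripp's actual system; all names are invented.

```python
# Minimal sketch of a client-specific conflict-detection oracle
# (hypothetical; illustrates the idea, not the talk's implementation).
class Footprint:
    """Abstract read/write sets reported by one operation."""
    def __init__(self, reads=(), writes=()):
        self.reads = frozenset(reads)
        self.writes = frozenset(writes)

def conflicts(a, b):
    # Two operations conflict iff one writes something the other
    # reads or writes; otherwise they commute and may run in parallel.
    return bool(a.writes & (b.reads | b.writes)) or \
           bool(b.writes & a.reads)

# Example: updates to distinct map keys report disjoint footprints,
# so the oracle permits them to run concurrently; a read of a key
# being written is a true dependence and must be synchronized.
put_k1 = Footprint(writes={"k1"})
put_k2 = Footprint(writes={"k2"})
get_k1 = Footprint(reads={"k1"})
```

Because the footprints are observed dynamically rather than approximated statically, the oracle avoids the spurious dependencies that make static analyses overly conservative, at the cost of a run-time check.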

Additional Information

In Campus Calendar
No
Groups

College of Computing

Invited Audience
No audiences were selected.
Categories
Seminar/Lecture/Colloquium
Keywords
No keywords were submitted.
Status
  • Created By: Antonette Benford
  • Workflow Status: Published
  • Created On: Jan 9, 2013 - 10:17am
  • Last Updated: Oct 7, 2016 - 10:01pm