Title: Addressing Logical Deadlocks through Task-Parallel Language Design
Caleb Voss
Ph.D. student
School of Computer Science
Georgia Institute of Technology
Date: Wednesday, April 29, 2020
Time: 11:00 am - 1:00 pm
Location: https://bluejeans.com/cvoss9
Committee:
Dr. Vivek Sarkar (advisor), School of Computer Science, Georgia Institute of Technology
Dr. Alessandro Orso, School of Computer Science, Georgia Institute of Technology
Dr. David Devecsery, School of Computer Science, Georgia Institute of Technology
Dr. Qirun Zhang, School of Computer Science, Georgia Institute of Technology
Dr. Tiago Cogumbreiro, College of Science and Mathematics, University of Massachusetts Boston
Abstract:
Task-parallel programming languages offer a variety of high-level synchronization mechanisms that trade off flexibility against deadlock safety. Fork-join approaches, such as spawn-sync and async-finish, are deadlock-free by construction but support only limited synchronization patterns. More powerful approaches, such as the promise, make deadlock trivially easy to introduce. If high-level task-parallel programming is to succeed low-level concurrent programming, its language features must offer both the flexibility to avoid over-synchronization and sufficient protection against logical deadlock bugs. Lack of flexibility leads to code that does not exploit the full parallelism available in the computation. Lack of deadlock protection leads to error-prone code in which a single bug can involve arbitrarily many tasks and is therefore difficult to reason about. We advance both the flexibility and the deadlock protection of existing task-parallel synchronization mechanisms by carefully designing dynamically verifiable usage policies.
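To illustrate why promises are so easy to deadlock, here is a minimal sketch in Python, used as an assumed stand-in for a task-parallel language, with concurrent.futures.Future playing the role of a promise. Two tasks each await the other's promise before fulfilling their own, forming a cyclic wait; timeouts stand in for the resulting hang so the sketch terminates.

```python
from concurrent.futures import Future, ThreadPoolExecutor, TimeoutError

p1, p2 = Future(), Future()  # promises: write-once, awaitable by any task

def task_a():
    v = p2.result(timeout=0.5)  # await task_b's promise (never fulfilled)
    p1.set_result(v + 1)

def task_b():
    v = p1.result(timeout=0.5)  # await task_a's promise: a cyclic wait
    p2.set_result(v + 1)

deadlocked = False
with ThreadPoolExecutor(max_workers=2) as pool:
    fa, fb = pool.submit(task_a), pool.submit(task_b)
    try:
        fa.result()
        fb.result()
    except TimeoutError:
        deadlocked = True  # the timeout surfaces the cyclic wait

print("cyclic wait detected:", deadlocked)  # -> cyclic wait detected: True
```

Note that no single task is at fault here: the bug lives in the cycle spanning both tasks, which is exactly why such deadlocks are hard to reason about and why a verifiable usage policy is attractive.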
We first define a policy for futures that maximally relaxes a known deadlock-freedom policy while remaining correct. Our policy admits an additional class of deadlock-free programs and incurs less run-time verification overhead than past work.
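For contrast with the promise hazard above, the following sketch (again Python as an assumed stand-in) shows the conservative fork-join-like discipline that such future policies build on: each task awaits only the futures of tasks it spawned itself, so the wait graph stays a tree and a cycle cannot form. The relaxed policy described here admits strictly more programs than this pattern; the sketch shows only the baseline.

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    # plain sequential helper executed inside each spawned task
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor(max_workers=4) as pool:
    # the parent spawns child tasks and later awaits only its own
    # children's futures, keeping the wait graph acyclic
    futures = [pool.submit(fib, k) for k in range(10)]
    results = [f.result() for f in futures]

print(results)  # -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```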
We also introduce a deadlock-freedom policy for promises that, rather than precisely detecting cycles, raises an alarm on disorganized point-to-point synchronization between trees of tasks. To establish both safety and flexibility, we prove that this approximation identifies every deadlock, and that any deadlock-free program can be made to comply with the policy, without loss of parallelism, through a novel language feature: the guard block.
Finally, we identify a lack of flexibility in an existing deadlock-freedom policy for the powerful phaser construct. As the remaining dissertation work, we propose to relax that policy and to introduce the concept of subphasers. By organizing phasers into trees, we can eliminate some of the over-synchronization and anti-modularity that arise in phaser programs while still enjoying efficiently verifiable deadlock-freedom.