Optimistic Semantic Synchronization
Jaswanth Sreeram
School of Computer Science
College of Computing
Georgia Institute of Technology
Committee:
Santosh Pande (Advisor, College of Computing, Georgia Tech)
Hyesoon Kim (College of Computing, Georgia Tech)
Karsten Schwan (College of Computing, Georgia Tech)
Sudhakar Yalamanchili (School of Electrical and Computer Engineering, Georgia Tech)
Summary
Within the last decade multi-core processors have become increasingly commonplace, with the power and performance demands of modern real-world programs accelerating this trend. The rapid advancement and adoption of such architectures mean that there is a serious need for programming models that allow the development of correct parallel programs that execute efficiently on these processors. A principal problem in this regard is that of efficiently synchronizing concurrent accesses to shared memory. Traditional solutions to this problem are either easy to program but inefficient (coarse-grained locks) or efficient but non-composable and very hard to program and verify (fine-grained locks). Transactional Memory systems, modeled on database transactions, are being proposed as a solution for achieving thread synchronization in parallel applications. While optimistic Transactional Memory systems provide many of the composability and programmability advantages of coarse-grained locks along with good theoretical scaling, several studies have found that their performance in practice remains quite poor for many programs. Moreover, because they are modeled on database transactions, current transactional memory models remain rigid: they are not suited for expressing some of the complex thread interactions that are prevalent in modern parallel programs. Finally, the synchronization achieved by these transactional memory systems is at the physical or memory level.
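To make the tradeoff concrete, the sketch below (an illustration added here, not material from the proposal) contrasts a coarse-grained lock, which is simple to reason about but serializes every access, with per-bucket fine-grained locks, which scale better but push lock ordering, deadlock avoidance, and composition onto the programmer. The hash-table types and the put operation are hypothetical, and the closing comment notes how an optimistic TM would handle the same code by tracking the memory words each transaction touches.

    // Sketch (not from the proposal): coarse-grained vs. fine-grained locking
    // for a shared hash table. All names and sizes are illustrative; keys are
    // assumed non-negative.
    #include <list>
    #include <mutex>
    #include <utility>

    struct CoarseTable {
        std::mutex lock;                                   // one lock for the whole table
        std::list<std::pair<int, int>> buckets[256];

        void put(int key, int value) {
            std::lock_guard<std::mutex> g(lock);           // trivial to use and to compose,
            buckets[key % 256].push_back({key, value});    // but serializes every writer
        }
    };

    struct FineTable {
        static const int N = 256;
        std::mutex locks[N];                               // one lock per bucket
        std::list<std::pair<int, int>> buckets[N];

        void put(int key, int value) {
            std::lock_guard<std::mutex> g(locks[key % N]); // scales far better, but lock
            buckets[key % N].push_back({key, value});      // ordering and composition now
        }                                                  // fall on the programmer
    };

    // With an optimistic TM (for example the __transaction_atomic blocks of GCC's
    // experimental -fgnu-tm extension), put() could simply be wrapped in an atomic
    // block; the runtime then detects conflicts on the raw memory words each
    // transaction reads and writes, i.e. at the physical or memory level.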
This thesis proposal advocates the position that the memory synchronization problem for threads should be modeled and solved in terms of synchronizing the underlying program values, which have semantics associated with them, and it presents optimistic synchronization techniques that address these semantic synchronization requirements.
These techniques range from methods that enable optimistic transactions to recover from expensive sharing conflicts without discarding all of the work made possible by the optimism, to mechanisms for enforcing finer-grained consistency rules than traditional optimistic TM models allow, thereby avoiding conflicts that do not enforce any semantic property required by the program. In addition to improving the expressiveness of specific synchronization idioms, these techniques are also effective in improving parallel performance. This thesis discusses these techniques in terms of their purpose and the extensions to the language, the compiler, and the concurrency-control runtime necessary to implement them. It also presents an experimental evaluation of each technique on a variety of modern parallel workloads. These experiments show that the techniques significantly improve parallel performance and scalability over programs that use state-of-the-art optimistic synchronization methods.
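One way to picture the finer-grained consistency rules mentioned above (a sketch under assumed names, not the proposal's actual mechanism) is to define conflicts over operations and their arguments rather than over raw memory addresses: two insertions of different keys into a set overlap in memory (shared buckets, a size counter) and would abort under word-level conflict detection, yet they commute semantically and could both be allowed to commit. The Op type and the semanticallyConflicts rule below are hypothetical.

    // Sketch (hypothetical API, not the thesis's implementation): detecting
    // conflicts over set operations and their arguments instead of over the
    // memory addresses those operations happen to touch.
    #include <iostream>
    #include <string>

    struct Op {
        std::string name;   // "insert", "remove" or "contains"
        int key;            // the argument the semantic rule may inspect
    };

    // Semantic rule for a set: two operations conflict only if they involve the
    // same key and at least one of them mutates the set. insert(3) and insert(7)
    // commute, so a word-level conflict between them (same bucket, shared size
    // counter) carries no semantic meaning and can be ignored.
    bool semanticallyConflicts(const Op& a, const Op& b) {
        if (a.key != b.key) return false;
        bool aMutates = (a.name != "contains");
        bool bMutates = (b.name != "contains");
        return aMutates || bMutates;
    }

    int main() {
        Op pairs[][2] = {
            {{"insert", 3},   {"insert", 7}},    // different keys: commute
            {{"insert", 3},   {"remove", 3}},    // same key, both mutate: conflict
            {{"contains", 5}, {"contains", 5}},  // reads always commute
        };
        for (auto& p : pairs)
            std::cout << p[0].name << "(" << p[0].key << ") vs "
                      << p[1].name << "(" << p[1].key << "): "
                      << (semanticallyConflicts(p[0], p[1]) ? "conflict" : "commute")
                      << "\n";
        return 0;
    }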