PhD Proposal by Sanidhya Kashyap


Event Details
  • Date/Time:
    • Wednesday, December 5, 2018
      10:00 am - 12:00 pm (EST)
  • Location: KACB 3100
Summaries

Summary Sentence: Scaling Synchronization Primitives

Title: Scaling Synchronization Primitives

 

Sanidhya Kashyap

Ph.D. student

School of Computer Science

College of Computing

Georgia Institute of Technology

https://gts3.org/~sanidhya/

 

Date:        Wednesday, December 5th, 2018

Time:       10AM to 12PM (EST)

Location: KACB 3100

 

Committee:

 

Dr. Taesoo Kim (Advisor, School of Computer Science, Georgia Institute of Technology)

Dr. Ada Gavrilovska (School of Computer Science, Georgia Institute of Technology)

Dr. Changwoo Min (The Bradley Department of Electrical and Computer Engineering, Virginia Tech)

 

 

Abstract:

 

For the past decade, we have seen tremendous growth in the adoption and commoditization of multicore machines that have up to 500 hardware threads. Thus, a fundamental question arises: how efficient are the existing synchronization primitives---timestamping and locking---that developers use to design concurrent, scalable, and performant applications? In this thesis, I focus on understanding the performance of these primitives and on devising new algorithms and approaches that improve it. As part of this thesis, I develop a scalable ordering primitive that overcomes the timestamping overhead in timestamp-based concurrent algorithms. In my second line of work, I venture into the realm of locking primitives. In particular, we first analyze and understand the multicore scalability of file systems and identify their scalability bottlenecks. We find that some of these bottlenecks require redesigning the file system, while others always contend on blocking locks, regardless of the design decision. We design and implement two new blocking locks that efficiently use the task scheduler to scale in both under- and over-subscribed scenarios. We further observe that the task scheduler also affects the scalability of applications in virtualized scenarios, as it leads to the convoy effect and the double scheduling problem. We mitigate these issues by bridging the missing scheduling information between the hypervisor and VMs with almost no overhead. Finally, I look at two extreme forms of lock design: shared memory and message passing. In the first part, I explore the set of design principles and constraints that govern the scalability of shared-memory locking primitives. Later, I examine the notion of reader parallelism in message-passing-based lock design, which allows a writer to exploit higher cache locality on a single core while readers exploit hardware parallelism.
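
The two C sketches below are meant only to make the abstract's terminology concrete; they are illustrative examples written for this announcement, not the primitives proposed in the thesis.

The first sketch contrasts the two timestamping strategies the abstract alludes to: a globally shared atomic counter, whose single cache line serializes every core, versus a decentralized read of the per-core hardware clock, which avoids shared writes but is a valid ordering source only if the per-core clocks are (or can be made) synchronized. The names ts_atomic and ts_clock and the use of x86 __rdtsc are assumptions of this sketch.

/*
 * Illustrative sketch only (not the ordering primitive proposed in the
 * thesis): two ways of generating timestamps for concurrent algorithms.
 *
 *   ts_atomic() -- a single shared counter; every call contends on one
 *                  cache line, which serializes all cores.
 *   ts_clock()  -- reads the per-core timestamp counter (x86 rdtsc); no
 *                  shared writes, so it scales, but timestamps taken on
 *                  different cores are comparable only if the clocks are
 *                  synchronized -- the gap a scalable ordering primitive
 *                  has to close.
 *
 * Compile with: gcc -O2 -o ts ts.c   (x86-64, GCC or Clang)
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static atomic_uint_fast64_t global_counter;

/* Centralized timestamp: totally ordered, but a global serialization point. */
static inline uint64_t ts_atomic(void)
{
    return atomic_fetch_add_explicit(&global_counter, 1,
                                     memory_order_relaxed) + 1;
}

/* Decentralized timestamp: no shared writes; assumes an invariant TSC. */
static inline uint64_t ts_clock(void)
{
    return __rdtsc();
}

int main(void)
{
    printf("atomic timestamps: %llu, %llu\n",
           (unsigned long long)ts_atomic(), (unsigned long long)ts_atomic());
    printf("clock  timestamps: %llu, %llu\n",
           (unsigned long long)ts_clock(), (unsigned long long)ts_clock());
    return 0;
}

The second sketch shows the general shape of a blocking lock that cooperates with the task scheduler, in the spirit of the under-/over-subscription discussion above: a waiter first spins briefly and then parks in the kernel via Linux futexes. The sblock type, the spin budget, and the futex wrapper are hypothetical names introduced here, not part of the proposed locks.

/*
 * Illustrative sketch only (not one of the two blocking locks designed in
 * the thesis): a minimal spin-then-park mutex that leans on the task
 * scheduler.  A waiter spins briefly (cheap when cores are under-subscribed)
 * and then parks in the kernel via futex (avoids wasting CPU when the
 * machine is over-subscribed).  Linux-specific; names and the spin budget
 * are arbitrary choices of this sketch.
 *
 * Compile with: gcc -O2 -pthread -o sblock sblock.c
 */
#define _GNU_SOURCE
#include <linux/futex.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SPIN_BUDGET 1000            /* illustrative, not tuned */

/* 0 = free, 1 = locked, 2 = locked with (possible) waiters */
struct sblock { atomic_int state; };

static long futex(atomic_int *addr, int op, int val)
{
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

static void sblock_lock(struct sblock *l)
{
    int expected = 0;

    /* Fast path: uncontended acquire. */
    if (atomic_compare_exchange_strong(&l->state, &expected, 1))
        return;

    /* Spin phase: the holder may release soon, so stay on the CPU. */
    for (int i = 0; i < SPIN_BUDGET; i++) {
        expected = 0;
        if (atomic_compare_exchange_strong(&l->state, &expected, 1))
            return;
    }

    /* Park phase: mark the lock contended and sleep until woken. */
    while (atomic_exchange(&l->state, 2) != 0)
        futex(&l->state, FUTEX_WAIT_PRIVATE, 2);
}

static void sblock_unlock(struct sblock *l)
{
    /* If anyone may be parked, ask the kernel to wake one waiter. */
    if (atomic_exchange(&l->state, 0) == 2)
        futex(&l->state, FUTEX_WAKE_PRIVATE, 1);
}

/* Tiny demo: four threads increment a shared counter under the lock. */
static struct sblock lock;
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sblock_lock(&lock);
        counter++;
        sblock_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);

    printf("counter = %ld (expected %d)\n", counter, 4 * 100000);
    return 0;
}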

 

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD proposal