PhD Proposal by Sana Damani


Event Details
  • Date/Time: Wednesday, December 1, 2021, 1:00 pm - 3:00 pm
  • Location: Atlanta, GA; REMOTE
  • URL: BlueJeans

Title: Instruction Reordering and Work Scheduling for Thread-Parallel Architectures

 

Sana Damani

Ph.D. Student

School of Computer Science       

Georgia Institute of Technology

 

Date: Wednesday, December 1, 2021

Time: 11:00 AM - 1:00 PM EST

Location (remote via BlueJeans): https://gatech.bluejeans.com/sdamani6

 

Committee:

Dr. Vivek Sarkar (Advisor), School of Computer Science, Georgia Institute of Technology

Dr. Hyesoon Kim, School of Computer Science, Georgia Institute of Technology

Dr. Tom Conte, School of Computer Science, Georgia Institute of Technology

Dr. Santosh Pande, School of Computer Science, Georgia Institute of Technology

 

Abstract:

While accelerators such as GPUs and near-memory processors deliver significant performance improvements for applications with high data parallelism and regular memory accesses, they incur synchronization and memory-access overheads in applications with irregular control flow and memory access patterns, resulting in reduced efficiency. Examples include graph applications, Monte Carlo simulations, ray tracing applications, and sparse matrix computations. This proposal identifies inefficiencies in executing irregular programs on thread-parallel architectures and recommends compiler transformations and architecture enhancements to address them. In particular, we describe instruction reordering and thread scheduling techniques that avoid serialization, reduce pipeline stalls, and minimize redundant thread migrations, thereby reducing overall program latency and improving processor utilization.
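To see why irregular control flow hurts these architectures, consider SIMT execution on a GPU: when lanes of a warp disagree at a branch, the warp executes each path serially with inactive lanes masked off. The following is a minimal toy model in Python (an illustration only; the function names and cost model are ours, not the proposal's), showing how divergence halves lane utilization:

```python
# Toy SIMT model: a warp of lanes executes in lockstep.
# On a divergent branch, the warp runs each path serially under a
# lane mask, so warp cycles = cycles(then path) + cycles(else path).

def simt_cycles(lane_conditions, then_cost, else_cost):
    """Cycles a warp spends on an if/else, given each lane's branch outcome."""
    any_then = any(lane_conditions)
    any_else = not all(lane_conditions)
    return (then_cost if any_then else 0) + (else_cost if any_else else 0)

def simt_efficiency(lane_conditions, then_cost, else_cost):
    """Useful lane-cycles divided by issued lane-cycles."""
    n = len(lane_conditions)
    taken = sum(lane_conditions)
    useful = taken * then_cost + (n - taken) * else_cost
    issued = n * simt_cycles(lane_conditions, then_cost, else_cost)
    return useful / issued

# Uniform warp: all 32 lanes take the same path -> full efficiency.
print(simt_efficiency([True] * 32, 10, 10))                 # 1.0
# Divergent warp: 16 lanes each way -> both paths execute serially,
# and half the lanes idle in each -> 50% efficiency.
print(simt_efficiency([True] * 16 + [False] * 16, 10, 10))  # 0.5
```

The techniques in this proposal attack exactly this gap: either by restructuring the program so fewer instructions execute under divergence, or by scheduling hardware resources so masked-off lanes waste fewer cycles.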

 

Contributions:

  1. Common Subexpression Convergence, a compiler transformation that identifies and removes redundant code in divergent regions of GPU programs.
  2. Speculative Reconvergence, a compiler transformation that identifies new thread reconvergence points in divergent GPU programs to improve SIMT efficiency.
  3. Subwarp Interleaving, an architecture feature that schedules threads at a subwarp granularity on GPUs to reduce pipeline stalls in divergent regions of the program.
  4. Memory Access Scheduling, a software instruction scheduling approach that groups together co-located memory accesses to minimize thread migrations on migratory-thread architectures.
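As a rough illustration of the first contribution, common subexpression convergence hoists computation that both sides of a divergent branch perform, so it executes once with all lanes active rather than twice under divergence. The sketch below is a hand-worked Python analogue (the actual transformation operates on GPU compiler IR; `expensive` and the kernel names are placeholders of ours):

```python
def expensive(x):
    # Stand-in for a costly subexpression duplicated on both paths.
    return x * x

# Before: both branch paths compute expensive(x); under SIMT divergence
# the warp executes it twice, once per path, with lanes masked off.
def kernel_before(x, cond):
    if cond:
        y = expensive(x) + 1
    else:
        y = expensive(x) - 1
    return y

# After common subexpression convergence: the shared computation is
# hoisted out of the divergent region and runs once, fully converged;
# only the cheap +1/-1 remains divergent.
def kernel_after(x, cond):
    t = expensive(x)              # executed once, all lanes active
    y = t + 1 if cond else t - 1
    return y

assert kernel_before(3, True) == kernel_after(3, True) == 10
assert kernel_before(3, False) == kernel_after(3, False) == 8
```

The other contributions target the remaining divergent and irregular portions: speculative reconvergence moves the point where lanes rejoin, subwarp interleaving lets the hardware overlap the serialized paths, and memory access scheduling reorders accesses to reduce migrations.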

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD proposal
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Nov 29, 2021 - 3:12pm
  • Last Updated: Nov 29, 2021 - 3:12pm