Alexandros Daglis Finds New Beginnings in the End of Moore’s Law


Contact

Tess Malone, Communications Officer

tess.malone@cc.gatech.edu

Summaries

Summary Sentence:

Alexandros Daglis joined SCS in January.


Media
  • Alex Daglis (image/jpeg)

The number of transistors on a computer chip is no longer expected to double every two years as Moore’s law winds down. Most computer scientists see this as a problem, but new School of Computer Science Assistant Professor Alexandros Daglis thinks it’s an opportunity.

“There is a shifting balance of computing resources,” he said. “The advancements in raw computing power are tapering off, so networks have the time to catch up.”

The possibilities of hardware

Daglis has always been fascinated by what can make hardware faster. Although he got into computer science by programming video games as a teenager, he quickly discovered the appeal of hardware during his undergraduate years at the National Technical University of Athens.

“Caching and locality, they just felt so natural to me,” he said. “This is why computers work. The fundamental techniques that make computers so fast really piqued my interest.”

His undergraduate thesis explored how to improve caching. Yet during his Ph.D. at École polytechnique fédérale de Lausanne (EPFL), Daglis realized there were bigger problems to tackle.

Catching up with Moore

Under Moore’s law, central processing units (CPUs) kept getting faster while networks lagged behind. As that paradigm shifts, networks can finally close the gap. According to Daglis, now is the time to lower latency, the delay in moving data across the network, and to increase bandwidth.

As a computer architect, Daglis wants to rethink the fundamentals of how communication-intensive systems, like social network applications running in datacenters, function. Users want to retrieve a small amount of data, such as a message or a friend request, fast. Yet the existing network isn’t set up to serve these requests efficiently, according to Daglis.

“Fundamental latency bounds are catching up: we’re getting to speed-of-light data propagation within a datacenter’s internal network soon,” Daglis said. “But the way we build systems precludes leveraging the full potential of these faster networks. Network protocols are too slow for them, and the long-established interfaces our computing resources rely on to tap into the network are too slow as well.”

So Daglis wants to create a new paradigm: co-designing hardware and networks. As hardware needs to evolve and networks improve, it’s the perfect time to revisit system design.

“Networking’s legacy is blocking us from unleashing the true power of modern networks,” Daglis said.

Daglis believes moving higher-level operations closer to the CPU’s network endpoint is one effective way to better leverage growing network capabilities. For example, software predominantly handles decisions that balance incoming network messages across a server CPU’s many cores, but enabling the network endpoint to make these decisions can yield significant latency gains. This means transitioning from traditional CPU-centric computing to network- and memory-centric computing, which could have impacts across software, systems architecture, and algorithms.
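As a rough illustration of that idea, consider the toy Python sketch below. It is not drawn from Daglis’s own systems; the function names, the least-loaded and hash-based policies, and the message format are all assumptions made for illustration. It contrasts a software dispatcher core that inspects every incoming message before handing it to a worker core with a network endpoint that steers each message straight into a per-core queue.

    # Illustrative sketch only; not Daglis's design. Contrasts software-based
    # dispatch of incoming messages with NIC-side steering to per-core queues.

    def software_dispatch(messages, num_cores=8):
        # A dispatcher running on one core examines each message and picks the
        # least-loaded worker core; every request pays this extra software hop.
        queues = [[] for _ in range(num_cores)]
        for msg in messages:
            target = min(range(num_cores), key=lambda c: len(queues[c]))
            queues[target].append(msg)
        return queues

    def nic_steering(messages, num_cores=8):
        # The network endpoint itself picks a core (here by hashing a key in
        # the message), so requests land directly in per-core queues without
        # passing through a software dispatcher first.
        queues = [[] for _ in range(num_cores)]
        for msg in messages:
            target = hash(msg["key"]) % num_cores
            queues[target].append(msg)
        return queues

    if __name__ == "__main__":
        msgs = [{"key": f"user{i}", "payload": "hello"} for i in range(32)]
        print([len(q) for q in software_dispatch(msgs)])
        print([len(q) for q in nic_steering(msgs)])

In the second version the software dispatcher hop disappears from every request’s path, which is the kind of latency saving Daglis describes when decisions move to the network endpoint.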

For now, though, Daglis is taking a step back and focusing on how he can drastically improve the performance of communication-intensive systems. To do this, he plans to leverage relevant new technologies that are becoming commercially available, such as “smart” programmable network interface controllers and switches. Yet his goals are still ambitious.

“It’s interesting to explore the extent of immediate performance gains we can achieve by properly leveraging new commercial system components. However, it’s important to think about what we can do in the longer term that is not just incremental, but fundamentally different from existing computing systems, by using pieces of increasingly heterogeneous hardware resources for computation and networking,” he said. “My vision of co-designing the two will enable a much broader portfolio of functionality.”

Additional Information

Groups

College of Computing, School of Computer Science

Status
  • Created By: Tess Malone
  • Workflow Status: Published
  • Created On: Apr 1, 2019 - 5:02pm
  • Last Updated: Apr 1, 2019 - 5:02pm