Atlanta, GA | Posted: April 21, 2021
A new way of attacking a computer’s data storage cache is the fastest of its kind and may lead to stronger cybersecurity defenses. Known as Streamline, the new cache attack technique was developed by Georgia Tech researchers. It is more than three times faster than all other covert channel attacks and, after more than a decade of research in this area, is the first attack to exceed 1 MB/s.
This is the second cache attack paper from School of Computer Science Professor Moin Qureshi’s group, which has been working on secure cache architectures for the past three years.
“It helps to think like an attacker,” said School of Electrical and Computer Engineering Ph.D. student Gururaj Saileshwar, the lead author of the paper. “It is important to improve our understanding of attacks before a real attacker in the wild does so. In the process, we came up with the Streamline attack that is faster than all existing attacks and has fewer requirements.”
“Better attacks motivate better defenses,” Qureshi said. “Advancing the attack enables us to come up with good defenses for making cache memories secure.”
How Covert Channel Attacks Work
In this type of attack, attackers use a covert channel to communicate and transmit data without detection. Memory caches are susceptible because they are often shared between processors. Such channels have become more popular recently after they were used to transmit data in speculative execution attacks like Spectre and Meltdown.
Memory cache covert channel attacks take advantage of the time difference between accessing the processor cache and accessing DRAM. By controlling whether a shared address is in the cache, a sender can influence how quickly the receiver can access it. The two fastest such attacks have been Flush+Reload and Flush+Flush.
In a Flush+Reload, the receiver flushes a shared address out of the cache, the sender accesses it (or not) to signal a bit, and the receiver then reloads the address and times the access; a fast reload means the sender brought the address into the cache. In a Flush+Flush, the receiver instead times the flush instruction itself, which takes longer when the address is cached.
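To make the timing difference concrete, here is a minimal, illustrative C sketch of the primitive both attacks build on: timing a reload (Flush+Reload) or a flush (Flush+Flush) of a shared address to tell whether it is cached. The helper names, the threshold parameter, and the waiting step are hypothetical; a real attack calibrates the threshold per machine and adds error handling.

```c
/* Illustrative sketch (x86, GCC/Clang) of the timing primitive behind
 * Flush+Reload and Flush+Flush. Names such as shared_addr and threshold
 * are hypothetical, not from the paper. */
#include <stdint.h>
#include <x86intrin.h>              /* _mm_clflush, __rdtscp */

/* Time a load of addr: low latency if the line is cached, high if it
 * must come from DRAM. */
static uint64_t time_load(volatile uint8_t *addr) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                    /* reload the shared address */
    return __rdtscp(&aux) - start;
}

/* Time a flush of addr: the clflush itself takes longer when the line is
 * cached, which is what Flush+Flush measures instead of a reload. */
static uint64_t time_flush(volatile uint8_t *addr) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    _mm_clflush((const void *)addr);
    return __rdtscp(&aux) - start;
}

/* Flush+Reload receiver for one bit: flush the shared address, wait for the
 * sender's slot (the per-bit synchronization that slows these attacks),
 * then reload and compare against a calibrated threshold. A fast reload
 * means the sender touched the address, i.e. a 1 bit. */
static int receive_bit_flush_reload(volatile uint8_t *shared_addr,
                                    uint64_t threshold) {
    _mm_clflush((const void *)shared_addr);
    /* ... wait for the agreed transmission slot ... */
    return time_load(shared_addr) < threshold;
}
```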
A major disadvantage of these attacks is that they require cache flush instructions, which are disabled in many new CPUs. They also require bit-by-bit synchronization between the sender and receiver, which considerably slows transmission. These constraints have kept the bit rate of such attacks at 500-600 KB/s for more than a decade.
How Streamline Works
Streamline instead relies on asynchronous communication: it needs no cache flush instructions, and the sender and receiver no longer have to synchronize on every bit, removing the two main bottlenecks of earlier attacks.
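Below is a minimal C sketch of the general flushless, asynchronous idea, under the assumption (suggested by the paper's title, not spelled out here) that both parties walk a shared buffer much larger than the cache in the same order, so stale lines evict themselves and neither side waits for the other on each bit. The buffer layout, sizes, and the time_load/threshold helpers from the previous sketch are illustrative assumptions, not the authors' code.

```c
/* Illustrative sketch of a flushless, asynchronous cache channel; not the
 * authors' implementation. Assumes sender and receiver map the same large
 * shared buffer (e.g., shared memory), sized well beyond the last-level
 * cache so old lines evict themselves without any flush instruction. */
#include <stdint.h>
#include <stddef.h>

#define LINE_SIZE 64u                 /* bytes per cache line */

/* Hypothetical helper from the previous sketch. */
uint64_t time_load(volatile uint8_t *addr);

/* Sender: encode bit i by touching (or skipping) the i-th cache line of the
 * buffer; buf must span at least n lines. No flush, no per-bit handshake. */
void send_bits(volatile uint8_t *buf, const uint8_t *bits, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (bits[i])
            (void)buf[i * LINE_SIZE]; /* access => line cached => 1 */
        /* no access => line stays uncached => 0 */
    }
}

/* Receiver: trail behind the sender over the same lines, timing each load.
 * A fast (cached) load decodes as 1, a slow (DRAM) load as 0. */
void receive_bits(volatile uint8_t *buf, uint8_t *bits, size_t n,
                  uint64_t threshold) {
    for (size_t i = 0; i < n; i++)
        bits[i] = (time_load(&buf[i * LINE_SIZE]) < threshold) ? 1 : 0;
}
```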
The researchers tested Streamline on an Intel Skylake CPU and achieved a bit rate of 1,801 KB/s, which is 3.1 times faster than the previous fastest attack. And because Streamline relies only on generic cache properties, it works across architectures.
Saileshwar and Qureshi wrote the paper, Streamline: A Fast, Flushless Cache Covert-Channel Attack by Enabling Asynchronous Collusion, with University of Illinois Urbana-Champaign Assistant Professor Christopher Fletcher. The researchers will present it at the premier systems conference Architectural Support for Programming Languages and Operating Systems (ASPLOS), held April 12-23.