Stacking the deck for computer security
UNIVERSITY PARK, Pa. — An international Penn State-led team has developed a new, more reliable method to defend vulnerable data on the stack, a major memory region that stores a program's working data. That data includes local variables as well as control data such as return addresses — objects that bad actors can exploit through memory errors to gain access to more data.
The researchers published their solution in the Proceedings of the Network and Distributed System Security Symposium, which took place at the end of April. The symposium was hosted by the Internet Society, an international nonprofit organization focused on keeping the internet “open, globally connected, secure and trustworthy,” according to its website.
“Despite vast research on defenses to protect stack objects from the exploitation of memory errors, much stack data remains at risk,” said project lead Trent Jaeger, professor of computer science and engineering in the Penn State School of Electrical Engineering and Computer Science. “There are three types of memory errors through which an adversary may access other objects than what the programmer had in mind. These errors are not specific to the stack, but our solution is.”
The memory errors are classified as spatial, temporal and type errors. Spatial errors allow access to memory outside an object’s allotted space; temporal errors allow access to an object’s memory before its lifetime begins or after it ends; and type errors allow an object to be accessed through a different format than its actual one.
“In each case, an adversary may access objects other than the ones the programmer had in mind when placing data on the stack,” Jaeger said. “Recent stack defense approaches provide an incomplete view of security by not accounting for memory errors comprehensively and by unnecessarily limiting the set of objects that can be protected. In this paper, we presented the DATAGUARD system, which improves security through a more comprehensive and accurate safety analysis that proves a larger number of stack objects are safe from memory errors, while ensuring that no unsafe stack objects are mistakenly classified as safe.”
The DATAGUARD system expands beyond a prior classification technique called “Safe Stack,” while also reducing the processing power needed to identify safe stack objects, according to Jaeger. The Safe Stack technique creates two isolated stack memory regions: the safe stack holds only data that are provably safe from memory errors and do not require checking, while the other, “regular” stack holds objects that may be unsafe and either require runtime checks or are left unprotected. The issue, Jaeger said, is that this technique incurs significant runtime overhead and often falsely marks objects as unsafe.
“DATAGUARD leverages static analysis and symbolic execution to validate stack objects that are free from spatial, type and temporal memory errors,” Jaeger said, explaining that this process includes analyzing the safety of items that point to the objects and generating safety constraints for the object’s safety parameters before validating the safe or unsafe status of an object. “Our method more accurately and comprehensively determines which objects belong on the safe stack and which do not.”
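The constraint-generation step Jaeger describes can be pictured with a toy bounds condition: the analysis derives the range of indices each program path can produce and checks that the entire range satisfies the object's safety constraint. This is a conceptual sketch with assumed names, not DATAGUARD's actual analysis.

```c
#include <stdbool.h>

/* Toy spatial-safety constraint for a stack array of 'size' elements:
   every feasible index in [lo, hi] must satisfy 0 <= i < size.
   A symbolic analysis would derive [lo, hi] for each access site. */
bool access_in_bounds(long lo, long hi, long size) {
    return lo >= 0 && hi < size;
}

/* An object is proven safe only when every access to it is in bounds;
   a single unprovable access forces it onto the regular stack. */
bool object_safe(const long ranges[][2], int n_accesses, long size) {
    for (int i = 0; i < n_accesses; i++)
        if (!access_in_bounds(ranges[i][0], ranges[i][1], size))
            return false;
    return true;
}
```

The conservatism the paper emphasizes lives in this all-or-nothing rule: if any constraint cannot be proven, the object is treated as unsafe rather than risk a false "safe" label.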
In tests, DATAGUARD identified and removed the 6.3% of objects that the Safe Stack technique had miscategorized as safe, and proved that 65% of objects Safe Stack labeled as unsafe were actually safe.
“DATAGUARD shows that a more comprehensive and accurate — yet still conservative — analysis increases the scope of data protection to more than 90% of stack objects, on average, while also reducing overhead, or the additional run time the system uses to protect safe objects, from 11.3% to 4.3%,” Jaeger said.
Next, according to Jaeger, the researchers plan to develop similar classification techniques for other program data in different memory regions, with the goal of protecting a significant fraction of all program data for low overhead costs.
Additional Penn State contributors include Kaiming Huang, doctoral student in computer science and engineering; Yongzhe Huang, doctoral student in computer science and engineering; Jack Sampson, associate professor of computer science and engineering; and Gang Tan, professor of computer science and engineering. Mathias Payer, École Polytechnique Fédérale de Lausanne, and Zhiyun Qian, University of California, Riverside, also co-authored the paper.
The US Army Combat Capabilities Development Command Army Research Laboratory, the National Science Foundation, the European Research Council and the Defense Advanced Research Projects Agency supported this research.