Researchers are deliberately adding bugs to experimental software so that, ultimately, programs ship with fewer vulnerabilities.
The idea is to insert a known quantity of vulnerabilities into code, then see how many of them are discovered by bug-finding tools.
By analyzing the reasons bugs escape detection, developers can create more effective bug-finders, according to researchers at New York University in collaboration with others from MIT’s Lincoln Laboratory and Northeastern University.
They created LAVA (large-scale automated vulnerability addition), a low-cost technique for injecting those vulnerabilities. “The only way to evaluate a bug finder is to control the number of bugs in a program, which is exactly what we do with LAVA,” says Brendan Dolan-Gavitt, a computer science and engineering professor at NYU’s Tandon School of Engineering.
The research showed that the bug-finding tools they tested had a dismal aggregate detection rate of just 2 percent. Not only that, they often reported bugs that weren’t even there, creating unnecessary work for quality assurance teams trying to clean up software before it’s released.
The team inserted into programs a known number of what it calls synthetic vulnerabilities, which mimic the attributes of actual vulnerabilities found in the wild. Creation of these synthetic vulnerabilities was automated and carried out by making “judicious edits” to the source code of real programs. Their automated platform was far less expensive than the alternative of custom-designed vulnerabilities, which can carry price tags of tens of thousands of dollars.
By carefully placing the bugs, they could see how well bug finders discovered them in different segments of the code. In addition, they limited the number of inputs that could trigger the synthetic bugs, so the test programs would continue to run normally on everything else.
One major challenge was creating hundreds of thousands of unique vulnerabilities that the bug-finding tools could not have seen before, so the researchers could accurately assess how well the tools worked.
The research team is planning a competition for this summer in which developers of bug-finding software receive a score based on how many vulnerabilities their tools detect in a piece of software made vulnerable by LAVA. The idea is to help the developers produce better products.
“Developers can compete for bragging rights on who has the highest success rate,” Dolan-Gavitt says.