Putting the Ghost in the Machine: Can Making Software Buggier Make it More Secure?
Software bugs are commonplace, particularly in languages like C and C++ that lack memory safeguards. Programmer errors can easily lead to memory corruption and, ultimately, arbitrary code execution.
Traditionally, hackers painstakingly trawl through lines of code to discover exploitable programming errors. Any bugs they find must be triaged to determine how exploitable they are. Not all bugs are equal: depending on the runtime environment and the nature of the error, many bugs, such as null pointer dereferences, may not violate any security goal at all. These bugs may merely cause a program to crash, which is serious, but background microservices are designed to restart programs in such events.
Once the triage phase reveals exploitable errors, hackers develop exploits and deploy them against the vulnerable software.
This process is laborious and largely manual, but it can result in a costly clean-up for the companies involved, as well as potentially career-ending repercussions for those who let the bugs slip by unnoticed.
Conventional countermeasures are just as painstaking: hunting through code for vulnerabilities that might be exploited and removing them before the code goes public.
Recently, though, a team of cybersecurity researchers at New York University (NYU) has begun to advocate a new, military-grade camouflage approach to bug extermination. Their seemingly simple suggestion: add more bugs to the code. Lots and lots more bugs!
Wheat from the Chaff: Why Introduce Non-Exploitable Bugs?
Existing exploit mitigations like ASLR and CFI up the ante for hackers but typically come with performance penalties and don't always deter more sophisticated attackers.
The NYU researchers Zhenghao Hu, Yu Hu, and Brendan Dolan-Gavitt have added a new dimension to the security-through-obscurity approach to tackling cybercrime. They recommend that programmers and cybersecurity experts liberally sprinkle non-exploitable bugs throughout their code to confound hackers. Like the military chaff from which they take their name, these benign "Chaff Bugs" are designed to create a multitude of potential targets, wasting hackers' time and resources on finding and triaging what are, in reality, harmless bugs.
Bugs intentionally placed throughout code must appear exploitable to triage tools but must do no harm to software functionality. This is easier said than done. Adding bugs and thereby altering lines of code can render software useless or worse, malicious. To obviate this possible outcome, programmers must run code with different inputs and monitor the results as the code progresses; a lengthy and resource-heavy exercise.
Glitch in the Matrix: Is the Chaff Approach a Panacea?
While this new smokescreen technique has obvious advantages, it is still in its infancy and does not yet represent a comprehensive antidote to cyberattack. So far, there is no definitive proof that finding and triaging bugs is actually all that arduous for hackers. It may be that hackers are already capable of building automated rigs to identify decoy bugs and streamline the exploitation process.
At present, Chaff Bugs have the significant disadvantage of being distinguishable from naturally occurring bugs. It is conceivable that hackers could identify artifacts in the code and discover patterns to exploit to malicious ends. Hu et al. are aware of this limitation and are striving to create bugs that blend completely into existing code, with more natural triggering conditions for the errors.
Another potential impediment to widespread adoption of the current Chaff Bug technique is that software bugs are the bane of developers' professional lives. It would not be surprising if developers were reluctant to work with code riddled with pre-baked bugs, not least because later changes could render supposedly non-exploitable errors exploitable.
The Chaff Bug approach is a novel and mouthwateringly malevolent way to give hackers a taste of their own medicine. Gumming up their systems of attack with a myriad of bugs should, in theory, reduce cybercrime figures. But there is still work to be done.
For this approach to become a more potent deterrent, the nature and variation of the injected bugs must be refined so that they blend seamlessly with existing code. In addition, bugs must be successfully injected at the binary level for the approach to work on legacy systems. Currently, there is no way to employ this technique in open-source software. With open-source software becoming increasingly widespread, this is a serious deficiency.
Despite its flaws, the current technique appears to work well as an add-on in the build, bolstering the efficacy of existing defenses such as ASLR, DEP, CFI, and CPI. The academics who created this technique hope their work draws attention to the study of exploit triage. They aim to advance the system so that it can become a powerful means of drowning attackers in a sea of deliberately tainted code.
It’s early days, but it will be interesting to see if developers and cybersecurity experts choose to bug out with this new layer of defense!