On 10 February 2015, Microsoft released a Windows security patch for all versions since Windows Server 2003, addressing a 15-year-old vulnerability that allows a local user to escalate privileges by executing an application. The vulnerability, in the kernel-mode driver win32k.sys, abuses a window scrollbar drawing function to access a struct after it has been freed (a use-after-free vulnerability), defeating kernel sandboxing, segregation, and address randomisation.
Aside from the obvious questions like "Why does Windows have scrollbar drawing functions in its kernel?" (a design decision that, incidentally, persists all the way through the Windows 10 Developer Preview), this vulnerability highlights a number of issues with secure programming in industry today.
The big names in software development today all have their own "bug fixing culture", arising from their development methodologies and risk assessment factors. Why is it that CERT uses a 45-day, Yahoo a 90-day, ZDI a 120-day, and Google a 90-day disclosure policy on bugs, yet Microsoft still consistently lets bugs accumulate, waiting for their priority to escalate before acting on them? Even if Microsoft doesn't practise Correctness by Construction, they should be familiar with the advantages of secure development methodologies, and recognise that their habits violate what some methodologies consider a key principle [2, Correctness by Construction, Principle 6].
Google even relaxed their disclosure policy recently, providing an additional 14-day window before public release, largely in response to backlash from Microsoft. I don't think Microsoft has any grounds to make this kind of complaint, given that their own methodology relies on static analysis tools and encourages others to do the same. Microsoft should be following Correctness by Construction Principle 3: eliminate errors before testing. "Testing is the second most expensive way of finding errors. The most expensive is to let your customers find them for you."
This is all a real danger for clients. Microsoft boasts a 60% reduction in vulnerabilities in 2008 compared to 2002, but that isn't good enough when critical vulnerabilities turn out to have been present for 15 years while Microsoft phases out support for old systems - the market share of Windows XP still rests above 18%.
The Windows kernel is protected by a number of systems to bolster security, including (among others):

- Kernel Address Space Layout Randomisation (KASLR)
- Data Execution Prevention (DEP/NX)
- Supervisor Mode Execution Prevention (SMEP)
- Stack canaries (/GS cookies)
- Exception-chain validation (SafeSEH/SEHOP)
- Kernel Patch Protection (PatchGuard)
Together these systems provide Defense in Depth, slowing and demoralizing attackers, as well as providing an inner line of defense should an outer one be breached. I don't claim to be an expert on secure operating system programming, but I do recognize that some of these systems have a lot in common with Security by Obscurity. Some of them have been criticised as worthless, and in general Security by Obscurity is a major pitfall in secure programming: it either provides a band-aid fix or masks a real issue by temporarily hiding it.
All of these issues invite serious questions about how Microsoft is conducting business with its Windows product family. Microsoft has followed the "good enough" attitude for years, and it doesn't seem to affect corporate usage: Windows remains the most common OS in business, despite these massive security concerns. Still, I'll not wear a tinfoil hat and claim the real golden age of Linux is coming any time soon.
Correctness by Construction claims the most expensive bugs are the ones found by users - is this really the case? If it is, then perhaps Microsoft simply has so much money to throw around that it doesn't matter. If it's not, has Microsoft somehow optimized their risk assessment better than the rest of the industry?
Release schedules might also be ruining Windows' security reputation. Is it the case that overzealous senior management are failing to consider the implications of shipping undertested code? Perhaps Windows 10 will be the end of such issues, with rumors of a rolling-release development cycle.
With all these Defense in Depth measures, vulnerabilities still slip through, so we have to wonder whether the maintainability nightmare is worth the extra security. One can easily imagine how the six systems described earlier contribute 10+ kloc.
Lastly, Microsoft encourages the use of static analysis to find security vulnerabilities, yet investigation of this very vulnerability revealed that the affected kernel segments reference a function called xxxWindowEvent() - which turns out to be dead code! If Microsoft uses static analysis tools to find security holes, surely those same tools should reveal that parts of the kernel are never executed in practice, let alone tested.