NSA Headquarters – Ft. Meade, Maryland
Eric De Grasse
Chief Technology Officer
Project Counsel Media
(hat tip to my partner, Gregory Bufithis, for his contributions and late night/morning work on this)
14 January 2020 (Las Vegas, Nevada) – So we are all agog because the National Security Agency (NSA) found a dangerous Microsoft software flaw and alerted the firm rather than weaponize it. Or so they say. I have read the NSA press release and the Microsoft advisory. I do not think it was quite like that. The bug – essentially a mistake in the computer code – affects the Windows 10 operating system, the most widely used in government and business today.
At its heart are blunders in the validation of Elliptic Curve Cryptography certificates (explained below). The end result is that miscreants can cryptographically sign malware using a spoofed certificate to make the code appear to come from a trusted application developer. Thus, folks may be tricked into installing spyware, ransomware, and other horrible stuff.
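As publicly analyzed, the flaw (CVE-2020-0601, in the Windows crypt32.dll CryptoAPI) let a certificate carry its own explicit elliptic-curve parameters, including a custom generator point, while Windows matched it against a cached trusted certificate by public key alone. The trick is pure algebra: if the trusted key is Q = d·G, an attacker declares private key 1 and generator Q, so 1·Q equals Q and the spoof "matches". A sketch on a deliberately tiny toy curve (every number below is invented for illustration and is nowhere near real cryptography):

```python
# Toy illustration of the certificate-spoofing algebra (in the spirit of
# CVE-2020-0601). The curve below is a tiny invented one, NOT a real
# cryptographic curve; real certificates use named NIST curves like P-256.

p, a, b = 97, 2, 3                 # toy curve: y^2 = x^3 + 2x + 3 (mod 97)

def ec_add(P, Q):
    """Add two points on the toy curve; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (3, 6)                         # the curve's standard generator
d = 13                             # the CA's secret signing key
Q = ec_mul(d, G)                   # the CA's public key, cached as trusted

# Attacker: declare private key 1 and a *custom* generator equal to Q.
# Then 1 * G_evil == Q, so the spoofed key matches the trusted one.
d_evil, G_evil = 1, Q

def naive_match(pub, gen):
    """The buggy check: compare only the public-key point."""
    return pub == Q                # ignores which generator was used

def strict_match(pub, gen):
    """The correct check: the generator must be the standard one too."""
    return pub == Q and gen == G

print(naive_match(ec_mul(d_evil, G_evil), G_evil))    # True  (spoof accepted)
print(strict_match(ec_mul(d_evil, G_evil), G_evil))   # False (spoof rejected)
```

The fix amounts to the strict check above: validate the curve parameters themselves, generator included, against the named curve, instead of accepting any certificate whose public-key point happens to match a trusted one.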
The NSA took things a step further (disputed by many cyber mavens), suggesting the bug could not only be abused to disguise software nasties as legit apps, but also to potentially intercept secure network communications:
“NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread.”
But read the tech bits from the advisory and you get:
1) it’s only rated “Important”
2) it’s a spoofing issue
3) to get RCE [Remote Code Execution] with it you would need authorization, and you would need code executing on the box already.
I wouldn’t lose any sleep over the impact of this vulnerability. The NSA did a big press tour before the announcement, so I expected a big media play. My view is the NSA is trying some PR here to make itself look good. And because I am a cynical old coot, I suspect the NSA probably found 15 different holes, kept 14 to itself to exploit, and made the 15th, which is probably the least useful one, public just for the good PR.
The American spying agency, though, wants everyone to know – to the point of even holding a press conference – that it privately found and reported this diabolical certificate flaw to Microsoft, and that it is a totally friendly mass-surveillance system that has turned over a new leaf, wants to be on the good side of infosec researchers, and cares about your ongoing ability to verify the origin and integrity of executable files and network connections. And that it’s happy for Microsoft to publicly thank the snoops for finding the flaw, which it did.
Lots of issues to unpack here and I need to catch a flight so let me just run a few thoughts by you:
• If the NSA could find the vulnerability, isn’t it arrogant to assume no one else did, even before the NSA stumbled on it? From all the publications I have read, it doesn’t look like the NSA is the lead here, rather an also-ran. And the NSA’s decision not to also notify the other affected software vendors, leaving the world exposed, is another questionable one.
• The focus is on “SSL certificates”, which are used for “https:” websites. Just a little background. These are problematic because they authorize a middleman to block access to a website. It’s a handy way to ‘turn off’ someone the certificate authority does not like (or politically does not agree with). And there doesn’t seem to be anything to prevent a flunkie working for a third party, standing at a terminal at the certificate authority, from moderating access to a website. That’s because the Internet protocol essentially means you ask permission to view the site. According to several reports I read, the vulnerability unearthed by the NSA could allow a hacker to forge a signature, letting malware be planted on a vulnerable computer.
• This story is also a very good primer on how the web of laws that restrict (or do not restrict) the NSA works. The U.S. military and the NSA are barred by law – under the Posse Comitatus Act – from acting domestically. This is the reason for the large number of U.S. Government front organizations through which the NSA is authorized to communicate with domestic companies. For example, in the financial sector the U.S. Secret Service operates domestically in its cyber security capacity. In some cases, companies were specifically created by Congress for these purposes; the MITRE Corporation is one such front company. They talk to the NSA, and then they talk to people inside the U.S. No direct communication is allowed. Also, many companies hire people with U.S. security clearances for these same purposes. This allows confidential information to be passed to the companies, with secrecy controlled under threat of the national security laws.
• Microsoft has never escaped its Swiss-cheese legacy of security holes. I do not have the time to go into a lot of detail, but here are some points:
– Microsoft did not design Windows for security. It was built for an 8-bit machine, based on CP/M. It wasn’t even considered important to allow enough character space for a four-digit year, and that caused tremendous issues that ended up being patched.
– Unix and Unix-like systems such as Linux, Minix, and BSD were designed from the ground up to be secure and securable. That is why most governments use their own custom builds of Linux for their secure systems, and the lack of forethought by politicians (and their eagerness to be bribed) is why we have based our systems on an inherently insecure platform … Windows.
– Windows has become so entrenched in our business computer world that it is considered almost impossible to get rid of in favor of better systems. It has been greatly improved, but it’s based on a profit model not a security model. The Russians have always said we will do ourselves in by our greed.
– I spent my earliest programming days working in computing environments with 1960s/1970s-vintage names like CPS/IBM-360, CDC-6400, McAuto and GEISCO/MARK III/Honeywell. These were mainframe timesharing systems. MARK III exemplified the security thinking of its day. It was the world’s largest timesharing network in its heyday and was for many years used for top-secret NASA and US military computing as well as an international bank transfer service with 100-million-dollar-plus single transaction sizes. This stuff had to work, and it had to be secure.
– But the security foundation of all four systems was grounded in the same basic fundamentals, as it had to be. When you had dozens, hundreds or thousands of clients sharing the same computing hardware, you absolutely had to keep their resources and activities separated, both to maintain security and system stability and reliability.
– The operating system (OS) of each maintained this separation. It ran in what was called “master mode” and addressed memory from address zero up – all the way up through the last hardware word. Nobody but God and his most trusted, competent assistants ran the MARK III OS or directly worked in master mode.
– Mere mortals ran in “slave mode”. The OS allocated you chunks of memory as needed. Each chunk started at the address in a “base address register” (BAR), and you were limited to the memory length held in a “hardware limit register” (HLR). There was no way in hell you could peek or poke below the BAR address or above the BAR+HLR address. And I mean no way – period.
– MARK III was born with this technology around 1972, succeeding a precursor OS that had been born with it a few years earlier. Unix was born with it, too, and is still going strong with this same security foundation.
– And guess when Microsoft implemented it. Hold your breath: it never has. Here we are a half-century later in 2020, and we’re still waiting.
– It’s no wonder Microsoft has never escaped its Swiss-cheese legacy of security holes.
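For readers who never met this hardware: the base-register/limit-register separation described above can be sketched in a few lines of Python. This is a software simulation of what those machines enforced in hardware on every memory reference; the class name, register values, and addresses are invented for illustration:

```python
# A software simulation of "slave mode" memory protection: a base address
# register (BAR) and a hardware limit register (HLR) bound every access.
# All names and numbers here are invented for illustration; the real
# machines enforced this check in hardware, not in software.

class MemoryFault(Exception):
    """Raised when a slave-mode program strays outside its chunk."""

class SlaveModeProcess:
    def __init__(self, memory, bar, hlr):
        self.memory = memory       # the machine's entire physical memory
        self.bar = bar             # BAR: where our chunk starts
        self.hlr = hlr             # HLR: how many words we may touch

    def _translate(self, offset):
        # Every address is relative to BAR and checked against HLR, so a
        # program cannot peek or poke below BAR or above BAR + HLR.
        if not 0 <= offset < self.hlr:
            raise MemoryFault(f"offset {offset} outside limit {self.hlr}")
        return self.bar + offset

    def peek(self, offset):
        return self.memory[self._translate(offset)]

    def poke(self, offset, value):
        self.memory[self._translate(offset)] = value

physical = [0] * 1024                          # the whole machine
proc = SlaveModeProcess(physical, bar=256, hlr=128)
proc.poke(0, 42)                               # actually writes physical[256]
print(proc.peek(0))                            # -> 42
try:
    proc.peek(200)                             # beyond BAR + HLR: trapped
except MemoryFault as fault:
    print("memory fault:", fault)
```

The point of the design is that the check cannot be skipped: the program never sees a raw physical address, only offsets that the hardware translates and bounds on every single reference.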
CONCLUDING THOUGHTS
The nature of hacking requires the discovery and exploitation of “bugs” in software. It would be reasonable to assume that this alert was made because:
1) The NSA has a large enough exploit database that this one is more valuable for PR purposes, and/or
2) This vulnerability is dangerous enough that it threatens the NSA itself, and/or
3) Other groups are already using it and the NSA is using Microsoft to stop their competition. It would be unreasonable to assume that the NSA has suddenly decided that it no longer wants to have unrestricted access to all of the world’s data.
My guess is that, more than likely, the NSA has known about the vulnerability – and exploited it – but then found out that other, hostile actors had discovered the vulnerability too … and that’s when the NSA stepped forward to help Microsoft.
The more likely scenario and sequence was: (a) seek and find flaws in the product; (b) exploit them in massive sweeps and stash the data for future reference as needed; and (c) let the PR folks have their day and make a big splash announcement about how the white hats want to develop trust with tech companies. Promote trust and all of that. Pure PR. The NSA does not make announcements without an ulterior and self-serving motive.
And Microsoft knows it’s in a bind: it has built so many layers of complexity, causing so many issues, that it must keep narrowing the “Window” of versions it “supports”, with the hackers effectively serving as its proxy sales team for upgrades. Because the minute it stops introducing the eclectic entropy of the world’s most overpaid typing pool, it’s time to get honest jobs.
I will leave you with a quote from one of our favorite hacker movies:
“And it’s not about who’s got the most bullets. It’s about who controls the information. What we see and hear, how we work, what we think… it’s all about the information! And the world isn’t run by weapons anymore, or energy, or money. It’s run by little ones and zeroes, little bits of data. It’s all just electrons.”
– “Sneakers”, 1992