Wednesday, September 10, 2014

It’s in the Requirements: Cyber Resiliency as a Design Element

This is the second installment of a two-part discussion of the threats and challenges involved with cybersecurity.  The first part of the discussion explored cyber threats and challenges using the Stuxnet attack as a lens.  This post picks up with an allegorical analysis of the cyber threat posed by nation-state attacks as well as ideas about how information systems can be built so that they are less tempting targets.

For me, and others such as Ruth Bader Ginsburg, Donald Douglas and Alan Dershowitz, growing up in Brooklyn was an education in itself.  In addition to practical matters such as what constituted an ideal slice of pizza, how to parallel park and how to tell which blocks to avoid as a pedestrian after dark, there were more philosophical lessons to be learned.  Take, for example, the case of Anthony Gianelli.  (Note:  Names have been changed to protect the innocent.)

Anthony, or Tony as he was called, was a hardworking guy.  He had that common Brooklyn fondness for sitting on his stoop in the evenings and pontificating on weighty issues about the state of the world.  One week, as always, Tony played the lottery.  Only this week was different.  Tony won, and won big.  I won’t say just how much money Tony went home with after taxes, but it was bordering on life changing.  So what, you may ask, did Tony do with his winnings?

For those readers hailing from that storied borough, the answer is both obvious and easy.  For everyone else… I’ll tell you.  Tony bought a car.  And not just any car.  Tony bought a pristine, brand-spanking-new Ferrari GTSi.  However, the purchase was only the beginning.  Knowing that he had about a month before the car was delivered, Tony set about fortifying his garage.

Fortifying might have been a bit of an understatement.  Tony broke up the garage’s concrete floor and poured a new one - about eight feet deep.  Sunk deeply into the wet concrete were four hardened steel eye bolts.  The garage door was replaced with a high-security model, and a state-of-the-art, sensor-based alarm system was added.  During the construction process, Tony spent many an evening on his stoop declaiming enthusiastically about the high degree of security being engineered into his garage.

The big day came and the Ferrari arrived.  Tony drove it in a manner that was almost, well, reverent.  At the end of the day, the ritual began.  Tony lovingly parked the car in the garage, ran hardened steel chains through the undercarriage and secured each chain to an eye bolt with a high security padlock.  The door was shut and hermetically sealed.  The alarm was set, Tony wished the car good night, and then took to the stoop, passionately discussing the Ferrari’s security.

One day, several months after taking delivery, Tony went down to the garage to greet the Ferrari.  To his horror and shock, the car was gone.  Not only was it gone, but there was no evidence of any burglary.  The door hadn’t been forced.  The alarm hadn’t been tripped.  The chains were neatly coiled around the eye bolts, the locks opened, ready for use.  Tony, predictably, went into mourning.

After several months and stages of grief, Tony became somewhat philosophical about the loss.  It was, he mused, a case of “easy come, easy go.”  And so, you can only imagine Tony’s surprise when he walked into his dark garage on the way to retrieve the newspaper one morning only to bump into something with delightful, albeit hard, curves.  Turning on the light, Tony stared and crossed himself.  The Ferrari was back.  In fact, it was all back.  The chains were looped through the undercarriage.  The alarm, which was now going off, had been set, and the door was still sealed.  It was as if the car had never left.  Except for one small detail.

Taped to the windshield was a note.  There were all of eight words:

If we really want it, we’ll take it.

Tony took his Ferrari and moved to New Jersey.
---
Tales of braggadocio and grand theft auto notwithstanding, the story about Tony’s Ferrari has an important nugget of advice for cyber defenders.  Tony ran into a certain kind of reality.  Specifically, he discovered what happens when an individual of significant but finite resources is at odds with an organization that has almost limitless time and resources.  This reality, deriving from the axiom that “given enough time and money, all things are possible,” also applies when cybersecurity intersects with geopolitics.  That is to say, when a nation-state puts your information system in the crosshairs of its cyber capabilities, there’s generally little that can be done about it.

That doesn’t mean that organizations should give up on cyber defense.  Dedicated, specific, targeted attacks by nation-states using Advanced Persistent Threats (e.g., “Stuxnet”) are rare.  The real cyber threats faced by commercial, government and military organizations – probes and penetration by external actors and data loss due to insider threats – are almost mundane in their ubiquity.  Moreover, these threats are so common that many security professionals simply assume that losses due to cyberattacks are just another terrain feature in cyberspace.

That assumption is premised on the ideas that cyber defense is inherently reactive and that the architecture of distributed systems (and, for that matter, the internet) must remain inherently static.
That premise is inherently flawed. 

Technical standards and capabilities don’t remain static.  They continuously advance.  Many of the advances made over the last decade or so present engineers, architects, designers and developers with new options and choices when crafting responses to operational requirements.  Taken as a whole, this technical progress offers an ability to proactively design and integrally implement security in a manner that could alter much of the cybersecurity calculus. 

This isn’t to say that there is a single silver bullet.  Rather, there are a number of technologies that, operating in concert, offer designers and defenders significant advantages.  An exhaustive discussion of all these technologies could fill volumes (and has) and is beyond the scope of this post.  However, highlighting just a few provides a useful overview of the way things could, and should, be.

1.      Software is broken.  It’s created broken, it’s delivered broken and, what’s worse, users become (unwitting) beta testers.  These flaws in the delivered product result in vulnerabilities which are exploited by hackers and malware authors.  In a disturbingly large proportion of cases, the delivery of flawed products can be traced to the nature of the software development life cycle itself.  In these cases, security verification and validation comes at the very end of the cycle, just before release.  As a result, it’s often rushed, and flaws go undiscovered.  Worse, it’s often too late or too expensive to fix a significant number of the flaws that are found.
 But what if security verification and validation could be pushed back to the beginning of the development lifecycle?  If we could ensure that the only code modules that entered the trunk were those that had passed the complete battery of functional and non-functional (e.g., performance, scalability, interoperability and security) tests, the ensuing increase in the quality of software products would be accompanied by a significant decrease in delivered vulnerabilities.
 The good news is that this is exactly what a DevOps Platform as a Service (PaaS) delivers.  By leveraging a shared, cloud-based integrated development environment (IDE), environmental variances between Dev, Test and Operations that inject vulnerabilities can be eliminated.  Next, by automating DevOps practices such as Continuous Build, Continuous Integration, Continuous Test and Continuous Design, the onus shifts from the tester, who had previously been (unrealistically) expected to find every flaw in the code, to the developer, who must deliver code that passes the full battery of tests before it can be merged.
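 To make the idea concrete, here is a minimal sketch of such a pre-merge gate, assuming a Python shop with pytest-style suites; the suite names and the gate itself are hypothetical illustrations, not any particular product’s API:

```python
import subprocess
import sys

# Hypothetical pre-merge gate illustrating the "shift security left" idea.
# Each suite name below is an assumption for illustration, not a standard.
TEST_SUITES = [
    "tests/functional",        # does the code do what the requirements say?
    "tests/performance",       # does it do it fast enough, at scale?
    "tests/interoperability",  # does it play well with its neighbors?
    "tests/security",          # static analysis, fuzzing, dependency audits
]

def gate(branch: str) -> bool:
    """Permit a merge to the trunk only if every suite passes."""
    for suite in TEST_SUITES:
        result = subprocess.run(["pytest", suite], capture_output=True)
        if result.returncode != 0:
            print(f"REFUSED: {branch} failed {suite}")
            return False
    print(f"APPROVED: {branch} may enter the trunk")
    return True

if __name__ == "__main__":
    sys.exit(0 if gate(sys.argv[1]) else 1)
```

 The point isn’t the tooling; it’s that the security suite sits in the same gate as the functional tests, so a vulnerability blocks a merge exactly the way a failed unit test does.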

2.      Many, if not most, critical systems are protected by access control systems that focus on authentication, or ensuring that the entity requesting access to the system is who it claims to be.  Authentication can be a powerful gate guard, sometimes requiring multiple concurrent factors (something you know, something you have, something you are).  The problem is that once a user is authenticated, these systems provide few, if any, controls or protections for system resources.  This vulnerability was exploited by both Bradley Manning and Edward Snowden.
 The answer is to add a layer that enforces fine-grained authorization, managing which resources can be accessed by authenticated users with a given set of attributes.  This mechanism, called attribute-based access control, or ABAC, is implemented through an OASIS open standard known as the eXtensible Access Control Markup Language (XACML).  XACML was first published in September 2003, and there are a significant number of commercial software packages (both proprietary and open source) that use it to bring ABAC’s powerful security to the enterprise.
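 A full XACML deployment separates policy decision points from enforcement points, but the core idea fits in a few lines.  Here is a minimal sketch of an attribute-based decision; the attribute names and rules are invented for illustration and bear no relation to the XACML wire format:

```python
# Minimal ABAC sketch: decisions depend on attributes of the subject, the
# resource, the action and the context -- not merely on who authenticated.
# All attribute names and rules here are invented for illustration.

def decide(subject: dict, resource: dict, action: str, context: dict) -> str:
    # Rule 1: analysts may read reports, but only from the internal network.
    if (subject.get("role") == "analyst"
            and resource.get("type") == "report"
            and action == "read"
            and context.get("network") == "internal"):
        return "Permit"
    # Rule 2: bulk exports are denied outside business hours, for everyone.
    if action == "bulk-export" and not context.get("business_hours", False):
        return "Deny"
    # Default-deny: being authenticated, by itself, grants nothing.
    return "Deny"

print(decide({"role": "analyst"}, {"type": "report"}, "read",
             {"network": "internal"}))                      # Permit
print(decide({"role": "admin"}, {"type": "database"}, "bulk-export",
             {"business_hours": False}))                    # Deny
```

 Had a rule like the second one governed bulk access to sensitive repositories, the mass exfiltrations mentioned above would have faced a policy check rather than an open door.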

3.     When vulnerabilities are discovered in an enterprise’s key software components, it can take a significant amount of time to disseminate countervailing security measures.  During this time, the enterprise remains vulnerable.  The challenge is to rapidly close the security gap while ensuring that the enterprise’s operations suffer as little disruption as possible.
 The answer is to apply access control at the operating system level, enabling a regime that is dynamic and centrally managed.  In principle, this is similar to what ABAC implements for enterprise resources.  In this case, however, the control takes place at the inter-process communication (IPC) level.  In practice, this means that the organization can, upon learning about a vulnerability or compromise, push out a new access control policy to all hosts.  The policy can both enable and disable specific IPC types.  The net result is that the compromised software is prevented from executing while replacement software is seamlessly enabled.
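 As a sketch of what a centrally pushed, host-level policy might look like (the format and the IPC type names are assumptions for illustration, not any particular product’s schema):

```python
# Sketch of a dynamically updatable, host-level IPC access control policy.
# The policy format and IPC type names are assumptions for illustration.
current_policy = {
    "allow": {"local_socket", "shared_memory", "named_pipe"},
    "deny": set(),
}

def apply_policy(update: dict) -> None:
    """Install a new policy pushed from the enterprise's central console."""
    current_policy["allow"] = set(update["allow"])
    current_policy["deny"] = set(update["deny"])

def ipc_permitted(ipc_type: str) -> bool:
    """Consulted on every inter-process call; deny always wins."""
    return (ipc_type in current_policy["allow"]
            and ipc_type not in current_policy["deny"])

# A vulnerability in the shared-memory stack is announced.  One push blocks
# the compromised IPC type everywhere, while a fixed service that uses a
# different IPC type is seamlessly enabled.
apply_policy({"allow": ["local_socket", "named_pipe"],
              "deny": ["shared_memory"]})
print(ipc_permitted("shared_memory"))   # False -- compromised path closed
print(ipc_permitted("named_pipe"))      # True  -- replacement path open
```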

None of these approaches is a panacea for the cyber-vulnerability epidemic.  However, they all represent very real, tangible steps that engineers, designers and defenders can take to mitigate the risks faced while operating in an increasingly hostile environment.  They don’t solve everything.  But, taken in concert with other measures, they create a much more agile, resilient infrastructure.


And that beats moving to New Jersey.

Monday, September 1, 2014

STUXNET: ANATOMY OF A CYBER WEAPON



This is the first of a focused two-part discussion of the threats and challenges involved with cybersecurity.  The exploration of cyber threats and challenges is conducted using the Stuxnet attack as a lens.  The following post picks up with an allegorical analysis of the cyber threat posed by nation-state attacks as well as ideas about how information systems can be built so that they are less tempting targets.

Stuxnet is widely described as the first cyber weapon.  In fact, Stuxnet was the culmination of an orchestrated campaign that employed an array of cyber weapons to achieve destructive effects against a specific industrial target.  This piece explores Stuxnet’s technology, its behavior and how it was used to execute a cyber-campaign against the Iranian uranium enrichment program.  This discussion will continue in a subsequent post describing an orthogonal view on the art and practice of security – one that proposes addressing security as a design-time concern with runtime impacts.

Stuxnet, discovered in June 2010, is a computer worm that was designed to attack industrial programmable logic controllers (PLCs).  PLCs automate electromechanical processes such as those used to control machinery on factory assembly lines, amusement park rides, or, in Stuxnet’s case, centrifuges for separating nuclear material.  Stuxnet’s impact was significant; forensic analyses conclude that it may have damaged or destroyed as many as 1,000 centrifuges at the Iranian nuclear enrichment facility located in Natanz.  Moreover, Stuxnet was not successfully contained; it escaped into the wild and has appeared in several other countries, most notably Russia.

There are many aspects of the Stuxnet story, including who developed and deployed it and why.  While recent events seem to have definitively solved the attribution puzzle, Stuxnet’s operation and technology remain both clever and fascinating. 

A Stuxnet attack begins with a USB flash drive infected with the worm.  Why a flash drive?  Because the targeted networks are not usually connected to the internet.  These networks have an “air gap” physically separating them from the internet for security purposes.  That being said, USB drives don’t insert themselves into computers.  The essential transmission mechanism for the worm is, therefore, biological: a user.

I’m tempted to use the word “clueless” to describe such a user, but that wouldn’t be fair.  Most of us carbon-based, hominid, bipedal Terran life forms are inherently entropic – we’re hard-wired to seek the greatest return for the least amount of effort. In the case of a shiny new flash drive that’s just fallen into one’s lap, the first thing we’re inclined to do is to shove it into the nearest USB port to see what it contains.  And if that port just happens to be on your work computer, on an air-gapped network. . .well, you get the picture.

It’s now that Stuxnet goes to work, bypassing both the operating system’s (OS) inherent security measures and any anti-virus software that may be present.  Upon interrogation by the OS, it presents itself as a legitimate auto-run file.  Legitimacy, in the digital world, is conferred by means of a digital certificate.  A digital certificate (or identity certificate) is an electronic cryptographic document used to prove identity or legitimacy.  The certificate includes information about a public cryptographic key, information about its owner's identity, and the digital signature of an entity that has verified the certificate's contents are correct.  If the signature is valid, and the person or system examining the certificate trusts the signer, then it is assumed that the public cryptographic key or software signed with that key is safe for use.
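For readers who want to see that chain of trust in code, here is a minimal sketch of a signature check using Python’s `cryptography` package; the file names are placeholders, and an RSA-signed certificate is assumed:

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Load the certificate being presented and the certificate of an issuer we
# already trust.  File names are placeholders for illustration.
with open("presented_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("trusted_issuer.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# Verify the issuer's signature over the certificate body.  If this raises
# no exception and we trust the issuer, software signed with the
# certificate's key is presumed safe -- exactly the presumption Stuxnet
# abused by presenting a stolen certificate.
issuer.public_key().verify(
    cert.signature,
    cert.tbs_certificate_bytes,
    padding.PKCS1v15(),            # assumes an RSA-signed certificate
    cert.signature_hash_algorithm,
)
print(f"Signature valid; subject: {cert.subject.rfc4514_string()}")
```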

Stuxnet proffers a stolen digital certificate to prove its trustworthiness.  Now vetted, the worm begins its own interrogation of the host system: Stuxnet confirms that the OS is a compatible version of Microsoft Windows and, if an anti-virus program is present, whether it is one that Stuxnet’s designers had previously compromised.  Upon receiving positive confirmation, Stuxnet downloads itself into the target computer.

It drops two files into the computer’s memory.  One of the files requests a download of the main Stuxnet archive file, while the other sets about camouflaging Stuxnet’s presence using a number of techniques, including modifying file creation and modification times to blend in with the surrounding system files and altering the Windows registry to ensure that the required Stuxnet files run on startup.  Once the archive file is downloaded, the Stuxnet worm unwraps itself to its full, executable form.
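The registry trick cuts both ways: defenders can enumerate the same startup keys that malware abuses.  Here is a minimal defensive sketch (Windows-only, using the standard-library `winreg` module, and checking only the current user’s Run key, one of several persistence locations):

```python
import winreg

# Defensive sketch: list everything registered to launch at logon under the
# current user's "Run" key -- one of the persistence spots malware modifies.
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    index = 0
    while True:
        try:
            name, command, _ = winreg.EnumValue(key, index)
            print(f"{name}: {command}")
            index += 1
        except OSError:      # raised when there are no more values
            break
```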

Meanwhile, the original Stuxnet infection is still on the USB flash drive.  After successfully infecting three separate computers, it commits “security suicide.”  That is, like a secret agent taking cyanide to ensure that she can’t be tortured to reveal her secrets, Stuxnet deletes itself from the flash drive to frustrate the efforts of malware analysts.

Inside the target computer, Stuxnet has been busy.  It uses its rootkit to modify, and become part of, the OS.  Stuxnet is now indistinguishable from Windows; it’s become part of the computer’s DNA.  It’s now that Stuxnet becomes a detective, exploring the computer and looking for certain files.  Specifically, Stuxnet is looking for industrial control system (ICS) software created by Siemens called Simatic PCS7 or Step 7 running on a Siemens Simatic Field PG notebook (a Windows-based system dedicated for ICS use).

The problem facing Stuxnet at this point is that a computer can contain millions, if not tens of millions, of files, and finding the right Step 7 file is a bit like looking for a needle in a haystack.  In order to systematize the search, Stuxnet needs to find a way to travel around the file system as it conducts its stealthy reconnaissance.  It does this by attaching itself to a very specific kind of process: one that is trusted at the highest levels by the OS and that looks at every single file on the computer.  Something like. . .

. . .the scan process used by anti-virus software.  (In the attack on the facility in Natanz, Stuxnet compromised and used the scan processes of leading anti-virus programs.  It’s worth noting that all of the companies whose products were compromised have long since remedied the vulnerabilities that Stuxnet exploited.)  Along the way, Stuxnet compromises every comparable process it comes across, pervading the computer’s memory and exploiting every resource available to execute the search.

All the while, Stuxnet is constantly executing housekeeping functions.  When two Stuxnet worms meet, they compare version numbers, and the earlier version deletes itself from the system.  Stuxnet also continuously evaluates its system permissions and access level.  If it finds that it does not have sufficient privileges, it uses a previously unknown system vulnerability (such a thing is called a “Zero-Day,” and will be discussed below) to grant itself the highest administrative privileges and rights.  If a local area network (LAN) connection is available, Stuxnet will communicate with Stuxnet worms on other computers and exchange updates – ensuring that the entire Stuxnet cohort running within the LAN is the most virulent and capable version.  If an internet connection is found, Stuxnet reaches back to its command and control (C2) servers and uploads information about the infected computers, including their internet protocol (IP) addresses, OS types and whether or not Step 7 software has been found.
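The peer-to-peer update behavior amounts to a simple version negotiation.  Here is a sketch of the logic; the version scheme and the actions are assumed for illustration, and the real mechanism was far more elaborate:

```python
# Sketch of Stuxnet-style housekeeping: when two instances meet, the older
# one yields.  Version tuples and actions are assumed for illustration.
def negotiate(my_version: tuple, peer_version: tuple) -> str:
    """Decide what this instance does upon meeting a peer on the LAN."""
    if my_version < peer_version:
        return "delete self, then install peer's newer payload"
    if my_version > peer_version:
        return "push own payload to the out-of-date peer"
    return "versions match; no action"

print(negotiate((1, 0, 0), (1, 1, 0)))  # the older instance retires itself
```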

As noted earlier, Stuxnet relied on four Zero-Day vulnerabilities to conduct its attacks.  Zero-Days are of particular interest to hacker communities: since they’re unknown, they are, by definition, almost impossible to defend against.  Stuxnet’s four Zero-Days included:


  • The Microsoft Windows shortcut automatic file execution vulnerability, which allowed the worm to spread through removable flash drives;
  • A print spooler remote code execution vulnerability; and
  • Two different privilege escalation vulnerabilities.

Once Stuxnet finds Step 7 software, it patiently waits and listens until a connection to a PLC is made.  When Stuxnet detects the connection, it penetrates the PLC and begins to wreak all sorts of havoc.  The code controlling frequency converters is modified and Stuxnet takes control of the converter drives.  What’s of great interest is Stuxnet’s method of camouflaging its control.   

Remember the scene in Mission Impossible, Ocean’s 11 and just about every other heist movie where the spies and/or thieves insert a video clip into the surveillance system?  They’re busy emptying the vault, but the hapless guard monitoring the video feed only sees undisturbed safe contents.  Stuxnet turned this little bit of fiction into reality.  Stuxnet intercepts the PLC’s reporting signals that indicate abnormal behavior and, in their place, sends signals indicating nominal, normal behavior to the monitoring software on the control computer.
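In software terms, this is a record-and-replay man-in-the-middle.  Here is a conceptual sketch; all names and numbers are invented, and the real interception lived inside the Step 7 communication path:

```python
import random

# Conceptual sketch of the "heist movie" replay: telemetry captured during
# normal operation is played back to the operator's console while the real
# process misbehaves.  All names and values here are invented.
recorded_normal_hz = [1063.8, 1064.1, 1064.0, 1063.9]  # captured while healthy

def reading_shown_to_operator(actual_hz: float) -> float:
    """Intercept the real sensor value and substitute a recorded one."""
    return random.choice(recorded_normal_hz)

actual = 1410.0  # rotors being driven far beyond tolerance
print(f"process: {actual} Hz -> console: {reading_shown_to_operator(actual)} Hz")
```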

Stuxnet is now in a position to effect a physical attack against the gas centrifuges.  To understand the attack, it’s important to understand that centrifuges work by spinning at very high speeds and that maintaining these speeds within tolerance is critical to their safe operation.  Typically, gas centrifuges used to enrich uranium operate at between 807 Hz and 1,210 Hz, with 1,064 Hz as a generally accepted standard.

Stuxnet used the infected PLCs to cause the centrifuge rotors to spin at 1,410 Hz for short periods of time over a 27-day period.  At the end of the period, Stuxnet would cause the rotor speed to drop to 2 Hz for fifty minutes at a time.  Then the cycle repeated.  The result was that, over time, the centrifuge rotors became unbalanced, the motors wore out and, in the worst cases, the centrifuges failed violently.
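Put as a timeline, the cycle looks something like the sketch below; the frequencies come from the paragraph above, while the alternation shown is simplified for illustration:

```python
# Simplified timeline of the attack cycle described above.  Frequencies are
# taken from the preceding paragraph; the scheduling shown is schematic.
NOMINAL_HZ = 1064     # generally accepted operating standard
OVERSPEED_HZ = 1410   # short bursts spread across a 27-day period
CRAWL_HZ = 2          # 50-minute crawl at the end of each cycle

attack_cycle = [
    ("short overspeed bursts over 27 days", OVERSPEED_HZ),
    ("50-minute crawl to unbalance the rotors", CRAWL_HZ),
    ("return to nominal while the damage accumulates", NOMINAL_HZ),
]

for phase, hz in attack_cycle:
    drift = hz - NOMINAL_HZ
    print(f"{phase}: {hz} Hz ({drift:+d} Hz from nominal)")
```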

Stuxnet destroyed as much as twenty percent of the Iranian uranium enrichment capacity.  There are two really fascinating lessons that can be learned from the Stuxnet story.  The first is that cyberattacks can and will have effects in the kinetic and/or physical realm.  Power grids, water purification facilities and other utilities are prime targets for such attacks.  The second is that within the current design and implementation paradigms by which software is created and deployed, if a bad actor with the resources of a nation-state wants to ruin your cyber-day, your day is pretty much going to be ruined.

But that assumes that we maintain the current paradigm of software development and deployment.  In my next post I’ll discuss ways to break the current paradigm and the implications for agile, resilient systems that can go into harm’s way, sustain a cyber-hit and continue to perform their missions.