Tuesday, December 2, 2014

Air Power, Big Cyber and the Coming Collapse

Current cybersecurity organizational and operational topologies were presaged a century ago by proponents of then-new aviation technology.  Aviation proponents like Giulio Douhet and Billy Mitchell preached a gospel declaring the airplane to be an operational panacea that would fundamentally change armed conflict.  History has proved them wrong.  Aviation is now a valuable arrow in the combined arms quiver that dominates the battle space, but it does not stand alone.

Contemporary Cyber-Douhets proselytize a doctrine of cybersecurity independence.  They claim that current and coming crops of large, expensive cybersecurity programs will tame a hostile cyberspace.  As with Douhet and Mitchell, history is likely to be unkind to these cyber-pundits.  Unfortunately, bursting this bubble of misplaced expectations will require an extended, painful and costly process.  From this, however, it is likely that an integrated and effective cybersecurity doctrine will emerge.

The Vision of Air Power

Giulio Douhet was a visionary.  Born in 1869, Douhet began a tumultuous career in the Italian Army in 1882.  While serving on the General Staff shortly after the turn of the century, he became an air power evangelist, advocating for the creation of a separate air arm commanded by aviators.  Appalled by the Italian Army’s shocking reverses at the start of the First World War, Douhet publicly criticized military leadership and demanded an air power solution.  In 1921, Douhet was promoted to general and published his seminal work on aerial warfare, The Command of the Air.

The Command of the Air argued that air power was revolutionary, rendering conventional armies superfluous.  Forces on the ground would be overflown and population centers, military installations and government centers would be attacked with impunity.  As a result, industry, the transportation and communications infrastructure, government and the “will of the people” would be disrupted and the war won through the aviators’ efforts.

Douhet’s American contemporary, Billy Mitchell, was also a visionary.  Like Douhet, Mitchell emerged from the First World War believing that air power would dominate warfare, and that strategic bombardment would become the nation’s primary threat.  Mitchell was vocal about his views on air power, and vociferously attacked both the Navy and the War Department for having myopic views on the employment of aerial assets.  In 1925, influenced by Douhet, Mitchell published his own book on the subject, Winged Defense.

Douhet Discredited

Less than 20 years later, the vision of irresistible and dominating strategic airpower shared by Douhet and Mitchell was weighed and found wanting.  Between 1939 and 1945, American and British bombers dropped almost 1.6 million tons of bombs on Germany; 914,637 tons fell in 1944 alone.  Despite this, 1944 was the year that the German economy peaked in terms of military production.  Independent the Allied air forces might have been, but strategically decisive they were not.

In contrast, the effectiveness of tactical air power, in which air forces were integrated and operated in direct support of ground operations, was far greater.  The German air force, the Luftwaffe, shattered Polish forces in 1939 at points where German ground forces had been fought to a standstill.  Similarly, the XIX Tactical Air Command, integrated with and in support of General Patton’s Third Army, damaged or destroyed 24,634 ground targets in a single month (April 1945).

This pattern was to repeat in subsequent wars.  The American experience in Vietnam reinforced the shortcomings of air power acting independently, demonstrating that strategic bombing is often ineffective even when conducted by modern air forces against weak foes.  In contrast, during Operation Linebacker I in the spring of 1972, US Navy and Air Force aircraft were instrumental in the defeat of a massive, conventional North Vietnamese offensive.  US air power destroyed North Vietnamese units and effectively interdicted their supply lines, resulting in the North’s decisive defeat.

Since Vietnam, air power integrated with land and sea operations has been extraordinarily effective in conflicts ranging from Grenada to the Balkans, both Gulf Wars and Afghanistan.  In fact, the only conflict in recent history where air power seems less than completely effective is the ongoing campaign against the Islamic State of Iraq and Syria (ISIS).  The distinction?  The ISIS campaign is being conducted by air forces alone, and not by an integrated combat team.

Big Cyber

While Douhet’s vision has been mooted by the realities of the physical battlespace, it has been resurrected in cyberspace with a de facto doctrine that separates cybersecurity operations from an organization’s business activities.  This doctrine, referred to here as “Big Cyber,” is marked by several characteristics, including:

  • The establishment of discrete agencies or business units with exclusive responsibility for the cybersecurity of the larger community to which they belong;
  • Centralized funding, planning and execution of cybersecurity activities that occur parallel to business activities;
  • Minimal or no directive authority with respect to the cybersecurity posture of the larger community; and
  • A mandate placing great emphasis on the perimetric security of existing vulnerable systems with little emphasis on the development of secure, resilient systems.

Funding lines reflect the dominance of Big Cyber.  In fiscal year (FY) 2014, US Cyber Command’s budget increased to $447 million, more than double FY 2013’s $191 million.  At the same time, the Department of Homeland Security (DHS) cybersecurity operations budget was increased by $35.5 million to a total of $792 million.  Security budgets at companies with more than $100 million in revenues increased by an average of five percent in 2014, while in the healthcare sector cybersecurity spending rocketed by almost 67 percent.

Perhaps more important than the total amount of funding is how the funds are allocated.  The number of personnel allocated to security organizations is growing.  The US Department of Defense (DoD) initially forecast a need for 6,200 additional personnel to support its cyber mission.  Now, DoD anticipates an even greater requirement.  Plans for additional personnel in both the public and private sectors are, almost uniformly, to assign them to dedicated cyber organizations, and not to business units in direct support of the larger organization’s business goals.  Even more telling is the nature of the acquisition programs being funded.  With rare exception, in both the public and private sectors, they focus on developing monolithic, centralized security mechanisms.  While these programs may generate significant new capabilities, there is often no requirement for their adoption by operating or business units.

Put another way, organizations exist to pursue their business activities.  Finance companies exist to profit by managing money.  Pharmaceutical companies exist to profit through the development and sale of drugs.  The military exists to successfully engage, defeat and destroy the enemy in defense of national interests.  The careful reader will note that none of these descriptions used the word “cybersecurity.”  That’s because cybersecurity is a support function intended to enable the primary business activity.  When cybersecurity is perceived as an imposition or an external mandate competing with organizational business goals, it will be summarily discarded and the organization will remain vulnerable.

The Coming Collapse, and Why It’s a Good Thing

Because of this, it can confidently be predicted that Big Cyber will remain ineffective in mitigating the risks inherent to a hostile cyberspace and that it will inevitably collapse under its own weight.

Aside from the significant waste of resources, that’s not entirely a bad thing.  As with air power, the collective understanding of cybersecurity is constantly evolving.  An understanding of air power’s evolution allows for the development of a “theory of historical enabler integration.”  Under this theory, an operational enabler’s effectiveness is proportional to the degree to which it is integrated with the organization’s business activities over time.

The Second World War tested (at great expense) and disproved Douhet’s theory of an independent air arm that could determine the course of a conflict in its own right.  At the same time, air power embarked upon tighter integration with maneuver forces and, in that role, became increasingly effective.  The development of small, inexpensive armed drones that can be deployed at the tactical level and operated by junior personnel to provide organic air support is simply the latest instance of “historical enabler integration.”

Big Cyber is today where air power was in 1943.  Huge and heroic efforts are being made to bring about operational cybersecurity through the use of independent solutions that operate in parallel to business activities.  It’s likely that over time, empirical operational data will dictate that cybersecurity personnel and capabilities become more tightly integrated with the business activities they are intended to protect.  Cybersecurity solutions will be as prolific and as effective as productivity software is today. And cyber knowledge and expertise will be as common as the knowledge required to configure a smart phone.

Big Cyber, like Douhet’s views captured in The Command of the Air, should be applauded as a necessary phase in the development of an effective, integrated solution.  And, as with The Command of the Air, its discrediting and collapse will be cause for celebration.

Sunday, October 26, 2014

If the Vulnerability of Our National Critical Infrastructure to Cyber-attack Keeps You Up at Night. . .You're Not Alone.

The vulnerability of our national critical infrastructure to cyber-attack is a serious matter that demands attention from industry and elected leadership.  However, if any meaningful change is going to take place, it must be demanded and supported by all stakeholders.  Please join me in Washington, DC on Tuesday, October 28th, 2014 to discuss the vulnerabilities faced by the electrical grid and to explore – with your assistance and involvement – the way ahead to a safer, cyber-resilient national critical infrastructure.  For more information please see:  www.kasperskygovforum.com

Quoted in the Columbia, South Carolina newspaper The State on October 24, 2014, American University history professor Alan Lichtman characterized the national response to the current Ebola outbreak:

“When caught unprepared in a crisis, Americans have a tendency to see things in apocalyptic terms. . . It may not be a uniquely American trait, but it’s one that appears we’re particularly conditioned to and bound to repeat.”
“We are a people of plenty. We’re the richest nation on Earth. . . We have unparalleled prosperity, yet we have this anxiety that it may not last. This external force over which we don’t seem to have any control can cut at the heart of American contentment and prosperity.”

Regardless of how extreme the American reaction to the Ebola outbreak is, or whether it’s warranted, it’s impossible to deny that local, state and federal governments are taking measures to deal with a real and present threat.  These measures, however, are inherently reactive, coming into force only after the danger materialized on American shores.

In the case of a dangerous communicable disease, a reactive approach may be sufficient; time will tell.  By the time the public or private sector is able to react to a successful cyber-attack on our national critical infrastructure, it will be too late.  The damage will already be done and the effects will be catastrophic, widespread and long-lasting.  Imagine tens of millions without heat, light, fuel or purified water during winter.  Imagine an inability to transport or distribute food and other necessities to and within large urban areas for months at a time.

Feeling uneasy?  Concerned?  A little worried around the edges?

If you are, you’re not alone.  There’s growing awareness of the perfect storm of vulnerabilities inherent to the American national critical infrastructure.  It results from the combination of a thoroughly interconnected society, a long-standing emphasis on safety and reliability (often to the detriment of security) within industrial control systems (ICS) and a commercial software development model that routinely incorporates (and touts!) post-deployment security and vulnerability patching.

Fortunately, as we become more aware of our vulnerabilities, we are also becoming motivated to discover and implement solutions that address them.  These range from policy initiatives designed to degrade, reduce and eventually remove domains and service providers from which attacks and malware emanate, to the development and implementation of new technologies, systems and networks that both render conventional attacks less effective and create resilient systems that can continue to operate in spite of an attack.

Securing the resources necessary to implement these solutions will require broad, grass-roots awareness of and enfranchisement in both the vulnerability and the path to a solution.

To help raise this awareness, Kaspersky Government Security Solutions, Inc. (KGSS), in cooperation with its sponsors and partners, is hosting the 2nd annual Kaspersky Government Cybersecurity Forum in Washington, DC on Tuesday, October 28th, 2014.  The event, which will be held at the Ronald Reagan Building and International Trade Center, is open to all at no cost.  Additionally, attendees who hold PMP, CSEP, ASEP and/or CISSP certifications may use conference participation to claim required continuing education credits toward those certifications.

For more information, please see:  www.kasperskygovforum.com.

Thanks, and I hope to see you there!

Tuesday, October 14, 2014

Securing Cyberspace with Lessons Learned from Civil War Medicine

Those who cannot remember the past are condemned to repeat it. – George Santayana

The total number of Union dead during the Civil War ranges from 360,222 (if you accept the Fox-Livermore estimates from the late 19th century) to the (approximately) 437,006 estimated by Dr. David Hacker in late 2011.  Predictably, many historians struggle with and debate the revision to more than a century’s worth of settled fact.  What is not being debated is that approximately two-thirds of those who perished (between 240,148 and 291,337 men) fell not to Confederate bullets, bayonets, sabers or shells but to disease. 

In historical retrospect, this is surprising as huge investments, and concomitant advances, were made in medicine during the war.  Indeed, when the war began, the United States Army Medical Corps numbered just 87 men – and promotion was based strictly on seniority, not on merit.  By the cessation of hostilities in 1865, more than eleven thousand doctors had served.

Union medical care improved dramatically during 1862.  By the end of the year each regiment was being regularly supplied with a standard set of medical supplies and had an integral medical staff.  In January 1863, division level hospitals were established, serving as a rendezvous point for transports to the general hospitals.

By 1865, there were 204 Union general hospitals, capable of treating 136,894 patients.  General hospitals were designed to accommodate massive numbers of wounded and sick men.  They were built as pavilions with separate buildings, where thousands of patients could be sheltered.  Each building was its own ward, with high vaulted ceilings and large air vents, accommodating about 60 patients.  Ultimately over a million men were treated in the general hospitals, and the collective fatality rate was below 10%. 

Given the rapid development of this tremendous medical capability, the fact that hundreds of thousands of Union soldiers succumbed to disease during the war seems counterintuitive.  However, even a cursory look at what passed for field sanitation is illuminating. 

Soldiers rarely bathed, and the same pots that were used for cooking were also used to boil clothing to remove lice.  Regulations about camp sanitation and overcrowding were ignored.  Each company was supposed to have a field latrine.  Some regiments dug no latrines.  In other cases the men went off into open spaces around the edge of the camp.  Inevitable infestations of flies followed, as did the diseases and bacteria they spread to both men and rations.

The Army diet was high in calories and low in vitamins.  Fruits and fresh vegetables were notable by their absence.  The food portion of the ration was fresh or preserved beef, salt pork, navy beans, coffee and hardtack: large, thick crackers, usually stale and often inhabited by weevils.  Preparation of the food was as bad as the food itself: hasty, undercooked and almost always fried.

And so, despite substantial investments in very large, very visible medical programs, huge numbers of Union soldiers died of disease.  Why? Because these programs were inherently reactive, responding to, but not alleviating, the root cause of the problem, which was the inherently unhealthful lifestyle of the individual soldier in the field.

For those in the burgeoning cybersecurity industry, and especially those who work at the nexus of the public and private sectors, Santayana’s words ring especially true.  Much like the Army Medical Corps in the early 1860s, the industry faces a crisis of epic proportions.  The number of cyber-attacks mounted on an hourly basis against government departments and agencies as well as their contractors and the national critical infrastructure is staggering.  Reports of data breaches suffered by major retailers, banks or manufacturers are a weekly, if not daily, occurrence.  There’s an ongoing information hemorrhage flowing out through porous perimeters, and effective countermeasures remain elusive.

This isn’t to say that large, visible efforts and investments aren’t being made.  There are eighteen sector-specific Information Sharing and Analysis Centers (known as ISACs), established pursuant to Presidential Decision Directive 63, whose ostensible purpose is to promote risk mitigation, incident response, alert and information sharing within the discrete national critical infrastructure sectors.  Similarly, the United States Computer Emergency Readiness Team (US-CERT) was created in 2003 by the Department of Homeland Security (DHS) to analyze and reduce cyber threats and vulnerabilities, disseminate cyber threat warning information and coordinate incident response activities.

These efforts pale, however, when compared to two ongoing programs intended to secure civilian government networks and systems.  The Continuous Diagnostics and Mitigation (CDM) program is intended to provide capabilities and tools that identify, prioritize and mitigate cybersecurity risks on an ongoing basis.  The Development, Operations and Maintenance services in support of the National Cybersecurity Protection System, or DOMino, program is intended to continue DHS efforts to protect the federal .gov domain with an intrusion detection system that monitors the network gateways of government departments and agencies for unauthorized traffic and malicious activity.  As an example of the magnitude of resources allocated to these programs, CDM has a program ceiling of $6 billion.

Acquisitions resourcing is matched by policy efforts.  In February 2014, the National Institute of Standards and Technology (NIST) released the first version of its Framework for Improving Critical Infrastructure Cybersecurity.  The widely touted document, a collaborative effort of a consortium of industry and government partners, provides standards, guidelines and practices to promote the protection of critical infrastructure.

The Department of Defense (DoD) has also overhauled its cybersecurity policies and guidance so as to be more responsive to the ongoing cybersecurity emergency.  In March 2014, the DoD declared its information assurance mechanism (the Defense Information Assurance Certification and Accreditation Process, or DIACAP) obsolete and replaced it with a set of policies and guidance called the "Risk Management Framework (RMF) for DoD Information Technology (IT)."  The RMF, which aligns with the NIST RMF, is intended to address IT security risks throughout the IT life cycle.
 
All of these programs are important, necessary and, from a purely parochial cybersecurity perspective, very welcome.  However, they also represent the same sort of top-down and reactive approach to security that the Army Medical Corps displayed with respect to soldiers’ health during the Civil War.  That is not to say that this sort of approach is incorrect, but rather that it does not form the basis for a complete solution to the problem.  A complete solution requires concurrent, systemic applications of both top-down and bottom-up approaches.

This was recognized by the military healthcare community, and critical changes were put into place with respect to both the individual soldier’s hygiene and sanitation in the field and the overall military medical system.  As a result, while there were 62 deaths from disease per 1,000 Union soldiers (using the Fox-Livermore statistics) during the Civil War, the number dropped to 25.6 per 1,000 in the Spanish-American War, and 16.5 in the First World War.  By the Second World War, less than one American soldier per 1,000 died from disease.

The systemic machinery of government information technology is already responding to the cybersecurity epidemic.  If the overall cybersecurity treatment is to be effective, comparable changes and improvements must be made to the cyber-hygiene requirements at both the operational user and acquisitions program levels.  More precisely, just as compliance with the high-level, top-down security requirements is required for a program to gain or maintain authority to operate on a government network, compliance with low-level implementation guidelines should be required as well.

The good news is that most of these changes are readily implemented, and not matters of breakthrough research.  A non-exhaustive listing of a few examples:

  • Assume that a breach is not a matter of if but of when, and design all systems to continue to operate effectively despite the presence of attackers.
  • Encrypt everything.  This includes data at rest, data in transit and data in use.  This way, even if an attacker gains access to protected system resources, the data will be of little or no value upon exfiltration, thus maintaining confidentiality despite a breach.  Additionally, the data will be difficult if not impossible to alter, thus maintaining integrity.  (A minimal sketch follows this list.)
  • Implement comprehensive and fine-grained authorization management to ensure that the principle of least privilege is automatically implemented and maintained.  The open standard for the implementation of attribute-based access control, the eXtensible Access Control Markup Language, or XACML, was first published in 2003, and there is a wide array of tools from which to choose when implementing this capability.
  • Ensure that email traffic is subjected not only to in-line spam filtration, but also to psycholinguistic analysis intended to determine the degree to which a communication is deceptive.
  • Require that all personnel receive mandatory training on good cyber hygiene and that continued compliance with cyber-hygiene standards is part of an annual or semi-annual performance evaluation.
  • Partner with industry to ensure a constant influx of innovative ideas.
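
To make the “encrypt everything” item concrete, here is a minimal sketch of protecting data at rest with the Fernet recipe from the Python cryptography library (an authenticated symmetric scheme).  The record contents and the in-memory key handling are illustrative only; a real deployment would hold keys in a hardware security module or key-management service, never beside the data they protect.

    from cryptography.fernet import Fernet

    # Illustrative key handling only: production keys belong in an HSM
    # or key-management service, never on the host holding the data.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Encrypt a (hypothetical) record before writing it to disk.
    record = b"account=4471;balance=1024.50"
    token = cipher.encrypt(record)

    # An attacker who exfiltrates 'token' without the key learns nothing
    # (confidentiality), and any tampering raises InvalidToken on
    # decryption (integrity), because Fernet authenticates as well as
    # encrypts.
    assert cipher.decrypt(token) == record
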
It’s often said that government is only capable of broad, systemic action requiring years to develop and many more years to implement.  With respect to the current hostile state of cyberspace, the luxury of time simply doesn’t exist.  However, as can be seen by the improvements in military medical and hygiene standards, government is absolutely capable of implementing extremely effective solutions that merge both top-down and bottom-up approaches.  The battle for cyberspace can be won.  We simply have to, collaboratively, choose to win it.


Wednesday, September 10, 2014

It’s in the Requirements: Cyber Resiliency as a Design Element

This is the second installment of a two-part discussion of the threats and challenges involved with cybersecurity.  The first part of the discussion explored cyber threats and challenges using the Stuxnet attack as a lens.  This post picks up with an allegorical analysis of the cyber threat posed by nation-state attacks as well as ideas about how information systems can be built so that they are less tempting targets.

For me, and others such as Ruth Bader Ginsburg, Donald Douglas and Alan Dershowitz, growing up in Brooklyn was an education in itself.  In addition to practical matters such as what constituted an ideal slice of pizza, how to parallel park and how to tell which blocks to avoid as a pedestrian after dark, there were more philosophical lessons to be learned.  Take, for example, the case of Anthony Gianelli.  (Note:  Names have been changed to protect the innocent.)

Anthony, or Tony as he was called, was a hard working guy.  He had that common Brooklyn fondness for sitting on his stoop in the evenings and pontificating on weighty issues about the state of the world.  One week, as always, Tony played the lottery.  Only this week was different.  Tony won, and won big.  I won’t say just how much money Tony went home with after taxes, but it was bordering on life changing.  So what, you may ask, did Tony do with his winnings? 

For those readers hailing from that storied borough, the answer is both obvious and easy.  For everyone else… I’ll tell you.  Tony bought a car.  And not just any car.  Tony bought a pristine, brand-spanking-new Ferrari GTSi.  However, his trip home from the dealership was only the beginning.  Knowing that he had about a month before the car was delivered, Tony set about fortifying his garage.

Fortifying might have been a bit of an understatement.  Tony broke up the garage’s concrete floor and poured a new one – about eight feet deep.  Sunk deeply into the wet concrete were four hardened steel eye bolts.  The garage door was replaced with a high-security model and a state-of-the-art, sensor-based alarm system was added.  During the construction process, Tony spent many an evening on his stoop declaiming enthusiastically about the high degree of security being engineered into his garage.

The big day came and the Ferrari arrived.  Tony drove it in a manner that was almost, well, reverent.  At the end of the day, the ritual began.  Tony lovingly parked the car in the garage, ran hardened steel chains through the undercarriage and secured each chain to an eye bolt with a high security padlock.  The door was shut and hermetically sealed.  The alarm was set, Tony wished the car good night, and then took to the stoop, passionately discussing the Ferrari’s security.

One day, several months after taking delivery, Tony went down to the garage to greet the Ferrari.  To his horror and shock, the car was gone.  Not only was it gone, but there was no evidence of any burglary.  The door hadn’t been forced.  The alarm hadn’t been tripped.  The chains were neatly coiled around the eye bolts, the locks opened, ready for use.  Tony, predictably, went into mourning.

After several months and stages of grief, Tony became somewhat philosophical about the loss.  It was, he mused, a case of “easy come, easy go.”  And so, you can only imagine Tony’s surprise when he walked into his dark garage on the way to retrieve the newspaper one morning only to bump into something with delightful, albeit hard, curves.  Turning on the light, Tony stared and crossed himself.  The Ferrari was back.  In fact, it was all back.  The chains were looped through the undercarriage.  The alarm, which was now going off, had been set, and the door was still sealed.  It was as if the car had never left.  Except for one small detail.

Taped to the windshield was a note.  There were all of eight words:

If we really want it, we’ll take it.

Tony took his Ferrari and moved to New Jersey.
---
Tales of braggadocio and grand theft auto notwithstanding, the story about Tony’s Ferrari has an important nugget of advice for cyber defenders.  Tony ran into a certain kind of reality.  Specifically, he discovered what happens when an individual of significant but finite resources is at odds with an organization that has almost limitless time and resources.  This reality, deriving from the axiom that “given enough time and money, all things are possible,” also applies when cybersecurity intersects with geopolitics.  That is to say, when a nation-state puts your information system in the crosshairs of its cyber capabilities, there’s generally little that can be done about it.

That doesn’t mean that organizations should give up on cyber defense.  Dedicated, specific, targeted attacks by nation-states using Advanced Persistent Threats (e.g., “Stuxnet”) are rare.  The real cyber threats faced by commercial, government and military organizations – probes and penetration by external actors and data loss due to insider threats – are almost mundane in their ubiquity.  Moreover, these threats are so common that many security professionals simply assume that losses due to cyberattacks are just another terrain feature in cyberspace.

That assumption is premised on the ideas that cyber defense is inherently reactive and that the architecture of distributed systems (and, for that matter, the internet) must remain inherently static.
That premise is inherently flawed. 

Technical standards and capabilities don’t remain static.  They continuously advance.  Many of the advances made over the last decade or so present engineers, architects, designers and developers with new options and choices when crafting responses to operational requirements.  Taken as a whole, this technical progress offers an ability to proactively design and integrally implement security in a manner that could alter much of the cybersecurity calculus. 

This isn’t to say that there is a single silver bullet.  Rather, there are a number of technologies that, operating in concert, offer designers and defenders significant advantages.  An exhaustive discussion of all these technologies could fill volumes (and has) and is beyond the scope of this post.  However, highlighting just a few provides a useful overview of the way things could, and should, be.

1.      Software is broken.  It’s created broken, it’s delivered broken and what’s worse, users become (unwitting) beta testers.  These flaws in the delivered product result in vulnerabilities which are exploited by hackers and malware authors.  In a disturbingly large proportion of cases, the delivery of flawed products can be traced to the nature of the software development life cycle itself.  In these cases, security verification and validation is the penultimate step prior to release.  As a result, it’s often rushed, resulting in flaws not being discovered.  Worse, it’s often too late or too expensive to fix a significant number of the flaws that are found.
 But what if security verification and validation could be pushed back to the beginning of the development lifecycle?  If we could ensure that the only code modules that entered the trunk were those that had passed the complete battery of functional and non-functional (e.g., performance, scalability, interoperability and security) tests, the ensuing increase in the quality of software products would be accompanied by a significant decrease in delivered vulnerabilities.
 The good news is that this is exactly what the DevOps PaaS delivers.  By leveraging a shared, Cloud-based integrated development environment (IDE), environmental variances between Dev, Test and Operations that inject vulnerabilities can be eliminated.  Next, by automating DevOps practices such as Continuous Build, Continuous Integration, Continuous Test and Continuous Design, the onus shifts from the tester, who had previously been (unrealistically) expected to find all the flaws in the code, to the developer, who must deliver flawless code.
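
As an illustration of pushing verification to the front of the lifecycle, the sketch below expresses one functional and one non-functional requirement as ordinary pytest cases.  The parse_log function and its 200-millisecond budget are hypothetical, invented for this example; the point is that a module failing either test is rejected at check-in rather than discovered broken after release.

    import time

    from logparser import parse_log  # hypothetical module under test

    def test_parse_log_functional():
        # Functional requirement: a known input yields the known output.
        assert parse_log("INFO started") == ("INFO", "started")

    def test_parse_log_performance():
        # Non-functional requirement: 10,000 records in under 200 ms.
        start = time.perf_counter()
        for _ in range(10_000):
            parse_log("INFO started")
        elapsed = time.perf_counter() - start
        assert elapsed < 0.200, "performance budget exceeded: %.3fs" % elapsed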

2.      Many, if not most, critical systems are protected by access control systems that focus on authentication, or ensuring that the entity requesting access to the system is who it claims to be.  Authentication can be a powerful gate guard, sometimes requiring multiple concurrent methodologies (e.g., something you know, something you have, something you are, etc.).  The problem is that once a user is authenticated, these systems provide few, if any, controls or protections to system resources.  This vulnerability was exploited by both Bradley Manning and Edward Snowden.
 The answer is to add a layer that enforces fine-grained authorization, managing which resources can be accessed by authenticated users with a given set of attributes.  This mechanism, called attribute-based access control, or ABAC, is implemented through an OASIS open standard known as the eXtensible Access Control Markup Language (XACML).  XACML was first published in September 2003, and there are a significant number of commercial software packages (both proprietary and open source) that use it to bring ABAC’s powerful security to the enterprise.
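
Real XACML policies are XML documents evaluated by a policy decision point (PDP), so the Python below is a toy sketch of the decision flow rather than any product’s API.  The clearance and classification attributes are invented for illustration; the important property is the default-deny posture, which is what separates ABAC from the authenticate-then-trust model described above.

    # Toy policy decision point (PDP): each rule maps required subject,
    # resource and action attributes to a Permit; anything that matches
    # no rule is denied by default (least privilege).
    POLICY = [
        {"subject": {"clearance": "secret", "role": "analyst"},
         "resource": {"classification": "secret"},
         "action": {"type": "read"}},
    ]

    def decide(subject, resource, action):
        for rule in POLICY:
            if (rule["subject"].items() <= subject.items()
                    and rule["resource"].items() <= resource.items()
                    and rule["action"].items() <= action.items()):
                return "Permit"
        return "Deny"  # default-deny: unmatched requests are blocked

    # An analyst with a secret clearance may read a secret document...
    print(decide({"clearance": "secret", "role": "analyst"},
                 {"classification": "secret"}, {"type": "read"}))   # Permit
    # ...but may not write to it, no matter how she authenticated.
    print(decide({"clearance": "secret", "role": "analyst"},
                 {"classification": "secret"}, {"type": "write"}))  # Deny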

3.     When vulnerabilities are discovered in an enterprise’s key software components, it can take a significant amount of time to disseminate countervailing security measures.  During this time, the enterprise remains vulnerable.  The challenge is to rapidly close the security gap while ensuring that the enterprise’s operations suffer as little disruption as possible.
 The answer is to apply access control security at the operating system level, enabling an access control regime that is dynamic and centrally controlled.  In principle, this is similar to what ABAC implements for enterprise resources.  In this case, however, the control takes place at the inter-process communication (IPC) level.  In practice, this means that the organization can, upon learning about a vulnerability or compromise, push out a new access control policy to all hosts.  The policy can both enable and disable specific IPC types.  The net result is that the compromised software is prevented from executing while replacement software is seamlessly enabled.
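
A minimal sketch of that mechanism, with every name invented for illustration (this mirrors the principle described above, not Kaspersky’s implementation): the central console pushes a policy revision, and a host-side hook consults it on every inter-process call.

    # Hypothetical host-side enforcement of a centrally pushed IPC policy.
    # On news of a compromise, the console pushes a revision that disables
    # the vulnerable component's IPC while enabling its replacement.
    policy = {
        ("report_service_v1", "open_socket"): "deny",   # compromised build
        ("report_service_v2", "open_socket"): "allow",  # patched replacement
    }

    def ipc_allowed(process, ipc_type):
        # Default-deny: processes the policy doesn't mention get nothing.
        return policy.get((process, ipc_type), "deny") == "allow"

    # The OS-level hook would call this on every IPC attempt:
    assert not ipc_allowed("report_service_v1", "open_socket")
    assert ipc_allowed("report_service_v2", "open_socket")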

None of these things is a panacea for the cyber-vulnerability epidemic.  However, they all represent very real, tangible steps that engineers, designers and defenders can take to mitigate the risks faced while operating in an increasingly hostile environment.  They don’t solve everything.  But, taken in concert with other measures, they create a much more agile, resilient infrastructure.


And that beats moving to New Jersey.

Monday, September 1, 2014

STUXNET: ANATOMY OF A CYBER WEAPON



This is the first of a focused two-part discussion of the threats and challenges involved with cyber security.  The exploration of cyber threats and challenges is conducted using the Stuxnet attack as a lens.  The following post picks up with an allegorical analysis of the cyber threat posed by nation-state attacks as well as ideas about how information systems can be built so that they are less tempting targets.

Stuxnet is widely described as the first cyber weapon.  In fact, Stuxnet was the culmination of an orchestrated campaign that employed an array of cyber weapons to achieve destructive effects against a specific industrial target.  This piece explores Stuxnet’s technology, its behavior and how it was used to execute a cyber-campaign against the Iranian uranium enrichment program.  This discussion will continue in a subsequent post describing an orthogonal view on the art and practice of security – one that proposes addressing security as a design-time concern with runtime impacts.

Stuxnet, discovered in June 2010, is a computer worm that was designed to attack industrial programmable logic controllers (PLC). PLCs automate electromechanical processes such as those used to control machinery on factory assembly lines, amusement park rides, or, in Stuxnet’s case, centrifuges for separating nuclear material.  Stuxnet’s impact was significant; forensic analyses conclude that it may have damaged or destroyed as many as 1,000 centrifuges at the Iranian nuclear enrichment facility located in Natanz.  Moreover, Stuxnet was not successfully contained; it has been “in the wild” and has appeared in several other countries, most notably Russia.

There are many aspects of the Stuxnet story, including who developed and deployed it and why.  While recent events seem to have definitively solved the attribution puzzle, Stuxnet’s operation and technology remain both clever and fascinating. 

A Stuxnet attack begins with a USB flash drive infected with the worm.  Why a flash drive?  Because the targeted networks are not usually connected to the internet.  These networks have an “air gap” physically separating them from the internet for security purposes.  That being said, USB drives don’t insert themselves into computers.  The essential transmission mechanism for the worm is, therefore, biological: a user.

I’m tempted to use the word “clueless” to describe such a user, but that wouldn’t be fair.  Most of us carbon-based, hominid, bipedal Terran life forms are inherently entropic – we’re hard-wired to seek the greatest return for the least amount of effort. In the case of a shiny new flash drive that’s just fallen into one’s lap, the first thing we’re inclined to do is to shove it into the nearest USB port to see what it contains.  And if that port just happens to be on your work computer, on an air-gapped network. . .well, you get the picture.

It’s now that Stuxnet goes to work, bypassing both the operating system’s (OS) inherent security measures and any anti-virus software that may be present.  Upon interrogation by the OS, it presents itself as a legitimate auto-run file.  Legitimacy, in the digital world, is conferred by means of a digital certificate.  A digital certificate (or identity certificate) is an electronic cryptographic document used to prove identity or legitimacy.  The certificate includes information about a public cryptographic key, information about its owner's identity, and the digital signature of an entity that has verified the certificate's contents are correct.  If the signature is valid, and the person or system examining the certificate trusts the signer, then it is assumed that the public cryptographic key or software signed with that key is safe for use.

Stuxnet proffers a stolen digital certificate to prove its trustworthiness.  Now vetted, the worm begins its own interrogation of the host system: Stuxnet confirms that the OS is a compatible version of Microsoft Windows and, if an anti-virus program is present, whether it is one that Stuxnet’s designers had previously compromised.  Upon receiving positive confirmation, Stuxnet downloads itself into the target computer.

It drops two files into the computer’s memory.  One of the files requests a download of the main Stuxnet archive file, while the other sets about camouflaging Stuxnet’s presence using a number of techniques, including modifying file creation and modification times to blend in with the surrounding system files and altering the Windows registry to ensure that the required Stuxnet files run on startup.  Once the archived file is downloaded, the Stuxnet worm unwraps itself to its full, executable form.

Meanwhile, the original Stuxnet infection is still on the USB flash drive.  After successfully infecting three separate computers, it commits “security suicide.”  That is, like a secret agent taking cyanide to ensure that she can’t be tortured to reveal her secrets, Stuxnet deletes itself from the flash drive to frustrate the efforts of malware analysts.

Internally to the target computer, Stuxnet has been busy.  It uses its rootkit to modify, and become part of, the OS.  Stuxnet is now indistinguishable from Windows; it’s become part of the computer’s DNA.  It’s now that Stuxnet becomes a detective, exploring the computer and looking for certain files.  Specifically, Stuxnet is looking for industrial control system (ICS) software created by Siemens called Simatic PCS7 or Step 7 running on a Siemens Simatic Field PG notebook (a Windows-based system dedicated for ICS use).

The problem facing Stuxnet at this point is that a computer can contain millions, if not tens of millions, of files and finding the right Step 7 file is a bit like looking for a needle in a haystack.  In order to systematize the search, Stuxnet needs to find a way to travel around the file system as it conducts its stealthy reconnaissance.  It does this by attaching itself to a very specific kind of process: one that is trusted at the highest levels by the OS and that looks at every single file on the computer.  Something like. . .

. . .the scan process used by anti-virus software.  In the attack on the facility in Natanz, Stuxnet compromised and used the scan processes from leading anti-virus programs.  (It’s worth noting that all of the companies whose products were compromised have long since remedied the vulnerabilities that Stuxnet exploited.)  Along the way, Stuxnet compromises every comparable process it comes across, pervading the computer’s memory and exploiting every resource available to execute the search.

All the while, Stuxnet is constantly executing housekeeping functions.  When two Stuxnet worms meet, they compare version numbers, and the earlier version deletes itself from the system.   Stuxnet also continuously evaluates its system permission and access level.  If it finds that it does not have sufficient privileges, it uses a previously unknown system vulnerability (such a thing is called a “Zero-Day,” and will be discussed below) to grant itself the highest administrative privileges and rights.    If a local area network (LAN) connection is available, Stuxnet will communicate with Stuxnet worms on other computers and exchange updates – ensuring that the entire Stuxnet cohort running within the LAN is the most virulent and capable version.   If an Internet connection is found, Stuxnet reaches back to its command and control (C2) servers and uploads information about the infected computers, including their internet protocol (IP) addresses, OS types and whether or not Step 7 software has been found.

As noted earlier, Stuxnet relied on four Zero-Day vulnerabilities to conduct its attacks.  Zero-Days are of particular interest to hacker communities: since they’re unknown, they are by definition almost impossible to defend against.  Stuxnet’s four Zero-Days included:


  • The Microsoft Windows shortcut automatic file execution vulnerability which allowed the worm to spread through removable flash drives;
  • A print spooler remote code execution vulnerability; and
  • Two different privilege escalation vulnerabilities.

Once Stuxnet finds Step 7 software, it patiently waits and listens until a connection to a PLC is made.  When Stuxnet detects the connection, it penetrates the PLC and begins to wreak all sorts of havoc.  The code controlling frequency converters is modified and Stuxnet takes control of the converter drives.  What’s of great interest is Stuxnet’s method of camouflaging its control.   

Remember the scene in Mission Impossible, Ocean’s 11 and just about every other heist movie where the spies and/or thieves insert a video clip into the surveillance system?  They’re busy emptying the vault, but the hapless guard monitoring the video feed only sees undisturbed safe contents.  Stuxnet turned this little bit of fiction into reality.  Reporting signals indicating abnormal behavior sent by the PLC are intercepted by Stuxnet and in turn signals indicating nominal, normal behavior are sent to the monitoring software on the control computer.

Stuxnet is now in the position to effect a physical attack against the gas centrifuges.  To understand the attack it’s important to understand that centrifuges work by spinning at very high speeds and that maintaining these speeds within tolerance is critical to their safe operation.  Typically, gas centrifuges used to enrich uranium operate at between 807 Hz and 1,210 Hz, with 1,064 Hz as a generally accepted standard.

Stuxnet used the infected PLCs to cause the centrifuge rotors to spin at 1,410 Hz for short periods of time over a 27-day period.  At the end of the period, Stuxnet would cause the rotor speed to drop to 2 Hz for fifty minutes at a time.  Then the cycle repeated.  The result was that over time the centrifuge rotors became unbalanced, the motors wore out and in the worst cases, the centrifuges failed violently.

Stuxnet destroyed as much as twenty percent of the Iranian uranium enrichment capacity.  There are two really fascinating lessons that can be learned from the Stuxnet story.  The first is that cyber-attacks can and will have effects in the kinetic and/or physical realm.  Power grids, water purification facilities and other utilities are prime targets for such attacks.  The second is that within the current design and implementation paradigms by which software is created and deployed, if a bad actor with the resources of a nation-state wants to ruin your cyber-day, your day is pretty much going to be ruined.

But that assumes that we maintain the current paradigm of software development and deployment.  In my next post I’ll discuss ways to break the current paradigm and the implications for agile, resilient systems that can go into harm’s way, sustain a cyber-hit and continue to perform their missions.

Wednesday, July 16, 2014

Transformation: A Future Not Slaved to the Past

In his May 30, 2014 contribution to the Washington Post’s Innovations blog, Dominic Basulto lays out a convincing argument as to how cyber-warfare represents a new form of unobserved but continuous warfare in which our partners are also our enemies.  The logic within Basulto’s piece is flawless, and his conclusion, that the “mounting cyber-war with China is nothing less than the future of war” and that “war is everywhere, and yet nowhere because it is completely digital, existing only in the ether” is particularly powerful. 

Unfortunately, the argument, and its powerful conclusion, ultimately fails.  Not because of errors in the internal logic, but rather because of the implicit external premise that both the architecture of the internet and the processes by which software is developed and deployed are, like the laws of physics, immutable.  From a security perspective, the piece portrays a world where security technology and those charged with its development, deployment and use are perpetually one step behind the attackers who can, will and do use vulnerabilities in both architecture and process to spy, steal and destroy.

It’s a world that is, fortunately, more one of willful science fiction than of predetermined technological fate.  We live in an interesting age.  There are cyber threats everywhere, to be sure.  But our ability to craft a safe, stable and secure cyber environment is very much a matter of choice.  From a security perspective, the next page is unwritten and we get to decide what it says, no matter how disruptive.

As we begin to write, let’s start with some broadly-agreed givens: 

  • There’s nothing magical about cyber security;
  • There are no silver bullets; and
  • Solutions leading to a secure common, distributed computing environment demand investments of time and resources. 

Let’s also be both thoughtful and careful before we allow pen to touch paper.  What we don’t want to do is perpetuate outdated assumptions at the expense of innovative thought and execution.  For example, there’s a common assumption in the information technology (IT) industry in general and the security industry (ITSec) in particular that mirrors the flaw in Basulto’s fundamental premise: that new security solutions must be applied to computing and internet architectures comparable or identical to those that exist today.  The premise behind this idea, that “what is, is what must be,” is the driver behind the continued proliferation of insecure infrastructures and compromisable computing platforms.

There’s nothing quixotic – or new – about seeking disruptive change.  “Transformation” has been a buzzword in industry and government for at least a decade.  For example, the North Atlantic Treaty Organization (NATO) has had a command dedicated to just that since 2003.  The “Allied Command Transformation” is responsible for leading the military transformation of forces and capabilities, using new concepts and doctrines in order to improve NATO’s military effectiveness.  Unfortunately, many transformation efforts are diverse and fragmented, and yield few tangible benefits.  Fortunately, within the rubric of cyber security, it’s possible to focus on a relatively small number of transformational efforts.

Let’s look at just four examples.  While not a panacea, implementation of these four would have a very significant, ameliorating impact on the state of global cyber vulnerability.

1. Security as part of the development process

Software security vulnerabilities are essentially flaws in the delivered product.  These flaws are, with rare exception, inadvertent.  Often they are undetectable to the end user.  That is, while the software may fulfill all of its functional requirements, there may be hidden flaws in non-functional requirements such as interoperability, performance or security.  It is these flaws, or vulnerabilities, that are exploited by hackers.

In large part, software vulnerabilities derive from traditional software development lifecycles (SDLC) which either fail to emphasize non-functional requirements, use a waterfall model where testing is pushed to the end of the cycle, don’t have a clear set of required best coding practices, are lacking in code reviews or some combination of the four.  These shortcomings are systemic in nature, and are not a factor of developer skill level.  Addressing them requires a paradigm shift.

The DevOps Platform-as-a-Service (PaaS) represents such a shift.  A cloud-based DevOps PaaS enables a project owner to centrally define the nature of a development environment, eliminating unexpected differences between development, test and operational environments.  Critically, the DevOps PaaS also enables the project owner to define continuous test/continuous integration patterns that push the onus of meeting non-functional requirements back to the developer. 

In a nutshell, both functional and non-functional requirements are instantiated as software tests.  When a developer attempts to check a new or modified module into the version control system, a number of processes are executed.  First, the module is vetted against the test regime.  Failures are noted and logged, and the module’s promotion along the SDLC stops at that point.  The developer is notified as to which tests failed, which parts of the software are flawed and the nature of the flaws.  Assuming the module tests successfully, it is automatically integrated into the project trunk and the version incremented.
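
To make that concrete, here is a hedged sketch of such a check-in gate, assuming a pytest-based test regime and a git trunk; the directory names, branch handling and commands are illustrative rather than any particular platform’s pipeline.

    import subprocess
    import sys

    def gate_checkin(module_branch):
        """Vet a candidate module against the full test regime; promote
        it to the trunk only if every functional and non-functional test
        passes.  Branch and path names are illustrative."""
        # Run the complete battery; pytest exits non-zero on any failure.
        result = subprocess.run(
            ["pytest", "tests/functional", "tests/nonfunctional",
             "--junitxml=results.xml"],
            capture_output=True, text=True)
        if result.returncode != 0:
            # Promotion stops here; the developer gets the failure detail.
            print("Check-in rejected; failing tests logged to results.xml")
            print(result.stdout)
            return False
        # All tests passed: merge into the trunk (version incremented by
        # the surrounding tooling).
        subprocess.run(["git", "checkout", "trunk"], check=True)
        subprocess.run(["git", "merge", "--no-ff", module_branch], check=True)
        return True

    if __name__ == "__main__":
        sys.exit(0 if gate_checkin(sys.argv[1]) else 1)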

A procedural benefit of a DevOps approach is that requirements are continually reviewed, reevaluated, and refined.  While this is essential to managing and adapting to change, it has the additional benefits of fleshing out requirements that are initially not well understood and identifying previously obscured non-functional requirements.  In the end, requirements trump process; if you don’t have all your requirements specified, DevOps will only help so much.

The net result is that a significantly larger percentage of flaws are identified and remedied during development.  More importantly, flaw/vulnerability identification takes place across the functional – non-functional requirements spectrum.  Consequently, the number of vulnerabilities in delivered software products can be expected to drop.

2. Ubiquitous encryption that preserves confidentiality and enhances regulability

For consumers, and many enterprises, encryption is an added layer of security that requires an additional level of effort.  Human nature being what it is, the results of the calculus are generally that a lower level of effort is more valuable than an intangible security benefit.  Cyber-criminals (and intelligence agencies) bank on this.  What if this paradigm could be inverted such that encryption became the norm rather than the exception?

Encryption technologies offer the twin benefits of 1) preserving the confidentiality of communications and 2) providing a unique (and difficult to forge) means for a user to identify herself.  The confidentiality benefit is self-evident:  Encrypted communications can be seen and used only by those who have the necessary key.  Abusing those communications requires significantly more work on an attacker’s part.

The identification benefit ensures that all users of (and on) a particular service or network are identifiable via the possession and use of a unique credential.  This isn’t new or draconian.  For example, (legal) users of public thoroughfares must acquire a unique credential issued by the state:  a driver’s license.  The issuance of such credentials is dependent on the user’s provision of strong proof of identity (such as, in the case of a driver’s license, a birth certificate, passport or social security card). The encryption-based equivalent to a driver’s license, a digital signature, could be a required element, used to positively authenticate users before access to any electronic resources is granted. 

From a security perspective, a unique authentication credential provides the ability to tie actions taken by a particular entity to a particular person.  As a result, the ability to regulate illegal behavior increases while the ability to anonymously engage in such behavior is concomitantly curtailed.
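
As a sketch of the underlying mechanism (leaving aside the certificate authority that binds a key pair to a vetted identity), the Ed25519 primitives in the Python cryptography library show how possession of a private key yields signatures that anyone holding the public key can verify but no one can forge.  The challenge string is invented for the example.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )
    from cryptography.exceptions import InvalidSignature

    # The user's unique credential: a private key only she holds.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Signing a login challenge proves possession of the credential.
    challenge = b"service-login-nonce-1d4f"
    signature = private_key.sign(challenge)

    # The service verifies against the enrolled public key; verify()
    # raises InvalidSignature if the signature or message was altered.
    try:
        public_key.verify(signature, challenge)
        print("authenticated: actions attributable to this credential")
    except InvalidSignature:
        print("verification failed: access denied")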

3.  Attribute-based authorization management at both the OS and application levels

Here’s a hypothetical.  Imagine that you own a hotel.  Now imagine that you’ve put an impressive and effective security fence around the hotel, with a single locking entry point, guarded by a particularly frightening Terminator-like entity with the ability to make unerring access control decisions based on the credentials proffered by putative guests.  Now imagine that the lock on the entry point is the only lock in the hotel.  Every other room on the property can be entered simply by turning the doorknob. 

The word “crazy” might be among the adjectives used to describe the scenario above.  Despite that characterization, this type of authentication-only security is routinely practiced on critical systems in both the public and private sectors.  Not only does it fail to mitigate the insider threat, but it is also antithetical to the basic information security principle of defense in depth.  Once inside the authentication perimeter, an attacker can go anywhere and do anything.

A solution that is rapidly gaining momentum at the application layer is the employment of attribute-based access control (ABAC) technologies based on the eXtensible Access Control Markup Language (XACML) standard.  In an ABAC implementation, every attempt by a user to access a resource is stopped and evaluated against a centrally stored (and controlling) access control policy relevant to both the requested resource and the nature – or attributes – a user is required to have in order to access the resource.  Access requests from users whose attributes match the policy requirements go through; those that do not are blocked.

A similar solution can be applied at the operating system level to allow or block read/write attempts across inter-process communications (IPC) based on policies matching the attributes of the initiating process and the target.  One example, known as Secure OS, is under development by Kaspersky Lab.  At either level, exploiting a system that implements ABAC is significantly more difficult for an attacker and helps to buy down the risk of operating in a hostile environment.

4.  Routine continuous assessment and monitoring on networks and systems


It’s not uncommon for attackers, once a system has been compromised, to exfiltrate large amounts of sensitive data over an extended period.  Often, this activity presents as routine system and network activity.  As it’s considered to be “normal,” security canaries aren’t alerted and the attack proceeds unimpeded. 

Part of the problem is that the quantification of system activity is generally binary. That is, it’s either up or it’s down.  And, while this is important in terms of knowing what capabilities are available to an enterprise at any given time, it doesn’t provide actionable intelligence as to how the system is being used (or abused) at any given time.  Fortunately, it’s essentially a Big Data problem, and Big Data tools and solutions are well understood. 

The solution comprises two discrete components.  First, an ongoing data collection and analysis activity is used to establish a baseline for normal user behavior, network loading, throughput and other metrics.  Once the baseline is established, collection activity is maintained, and the collected behavioral metrics are evaluated against the baseline on a continual basis.  Deviations from the norm that exceed a specified tolerance are reported, trigger automated defensive activity, or both.
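
A minimal sketch of both components, with the metric, history and tolerance invented for illustration: a baseline of mean and standard deviation is learned from historical samples, and each new observation is scored against it.

    import statistics

    def learn_baseline(samples):
        """Component 1: establish normal behavior from historical metrics
        (here, hypothetical hourly outbound megabytes for one host)."""
        return statistics.mean(samples), statistics.stdev(samples)

    def check(observation, baseline, tolerance=3.0):
        """Component 2: continually evaluate new observations; deviations
        beyond 'tolerance' standard deviations are reported."""
        mean, stdev = baseline
        return abs(observation - mean) / stdev > tolerance

    # Hypothetical outbound-traffic history for one host (MB per hour).
    history = [48, 52, 50, 47, 55, 51, 49, 53, 50, 52]
    baseline = learn_baseline(history)

    # 54 MB looks routine; a sustained 340 MB is the slow exfiltration
    # that binary up/down monitoring would never notice.
    print(check(54, baseline))   # False: within tolerance
    print(check(340, baseline))  # True: report and/or trigger defenses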

Conclusion

To reiterate, these measures do not comprise a panacea.  Instead, they represent a change, a paradigm shift in the way computing and the internet are conceived, architected and deployed that offers the promise of a significant increase in security and stability.  More importantly, they represent a series of choices in how we implement and control our cyber environment.  The future, contrary to Basulto’s assumption, isn’t slaved to the past.