Monday, January 26, 2015

Treasures from the East: How Funded Meritocracy Can Change Government and Cybersecurity

For centuries, the trade winds, or Trades, have been the means by which the bounty of the East has enriched the West.  These riches were often tangible items such as precious metals, textiles, works of art and gemstones.  The most enduring itinerant wealth, however, has been ideas that fundamentally altered Western concepts of technology, law, government and education.  The westward migration of knowledge continues today, and may hold the key to economic innovation and a safer, more secure cyberspace.

There’s a long history of progressive ideas emanating from the East.  For example, Western ideas of governance by a professional civil service are Chinese in origin.  The concept of a civil service meritocracy originated in China in 207 BCE.  Prior to that, official appointments were based on aristocratic recommendations, and the majority of bureaucrats were titled peers.  As the empire grew and nepotism became rampant, the system broke down and government became increasingly inefficient and ineffective. 

The solution was the “Imperial Examination,” a sweeping testing system designed to select the best and brightest candidates for civil service.  Initiated in the Han dynasty, this system of examination and appointment became the primary path to state employment during the Tang dynasty (618 – 907 CE), and remained so until 1905. 

The examination curriculum ultimately covered five areas of study: military strategy, civil law, revenue and taxation, agriculture and geography, and the Confucian classics.  There were five testing levels, each increasing in scope and difficulty.  This hierarchy was intended to match candidates to increasing levels of responsibility, from local and prefecture appointments up through provincial, national and court-level posts.  This examination is regarded by historians as the first merit-based, standardized government occupational testing mechanism.

Unfortunately, innovative ideas for government travel less rapidly than the Trades.  More than a millennium passed before a comparable civil service meritocracy was implemented in the United States.  The Pendleton Civil Service Reform Act was passed in 1883 in response to the assassination of President Garfield by a civil service applicant, Charles Guiteau, who had been rejected under the previous patronage (or spoils) system.  The Act required applicants to pass an exam in order to be eligible for civil service jobs.  It also afforded protections against retaliatory, partisan or political dismissal, freeing civil servants from the influences of political patronage.  As a result (so the theory went), civil servants would be selected based on merit and the career service would operate in a politically neutral manner.

Another critically relevant Eastern innovation addresses the creation, nurturing and maintenance of an innovative, technically astute cyber workforce.  The Israeli Talpiot program is one of the most successful examples of national investment in cyber education and training in the world.

In the 1973 Yom Kippur War, Israeli forces were surprised by Egyptian use of sophisticated technology, including surface-to-air missiles and guided anti-tank missiles.  In response, the Israeli government set out to ensure that its forces would have decisive technological superiority in all future conflicts.  In 1979, Israel implemented Talpiot.  Talpiot creates a synergy between the Israeli national defense infrastructure and the country’s most prestigious universities, producing an elite talent pool dedicated to the most pressing security technology issues.

Approximately 50 students (out of a candidate pool of 3,000 to 5,000) who demonstrate excellence in the natural sciences and mathematics are selected for Talpiot annually.  Their university tuition and fees are sponsored by the Israel Defense Force (IDF) (specifically the Israeli Air Force (IAF)) and they graduate with an officer’s commission.

The Talpiot application process begins after the equivalent of junior year in high school.  After an initial down-selection, candidates are tested on basic knowledge as well as reasoning and analytic abilities.  Applicants who pass these tests then go through advanced screening.

Successful applicants enter a three-year training cycle, which accounts for the three years of mandatory military service required of Israeli citizens.  While university classes are in session, they pursue academic studies.  Military training takes place during the rest of the year.  Upon graduation and commissioning, the candidates spend an additional six years in regular IDF units where they assume senior roles in organizations dedicated to technical research and development (R&D). 

Talpiot builds on a unique, three-part curriculum that features academic, military and ethical cores.  The academic core is based around a bachelor’s degree in physics and mathematics.  The military core includes combat and specialized military occupational professional training, projects emphasizing both basic and applied research, and a thorough grounding in the Israeli technology and defense establishment.  The ethical core stresses Israeli culture, geography and history, leadership, the IDF mission, and core IAF and IDF values.

The academic core is rigorous.  Graduates earn a Bachelor of Science degree from the Faculty of Mathematics and Natural Sciences at the Hebrew University.  The course of study centers on physics, augmented with mathematics and computer science. 

Upon completion of their studies, Talpiot graduates take positions with operational technology development organizations within the IDF.  In these roles, they conduct advanced technology research, develop advanced weapons, design algorithms and computer applications, or conduct systems analysis.  While most Talpiot alumni serve in R&D units, there is an operational option available.  Those choosing this option serve in army ground combat units, on naval ships and submarines, or as air force pilots.  After approximately three years of service in operational units, graduates are assigned to R&D organizations where they contribute the perspectives and insight gained in the field to the R&D effort.

Through Talpiot, Israel has gained a well-trained, highly competent cadre of technical specialists conducting R&D that is extraordinarily responsive to national security needs.  Talpiot research leads to rapid fielding of advanced technical solutions to both physical and cybersecurity problems.  Talpiot alumni are actively courted by global venture capital firms and have created a significant number of successful technology startups that have benefitted both the Israeli and global economies.

As with ideas of a civil service meritocracy, notions of state-sponsored training and education of a technocratic elite to meet public and private sector needs have moved west.  On January 1st, 2015, British national morning newspaper The Independent reported on ideas coming out of Whitehall and Government Communications Headquarters (GCHQ) (the UK’s counterpart to the US National Security Agency (NSA)). 

Impressed by Talpiot’s success in the defense and commercial sectors, the British government is seeking to emulate Talpiot with a variant of the successful Teach First program (itself an offshoot of the Teach for America program in the United States).  The program, informally known as “Spook First,” aims to convince promising young university graduates to work for and with GCHQ to develop new technologies that can be transitioned to the commercial sector, driving economic growth.  After a two-year commitment, program alumni would be free to move to the private sector.

Unfortunately, something appears to have been garbled in translation as the Talpiot concept moved west.  Britain’s best and brightest technical graduates, already courted by prospective employers, have little incentive to take a relatively low-paying government position.  The British proposal does not cover a candidate’s educational expenses, which are not trivial.  University tuition in the UK averages approximately $14,000 per year.  And that’s without accounting for the cost of room, board, books and other living expenses.  Given that the UK does not have mandatory military service, there is no service obligation to channel quality candidates into the Spook First program.  From a student’s perspective, Spook First just doesn’t add up.

 The keys to Talpiot’s success are clear: 
  •  a fully funded world-class education;
  •  leadership positions in exciting, relevant technical R&D organizations; and
  •  a high probability of venture capital funding for technology startups.

 The translation in Britain yields:  “We’ll let you play with us, in a low-paying job, after you’ve paid your own way.”  This does not set the UK up for success.


And what of the United States?  Innovation is part of the American DNA.  Unfortunately, so are skyrocketing education costs (the average cost of attending a private university in the US is $32,000 per year), a national critical infrastructure that is increasingly vulnerable to cyber-attack and a desperate shortage of qualified cybersecurity professionals.  Given this perfect storm, bringing Talpiot even further west in a way that both replicates all of its key components and applies them in a uniquely American way makes a great deal of sense.  Providing qualified students a means to afford higher education and a mechanism to translate drive and innovation into private sector growth is a winning proposition for students, the economy and the nation.

Tuesday, December 2, 2014

Air Power, Big Cyber and the Coming Collapse

Current cybersecurity organizational and operational topologies were presaged a century ago by proponents of then-new aviation technology.  Aviation proponents like Giulio Douhet and Billy Mitchell preached a gospel declaring the airplane to be an operational panacea that would fundamentally change armed conflict.  History has proved them wrong.  Aviation is now a valuable arrow in the combined arms quiver that dominates the battle space, but it does not stand alone.

Contemporary Cyber-Douhets proselytize a doctrine of cybersecurity independence.  They claim that current and coming crops of large, expensive cybersecurity programs will tame a hostile cyberspace.  As with Douhet and Mitchell, history is likely to be unkind to these cyber-pundits.  Unfortunately, bursting this bubble of misplaced expectations will require an extended, painful and costly process.  From this, however, it is likely that an integrated and effective cybersecurity doctrine will emerge.

The Vision of Air Power

Giulio Douhet was a visionary.  Born in 1869, Douhet was commissioned into the Italian Army and went on to a tumultuous career.  While serving on the General Staff shortly after the turn of the century, he became an air power evangelist, advocating for the creation of a separate air arm commanded by aviators.  Appalled by the Italian Army’s shocking reverses at the start of the First World War, Douhet publicly criticized military leadership and demanded an air power solution.  In 1921, Douhet was promoted to general and published his seminal work on aerial warfare, The Command of the Air.

The Command of the Air argued that air power was revolutionary, rendering conventional armies superfluous.  Forces on the ground would be overflown and population centers, military installations and government centers would be attacked with impunity.  As a result, industry, the transportation and communications infrastructure, government and the “will of the people” would be disrupted and the war won through the aviators’ efforts.

Douhet’s American contemporary, Billy Mitchell, was also a visionary.  Like Douhet, Mitchell came away from the First World War believing that air power would dominate warfare, and that strategic bombardment would become the primary threat to the nation.  Mitchell was vocal about his views on air power, and vociferously attacked both the Navy and the War Department for having myopic views on the employment of aerial assets.  In 1925, influenced by Douhet, Mitchell published his own book on the subject, Winged Defense.

Douhet Discredited

Less than 20 years later, the vision of irresistible and dominating strategic airpower shared by Douhet and Mitchell was weighed and found wanting.  Between 1939 and 1945, American and British bombers dropped almost 1.6 million tons of bombs on Germany, 914,637 tons of them in 1944 alone.  Despite this, 1944 was the year German military production peaked.  Independent the Allied air forces might have been, but strategically decisive they were not.

In contrast, the effectiveness of tactical air power, in which air forces were integrated and operated in direct support of ground operations, was far greater.  The German air force, the Luftwaffe, shattered Polish forces in 1939 at points where German ground forces had been fought to a standstill.  Similarly, the XIX Tactical Air Command, integrated with and in support of General Patton’s Third Army, damaged or destroyed 24,634 ground targets in a single month (April 1945).

This pattern was to repeat in subsequent wars.  The American experience in Vietnam reinforced the shortcomings of air power acting independently, demonstrating that strategic bombing is often ineffective even when conducted by modern air forces against weak foes.  In contrast, during Operation Linebacker I in the spring of 1972, US Navy and Air Force aircraft were instrumental in the defeat of a massive, conventional North Vietnamese offensive.  US air power destroyed North Vietnamese units and effectively interdicted their supply lines, resulting in the North’s decisive defeat.

Since Vietnam, air power integrated with land and sea operations has been extraordinarily effective in conflicts ranging from Grenada to the Balkans, both Gulf Wars and Afghanistan.  In fact, the only conflict in recent history where air power seems less than completely effective is the ongoing campaign against the Islamic State of Iraq and Syria (ISIS).  The distinction?  The ISIS campaign is being conducted by air forces alone, and not by an integrated combat team.

Big Cyber

While Douhet’s vision has been mooted by the realities of the physical battlespace, it has been resurrected in cyberspace with a de facto doctrine that separates cybersecurity operations from an organization’s business activities.  This doctrine, referred to here as “Big Cyber,” is marked by several characteristics, including:

  • The establishment of discrete agencies or business units with exclusive responsibility for the cybersecurity of the larger community to which they belong;
  • Centralized funding, planning and execution of cybersecurity activities that occur parallel to business activities;
  • Minimal or no declarative authority with respect to the cybersecurity posture of the larger community; and
  • A mandate placing great emphasis on the perimetric security of existing vulnerable systems with little emphasis on the development of secure, resilient systems.

Funding lines reflect the dominance of Big Cyber.  In fiscal year (FY) 2014, US Cyber Command’s budget increased to $447 million, more than double FY 2013’s $191 million.  At the same time, the Department of Homeland Security (DHS) cybersecurity operations budget was increased by $35.5 million to a total of $792 million.  Security budgets at companies with more than $100 million in revenues increased by an average of five percent in 2014, while in the healthcare sector cybersecurity spending rocketed by almost 67 percent.

Perhaps more important than the total amount of funding is how the funds are allocated.  The number of personnel allocated to security organizations is growing.  The US Department of Defense (DoD) initially forecast a need for 6,200 additional personnel to support its cyber mission.  Now, DoD anticipates an even greater requirement.  Plans for additional personnel in both the public and private sectors are, almost uniformly, to assign them to dedicated cyber organizations, and not to business units in direct support of the larger organization’s business goals.  Even more telling is the nature of the acquisition programs being funded.  With rare exception, in both the public and private sectors, they focus on developing monolithic, centralized security mechanisms.  While these programs may generate significant new capabilities, there is often no requirement for their adoption by operating or business units.

Put another way, organizations exist to pursue their business activities.  Finance companies exist to profit by managing money.  Pharmaceutical companies exist to profit through the development and sale of drugs.  The military exists to successfully engage, defeat and destroy the enemy in defense of national interests.  The careful reader will note that none of these descriptions used the word “cybersecurity.”  That’s because cybersecurity is a support function intended to enable the primary business activity.  When cybersecurity is perceived as an imposition or an external mandate competing with organizational business goals, it will be summarily discarded and the organization will remain vulnerable.

The Coming Collapse, and Why It’s a Good Thing

Because of this, it can confidently be predicted that Big Cyber will remain ineffective in mitigating the risks inherent to a hostile cyberspace and that it will inevitably collapse under its own weight.

Other than the significant waste of resources, that’s not entirely a bad thing.  As with air power, the collective understanding of cybersecurity is constantly evolving.  An understanding of air power’s evolution allows for the development of a “theory of historical enabler integration.”  Under this theory, an operational enabler’s effectiveness is proportional to the degree to which it is integrated with the organization’s business activities over time.

The Second World War tested (at great expense) and disproved Douhet’s theory of an independent air arm that could determine the course of a conflict in its own right.  At the same time, air power embarked upon tighter integration with maneuver forces and, in that role, became increasingly effective.  The development of small, inexpensive armed drones that can be deployed at the tactical level and operated by junior personnel to provide organic air support is simply the latest instance of “historical enabler integration.”

Big Cyber is today where air power was in 1943.  Huge and heroic efforts are being made to bring about operational cybersecurity through the use of independent solutions that operate in parallel to business activities.  It’s likely that over time, empirical operational data will dictate that cybersecurity personnel and capabilities become more tightly integrated with the business activities they are intended to protect.  Cybersecurity solutions will be as prolific and as effective as productivity software is today. And cyber knowledge and expertise will be as common as the knowledge required to configure a smart phone.

Big Cyber, like Douhet’s views captured in The Command of the Air, should be applauded as a necessary phase in the development of an effective, integrated solution.  And, as with The Command of the Air, its discrediting and collapse will be cause for celebration.

Sunday, October 26, 2014

If the Vulnerability of Our National Critical Infrastructure to Cyber-attack Keeps You Up at Night. . .You're Not Alone.

The vulnerability of our national critical infrastructure to cyber-attack is a serious matter that demands attention from industry and elected leadership.  However, if any meaningful change is going to take place, it must be demanded and supported by all stakeholders.  Please join me in Washington, DC on Tuesday, October 28th, 2014 to discuss the vulnerabilities faced by the electrical grid and to explore – with your assistance and involvement – the way ahead to a safer, cyber-resilient national critical infrastructure.  For more information please see:  www.kasperskygovforum.com

Quoted in the Columbia, South Carolina newspaper The State on October 24, 2014, American University history professor Allan Lichtman characterized the national response to the current Ebola outbreak:

“When caught unprepared in a crisis, Americans have a tendency to see things in apocalyptic terms. . . It may not be a uniquely American trait, but it’s one that appears we’re particularly conditioned to and bound to repeat.”
“We are a people of plenty. We’re the richest nation on Earth. . . We have unparalleled prosperity, yet we have this anxiety that it may not last. This external force over which we don’t seem to have any control can cut at the heart of American contentment and prosperity.”
Regardless of how extreme the American reaction to the Ebola outbreak is, or whether it’s warranted, it’s impossible to deny that local, state and federal governments are taking measures to deal with a real and present threat.  These measures, however, are inherently reactive, coming into force after the danger materialized on American shores.

In the case of a dangerous communicable disease, a reactive approach may be sufficient; time will tell.   By the time the public or private sector is able to react to a successful cyber-attack on our national critical infrastructure, it will be too late.  The damage will already be done and the effects will be catastrophic, wide-spread, and long-lasting.  Imagine tens of millions without heat, light, fuel or purified water during winter.  Imagine an inability to transport or distribute food and other necessities to and within large urban areas for months at a time.

Feeling uneasy?  Concerned?  A little worried around the edges?

If you are, you’re not alone.  There’s growing awareness of the perfect storm of vulnerabilities inherent to the American national critical infrastructure.  It results from the combination of a thoroughly interconnected society, a long-standing emphasis on safety and reliability (often to the detriment of security) within industrial control systems (ICS) and a commercial software development model that routinely incorporates (and touts!) post-deployment security and vulnerability patching.

Fortunately, as we become more aware of our vulnerabilities, we are also becoming motivated to discover and implement solutions that address them.  These range from policy initiatives designed to degrade, reduce and eventually remove the domains and service providers from which attacks and malware emanate, to the development and implementation of new technologies, systems and networks that both render conventional attacks less effective and create resilient systems that can continue to operate in spite of an attack.

Securing the resources necessary to implement these solutions will require broad, grass-roots awareness of and enfranchisement in both the vulnerability and the path to a solution.

To help raise this awareness, Kaspersky Government Security Solutions, Inc. (KGSS), in cooperation with its sponsors and partners, is hosting the 2nd annual Kaspersky Government Cybersecurity Forum in Washington, DC on Tuesday, October 28th, 2014.  The event, which will be held at the Ronald Reagan Building and International Trade Center, is open to all at no cost.  Additionally, attendees who hold PMP, CSEP, ASEP and/or CISSP certifications may use conference participation to claim required continuing education credits toward those certifications.
For more information, please see:  www.kasperskygovforum.com.

Thanks, and I hope to see you there!

Tuesday, October 14, 2014

Securing Cyberspace with Lessons Learned from Civil War Medicine

Those who cannot remember the past are condemned to repeat it. – George Santayana

The total number of Union dead during the Civil War ranges from 360,222 (if you accept the Fox-Livermore estimates from the late 19th century) to the (approximately) 437,006 estimated by Dr. David Hacker in late 2011.  Predictably, many historians struggle with and debate the revision to more than a century’s worth of settled fact.  What is not being debated is that approximately two-thirds of those who perished (between 240,148 and 291,337 men) fell not to Confederate bullets, bayonets, sabers or shells but to disease. 

In historical retrospect, this is surprising as huge investments, and concomitant advances, were made in medicine during the war.  Indeed, when the war began, the United States Army Medical Corps numbered just 87 men – and promotion was based strictly on seniority, not on merit.  By the cessation of hostilities in 1865, more than eleven thousand doctors had served.

Union medical care improved dramatically during 1862.  By the end of the year each regiment was being regularly supplied with a standard set of medical supplies and had an integral medical staff.  In January 1863, division level hospitals were established, serving as a rendezvous point for transports to the general hospitals.

By 1865, there were 204 Union general hospitals, capable of treating 136,894 patients.  General hospitals were designed to accommodate massive numbers of wounded and sick men.  They were built as pavilions with separate buildings, where thousands of patients could be sheltered.  Each building was its own ward, with high vaulted ceilings and large air vents, accommodating about 60 patients.  Ultimately over a million men were treated in the general hospitals, and the collective fatality rate was below 10%. 

Given the rapid development of this tremendous medical capability, the fact that hundreds of thousands of Union soldiers succumbed to disease during the war seems counterintuitive.  However, even a cursory look at what passed for field sanitation is illuminating. 

Soldiers rarely bathed, and the same pots that were used for cooking were also used to boil clothing to remove lice.  Regulations about camp sanitation and overcrowding were ignored.  Each company was supposed to have a field latrine.  Some regiments dug no latrines.  In other cases the men went off into open spaces around the edge of the camp.  Inevitable infestations of flies followed, as did the diseases and bacteria they spread to both men and rations.

The Army diet was high in calories and low in vitamins.  Fruits and fresh vegetables were notable by their absence.  The food portion of the ration was fresh or preserved beef, salt pork, navy beans, coffee and hardtack: large, thick crackers, usually stale and often inhabited by weevils.  Preparation was as bad as the food itself: hasty, undercooked and almost always fried. 

And so, despite substantial investments in very large, very visible medical programs, huge numbers of Union soldiers died of disease.  Why?  Because these programs were inherently reactive, responding to, but never alleviating, the root cause of the problem: the unhealthful lifestyle of the individual soldier in the field.

For those in the burgeoning cybersecurity industry, and especially those who work at the nexus of the public and private sectors, Santayana’s words ring especially true.  Much like the Army Medical Corps in the early 1860s, the industry faces a crisis of epic proportions.  The number of cyber-attacks mounted on an hourly basis against government departments and agencies as well as their contractors and the national critical infrastructure is staggering.  Reports of data breaches suffered by major retailers, banks or manufacturers are a weekly, if not daily, occurrence.  There’s an ongoing information hemorrhage flowing out through porous perimeters, and effective countermeasures remain elusive. 

This isn’t to say that large, visible efforts and investments aren’t being made.  There are eighteen sector-specific Information Sharing and Analysis Centers (known as ISACs), established pursuant to Presidential Decision Directive 63, whose ostensible purpose is to promote risk mitigation, incident response, alert and information sharing within the discrete national critical infrastructure sectors.  Similarly, the United States Computer Emergency Readiness Team (US-CERT) was created in 2003 by the Department of Homeland Security (DHS) to analyze and reduce cyber threats and vulnerabilities, disseminate cyber threat warning information and coordinate incident response activities.

These efforts pale, however, when compared to two ongoing programs intended to secure civilian government networks and systems.  The Continuous Diagnostics and Mitigation (CDM) program is intended to provide capabilities and tools that identify, prioritize and mitigate cybersecurity risks on an ongoing basis.  The Development, Operations and Maintenance services in support of the National Cybersecurity Protection System, or DOMino, program is intended to continue DHS efforts to protect the federal .gov domain with an intrusion detection system that monitors the network gateways of government departments and agencies for unauthorized traffic and malicious activity.  As an example of the magnitude of resources allocated to these programs, CDM has a program ceiling of $6 billion. 

Acquisitions resourcing is matched by policy efforts.  In February 2014, the National Institute of Standards and Technology (NIST) released the first version of its Framework for Improving Critical Infrastructure Cybersecurity.  The widely touted document, a collaborative effort of a consortium of industry and government partners, provides standards, guidelines and practices to promote the protection of critical infrastructure.

The Department of Defense (DoD) has also overhauled its cybersecurity policies and guidance so as to be more responsive to the ongoing cybersecurity emergency.  In March 2014, the DoD declared its information assurance mechanism (the Defense Information Assurance Certification and Accreditation Process, or DIACAP) obsolete and replaced it with a set of policies and guidance called the "Risk Management Framework (RMF) for DoD Information Technology (IT)."  The RMF, which aligns with the NIST RMF, is intended to address IT security risks throughout the IT life cycle.
 
All of these programs are important, necessary and, from a purely parochial cybersecurity perspective, very welcome.  However, they also represent the same sort of top-down, reactive approach to security that the Army Medical Corps displayed with respect to soldiers’ health during the Civil War.  That is not to say that this sort of approach is incorrect, but rather that it does not form the basis for a complete solution to the problem.  A complete solution requires concurrent, systemic applications of both top-down and bottom-up approaches.

This was recognized by the military healthcare community, and critical changes were put into place with respect to both the individual soldier’s hygiene and sanitation in the field and the overall military medical system.  As a result, while there were 62 deaths from disease per 1,000 Union soldiers (using the Fox-Livermore statistics) during the Civil War, the number dropped to 25.6 per 1,000 in the Spanish-American War, and 16.5 in the First World War.  By the Second World War, less than one American soldier per 1,000 died from disease.

The systemic machinery of government information technology is already responding to the cybersecurity epidemic.  If the overall cybersecurity treatment is to be effective, comparable changes and improvements must be made to the cyber-hygiene requirements at both the operational user and acquisitions program levels.  More precisely, just as compliance with the high-level, top-down security requirements is required for a program to gain or maintain authority to operate on a government network, compliance with low-level implementation guidelines should be required as well.

The good news is that most of these changes are readily implemented, and not matters of breakthrough research.  A non-exhaustive listing of a few examples:

  •  Assume that a breach is not a matter of if but of when, and design all systems to continue to operate effectively despite the presence of attackers.
  •  Encrypt everything.  This includes data at rest, data in transit and data in use.  This way, even if an attacker gains access to protected system resources, they will be of little or no value upon exfiltration, thus maintaining confidentiality despite a breach.  Additionally, they will be difficult if not impossible to alter, thus maintaining data integrity.  (A brief sketch of what this looks like in practice follows this list.)
  •  Implement comprehensive and fine-grained authorization management to ensure that the principle of least privilege is automatically implemented and maintained.  The open standard for the implementation of attribute-based access control, the eXtensible Access Control Markup Language, or XACML, was first published in 2003, and there is a wide array of tools from which to choose when implementing this capability.
  •  Ensure that email traffic is subjected not only to in-line spam filtration, but also to psycholinguistic analysis intended to determine the degree to which a communication is deceptive.
  •  Require that all personnel receive mandatory training on good cyber hygiene and that continued compliance with cyber-hygiene standards is part of an annual or semi-annual performance evaluation.
  •  Partner with industry to ensure a constant influx of innovative ideas.
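
To make the “encrypt everything” item concrete, here is a minimal sketch of authenticated encryption for data at rest, written in Python against the widely used cryptography library.  The record contents and key handling are illustrative assumptions; a production system would keep keys in a hardware security module or key management service, never alongside the data.

    # Minimal sketch: authenticated encryption (AES-256-GCM) for data
    # at rest.  Requires: pip install cryptography.  Key handling here
    # is illustrative only -- real systems use an HSM or KMS.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_record(key: bytes, plaintext: bytes, context: bytes) -> bytes:
        """Encrypt and authenticate a record; 'context' binds metadata."""
        nonce = os.urandom(12)                 # must be unique per message
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, context)
        return nonce + ciphertext              # store the nonce with the data

    def decrypt_record(key: bytes, blob: bytes, context: bytes) -> bytes:
        """Raises InvalidTag if the data or its context was tampered with."""
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, context)

    key = AESGCM.generate_key(bit_length=256)
    blob = encrypt_record(key, b"exfiltrate me if you can", b"records-db/v1")
    assert decrypt_record(key, blob, b"records-db/v1") == b"exfiltrate me if you can"

Because GCM is authenticated encryption, the same construction delivers both properties claimed above: stolen ciphertext is useless without the key, and any alteration is detected on decryption.
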
It’s often said that government is only capable of broad, systemic action requiring years to develop and many more years to implement.  With respect to the current hostile state of cyberspace, the luxury of time simply doesn’t exist.  However, as can be seen by the improvements in military medical and hygiene standards, government is absolutely capable of implementing extremely effective solutions that merge both top-down and bottom-up approaches.  The battle for cyberspace can be won.  We simply have to, collaboratively, choose to win it.


Wednesday, September 10, 2014

It’s in the Requirements: Cyber Resiliency as a Design Element

This is the second installment of a two-part discussion of the threats and challenges involved with cybersecurity.  The first part of the discussion explored cyber threats and challenges using the Stuxnet attack as a lens.  This post picks up with an allegorical analysis of the cyber threat posed by nation-state attacks as well as ideas about how information systems can be built so that they are less tempting targets.

For me, and others such as Ruth Bader Ginsburg, Donald Douglas and Alan Dershowitz, growing up in Brooklyn was an education in itself.  In addition to practical matters such as what constituted an ideal slice of pizza, how to parallel park and how to tell which blocks to avoid as a pedestrian after dark, there were more philosophical lessons to be learned.  Take, for example, the case of Anthony Gianelli.  (Note:  Names have been changed to protect the innocent.)

Anthony, or Tony as he was called, was a hard working guy.  He had that common Brooklyn fondness for sitting on his stoop in the evenings and pontificating on weighty issues about the state of the world.  One week, as always, Tony played the lottery.  Only this week was different.  Tony won, and won big.  I won’t say just how much money Tony went home with after taxes, but it was bordering on life changing.  So what, you may ask, did Tony do with his winnings? 

For those readers hailing from that storied borough, the answer is both obvious and easy.  For everyone else… I’ll tell you.  Tony bought a car.  And not just any car.  Tony bought a pristine, brand-spanking-new Ferrari GTSi.  However, his trip home from the dealership was only the beginning.  Knowing that he had about a month before the car was delivered, Tony set about fortifying his garage.

Fortifying might have been a bit of an understatement.  Tony broke up the garage’s concrete floor and poured a new one - about eight feet deep.  Sunk deeply into the wet concrete were four hardened steel eye bolts.  The garage door was replaced with a high security model and a state of the art, sensor-based alarm system added.  During the construction process, Tony spent many an evening on his stoop declaiming enthusiastically about the high degree of security being engineered into his garage. 

The big day came and the Ferrari arrived.  Tony drove it in a manner that was almost, well, reverent.  At the end of the day, the ritual began.  Tony lovingly parked the car in the garage, ran hardened steel chains through the undercarriage and secured each chain to an eye bolt with a high security padlock.  The door was shut and hermetically sealed.  The alarm was set, Tony wished the car good night, and then took to the stoop, passionately discussing the Ferrari’s security.

One day, several months after taking delivery, Tony went down to the garage to greet the Ferrari.  To his horror and shock, the car was gone.  Not only was it gone, but there was no evidence of any burglary.  The door hadn’t been forced.  The alarm hadn’t been tripped.  The chains were neatly coiled around the eye bolts, the locks opened, ready for use.  Tony, predictably, went into mourning.

After several months and stages of grief, Tony became somewhat philosophical about the loss.  It was, he mused, a case of “easy come, easy go.”  And so, you can only imagine Tony’s surprise when, walking through his dark garage to retrieve the newspaper one morning, he bumped into something with delightful, albeit hard, curves.  Turning on the light, Tony stared and crossed himself.  The Ferrari was back.  In fact, it was all back.  The chains were looped through the undercarriage.  The alarm, which was now going off, had been set, and the door was still sealed.  It was as if the car had never left.  Except for one small detail.

Taped to the windshield was a note.  There were all of eight words:

If we really want it, we’ll take it.

Tony took his Ferrari and moved to New Jersey.
---
Tales of braggadocio and grand theft auto notwithstanding, the story about Tony’s Ferrari has an important nugget of advice for cyber defenders.  Tony ran into a certain kind of reality.  Specifically, he discovered what happens when an individual of significant but finite resources is at odds with an organization that has almost limitless time and resources.  This reality, deriving from the axiom that “given enough time and money, all things are possible,” also applies when cybersecurity intersects with geopolitics.  That is to say, when a nation-state puts your information system in the crosshairs of its cyber capabilities, there’s generally little that can be done about it.

That doesn’t mean that organizations should give up on cyber defense.  Dedicated, specific, targeted attacks by nation-states using Advanced Persistent Threats (e.g., “Stuxnet”) are rare.  The real cyber threats faced by commercial, government and military organizations – probes and penetration by external actors and data loss due to insider threats – are almost mundane in their ubiquity.  Moreover, these threats are so common that many security professionals simply assume that losses due to cyberattacks are just another terrain feature in cyberspace.

That assumption is premised on the ideas that cyber defense is inherently reactive and that the architecture of distributed systems (and, for that matter, the internet) must remain inherently static. 

That premise is inherently flawed. 

Technical standards and capabilities don’t remain static.  They continuously advance.  Many of the advances made over the last decade or so present engineers, architects, designers and developers with new options and choices when crafting responses to operational requirements.  Taken as a whole, this technical progress offers an ability to proactively design and integrally implement security in a manner that could alter much of the cybersecurity calculus. 

This isn’t to say that there is a single silver bullet.  Rather, there are a number of technologies that, operating in concert, offer designers and defenders significant advantages.  An exhaustive discussion of all these technologies could fill volumes (and has) and is beyond the scope of this post.  However, highlighting just a few provides a useful overview of the way things could, and should, be.

1.      Software is broken.  It’s created broken, it’s delivered broken and, what’s worse, users become (unwitting) beta testers.  These flaws in the delivered product result in vulnerabilities which are exploited by hackers and malware authors.  In a disturbingly large proportion of cases, the delivery of flawed products can be traced to the nature of the software development life cycle itself.  In these cases, security verification and validation is the penultimate step prior to release.  As a result, it’s often rushed, and flaws go undiscovered.  Worse, it’s often too late or too expensive to fix a significant number of the flaws that are found.
 But what if security verification and validation could be pushed back to the beginning of the development lifecycle?  If we could ensure that the only code modules that entered the trunk were those that had passed the complete battery of functional and non-functional (e.g., performance, scalability, interoperability and security) tests, the ensuing increase in the quality of software products would be accompanied by a significant decrease in delivered vulnerabilities.
 The good news is that this is exactly what a DevOps platform-as-a-service (PaaS) delivers.  By leveraging a shared, Cloud-based integrated development environment (IDE), the environmental variances between Dev, Test and Operations that inject vulnerabilities can be eliminated.  Next, by automating DevOps practices such as Continuous Build, Continuous Integration, Continuous Test and Continuous Design, the onus is shifted from the tester, who had previously been (unrealistically) expected to find all the flaws in the code, to the developer, who must deliver flawless code.
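
 As a sketch of the idea, the gate below (in Python, with illustrative commands and repository layout) refuses to let any change into the trunk until the complete functional and non-functional battery passes.  A real pipeline would express the same rule in its CI system’s own configuration.

    # Sketch of a pre-merge quality gate: code enters the trunk only
    # after the full test battery passes.  Commands are illustrative.
    import subprocess
    import sys

    TEST_BATTERY = [
        ["pytest", "tests/functional"],     # functional correctness
        ["pytest", "tests/performance"],    # performance budgets
        ["pytest", "tests/security"],       # security regression tests
        ["bandit", "-r", "src"],            # static security analysis
    ]

    def gate() -> int:
        for cmd in TEST_BATTERY:
            if subprocess.run(cmd).returncode != 0:
                print("BLOCKED:", " ".join(cmd), "failed; change stays out of trunk")
                return 1
        print("PASSED: change may be merged to trunk")
        return 0

    if __name__ == "__main__":
        sys.exit(gate())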

2.      Many, if not most, critical systems are protected by access control systems that focus on authentication, or ensuring that the entity requesting access to the system is who it claims to be.  Authentication can be a powerful gate guard, sometimes requiring multiple concurrent methodologies (e.g., something you know, something you have, something you are, etc.).  The problem is that once a user is authenticated, these systems provide few, if any, controls or protections to system resources.  This vulnerability was exploited by both Bradley Manning and Edward Snowden.
 The answer is to add a layer that enforces fine-grained authorization, managing which resources can be accessed by authenticated users with a given set of attributes.  This mechanism, called attribute-based access control, or ABAC, is implemented through an OASIS open standard known as the eXtensible Access Control Markup Language (XACML).  XACML was first published in 2003, and there are a significant number of commercial software packages (both proprietary and open source) that use it to bring ABAC’s powerful security to the enterprise.
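
 XACML itself expresses policies in XML, evaluated by a dedicated policy decision point; the fragment below is only a Python sketch of the underlying decision logic, with invented attribute names, to show how authorization differs from authentication.

    # Sketch of ABAC decision logic.  Real deployments express this as
    # XACML policy evaluated by a policy decision point; the attribute
    # names below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Request:
        subject: dict    # attributes of the (already authenticated) user
        resource: dict   # attributes of the thing being accessed
        action: str

    def decide(req: Request) -> str:
        # Rule 1: analysts may read material at or below their clearance.
        if (req.action == "read"
                and req.subject.get("role") == "analyst"
                and req.subject.get("clearance", 0) >= req.resource.get("sensitivity", 0)):
            return "Permit"
        # Rule 2: only the owning project may modify a resource.
        if (req.action == "write"
                and req.subject.get("project") == req.resource.get("project")):
            return "Permit"
        return "Deny"    # default-deny: authentication alone grants nothing

    req = Request({"role": "analyst", "clearance": 2}, {"sensitivity": 3}, "read")
    assert decide(req) == "Deny"   # authenticated, but not authorized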

3.     When vulnerabilities are discovered in an enterprise’s key software components, it can take a significant amount of time to disseminate countervailing security measures.  During this time, the enterprise remains vulnerable.  The challenge is to rapidly close the security gap while ensuring that the enterprise’s operations suffer as little disruption as possible.
 The answer is to apply access control security at the operating system level, enabling an access control regime that is dynamic and centrally controlled.  In principle, this is similar to what ABAC implements for enterprise resources.  In this case, however, the control takes place at the inter-process communication (IPC) level.  In practice, this means that the organization can, upon learning about a vulnerability or compromise, push out a new access control policy to all hosts.  The policy can both enable and disable specific IPC types.  The net result is that the compromised software is prevented from executing while replacement software is seamlessly enabled.
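
 A minimal sketch of that mechanism: a host agent holds a centrally pushed policy table and consults it before permitting a given inter-process call.  The policy format and enforcement hook are assumptions for illustration; real implementations live in OS mandatory access control frameworks (SELinux, for example) rather than in application code.

    # Sketch of centrally pushed, OS-level IPC access control.  The
    # policy format and enforcement hook are illustrative assumptions.
    POLICY_VERSION = 0
    POLICY = {}   # (program, ipc_type) -> "allow" | "deny"

    def apply_policy(version: int, rules: dict) -> None:
        """Invoked when the central authority pushes a new policy."""
        global POLICY_VERSION, POLICY
        if version > POLICY_VERSION:
            POLICY_VERSION, POLICY = version, dict(rules)

    def ipc_permitted(program: str, ipc_type: str) -> bool:
        """Hook consulted by the OS before each inter-process call."""
        return POLICY.get((program, ipc_type), "deny") == "allow"

    # A vulnerability is announced in 'legacy_parser': one policy push
    # disables it everywhere while its replacement is enabled.
    apply_policy(2, {("legacy_parser", "socket"): "deny",
                     ("patched_parser", "socket"): "allow"})
    assert not ipc_permitted("legacy_parser", "socket")
    assert ipc_permitted("patched_parser", "socket")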

None of these things is a panacea for the cyber-vulnerability epidemic.  However, they all represent very real, tangible steps that engineers, designers and defenders can take to mitigate the risks faced while operating in an increasingly hostile environment.  They don’t solve everything.  But, taken in concert with other measures, they create a much more agile, resilient infrastructure.


And that beats moving to New Jersey.

Monday, September 1, 2014

STUXNET: ANATOMY OF A CYBER WEAPON



This is the first of a focused two-part discussion of the threats and challenges involved with cybersecurity.  The exploration of cyber threats and challenges is conducted using the Stuxnet attack as a lens.  The following post picks up with an allegorical analysis of the cyber threat posed by nation-state attacks as well as ideas about how information systems can be built so that they are less tempting targets.

Stuxnet is widely described as the first cyber weapon.  In fact, Stuxnet was the culmination of an orchestrated campaign that employed an array of cyber weapons to achieve destructive effects against a specific industrial target.  This piece explores Stuxnet’s technology, its behavior and how it was used to execute a cyber-campaign against the Iranian uranium enrichment program.  This discussion will continue in a subsequent post describing an orthogonal view on the art and practice of security – one that proposes addressing security as a design-time concern with runtime impacts.

Stuxnet, discovered in June 2010, is a computer worm that was designed to attack industrial programmable logic controllers (PLCs).  PLCs automate electromechanical processes such as those used to control machinery on factory assembly lines, amusement park rides, or, in Stuxnet’s case, centrifuges for separating nuclear material.  Stuxnet’s impact was significant; forensic analyses conclude that it may have damaged or destroyed as many as 1,000 centrifuges at the Iranian nuclear enrichment facility located in Natanz.  Moreover, Stuxnet was not successfully contained; it escaped into the wild and has appeared in several other countries, most notably Russia.

There are many aspects of the Stuxnet story, including who developed and deployed it and why.  While recent events seem to have definitively solved the attribution puzzle, Stuxnet’s operation and technology remain both clever and fascinating. 

A Stuxnet attack begins with a USB flash drive infected with the worm.  Why a flash drive?  Because the targeted networks are not usually connected to the internet.  These networks have an “air gap” physically separating them from the internet for security purposes.  That being said, USB drives don’t insert themselves into computers.  The essential transmission mechanism for the worm is, therefore, biological: a user.

I’m tempted to use the word “clueless” to describe such a user, but that wouldn’t be fair.  Most of us carbon-based, hominid, bipedal Terran life forms are inherently entropic – we’re hard-wired to seek the greatest return for the least amount of effort. In the case of a shiny new flash drive that’s just fallen into one’s lap, the first thing we’re inclined to do is to shove it into the nearest USB port to see what it contains.  And if that port just happens to be on your work computer, on an air-gapped network. . .well, you get the picture.

It’s now that Stuxnet goes to work, bypassing both the operating system’s (OS) inherent security measures and any anti-virus software that may be present.  Upon interrogation by the OS, it presents itself as a legitimate auto-run file.  Legitimacy, in the digital world, is conferred by means of a digital certificate.  A digital certificate (or identity certificate) is an electronic cryptographic document used to prove identity or legitimacy.  The certificate includes information about a public cryptographic key, information about its owner's identity, and the digital signature of an entity that has verified the certificate's contents are correct.  If the signature is valid, and the person or system examining the certificate trusts the signer, then it is assumed that the public cryptographic key or software signed with that key is safe for use.
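
As an aside, signature verification itself is straightforward; here is a sketch in Python using the cryptography library, assuming an RSA key and PKCS#1 v1.5 padding (common in code signing of that era).  Note what the check proves: only that the signing key was used, which is exactly why a stolen key defeats it.

    # Sketch of verifying a signed payload against a certificate.
    # Assumes RSA/PKCS#1 v1.5; requires: pip install cryptography.
    from cryptography import x509
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def signature_is_valid(cert_pem: bytes, payload: bytes, sig: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(cert_pem)
        try:
            cert.public_key().verify(sig, payload,
                                     padding.PKCS1v15(), hashes.SHA256())
            return True    # proves possession of the signing key --
        except InvalidSignature:
            return False   # -- not the legitimacy of whoever holds it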

Stuxnet proffers a stolen digital certificate to prove its trustworthiness.  Now vetted, the worm begins its own interrogation of the host system: Stuxnet confirms that the OS is a compatible version of Microsoft Windows and, if an anti-virus program is present, whether it is one that Stuxnet’s designers had previously compromised.  Upon receiving positive confirmation, Stuxnet downloads itself into the target computer.

It drops two files into the computer’s memory.  One of the files requests a download of the main Stuxnet archive file, while the other sets about camouflaging Stuxnet’s presence using a number of techniques, including modifying file creation and modification times to blend in with the surrounding system files and altering the Windows registry to ensure that the required Stuxnet files run on startup.  Once the archived file is downloaded, the Stuxnet worm unwraps itself to its full, executable form.

Meanwhile, the original Stuxnet infection is still on the USB flash drive.  After successfully infecting three separate computers, it commits “security suicide.”  That is, like a secret agent taking cyanide to ensure that she can’t be tortured to reveal her secrets, Stuxnet deletes itself from the flash drive to frustrate the efforts of malware analysts.

Internally to the target computer, Stuxnet has been busy.  It uses its rootkit to modify, and become part of, the OS.  Stuxnet is now indistinguishable from Windows; it’s become part of the computer’s DNA.  It’s now that Stuxnet becomes a detective, exploring the computer and looking for certain files.  Specifically, Stuxnet is looking for industrial control system (ICS) software created by Siemens called Simatic PCS7 or Step 7 running on a Siemens Simatic Field PG notebook (a Windows-based system dedicated for ICS use).  

The problem facing Stuxnet at this point is that a computer can contain millions, if not tens of millions, of files, and finding the right Step 7 file is a bit like looking for a needle in a haystack.  In order to systematize the search, Stuxnet needs to find a way to travel around the file system as it conducts its stealthy reconnaissance.  It does this by attaching itself to a very specific kind of process: one that is trusted at the highest levels by the OS and that looks at every single file on the computer.  Something like. . . 

. . .the scan process used by anti-virus software.  In the attack on the facility in Natanz, Stuxnet compromised and used the scan processes from leading anti-virus programs.  (It’s worth noting that all of the companies whose products were compromised have long since remedied the vulnerabilities that Stuxnet exploited.)  Along the way, Stuxnet compromises every comparable process it comes across, pervading the computer’s memory and exploiting every resource available to execute the search.  

All the while, Stuxnet is constantly executing housekeeping functions.  When two Stuxnet worms meet, they compare version numbers, and the earlier version deletes itself from the system.  Stuxnet also continuously evaluates its system permissions and access level.  If it finds that it does not have sufficient privileges, it uses a previously unknown system vulnerability (such a thing is called a “Zero-Day,” and will be discussed below) to grant itself the highest administrative privileges and rights.  If a local area network (LAN) connection is available, Stuxnet will communicate with Stuxnet worms on other computers and exchange updates – ensuring that the entire Stuxnet cohort running within the LAN is the most virulent and capable version.  If an Internet connection is found, Stuxnet reaches back to its command and control (C2) servers and uploads information about the infected computers, including their internet protocol (IP) addresses, OS types and whether or not Step 7 software has been found.

As noted earlier, Stuxnet relied on four Zero-Day vulnerabilities to conduct its attacks.  Zero-Days are of particular interest to hacker communities: since they’re unknown, they are by definition almost impossible to defend against.  Stuxnet’s four Zero-Days included:


  • The Microsoft Windows shortcut automatic file execution vulnerability, which allowed the worm to spread through removable flash drives;
  • A print spooler remote code execution vulnerability; and
  • Two different privilege escalation vulnerabilities.

Once Stuxnet finds Step 7 software, it patiently waits and listens until a connection to a PLC is made.  When Stuxnet detects the connection, it penetrates the PLC and begins to wreak all sorts of havoc.  The code controlling frequency converters is modified and Stuxnet takes control of the converter drives.  What’s of great interest is Stuxnet’s method of camouflaging its control.   

Remember the scene in Mission Impossible, Ocean’s 11 and just about every other heist movie where the spies and/or thieves insert a video clip into the surveillance system?  They’re busy emptying the vault, but the hapless guard monitoring the video feed only sees undisturbed safe contents.  Stuxnet turned this little bit of fiction into reality.  It intercepts the PLC’s reports of abnormal behavior and sends the monitoring software signals indicating nominal, normal operation in their place.

Stuxnet is now in the position to effect a physical attack against the gas centrifuges.  To understand the attack, it’s important to understand that centrifuges work by spinning at very high speeds and that maintaining these speeds within tolerance is critical to their safe operation.  Typically, gas centrifuges used to enrich uranium operate at between 807 Hz and 1,210 Hz, with 1,064 Hz as a generally accepted standard.

Stuxnet used the infected PLCs to cause the centrifuge rotors to spin at 1,410 Hz for short periods of time over a 27-day period.  At the end of the period, Stuxnet would cause the rotor speed to drop to 2 Hz for fifty minutes at a time.  Then the cycle repeated.  The result was that over time the centrifuge rotors became unbalanced, the motors wore out and, in the worst cases, the centrifuges failed violently.
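
To see why those numbers destroy machines, consider rotor tip speed, which scales linearly with spin frequency.  The arithmetic below assumes a rotor diameter of roughly 10 centimeters, in the neighborhood reported for IR-1-class centrifuges; the exact figures are illustrative.

    # Back-of-the-envelope physics: tip speed v = pi * d * f.
    # The rotor diameter is an illustrative assumption (~IR-1 class).
    import math

    DIAMETER_M = 0.10   # assumed rotor diameter, in meters

    def tip_speed(freq_hz: float) -> float:
        return math.pi * DIAMETER_M * freq_hz   # meters per second

    print(round(tip_speed(1064)))   # nominal operation: ~334 m/s
    print(round(tip_speed(1410)))   # Stuxnet overspeed: ~443 m/s

A 33 percent overspeed raises centrifugal stress (which scales with the square of tip speed) by roughly 75 percent, pushing a thin aluminum rotor toward its material limits, while the 2 Hz dwell drags the rotor back down through its resonant speeds on every cycle.  Either regime, repeated often enough, wrecks the machine.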

Stuxnet destroyed as much as twenty percent of the Iranian uranium enrichment capacity.  There are two really fascinating lessons that can be learned from the Stuxnet story.  The first is that cyber-attacks can and will have effects in the kinetic and/or physical realm.  Power grids, water purification facilities and other utilities are prime targets for such attacks.  The second is that within the current design and implementation paradigms by which software is created and deployed, if a bad actor with the resources of a nation-state wants to ruin your cyber-day, your day is pretty much going to be ruined.

But that assumes that we maintain the current paradigm of software development and deployment.  In my next post I’ll discuss ways to break the current paradigm and the implications for agile, resilient systems that can go into harm’s way, sustain a cyber-hit and continue to perform their missions.