Thursday, June 6, 2013

Embracing a Resourceful Information Security Culture


Following the news, it’s difficult to escape a sense that the defense community is mired in a Sisyphean game of information security (INFOSEC) catch-up.  It seems that as soon as a policy is embraced by government agencies and their industry partners, new threats emerge and existing dangers increase in magnitude.  

Reminders are ubiquitous: June 3 saw the opening of Bradley Manning’s court martial.  Manning, an Army private first class, is accused of illegally downloading and forwarding huge amounts of classified information to Wikileaks.  A week or so earlier, the Washington Post published a report indicating that Chinese hackers had successfully penetrated at least a dozen high profile American weapon systems, including those tasked with critical air defense, battlefield mobility and maritime dominance responsibilities.  The list could go on.

Fortunately, many of the community’s INFOSEC challenges are philosophical and fiscal rather than technical.  As such, they can be addressed by harnessing currently available resources, which, in an era that pits escalating requirements against sequestered budgets, is fortunate indeed.  The remainder of this article discusses four key improvement vectors that will enable the defense community – government and industry – to begin to address looming cyber and INFOSEC concerns, specifically: 
  • Acquisitions staffing reform;
  • Baked in INFOSEC;
  • Automated auditing and monitoring; and
  • The use of open source software.

Acquisitions Staffing Reform

Impeccably trained by organizations such as the Defense Acquisition University (DAU), the US Department of Defense (DoD) fields what is arguably the finest team of program managers and acquisition professionals in the world.  These people, who are ultimately coordinated by the Under Secretary of Defense for Acquisition, Technology and Logistics (AT&L), are extremely well versed in the art of buying goods and services.  Assisting and advising the program managers are military and civilian experts who are charged with ensuring that the goods and services received are of best value and most use to the end users.

Unfortunately, these advisers, while well versed in operational needs and utility, are generally not technical experts with regard to software and computing technology, especially as it applies to cybersecurity or its constituent disciplines such as infrastructure security, application security (AppSec) or malware detection and remediation.  As a result, program offices are placed in the position of relying for technical advice on the same engineers and developers who are paid to develop the systems.  The contractors, in turn, may have a conflict of interest between the roles of advisor and solutions provider.

Solving this problem requires a fundamental augmentation of the acquisition community’s capabilities in terms of technical expertise.  Near term resources are available from existing DoD-affiliated expert organizations such as university affiliated research centers (UARC) and federally funded research and development centers (FFRDC).  It is likely that the acquisition community will need to expand this resource pool (either through direct government hires or through the use of contracted technical validation expertise) to fully represent the technical INFOSEC skill set.  

The silver lining to acquisitions staffing reform is the promise of overall lower system acquisitions costs as INFOSEC gaps are identified and remedied at the requirements level instead of after coding, an important benefit in the era of sequestration.

"Baked in" INFOSEC

The current model for incorporating and validating INFOSEC requirements and capabilities focuses on the tail end of the software development life cycle (SDLC).  While there are some proactive measures, such as the use of approved software tooling or adherence to Security Technical Implementation Guides (STIG) issued by the Defense Information Systems Agency (DISA), most INFOSEC activity takes place after a system has been developed.  

Typically, when a developer completes a new system (or a modification to an existing one), it is submitted for certification and accreditation (C&A) review.  The review includes security validation testing against the required information assurance controls (IAC).  At the conclusion of testing, a list of vulnerabilities and mitigations is created and then negotiated into actions by the government program manager, the developer and the reviewing organization. 

The fixes are then applied to a completed, coded system, often requiring significant rework, time and expense.  More troublingly, many of the fixes take the form of a security applique, layered on top of the existing system.  These security appliques often compromise the system’s mission utility by increasing operator workload.  Systems featuring this applique approach are also inherently less secure than those that have incorporated INFOSEC mechanisms into their architectures from the beginning. The National Institute of Standards and Technology (NIST) recognized the benefits of designed-in INFOSEC in Special Publication 800-27 Rev. A, Engineering Principles for Information Technology Security (A Baseline for Achieving Security), which specifies that security should be an integral part of overall system design.

Applying – and validating – INFOSEC capabilities early in the SDLC, at the requirements and architectural levels, not only creates more secure systems, it also cuts down on the expense of rework often necessary to meet C&A guidelines, resulting in more fiscally responsible systems acquisition.  The achievement of “baked in” INFOSEC rests on what we’ll call a “strategic security triad.”  The triad consists of:
  • Requirements stemming from authoritative laws, regulations, policies and guidance (LRPG);
  • A DoD-wide library of modular, standards-based, approved INFOSEC implementation patterns; and
  • DevOps principles of continuous integration and automated testing.
Fortunately for the community, all three legs of the triad are represented by resources that are currently and economically available:  

The DoD is replete with mature, forward leaning INFOSEC LRPG. Typical of these is the Defense Information Enterprise Architecture (IEA) published by the DoD CIO’s office.  The document mandates basic principles of secured access such as assured identity, threat assessment, policy-based access control and centralized identity, credential and access management (ICAM).

An implementation pattern library would include pre-approved INFOSEC tools as well as requirements for what the tools have to accomplish for the system, and how they are to be implemented.  There are many INFOSEC tools currently available from industry.  These include things such as IBM’s Security Framework product line and CA Technologies’ Identity Minder.   

Interesting from a government perspective is the emergence of supported open source products into the INFOSEC space.  These tools offer the promise of open standards, modular design and implementation and enterprise class performance – all with zero acquisition cost.  Typical of this line is the WSO2 Security and Identity Gateway Solution.  This solution integrates four WSO2 products, the WSO2 Enterprise Service Bus, the WSO2 Governance Registry, the WSO2 Identity Server and the WSO2 Business Activity Monitor (BAM) to provide a complete AppSec solution including authentication, authorization, auditing, monitoring, content-based filtering, input validation, throttling, caching and Single Sign-On.  

The last leg of the triad, continuous integration and automated testing, requires tooling that can implement strong programmatic governance.  By requiring all of a program’s developers to use a common, Cloud based development platform that includes automated testing, integration and build tools, both functional and non-functional INFOSEC requirements (such as ports, protocols and services settings) can be validated before a module is accepted for integration into the application’s trunk.  Modules not meeting the requirements are rejected, with a report indicating what the developer needs to address.  As a result, C&A testing is an ongoing, integral part of development, and end-state C&A activities are dramatically curtailed.  An example of such a DevOps platform is the WSO2 App Factory.
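As a rough illustration of how such a gate might work, the sketch below rejects a module whose declared network ports and protocols fall outside an approved baseline and emits a report for the developer.  The module manifest format, the approved-port list and the report wording are assumptions made for the sketch; they are not App Factory or DISA interfaces.

```python
# Minimal sketch of an automated pre-integration INFOSEC gate.
# The manifest format and approved baseline are illustrative assumptions,
# not an actual App Factory or DISA interface.
import json

APPROVED_PORTS = {80, 443, 8243}      # assumed baseline of permitted listener ports
REQUIRED_PROTOCOLS = {"https"}        # assumed non-functional requirement

def validate_module(manifest_path: str) -> list[str]:
    """Return a list of findings; an empty list means the module may be merged."""
    with open(manifest_path) as f:
        manifest = json.load(f)

    findings = []
    for port in manifest.get("listener_ports", []):
        if port not in APPROVED_PORTS:
            findings.append(f"Port {port} is not on the approved ports/protocols/services list")
    missing = REQUIRED_PROTOCOLS - set(manifest.get("protocols", []))
    if missing:
        findings.append(f"Required protocols not declared: {', '.join(sorted(missing))}")
    return findings

if __name__ == "__main__":
    report = validate_module("module_manifest.json")
    if report:
        print("Module REJECTED:")
        for finding in report:
            print(" -", finding)
    else:
        print("Module accepted for integration into the trunk.")
```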

Automated Auditing and Monitoring

Among NIST’s computer security principles is the need to implement audit mechanisms to detect unauthorized use.  Implied in this requirement is the need to notify administrators and the response team as soon as a breach or other unauthorized activity is detected.  Given the size and scope of system activity and transaction logs, it is necessary to automate this process to achieve the necessary timeliness.  

Essentially, auditing and monitoring is a Big Data analytics problem.  Fortunately for the defense community, industry has been focusing on Big Data Analytics for a number of years.  There are a number of analytics platforms available for this task such as Google’s Dremel, Apache Drill and the WSO2 BAM.  Like the others, WSO2 BAM provides the capability to perform rapid analysis on large scale data sets.  To this, however, WSO2 BAM adds alerting and customizable dashboarding capabilities, which can be used to ensure near-real-time notification of suspect events.  When compared to currently acceptable auditing paradigms, some of which allow for a week or more between manual inspection of security logs, this represents a significant capability improvement.
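In miniature, the alerting piece might look like the sketch below.  The log format, threshold and notification hook are assumptions made for illustration, not WSO2 BAM features: the idea is simply to watch an authentication audit stream and raise an alert when failed logins from a single source exceed a threshold within a sliding window.

```python
# Minimal sketch of near-real-time audit monitoring: flag a source that
# generates too many failed logins inside a sliding window.
# Log format, threshold and alert hook are illustrative assumptions.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 10

failures = defaultdict(deque)   # source address -> timestamps of recent failures

def alert(source: str, count: int) -> None:
    # In a real deployment this would feed a dashboard or notification service.
    print(f"ALERT: {count} failed logins from {source} within {WINDOW}")

def process_event(timestamp: datetime, source: str, outcome: str) -> None:
    if outcome != "FAILURE":
        return
    window = failures[source]
    window.append(timestamp)
    while window and timestamp - window[0] > WINDOW:
        window.popleft()
    if len(window) >= THRESHOLD:
        alert(source, len(window))

# Example: replay a synthetic audit stream.
if __name__ == "__main__":
    now = datetime.utcnow()
    for i in range(12):
        process_event(now + timedelta(seconds=10 * i), "10.1.2.3", "FAILURE")
```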

Open Source Software

The most obvious benefit associated with open source software is cost.  Generally available without licensing fees, the use of open source software makes it much more cost effective to incorporate advanced INFOSEC capabilities than proprietary software.  However, while open source software contributes to lowering a system’s total cost of ownership (TCO) it is not without cost as development and production support services generally require paid subscriptions. 

Open source software has an added security benefit that is particularly compelling.  Specifically, open design and source code enable broad based, detailed code inspection and the rapid detection of both flaws and threats.  The idea that proprietary software is more secure because the source code is hidden just doesn’t stand up to scrutiny.  NIST’s selection of the Rijndael block cipher as the Advanced Encryption Standard (AES) in 2000 followed a nearly three year process in which a number of algorithms were publicly discussed, debated and cryptanalyzed.  In another case, Borland published and widely sold the InterBase database for seven years.  In 2000, InterBase was open-sourced as the Firebird project. Within five months of the product being open sourced, a hard coded backdoor (username “politically,” password “correct”) was discovered.

Conclusion

Strong system INFOSEC is in the best interest of the entire defense community.  While there have been both doctrinal and cultural fits and starts with respect to effective, community-wide policies, a fortunate confluence of technology, development philosophy and leadership exists that can allow the closure of critical security gaps.  As computer security expert John Pescatore noted at the June 4 Kaspersky Government Forum, we need to find the balance between perfect solutions that work but are untenable, and solutions that work and are tenable but might not be perfect.   Acquisitions staffing reform, baked in INFOSEC, automated auditing and monitoring and the use of open source software are the first big steps toward tenable.

Wednesday, May 29, 2013

Future Proof by Design: Loose Coupling and Ensuring Long Term Success



Technical future proofing assumes constant change in both operational environments and technology.  The impacts of change are mitigated by emphasizing versatility and adaptability in system architecture.  This tenet applies equally to both hardware and software systems.  By ensuring that system components are modular, and that they interact using standard interface patterns and in a loosely coupled manner, components can be modified or readily swapped for more capable variants across the entire system lifecycle.  As a result, the system as a whole remains functional and useful well past the time when the original configuration has become obsolete.  This article illustrates risks associated with inadequate future proofing and provides an overview of tools and techniques that promote long lasting, future proofed software systems.

The Parable of the Joint Strike Fighter

F-35 C-Variant Testing; courtesy of www.jsf.mil
During a sidebar conversation at the recent DC Open Source Community Summit, a lively discussion ensued concerning the troubled F-35 Joint Strike Fighter (JSF) program.  The JSF program began in 2001, with a projected budget of $233 billion and an estimated initial operational capability (IOC) taking place between 2010 and 2012.  As chronicled by the Government Accountability Office (GAO) in a report published on April 17, 2013, the program’s price tag is now nearing $400 billion and IOC is not expected until 2017 or 2018. 

During testimony before the US Congress on April 24, 2013, the JSF Program Manager, US Air Force Lieutenant General Christopher Bogdan, indicated that problems with the JSF’s complex software suite created risks to the 2017 IOC date.
The ever expanding numbers and constantly slipping dates do not cast either the defense acquisitions community or industry in the most flattering light.  However, righteous taxpayer anger and legislators’ ire mask what may be the most troubling aspect of the JSF lifecycle. Specifically, the JSF may be operationally obsolete by the time it is delivered to combat squadrons.  In the twelve years since program inception, armaments developments in both the United States and abroad may render the aircraft significantly less effective than planned.

Courtesy www.defenceweb.co.za
In 2010, it was reported that the People’s Republic of China had deployed the Dong-Feng (DF) 21D mobile medium-range ballistic missile (MRBM).  The DF-21D, with a 1,500 nautical mile range, is designed to target and destroy aircraft carriers.  The threat to the JSF’s utility is clear:  The F-35 has a combat radius of approximately 600 nautical miles.  Carriers operating F-35s in a hostile Pacific Rim would have to steam through – and then loiter within – a danger zone of approximately a thousand miles in order to deploy their strike aircraft.

Courtesy www.navytimes.com
More recently, on May 14, 2013, the Associated Press reported that, for the first time ever, the US Navy had launched an unmanned aircraft from an aircraft carrier at sea.  The stealthy aircraft, called the X-47B, is capable of speeds approaching Mach 1 (it’s officially described as “high subsonic”), has a range in excess of 2,100 nautical miles, contains internal weapons bays and is expected to serve as the prototype for the Unmanned Carrier Launched Airborne Surveillance and Strike System (UCLASS).  UCLASS is expected to enter service in 2019.  

The UCLASS’ existence raises uncomfortable questions about the JSF:  Why is a relatively short ranged manned aircraft, whose limitations expose expensive and vulnerable capital ships to catastrophic danger, necessary when a long range, stealthy, unmanned platform is available to complete many of the same missions?

The fact that the JSF program wasn’t future proofed from an operational perspective isn’t surprising.  It isn’t realistic to expect accurate technological (or budgeting) predictions a decade in advance (the X-47B first flew in 2011).  What is surprising is that the program wasn’t designed from a componentized, modular perspective that would have guaranteed continued relevance and capability.  Instead, the F-35 is a tightly integrated package representing the state of the conceptual art responding to operational concerns of the very early 21st century.

Software and Future Proofing

The issues and architectural requirements attendant to future proofing aren’t unique to hardware in general or weapon systems in particular.  Many legacy software systems suffer from similar issues stemming from monolithic, tightly coupled designs suited explicitly for one or more specific clearly defined purposes.  Unfortunately, these design paradigms result in heavy sustainment burdens, vendor lock-in situations and difficulties with adaptation.

Fortunately for software acquisitions program managers, industry has evolved architectural paradigms intended to alleviate software future proofing concerns.  Examples of these paradigms are service oriented architectures (SOA) and Cloud computing concepts such as software as a service (SaaS) and platforms as a service (PaaS).  However, at the heart of all of these future proofing methodologies is a design tenet known as “loose coupling.”

“Coupling” is a technical term of art that refers to the degree of direct knowledge that system components need to have of one another in order to interoperate.  Loose coupling is a design tenet that reduces dependencies between a system’s components to the least practical extent.  Its objective is to minimize the impact a change to one component has on the rest of the system.  Limiting the number of interconnections between components through adherence to loose coupling principles brings long term benefits such as easier problem isolation, simplified testing and lower maintenance and sustainment costs.

The extent of a system’s coupling can be measured in terms of the degree to which a component can be changed without adverse effects to the rest of the system.  Such changes might include adding, removing, renaming or reconfiguring components or modifying a component’s internal characteristics or the way it interfaces with other components.  The more readily individual components can be modified or swapped without impacting the rest of the system, the greater the degree of loose coupling.  With respect to future-proofing, greater degrees of loose coupling translate to more efficient and less costly modifications, with shorter cycle times, to account for changing requirements.  Greater degrees of loose coupling equate to greater future proofing.

Loose Coupling Tools and Components

There are many ways to enhance loose coupling.  For example, data exchanges could be mandated to use a flexible data format such as XML or JSON, with the structure of the data published as a schema that consumers can validate against.  Another method is to use dedicated middleware that allows for the isolation of system components.  Within the SOA paradigm, there are three widely used middleware tools that promote loose coupling:  the enterprise service bus (ESB), the data service and the message broker.  A brief explanation of each of these is worthwhile.
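Before turning to the middleware tools, here is a small illustration of the schema approach (using the third-party jsonschema package; the message fields are invented for the example): a consumer validates incoming messages against the published contract rather than depending on a producer’s internal representation.

```python
# Minimal sketch of schema-based loose coupling: consumers validate messages
# against a published contract instead of depending on producer internals.
# Requires the third-party "jsonschema" package; field names are illustrative.
from jsonschema import validate, ValidationError

TRACK_SCHEMA = {
    "type": "object",
    "properties": {
        "track_id": {"type": "string"},
        "latitude": {"type": "number", "minimum": -90, "maximum": 90},
        "longitude": {"type": "number", "minimum": -180, "maximum": 180},
    },
    "required": ["track_id", "latitude", "longitude"],
}

def accept_message(message: dict) -> bool:
    try:
        validate(instance=message, schema=TRACK_SCHEMA)
        return True
    except ValidationError as err:
        print("Rejected message:", err.message)
        return False

accept_message({"track_id": "T-100", "latitude": 38.9, "longitude": -77.0})  # accepted
accept_message({"track_id": "T-101", "latitude": "38.9N"})                   # rejected
```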

ESB

The ESB is a software component that provides data transport, transformation and mediation services between interoperating software services, applications and components within a heterogeneous and complex application landscape.  All interactions between system components take place through the ESB, isolating the components from one another.  As a result, changes to a component are confined to the component and any mediation adapters within the ESB that transform the component’s outputs into formats useful to the rest of the system.  All other system components are shielded from the change.  In addition to their integration capabilities, ESBs also provide services such as routing; gateway functionality for messages, services and application programming interfaces (API); and security and monitoring.  Many vendors provide ESB offerings.  Interestingly, some of the most capable are also open source, such as the WSO2 ESB, the Mulesoft ESB and OpenESB.
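The toy, in-memory sketch below (not the WSO2 or Mule APIs) illustrates the mediation idea: components talk only to the bus, and a per-component adapter transforms each output into the canonical format the rest of the system expects, so a format change in one component is absorbed by its adapter alone.

```python
# Toy illustration of ESB-style mediation: components see only the bus and a
# canonical message format; per-component adapters absorb format changes.
# Not the WSO2 ESB or Mule API -- purely an in-memory sketch.

def legacy_radar_adapter(raw: str) -> dict:
    """Transform the legacy component's 'id;lat;lon' string into the canonical form."""
    track_id, lat, lon = raw.split(";")
    return {"track_id": track_id, "latitude": float(lat), "longitude": float(lon)}

class Bus:
    def __init__(self):
        self._adapters, self._subscribers = {}, []

    def register_adapter(self, source: str, adapter):
        self._adapters[source] = adapter

    def subscribe(self, handler):
        self._subscribers.append(handler)

    def publish(self, source: str, payload):
        canonical = self._adapters.get(source, lambda p: p)(payload)
        for handler in self._subscribers:
            handler(canonical)

bus = Bus()
bus.register_adapter("legacy_radar", legacy_radar_adapter)
bus.subscribe(lambda msg: print("Display received:", msg))
bus.publish("legacy_radar", "T-100;38.9;-77.0")
```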

Data Services

Data services are integration mechanisms addressing the challenges of heterogeneous and disparate data stores that underlie legacy application integration efforts.  Rather than requiring a lengthy and expensive data migration and normalization effort, architects can integrate legacy, external and objective data stores using data services that retrieve and expose data as web services or REST resources.  These data services provide loose coupling between the data stores and the rest of the system.  Changes to either a data store or a data consumer are isolated by the data service.
Moreover, the data that data services expose is not limited to traditional SQL relational database management systems (RDBMS).  NoSQL databases, CSV/Excel files, RDF triplestores/quadstores and web page scrapes can all serve as data sources.  Data services can also integrate security concepts such as authentication, authorization, confidentiality, integrity and encryption using open standards such as Basic Authorization over HTTPS, WS-Security, WS-Trust, WS-Secure Conversation, WS-Policy, WS-PolicyAttachment and WS-SecurityPolicy.  Typical of data service implementation tools (and also open source) is the WSO2 Data Services Server.
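A rough sketch of the data service idea follows, using Flask; the file name and columns are illustrative assumptions, not the WSO2 Data Services Server configuration.  Rows of a legacy flat file are exposed as a JSON REST resource, so consumers never touch the underlying store directly.

```python
# Minimal sketch of a data service: expose a legacy CSV store as a REST
# resource so consumers are decoupled from the underlying storage format.
# Uses Flask; the file name and columns are illustrative assumptions.
import csv
from flask import Flask, abort, jsonify

app = Flask(__name__)

def load_records(path="legacy_personnel.csv"):
    # Assumes the legacy file has an "id" column.
    with open(path, newline="") as f:
        return {row["id"]: row for row in csv.DictReader(f)}

@app.route("/personnel/<record_id>")
def get_person(record_id):
    record = load_records().get(record_id)
    if record is None:
        abort(404)
    return jsonify(record)

if __name__ == "__main__":
    app.run(port=8080)   # e.g. GET http://localhost:8080/personnel/1234
```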

Message Broker

Message brokers sit between components, mediating communication and providing loose coupling by minimizing the awareness that components need to have of each other in order to be able to exchange messages.  Many message brokers also provide temporal decoupling, allowing for asynchronous communications.  At its most basic, a broker takes incoming messages from applications and provides some form of communications mediation, such as:


  • Message routing;
  • Message distribution;
  • Support for both request-response and publish-subscribe messaging.

If these seem similar to the capabilities offered by an ESB, it’s because they are.  Message brokers and ESBs have different technological roots, but have some overlap in the capabilities they provide.  There are, however, a couple of basic differences.  A message broker (generally) provides only message distribution capabilities, either in the form of queues (for single consumers) or topics (for event distribution/multiple consumers).  An ESB can work with a message broker or use more direct messaging mechanisms such as HTTP.  Additionally, ESBs generally provide content mediation and transformation capabilities.  Unfortunately, there isn’t an industry standard for where an ESB ends and a message broker begins.  Open source message brokers include the Fuse Message Broker, the WSO2 Message Broker and Mosquitto.
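The toy broker below (an in-memory sketch, not any of the products named above) shows the topic pattern: publishers and subscribers know only a topic name and never each other, which is exactly the decoupling the pattern provides.

```python
# Toy in-memory broker illustrating topic-based publish-subscribe:
# publishers and subscribers share only a topic name, never direct references.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._topics = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self._topics[topic].append(handler)

    def publish(self, topic: str, message) -> None:
        for handler in self._topics[topic]:
            handler(message)

broker = Broker()
broker.subscribe("tracks", lambda m: print("Map display:", m))
broker.subscribe("tracks", lambda m: print("Archive service:", m))
broker.publish("tracks", {"track_id": "T-100", "status": "active"})
```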

Loose Coupling and Future Proofing

In the end, future proofing is about ensuring that a delivered system is readily adaptable with respect to changing technologies, changing missions and changing operational environments.  There are a number of requirements implicit to adaptability:
  • The system must be functionally modifiable such that it can complete missions for which it was not originally designed;
  • The cost of modification must be kept to a minimum; and
  • The time required for the modification must be kept to a minimum.
Following the chain to its logical conclusion, future proofing equates to adaptability, which equates to loose coupling!  While the F-35 may be both operationally and technically tightly coupled, the wide availability of middleware tools promoting decoupling – many of them open source – ensures that military and government software programs do not have to be. 




Friday, May 3, 2013

Parallel Tracks: National Cyber Security Policy and the Implementation of Secure Software

When it comes to cyber operations, we’re finally on target with respect to policy.  Practical implementation is another matter.  Fortunately, there is a way ahead, and one that leverages open source offerings to control cost and increase availability.

On 26 April 2013, the Associated Press published a story proclaiming that American military academies are “grooming future officers for warfare in cyberspace.”  The article highlighted the increased emphasis being placed on cyber operations by the US Department of Defense, quoting a recently commissioned US Air Force officer who had given up plans to become a fighter pilot:  “It’s a challenge, and for people who like a challenge, it’s the only place to be.”

Inspirational as Lieutenant Keefer’s story is, the reality is that American military cyber-preparedness is still in its infancy. It was only in 2012 that the US Naval Academy began requiring freshmen to take a cybersecurity course or, for that matter, offered a cyber operations major.  Upperclassmen will not be required to take additional cyber-focused courses until 2014.  A statement made by the Academy’s superintendent, Vice Admiral Michael Miller, that “There’s a great deal of interest, much more than we could possibly, initially, entertain” indicates not only student interest but, troublingly, a lack of resource allocation. 

Combined, the lack of resourcing and the relatively small academic emphasis placed on cyber operations paint a very different picture than the publicized student enthusiasm.  It’s the military; commanders and leaders have broad latitude to make rapid, sweeping changes when subjected to the demands of either politics or tactical realities.  Today, the United States has a modular Army, women are authorized to serve in combat roles and a service member’s sexual orientation is a non-issue. 

What we do not have is a cohesive national cyber operations policy that addresses force structure, operational doctrine, the implementation of information assurance policies and the cyber-operations education of leadership in the acquisitions, training, doctrine and operational communities.  Compounding the problem is an inability to attract the necessary talent to the nation’s premier cyber defense organization, US Cyber Command.  According to a recent Defense News article, Cyber Command is still nearly 4,000 personnel short of “a proper cyber force to adequately give capability to the national command authorities, to the COCOMs (Combatant Commands), and defend the nation.”

Fortunately, there is broad recognition at the policy-making level of the need to harden not only the national defense cyber posture, but that of critical commercial infrastructure as well.  Conferences such as the recent International Engagement on Cyber, held at Georgetown University, and the upcoming Government Cybersecurity Forum, are well attended by government, industry and academia.  Everyone agrees as to the nature of the threat and the need for both proactive and reactive response.  Agreement at the policy level, however, is not the same as the implementation of concrete measures and technologies that mitigate the dangers inherent to today’s connected environment.  

Complicating matters is the fact that not all implementations are equal.  Setting a standard is not the same as implementing it.  Implementing on a scale and at a cost appropriate to large business or government is often not feasible for small or medium business.  And implementations that adversely impact productivity are bound to be resisted by organizations operating under temporal and/or fiscal constraints – and in the defense and intelligence sector these days, everyone operates under increasingly tight temporal and fiscal constraints.  In the end, it often seems that the only acceptable cyber defense implementations will be ones that are temporally transparent to users and, as much as possible, fiscally transparent to managers and executives at operating organizations.

Fortunately, there is a path ahead for organizations seeking such a transparent defense.  More accurately, there are two paths ahead, one that addresses runtime concerns and another that addresses design-time issues.

Runtime Concerns

Runtime for modern distributed systems is characterized by a constant flow of message traffic between system components.  Typically, a message represents a request for a resource, such as a particular data entity or processing capability.  These messages may adhere to any of a number of standards.  At the most basic level, securing such an environment requires the validation of the identity of a message sender against a predefined list of users (human or machine) who are permitted to make requests against system resources.  This identity validation, or authentication, may take many forms, such as the provision of a valid username and password pair, of a valid digital certificate or of a valid biometric signature.

Authentication on its own is not a strong enough security mechanism.  Alone, it creates an environment where any authenticated user can access any system resource.  The Wikileaks breach resulted from just such an environment.  To harden systems, an access control, or authorization, scheme is often added to the authentication scheme.  When properly applied, authorization mechanisms ensure that the principle of minimum privilege is applied.  That is, that authenticated users have access to only those system resources consistent with and necessary for their job duties.  The authorization scheme preferred by the US Department of Defense (DoD) is called Policy Based Access Control (PBAC). (PBAC is synonymous with Attribute Based Access Control (ABAC)).

In a PBAC scenario, an authenticated user makes a request for a resource.  The request is halted by a systemic gate guard or enforcement point.  The enforcement point requests an access control decision from a decision making point.  The decision maker evaluates information about the requestor and the resource with respect to a predefined access control policy and renders a decision, which is relayed to the enforcement point.  The enforcement point implements the decision as either a go or a no-go for the request.

The PBAC scenario happens transparently with respect to the user. Importantly, with the incorporation of modern interface definition languages such as Apache Thrift, very little system latency is generated with the additional access control processing.  PBAC is usually implemented through the use of open standards such as the eXtensible Access Control Markup Language (XACML) and the Security Assertion Markup Language (SAML).  PBAC implementations are in use with government, military and commercial enterprises throughout the world.
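The sketch below compresses the enforcement-point/decision-point exchange into a few lines of illustrative Python.  The attribute names and the single policy rule are invented; a production system would express the policy in a standard such as XACML and evaluate it in a dedicated decision point.

```python
# Minimal sketch of the PBAC flow: a policy enforcement point (PEP) asks a
# policy decision point (PDP) to evaluate subject and resource attributes
# against a policy. Attribute names and the rule itself are illustrative.

def pdp_decide(subject: dict, resource: dict, action: str) -> str:
    """Policy decision point: analysts may read intel reports for their own region."""
    if (subject.get("role") == "analyst"
            and action == "read"
            and resource.get("type") == "intel_report"
            and resource.get("region") == subject.get("region")):
        return "Permit"
    return "Deny"

def pep_handle_request(subject: dict, resource: dict, action: str) -> str:
    """Policy enforcement point: halt the request and apply the PDP's decision."""
    decision = pdp_decide(subject, resource, action)
    if decision == "Permit":
        return f"released {resource['id']} to {subject['id']}"
    return f"request for {resource['id']} denied"

print(pep_handle_request(
    {"id": "jdoe", "role": "analyst", "region": "PACOM"},
    {"id": "rpt-42", "type": "intel_report", "region": "PACOM"},
    "read"))
```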

As can be seen, PBAC helps to ensure the core information security principles of confidentiality (only authorized users have access to the requested resources), authenticity (only properly authenticated users can make requests for resources) and non-repudiation (the resource request is tied to a specific, authenticated user).  What PBAC doesn’t do is help to ensure message integrity or system availability.    As noted, the core of distributed (and that includes service oriented or Cloud-based) systems is the exchange of messages. It’s not difficult to imagine a scenario where legitimate messages contain a malware payload, a problem not addressed by traditional PBAC implementations. 

However, the PBAC architecture provides a useful archetype for addressing this threat.  PBAC is premised on the primary gate guard, or enforcement point.  This mechanism stops – or mediates – all incoming requests for an access control check.  (Mediation is a standard data processing pattern whereby data in transit is operated upon before it arrives at its final destination.) 

Instead of conceiving of a PBAC scheme as THE security gateway, architects could conceive of it as phase one mediation of the security gateway process.  Upon successful authorization mediation, the request would pass to phase two mediation, where it would be scanned for malware payloads.  Clean messages would be allowed to proceed, while infected messages would be quarantined and the administrator notified.
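A minimal sketch of that two-phase pipeline follows; the signature list and quarantine hook are placeholders, not a real malware engine.  Each incoming request passes an authorization stage and then a content-scanning stage before it reaches the target service.

```python
# Sketch of two-phase security mediation: phase one authorizes the request,
# phase two scans the payload before it reaches the target service.
# The signature list and quarantine step are placeholders, not a real engine.

KNOWN_BAD_SIGNATURES = ["<script>", "cmd.exe", "DROP TABLE"]

def authorize(request: dict) -> bool:
    # Phase one: stands in for the PBAC decision described above.
    return bool(request.get("authenticated")) and request.get("role") == "analyst"

def scan_payload(payload: str) -> bool:
    # Phase two: naive signature match standing in for real malware detection.
    return not any(sig in payload for sig in KNOWN_BAD_SIGNATURES)

def quarantine(request: dict) -> None:
    print("Administrator notified; message held for review:", request["id"])

def mediate(request: dict) -> str:
    if not authorize(request):
        return "denied"
    if not scan_payload(request.get("payload", "")):
        quarantine(request)
        return "quarantined"
    return "forwarded to service"

print(mediate({"id": "m1", "authenticated": True, "role": "analyst",
               "payload": "routine status report"}))
print(mediate({"id": "m2", "authenticated": True, "role": "analyst",
               "payload": "please run cmd.exe /c ..."}))
```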

Design Time Concerns

For systems earmarked for use by government or military organizations, the completion of coding and functional testing is not, to paraphrase Winston Churchill, the end. It is not even the beginning of the end.  It is merely the end of the beginning. Following the development effort, the system is turned over to a certification and accreditation (C&A) process that can take up to eighteen months and cost more than a million dollars.  The C&A process is meant to ensure that the system complies with applicable security paradigms and standards and that it is appropriately hardened. 

Problematically, the C&A process often creates a laundry list of security holes that must be patched prior to acceptance.  This can result in developers closing only gaps noted, and not truly securing the system.  Cybersecurity, in this case, becomes an overlay, not something that was “baked into” the system from the beginning.  What’s really needed is a way to demonstrate that cybersecurity and information assurance requirements are met by the software as it is being developed.

In this case, the defense industry could take a page from commercial industry’s DevOps community.  DevOps principles stress continuous delivery.  In order to achieve continuous delivery, everything possible must be automated, allowing the achievement of continuous development, continuous integration and continuous test.  The critical elements for addressing the defense C&A process are continuous, automated test and integration.  In this environment, not only the software functionality but also the organization’s governance principles are embodied in the automated test regime.  For the defense community, these principles include the cybersecurity and information assurance requirements flowing from DoD Directive 8500.01E (and related documents).

The objective DevOps environment would be instantiated with a governed, distributed, Cloud-based development platform.  In this environment, when a developer checks in a code module, it is automatically tested against not only functional requirements, but security (and interoperability and performance) requirements embodied in the DevOps platform’s test regime.  If it doesn’t meet all of the requirements, it is rejected, and the developer is provided a report indicating why the module failed.  The implications of such an environment are significant.  Potentially, the only mechanism that needs to be formally certified and accredited is the DevOps platform. Any software issuing from that trusted platform would be automatically certified and accredited.  As the platform could be certified independently and prior to the commencement of development activities, no independent C&A test period would be necessary for the delivered system, and fielding could begin as soon as the coding was complete.  This would add an unprecedented level of agility to the defense software acquisition process.

The advantages are magnified when the emerging government and defense mobile environments are considered.  A program might produce dozens of apps each month.  Currently, the C&A overhead associated with such a volume of independent software deliveries is, simply, crushing.  A certified, governed, DevOps style development environment would allow the rapid and continuous delivery of trusted, certified apps.

Affordability

Software packages for organizations seeking to implement transparent, effective cyber-defense mechanisms in both runtime (PBAC + malware mediation) and design-time environments exist today.  The savvy program manager’s first question will – and should – be “How much is this going to cost me?”  The short answer is that there doesn’t have to be any acquisition cost at all. 
A good example of the runtime solution is the WSO2 Security and Identity Gateway Solution.  This solution is an implementation pattern that leverages standard SOA components including an enterprise service bus (ESB), a governance registry, a business activity monitoring tool and an identity and access management (IdAM) component to deliver:

  • Centralized authentication;
  • Centralized PBAC;
  • Collaboration between different security protocols;
  • Throttling;
  • Standards-based single sign on;
  • Caching;
  • Content based filtering; and
  • Schema based input validation.
An example of the design-time solution can be seen in the WSO2 App Factory.  App Factory is a governed, distributed development environment designed from the ground up to operate in the Cloud.  Effectively, it is a DevOps Platform-as-a-Service (PaaS).  It provides complete application lifecycle management in a manner consistent with organizational policies and governance.  It does so in a completely automated manner, while maintaining man-in-the-loop control.  Specific capabilities include:

  • Product and team management;
  • Software development workflow;
  • Governance and compliance;
  • Development status monitoring and reporting;
  • Code development;
  • Issue tracking;
  • Configuration management;
  • Continuous build;
  • Continuous integration;
  • Continuous automated test; and
  • Continuous deployment.
All of WSO2’s products are 100% open source, and as a result, there are no licensing fees.  The open source promise doesn’t stop there, of course.  For example:  SUSE provides a complete, open source enterprise Linux operating system as well as a Cloud environment.  PostgreSQL provides an enterprise level, spatially enabled database.  The Apache Accumulo project offers a highly scalable, fast and secure NoSQL product.  All of these products are free – as in both beer and speech.

Conclusion

An overall national policy with respect to cyber operations (and cyber warfare) remains an ongoing effort.  That gap does not lessen the ongoing threat posed by both nation-states and non-state actors, nor should it prevent proactive members of the defense community and commercial industry from adopting software development and implementation patterns that dramatically improve an organization’s security.  Such patterns can be implemented both rapidly and cost effectively through the use of readily available open source products. More importantly, they can be implemented in such a way as to minimize disruption to the user and the organization.


Friday, April 19, 2013

Affordable Public Safety: Leveraging Open Source Software to Support Law Enforcement Surveillance Tools

Surveillance systems are specialized data analytics tools leveraging many of the processes and components found in commercial enterprises, defense organizations and the intelligence community.  The recent tragedy in Boston ensures an increased demand for such systems.  Fortunately, many of the systems’ core components can be satisfied by enterprise grade open source software that comes as part of a unified platform.  By both eliminating licensing costs and improving platform productivity, total cost of ownership (TCO) is significantly reduced, allowing access to modern security tools and techniques to be extended to smaller agencies and jurisdictions. 

Commenting on the April 15th Boston Marathon bombing during an interview with MSNBC’s Andrea Mitchell, US Representative Peter King (R-NY) expressed a belief that Americans are going to have to get used to many more surveillance cameras in public spaces:
So, I do think we need more cameras. We have to stay ahead of the terrorists and I do know in New York, the Lower Manhattan Security Initiative, which is based on cameras, the outstanding work that results from that. So yes, I do favor more cameras. They're a great law enforcement method and device. And again, it keeps us ahead of the terrorists, who are constantly trying to kill us.
Questions of domestic policy and civil liberties aside, Representative King’s inclination toward additional surveillance mechanisms has a number of interesting systemic ripple effects.  Understanding these effects requires closer examination of a surveillance system’s constituent components and the nature of the value it provides.

A generic surveillance system consists of four core capabilities (such systems can, of course, be further decomposed).  These include:

Collection:  The acquisition of data about the locale or subject of interest.  Representative King’s cameras are one type of collection mechanism, gathering geospatially and temporally referenced imagery and video data.  Other collection mechanisms might acquire radio-frequency data – cell phone conversations, text messages or emails sent over Wi-Fi and mobile data networks – or use laser spectrometers to collect information about what people have done or eaten based on residues on skin and clothing.
Analysis:  Unanalyzed data, like an unmined vein of gold, is little more than potential value.  Analysis tools, like the crushing and precipitation mechanisms in a gold mining operation, both identify relevant events within the overall set of collected data and make sense of the identified events within an operational context.  Analysis, by transforming data into information and information into knowledge, provides the critical element of “what does this mean to me at this time.”
Decision Support:  Most law enforcement and emergency response organizations have doctrines and policies outlining the expected nature and scope of a response to a given type of incident.  Once analysis has identified the type and magnitude of an event, it’s simply a matter of applying logic consistent with the organization’s business rules to arrive at a doctrinally valid recommended course of action (a minimal sketch of this rule-driven step follows the dissemination description below).
Dissemination:   The best analysis and business rules engines are useless if results and recommendations aren’t placed in the hands of people and organizations with the means to influence events in a timely manner.  Dissemination mechanisms not only ensure timely delivery of critical information, but also preserve the core attributes of information security.  They must ensure that the information being distributed is available only to authorized entities (confidentiality), that it is not altered or corrupted in any way while in transit (integrity), that it can be retrieved when necessary (availability), that both sides of the dissemination transaction have confidence in the identity of the other (authenticity) and that an undeniable audit trail of the transaction exists (non-repudiation).
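Here is the decision-support sketch referenced above: a tiny rule table maps an analyzed event’s type and severity to a doctrinally prescribed course of action.  The event types, severities and response actions are invented for illustration.

```python
# Minimal decision-support sketch: map an analyzed event to a doctrinally
# prescribed response using a rule table. Event types, severities and
# response actions are invented for illustration.

RULES = [
    # (event_type, minimum_severity, recommended_course_of_action) -- most severe first
    ("unattended_package", 3, "dispatch EOD team and establish 100m cordon"),
    ("unattended_package", 1, "dispatch nearest patrol unit to investigate"),
    ("perimeter_breach",   1, "alert on-site security and review camera feed"),
]

def recommend(event: dict) -> str:
    for event_type, min_severity, action in RULES:
        if event["type"] == event_type and event["severity"] >= min_severity:
            return action
    return "log event; no action required by current doctrine"

print(recommend({"type": "unattended_package", "severity": 4}))
print(recommend({"type": "perimeter_breach", "severity": 2}))
```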
Using the generic system as a vantage point, it’s easy to see that Representative King’s desire for more cameras exposes only the tip of the security and surveillance iceberg.  An effective surveillance system must solve all four problems concurrently if it is to successfully fulfill its operational requirements.  Having more cameras addresses only the collection issue.  Additionally, fielding a greatly augmented collection capability prior to developing robust analysis, decision support and dissemination capabilities can overwhelm analyst resources and frustrate timely data analysis and dissemination. 

As an illustration, suppose that Representative King gets his way and the number of cameras for a given area is greatly increased, without concomitant improvements to the back end analysis, decision support and dissemination capabilities.  For a system deploying 100 cameras, 2,400 hours of video are collected every day (and 16,800 every week).
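The collection figures are straightforward arithmetic, made explicit below.

```python
# How the collection figures above are derived.
cameras = 100
hours_per_day = cameras * 24          # 2,400 hours of video per day
hours_per_week = hours_per_day * 7    # 16,800 hours per week
print(hours_per_day, hours_per_week)
```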

Boston’s police department, among the 20 largest in the United States, has about 2,800 uniformed and civilian personnel.  Theoretically, all the video could be reviewed in the course of a single eight hour shift – assuming that the city was willing to withdraw every single police employee from the street and dedicate them to the task, that every employee was a qualified imagery analyst and that only a single analysis pass was necessary.  Realistically, the requirement to manually analyze that much data could overwhelm even the New York Police Department’s much larger forensics investigation division.  (This problem is not unique to law enforcement.  In 2011, US Air Force surveillance systems collected approximately 720 years of video data over Afghanistan.)

However, even a significantly augmented analyst force doesn’t address the fact that current surveillance architectures are inherently reactive. That is, they provide excellent investigative and forensic tools to establish the nucleus of operative facts after an event has taken place but are not preventative or prophylactic in nature.  Law enforcement’s goal with respect to mass casualty events is to ensure that they remain inchoate; that terrorist plans are never realized.  Based on this, we can safely speculate that what Representative King is really seeking is a significantly improved surveillance architecture, of which the collection hardware is only part.  Such an architecture might include image pattern recognition software capable of identifying backpacks or duffel bags, or laser spectrometers capable of detecting explosives residue from hundreds of feet away.  Categorized by capability, other architectural components include:

Analysis
  • A pattern recognition tool;
  • A real-time data analytics engine; and
  • A storage mechanism capable of handling large data sets that come in at a very high velocity.
Decision Support

  • A business rules processor capable of storing rule sets representing doctrine and executing rules in the context of analyzed data; and
  • A business process engine capable of implementing processes indicated by the business rules engine.

Dissemination
  • An integration and transport mechanism capable of delivering decision support data to a diverse set of applications and endpoints; and
  • A security mechanism ensuring that information can only be transmitted, stored or acted upon by authenticated and authorized system entities.
As can be seen, effective surveillance systems have a number of infrastructural middleware sub-components operating in parallel.  The attendant software development effort isn’t trivial; the sheer volume and variety of components is a significant cost driver.  Each sub-component can require specific expertise, which in turn can require employees with special (and expensive) skills and knowledge.  Additionally, each sub-component  may come with a discrete licensing fee.  Requirements for specialized knowledge and licensing fees combine to create a TCO that may be beyond the budgetary means of many agencies.

Part of the answer lies in building the surveillance system around a highly productive, highly integrated platform that provides dedicated products leveraging a consistent, composable core.  For example, if the integration/transport, security and business rules mechanisms share a common core providing key enterprise service oriented architecture (SOA) functionality (e.g., mechanisms to provide and consume services, mediation, service orchestration, service governance, business process management, service monitoring and support for open standards such as WS-* and REST), less expertise on individual products is required, and fewer expensive experts are needed on the payroll. 

There are additional platform characteristics that can mitigate TCO:

  • By using an open source platform, licensing fees are eliminated;
  • By using a platform based on open standards, expensive vendor lock-in is avoided and innovation, adaptability and flexibility are promoted; and
  • Configurable components offer greater productivity than those requiring custom integration code.
Theory and Practice

Fortunately for law enforcement and the security industry, open source enterprise middleware based on a common, composable and configurable platform exists in practice as well as in theory, and it’s possible to map the requirements outlined above to existing, available and – importantly – supported software products:


Analysis
  • Pattern recognition – the Open Pattern Recognition project shares algorithms of image processing, computer vision, natural language processing, pattern recognition, machine learning and related fields (*not open source);
  • Real time analytics;
  • High volume data storage – Accumulo is a NoSQL database developed by the NSA and open sourced through the Apache Software Foundation; it offers cell level security.

Decision Support
  • Business rule management;
  • Business process management.

Dissemination
  • Integration and transport;
  • Security & identity.

Conclusion

The terrible events in Boston and the subsequent identification of the suspects testify to both the need for and the effectiveness of surveillance systems.  Two issues become clear:  the need to improve the processing of surveillance data in a manner that helps prevent terrorist incidents from taking place, and the need to provide systems that are affordable to agencies of all sizes and budgets.  Fortunately, technical advances, coupled with the proliferation of high quality open source software, offer the promise of achieving both in the near future.