Wednesday, May 29, 2013

Future Proof by Design: Loose Coupling and Ensuring Long Term Success



Technical future proofing assumes constant change in both operational environments and technology.  The impacts of change are mitigated by emphasizing versatility and adaptability in system architecture.  This tenet applies equally to both hardware and software systems.  By ensuring that system components are modular, and that they interact using standard interface patterns and in a loosely coupled manner, components can be modified or readily swapped for more capable variants across the entire system lifecycle.  As a result, the system as a whole remains functional and useful well past the time when the original configuration has become obsolete.  This article illustrates risks associated with inadequate future proofing and provides an overview of tools and techniques that promote long lasting, future proofed software systems.

The Parable of the Joint Strike Fighter

F-35 C-Variant Testing; courtesy of www.jsf.mil
During a sidebar conversation at the recent DC Open Source Community Summit, a lively discussion ensued concerning the troubled F-35 Joint Strike Fighter (JSF) program.  The JSF program began in 2001, with a projected budget of $233 billion and an estimated initial operational capability (IOC) taking place between 2010 and 2012.  As chronicled by the Government Accountability Office (GAO) in a report published on April 17, 2013, the program’s price tag is now nearing $400 billion and IOC is not expected until 2017 or 2018.

During testimony before the US Congress on April 24, 2013, the JSF Program Manager, US Air Force Lieutenant General Christopher Bogdan, indicated that problems with the JSF’s complex software suite created risks to the 2017 IOC date.
The ever-expanding numbers and constantly slipping dates do not cast either the defense acquisitions community or industry in the most flattering light.  However, righteous taxpayer anger and legislators’ ire mask what may be the most troubling aspect of the JSF lifecycle.  Specifically, the JSF may be operationally obsolete by the time it is delivered to combat squadrons.  In the twelve years since program inception, armaments developments in both the United States and abroad may render the aircraft significantly less effective than planned.

Courtesy www.defenceweb.co.za
In 2010, it was reported that the People’s Republic of China had deployed the Dong-Feng (DF) 21D mobile medium-range ballistic missile (MRBM).  The DF-21D, with a 1,500 nautical mile range, is designed to target and destroy aircraft carriers.  The threat to the JSF’s utility is clear:  The F-35 has a combat radius of approximately 600 nautical miles.  Carriers operating F-35s in a hostile Pacific Rim would have to steam through – and then loiter within – a danger zone of approximately a thousand miles in order to deploy their strike aircraft.

Courtesy www.navytimes.com
More recently, on May 14, 2013, the Associated Press reported that, for the first time ever, the US Navy had launched an unmanned aircraft from an aircraft carrier at sea.  The stealthy aircraft, called the X-47B, is capable of speeds approaching Mach 1 (it’s officially described as “high subsonic”), has a range in excess of 2,100 nautical miles, contains internal weapons bays and is expected to serve as the prototype for the Unmanned Carrier Launched Airborne Surveillance and Strike System (UCLASS).  UCLASS is expected to enter service in 2019.

The UCLASS’ existence raises uncomfortable questions about the JSF:  Why is a relatively short-ranged manned aircraft, whose limitations expose expensive and vulnerable capital ships to catastrophic danger, necessary when a long-range, stealthy, unmanned platform is available to complete many of the same missions?

The fact that the JSF program wasn’t future proofed from an operational perspective isn’t surprising.  It isn’t realistic to expect accurate technological (or budgeting) predictions a decade in advance (the X-47B first flew in 2011).  What is surprising is that the program wasn’t designed from a componentized, modular perspective that would have guaranteed continued relevance and capability.  Instead, the F-35 is a tightly integrated package representing the conceptual state of the art as applied to the operational concerns of the very early 21st century.

Software and Future Proofing

The issues and architectural requirements attendant to future proofing aren’t unique to hardware in general or weapon systems in particular.  Many legacy software systems suffer from similar issues stemming from monolithic, tightly coupled designs built explicitly for one or more narrowly defined purposes.  Unfortunately, these design paradigms result in heavy sustainment burdens, vendor lock-in and difficulty adapting to new requirements.

Fortunately for software acquisitions program managers, industry has evolved architectural paradigms intended to alleviate software future proofing concerns.  Examples of these paradigms are service oriented architecture (SOA) and Cloud computing concepts such as software as a service (SaaS) and platform as a service (PaaS).  However, at the heart of all of these future proofing methodologies is a design tenet known as “loose coupling.”

“Coupling” is a technical term of art that refers to the degree of direct knowledge that system components need to have of one another in order to interoperate.  Loose coupling is a design tenet that minimizes the dependencies between a system’s components to the greatest practical extent.  Its objective is to contain the impact that a change to one component has on the rest of the system.  Limiting the number of interconnections between components through adherence to loose coupling principles brings long term benefits such as easier problem isolation, simplified testing and lower maintenance and sustainment costs.

The extent of a system’s coupling can be measured in terms of the degree to which a component can be changed without adverse effects to the rest of the system.  Such changes might include adding, removing, renaming or reconfiguring components, or modifying a component’s internal characteristics or the way it interfaces with other components.  The more readily individual components can be modified or swapped without impacting the rest of the system, the more loosely coupled that system is.  With respect to future-proofing, greater degrees of loose coupling translate into faster, more efficient and less costly modifications when requirements change.  In short, looser coupling equates to better future proofing.
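To make the distinction concrete, here is a minimal Java sketch; the class and interface names are hypothetical and chosen purely for illustration.  The tightly coupled report generator constructs and calls a specific data store class directly, so replacing the store means rewriting the report code.  The loosely coupled version depends only on an interface, so the store can be swapped for a newer variant, a web service or a test stub without touching the report logic.

  // A minimal sketch of tight versus loose coupling (hypothetical names, illustration only).

  // Tightly coupled: the report generator knows a concrete class and cannot
  // work with anything else without being rewritten.
  class LegacyOracleStore {
      String fetchRecord(int id) { return "record-" + id; }
  }

  class TightlyCoupledReport {
      private final LegacyOracleStore store = new LegacyOracleStore();
      String build(int id) { return "Report: " + store.fetchRecord(id); }
  }

  // Loosely coupled: the report generator depends only on an interface.
  // Any data source implementing RecordSource can be substituted later.
  interface RecordSource {
      String fetchRecord(int id);
  }

  class LooselyCoupledReport {
      private final RecordSource source;
      LooselyCoupledReport(RecordSource source) { this.source = source; }
      String build(int id) { return "Report: " + source.fetchRecord(id); }
  }

  public class CouplingSketch {
      public static void main(String[] args) {
          // Swap in any implementation without changing LooselyCoupledReport.
          RecordSource newStore = id -> "record-" + id + " (from the replacement store)";
          System.out.println(new LooselyCoupledReport(newStore).build(42));
      }
  }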

Loose Coupling Tools and Components

There are many ways to enhance loose coupling.  For example, data exchanges could be mandated to use a flexible data format such as XML or JSON, with the rules for using the data published as a schema.  Another method is to use dedicated middleware that isolates system components from one another.  Within the SOA paradigm, there are three widely used middleware tools that promote loose coupling:  the enterprise service bus (ESB), the data service and the message broker.  A brief explanation of each is worthwhile.
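As a small illustration of the data format point, the following sketch assumes the Jackson library (jackson-databind) and shows a consumer configured as a tolerant reader: it ignores message fields it does not recognize, so a data producer can extend its schema without breaking existing subscribers.  The message structure and field names are invented for the example.

  // A sketch of tolerant JSON consumption, assuming the Jackson library
  // (com.fasterxml.jackson.core:jackson-databind) is on the classpath.
  import com.fasterxml.jackson.databind.DeserializationFeature;
  import com.fasterxml.jackson.databind.ObjectMapper;

  public class TolerantReaderSketch {
      // The consumer's view of the message; field names here are illustrative.
      public static class TrackMessage {
          public String trackId;
          public double latitude;
          public double longitude;
      }

      public static void main(String[] args) throws Exception {
          // The producer has added a new "altitude" field since this consumer was built.
          String json = "{\"trackId\":\"T-100\",\"latitude\":38.9,\"longitude\":-77.0,\"altitude\":9500}";

          ObjectMapper mapper = new ObjectMapper();
          // Ignore fields the consumer does not know about instead of failing,
          // so producer-side schema evolution does not break this component.
          mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

          TrackMessage msg = mapper.readValue(json, TrackMessage.class);
          System.out.println(msg.trackId + " @ " + msg.latitude + "," + msg.longitude);
      }
  }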

ESB

The ESB is a software component that provides data transport, transformation and mediation services between interoperating software services, applications and components within a heterogeneous and complex application landscape.  All interactions between system components take place through the ESB, isolating the components from one another.  As a result, changes to a component are confined to the component and any mediation adapters within the ESB that transform the component’s outputs into formats useful to the rest of the system.  All other system components are shielded from the change.  In addition to their integration capabilities, ESBs also provide services such as message routing; gateway functionality for messages, services and application programming interfaces (APIs); and security and monitoring.  Many vendors provide ESB offerings.  Interestingly, some of the most capable are also open source, such as the WSO2 ESB, the Mulesoft ESB and OpenESB.
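The mediation idea at the heart of an ESB can be sketched with Apache Camel, an open source integration framework commonly embedded in ESB products.  The endpoint URIs and the transformation below are illustrative assumptions rather than any particular product’s configuration; the point is that the adapter logic lives in the bus, not in the endpoints.

  // A minimal mediation sketch using Apache Camel (camel-core assumed on the classpath).
  import org.apache.camel.CamelContext;
  import org.apache.camel.builder.RouteBuilder;
  import org.apache.camel.impl.DefaultCamelContext;

  public class MediationSketch {
      public static void main(String[] args) throws Exception {
          CamelContext context = new DefaultCamelContext();
          context.addRoutes(new RouteBuilder() {
              @Override
              public void configure() {
                  // Producer components drop messages into "inbox"; the bus adapts the
                  // payload and forwards it, so neither side knows about the other.
                  from("file:inbox?noop=true")
                      .convertBodyTo(String.class)
                      .process(exchange -> {
                          String legacyPayload = exchange.getIn().getBody(String.class);
                          // Adapter logic lives here, inside the bus, not in the endpoints.
                          exchange.getIn().setBody(legacyPayload.trim().toUpperCase());
                      })
                      .to("log:mediation")   // monitoring
                      .to("file:outbox");    // downstream consumer's drop point
              }
          });
          context.start();
          Thread.sleep(10_000);              // let the route run briefly, then shut down
          context.stop();
      }
  }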

Data Services

Data services are integration mechanisms addressing the challenges of heterogeneous and disparate data stores that underlie legacy application integration efforts.  Rather than requiring a lengthy and expensive data migration and normalization effort, architects can integrate legacy, external and objective data stores using data services that retrieve and expose data as web services or REST resources.  These data services provide loose coupling between the data stores and the rest of the system.  Changes to either a data store or a data consumer are isolated by the data service.
Moreover, the data that data services expose is not limited to traditional SQL relational database management systems (RDBMS).  NoSQL databases, CSV/Excel files, RDF triplestores/quadstores and web page scrapes can all serve as data sources.  Data services can also integrate security concepts such as authentication, authorization, confidentiality, integrity and encryption using open standards such as Basic Authentication over HTTPS, WS-Security, WS-Trust, WS-Secure Conversation, WS-Policy, WS-PolicyAttachment and WS-SecurityPolicy.  Typical of data service implementation tools (and also open source) is the WSO2 Data Services Server.
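A deliberately stripped-down sketch of the idea, using only the JDK’s built-in HTTP server and JDBC, appears below.  The JDBC URL, table and column names and port are assumptions; a production data service layers security, transformation and management on top of the same basic pattern.

  // A minimal data service sketch: a legacy store exposed as a stable REST resource.
  import com.sun.net.httpserver.HttpServer;
  import java.io.OutputStream;
  import java.net.InetSocketAddress;
  import java.nio.charset.StandardCharsets;
  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class DataServiceSketch {
      public static void main(String[] args) throws Exception {
          HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
          // Consumers see a stable REST resource; the legacy store behind it can change.
          server.createContext("/units", exchange -> {
              StringBuilder json = new StringBuilder("[");
              try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/legacy", "user", "pass");
                   Statement stmt = conn.createStatement();
                   ResultSet rs = stmt.executeQuery("SELECT name, status FROM units")) {
                  while (rs.next()) {
                      if (json.length() > 1) json.append(",");
                      json.append("{\"name\":\"").append(rs.getString("name"))
                          .append("\",\"status\":\"").append(rs.getString("status")).append("\"}");
                  }
              } catch (Exception e) {
                  // A real service would map this to a proper error response.
              }
              json.append("]");
              byte[] body = json.toString().getBytes(StandardCharsets.UTF_8);
              exchange.getResponseHeaders().set("Content-Type", "application/json");
              exchange.sendResponseHeaders(200, body.length);
              try (OutputStream os = exchange.getResponseBody()) {
                  os.write(body);
              }
          });
          server.start();
      }
  }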

Message Broker

Message brokers sit between components, mediating communication and providing loose coupling by minimizing the awareness that components need to have of each other in order to be able to exchange messages.  Many message brokers also provide temporal decoupling, allowing for asynchronous communications.  At its most basic, a broker takes incoming messages from applications and provides some form of communications mediation, such as:


  • Message routing;
  • Message distribution;
  • Support for both request-response and publish-subscribe messaging.

If these seem similar to the capabilities offered by an ESB, it’s because they are.  Message brokers and ESBs have different technological roots, but have some overlap in the capabilities they provide.  There are, however, a couple of basic differences.  A message broker (generally) provides only message distribution capabilities, either in the form of queues (for single consumers) or topics (for event distribution/multiple consumers).  An ESB can work with a message broker or use more direct messaging mechanisms such as HTTP.  Additionally, ESBs generally provide content mediation and transformation capabilities.  Unfortunately, there isn’t an industry standard for where an ESB ends and a message broker begins.  Open source message brokers include the Fuse Message Broker, the WSO2 Message Broker and Mosquitto.
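The publish-subscribe pattern is easy to see in code.  The sketch below assumes the Eclipse Paho Java client and an MQTT broker such as Mosquitto listening on localhost; the topic name is illustrative.  Note that the publisher and subscriber know only the broker address and the topic, never each other.

  // A publish-subscribe sketch assuming the Eclipse Paho client
  // (org.eclipse.paho.client.mqttv3) and a broker on localhost:1883.
  import org.eclipse.paho.client.mqttv3.MqttClient;
  import org.eclipse.paho.client.mqttv3.MqttMessage;
  import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

  public class BrokerSketch {
      public static void main(String[] args) throws Exception {
          String broker = "tcp://localhost:1883";
          String topic = "tracks/updates";

          // Subscriber: knows only the broker address and the topic, not the publisher.
          MqttClient subscriber = new MqttClient(broker, "subscriber-1", new MemoryPersistence());
          subscriber.connect();
          subscriber.subscribe(topic, (t, msg) ->
                  System.out.println("received on " + t + ": " + new String(msg.getPayload())));

          // Publisher: likewise unaware of who, or how many, will consume the message.
          MqttClient publisher = new MqttClient(broker, "publisher-1", new MemoryPersistence());
          publisher.connect();
          publisher.publish(topic, new MqttMessage("track T-100 updated".getBytes()));

          Thread.sleep(2000);
          publisher.disconnect();
          subscriber.disconnect();
      }
  }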

Loose Coupling and Future Proofing

In the end, future proofing is about ensuring that a delivered system is readily adaptable with respect to changing technologies, changing missions and changing operational environments.  There are a number of requirements implicit to adaptability:
  • The system must be functionally modifiable such that it can complete missions for which it was not originally designed;
  • The cost of modification must be kept to a minimum; and
  • The time required for the modification must be kept to a minimum.
Following the chain to its logical conclusion, future proofing equates to adaptability, which equates to loose coupling!  While the F-35 may be both operationally and technically tightly coupled, the wide availability of middleware tools promoting decoupling – many of them open source – ensures that military and government software programs do not have to be.




Friday, May 3, 2013

Parallel Tracks: National Cyber Security Policy and the Implementation of Secure Software

When it comes to cyber operations, we’re finally on target with respect to policy.  Practical implementation is another matter.  Fortunately, there is a way ahead, and one that leverages open source offerings to control cost and increase availability.

On 26 April 2013, the Associated Press published a story proclaiming that American military academies are “grooming future officers for warfare in cyberspace.”  The article highlighted the increased emphasis being placed on cyber operations by the US Department of Defense, quoting a recently commissioned US Air Force officer who had given up plans to become a fighter pilot:  “It’s a challenge, and for people who like a challenge, it’s the only place to be.”

Inspirational as Lieutenant Keefer’s story is, the reality is that American military cyber-preparedness is still in its infancy.  It was only in 2012 that the US Naval Academy began requiring freshmen to take a cybersecurity course or, for that matter, offered a cyber operations major.  Upperclassmen will not be required to take additional cyber-focused courses until 2014.  A statement made by the Academy’s superintendent, Vice Admiral Michael Miller, that “There’s a great deal of interest, much more than we could possibly, initially, entertain” indicates not only student interest but, troublingly, a lack of resource allocation.

Combined, the lack of resourcing and the relatively small academic emphasis placed on cyber operations paint a very different picture than the publicized student enthusiasm.  It’s the military; commanders and leaders have broad latitude to make rapid, sweeping changes when subjected to the demands of either politics or tactical realities.  Today, the United States has a modular Army, women are authorized to serve in combat roles and a service member’s sexual orientation is a non-issue. 

What we do not have is a cohesive national cyber operations policy that addresses force structure, operational doctrine, the implementation of information assurance policies and the cyber-operations education of leadership in the acquisitions, training, doctrine and operational communities.  Compounding the problem is an inability to attract the necessary talent to the nation’s premier cyber defense organization, US Cyber Command.  According to a recent Defense News article, Cyber Command is still nearly 4,000 personnel short of “a proper cyber force to adequately give capability to the national command authorities, to the COCOMs (Combatant Commands), and defend the nation.”

Fortunately, there is broad recognition at the policy-making level of the need to harden not only the national defense cyber posture, but that of critical commercial infrastructure as well.  Conferences such as the recent International Engagement on Cyber, held at Georgetown University, and the upcoming Government Cybersecurity Forum, are well attended by government, industry and academia.  Everyone agrees as to the nature of the threat and the need for both proactive and reactive response.  Agreement at the policy level, however, is not the same as the implementation of concrete measures and technologies that mitigate the dangers inherent to today’s connected environment.  

Complicating matters is the fact that not all implementations are equal.  Setting a standard is not the same as implementing it.  Implementations scaled and priced for large businesses or government are often not feasible for small or medium businesses.  And implementations that adversely impact productivity are bound to be resisted by organizations operating under temporal and/or fiscal constraints.  In the defense and intelligence sector these days, everyone operates under increasingly tight temporal and fiscal constraints.  In the end, it often seems that the only acceptable cyber defense implementations will be ones that are temporally transparent to users and, as much as possible, fiscally transparent to managers and executives at operating organizations.

Fortunately, there is a path ahead for organizations seeking such a transparent defense.  More accurately, there are two paths ahead, one that addresses runtime concerns and another that addresses design-time issues.

Runtime Concerns

Runtime for modern distributed systems is characterized by a constant flow of message traffic between system components.  Typically, a message represents a request for a resource, such as a particular data entity or processing capability.  These messages may adhere to any of a number of standards.  At the most basic level, securing such an environment requires the validation of the identity of a message sender against a predefined list of users (human or machine) who are permitted to make requests against system resources.  This identity validation, or authentication, may take many forms, such as the provision of a valid username and password pair, of a valid digital certificate or of a valid biometric signature.

Authentication on its own is not a strong enough security mechanism.  Alone, it creates an environment where any authenticated user can access any system resource.  The WikiLeaks breach resulted from just such an environment.  To harden systems, an access control, or authorization, scheme is often added to the authentication scheme.  When properly applied, authorization mechanisms enforce the principle of least privilege.  That is, authenticated users have access to only those system resources consistent with and necessary for their job duties.  The authorization scheme preferred by the US Department of Defense (DoD) is called Policy Based Access Control (PBAC).  (PBAC is synonymous with Attribute Based Access Control (ABAC).)

In a PBAC scenario, an authenticated user makes a request for a resource.  The request is halted by a systemic gate guard, or enforcement point.  The enforcement point requests an access control decision from a decision point.  The decision point evaluates information about the requestor and the resource with respect to a predefined access control policy and renders a decision, which is relayed to the enforcement point.  The enforcement point implements the decision as either a go or a no-go for the request.
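The flow can be illustrated with a deliberately simplified, hand-rolled sketch of the enforcement point and decision point.  A real deployment would use an XACML policy engine and standardized request and response formats; the attribute names and the single hard-coded policy rule below are illustrative only.

  // A deliberately simplified sketch of the PBAC flow (illustrative names and policy).
  import java.util.Map;

  public class PbacSketch {

      // Policy decision point: evaluates requestor and resource attributes
      // against policy and returns permit or deny.
      static class DecisionPoint {
          boolean decide(Map<String, String> subjectAttrs, Map<String, String> resourceAttrs) {
              // Illustrative policy: clearance level must meet or exceed the
              // resource's classification, and the duty role must match.
              int clearance = Integer.parseInt(subjectAttrs.getOrDefault("clearanceLevel", "0"));
              int classification = Integer.parseInt(resourceAttrs.getOrDefault("classificationLevel", "99"));
              boolean roleMatches = resourceAttrs.getOrDefault("requiredRole", "")
                                                 .equals(subjectAttrs.get("role"));
              return clearance >= classification && roleMatches;
          }
      }

      // Policy enforcement point: halts the request and implements the decision.
      static class EnforcementPoint {
          private final DecisionPoint pdp = new DecisionPoint();

          String handleRequest(Map<String, String> subjectAttrs, Map<String, String> resourceAttrs) {
              boolean permit = pdp.decide(subjectAttrs, resourceAttrs);
              return permit ? "request forwarded to resource" : "request denied";
          }
      }

      public static void main(String[] args) {
          EnforcementPoint pep = new EnforcementPoint();
          Map<String, String> analyst = Map.of("role", "analyst", "clearanceLevel", "3");
          Map<String, String> report = Map.of("requiredRole", "analyst", "classificationLevel", "2");
          System.out.println(pep.handleRequest(analyst, report));
      }
  }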

The PBAC scenario happens transparently with respect to the user.  Importantly, with the incorporation of modern interface definition languages such as Apache Thrift, very little additional system latency is generated by the access control processing.  PBAC is usually implemented through the use of open standards such as the eXtensible Access Control Markup Language (XACML) and the Security Assertion Markup Language (SAML).  PBAC implementations are in use in government, military and commercial enterprises throughout the world.

As can be seen, PBAC helps to ensure the core information security principles of confidentiality (only authorized users have access to the requested resources), authenticity (only properly authenticated users can make requests for resources) and non-repudiation (the resource request is tied to a specific, authenticated user).  What PBAC doesn’t do is help to ensure message integrity or system availability.  As noted, the core of distributed (and that includes service oriented or Cloud-based) systems is the exchange of messages.  It’s not difficult to imagine a scenario where legitimate messages contain a malware payload, a problem not addressed by traditional PBAC implementations.

However, the PBAC architecture provides a useful archetype for addressing this threat.  PBAC is premised on a primary gate guard, or enforcement point.  This mechanism stops – or mediates – all incoming requests for an access control check.  (Mediation is a standard data processing pattern whereby data in transit is operated upon before it arrives at its final destination.)

Instead of conceiving of a PBAC scheme as THE security gateway, architects could conceive of it as phase one mediation of the security gateway process.  Upon successful authorization mediation, the request would pass to phase two mediation, where it would be scanned for malware payloads.  Clean messages would be allowed to proceed, while infected messages would be quarantined and the administrator notified.
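A hand-rolled sketch of such chained mediation follows.  The authorization and malware checks are placeholders; an operational gateway would delegate phase one to a PBAC decision point and phase two to a scanning engine.

  // A sketch of chained security mediation: each phase either passes the message on or stops it.
  import java.util.List;

  public class MediationChainSketch {

      interface MediationPhase {
          boolean mediate(String message);   // true = message may proceed
      }

      static final MediationPhase AUTHORIZATION_PHASE = message -> {
          // Phase one: placeholder authorization check.
          boolean permitted = message.startsWith("user=analyst;");
          if (!permitted) System.out.println("phase 1: request denied");
          return permitted;
      };

      static final MediationPhase MALWARE_SCAN_PHASE = message -> {
          // Phase two: placeholder payload scan; quarantine on a hit.
          boolean clean = !message.contains("EICAR");
          if (!clean) System.out.println("phase 2: message quarantined, administrator notified");
          return clean;
      };

      static void process(String message, List<MediationPhase> phases) {
          for (MediationPhase phase : phases) {
              if (!phase.mediate(message)) return;   // stop at the first failing phase
          }
          System.out.println("message delivered to destination service");
      }

      public static void main(String[] args) {
          List<MediationPhase> gateway = List.of(AUTHORIZATION_PHASE, MALWARE_SCAN_PHASE);
          process("user=analyst;payload=routine report", gateway);
          process("user=analyst;payload=EICAR test string", gateway);
      }
  }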

Design Time Concerns

For systems earmarked for use by government or military organizations, the completion of coding and functional testing is not, to paraphrase Winston Churchill, the end.  It is not even the beginning of the end.  It is merely the end of the beginning.  Following the development effort, the system is turned over to a certification and accreditation (C&A) process that can take up to eighteen months and cost more than a million dollars.  The C&A process is meant to ensure that the system complies with applicable security paradigms and standards and that it is appropriately hardened.

Problematically, the C&A process often produces a laundry list of security holes that must be patched prior to acceptance.  This can result in developers closing only the gaps noted, and not truly securing the system.  Cybersecurity, in this case, becomes an overlay, not something that was “baked into” the system from the beginning.  What’s really needed is a way to demonstrate that cybersecurity and information assurance requirements are met by the software as it is being developed.

In this case, the defense industry could take a page from commercial industry’s DevOps community.  DevOps principles stress continuous delivery.  In order to achieve continuous delivery, everything possible must be automated, allowing the achievement of continuous development, continuous integration and continuous test.  The critical elements for addressing the defense C&A process are continuous, automated test and integration.  In this environment, not only the software functionality but also the organization’s governance principles are embodied in the automated test regime.  For the defense community, these principles include the cybersecurity and information assurance requirements flowing from DoD Directive 8500.01E (and related documents).

The objective DevOps environment would be instantiated with a governed, distributed, Cloud-based development platform.  In this environment, when a developer checks in a code module, it is automatically tested not only against functional requirements, but also against the security (and interoperability and performance) requirements embodied in the DevOps platform’s test regime.  If it doesn’t meet all of the requirements, it is rejected, and the developer is provided a report indicating why the module failed.  The implications of such an environment are significant.  Potentially, the only mechanism that needs to be formally certified and accredited is the DevOps platform itself.  Any software issuing from that trusted platform would be automatically certified and accredited.  Because the platform could be certified independently and prior to the commencement of development activities, no separate C&A test period would be necessary for the delivered system, and fielding could begin as soon as coding was complete.  This would add an unprecedented level of agility to the defense software acquisition process.
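One of the automated gates in such a pipeline might look like the JUnit test sketched below.  The staging URL and the requirement it encodes (anonymous requests must be rejected with HTTP 401) are illustrative assumptions; the point is that governance requirements become executable tests that run on every check-in.

  // A sketch of one automated security gate, written as a JUnit 5 test.
  import static org.junit.jupiter.api.Assertions.assertEquals;

  import java.net.HttpURLConnection;
  import java.net.URL;
  import org.junit.jupiter.api.Test;

  public class SecurityGateTest {

      @Test
      void anonymousRequestsAreRejected() throws Exception {
          // Hypothetical staging endpoint deployed by the pipeline for this build.
          URL url = new URL("https://staging.example.mil/api/reports");
          HttpURLConnection conn = (HttpURLConnection) url.openConnection();
          conn.setRequestMethod("GET");
          // No credentials supplied: the governance requirement says this must fail.
          assertEquals(401, conn.getResponseCode(),
                  "unauthenticated requests must be rejected before fielding");
          conn.disconnect();
      }
  }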

The advantages are magnified when the emerging government and defense mobile environments are considered.  A program might produce dozens of apps each month.  Currently, the C&A overhead associated with such a volume of independent software deliveries is, simply, crushing.  A certified, governed, DevOps style development environment would allow the rapid and continuous delivery of trusted, certified apps.

Affordability

Software packages for organizations seeking to implement transparent, effective cyber-defense mechanisms in both runtime (PBAC + malware mediation) and design-time environments exist today.  The savvy program manager’s first question will – and should – be “How much is this going to cost me?”  The short answer is that there doesn’t have to be any acquisition cost at all.
A good example of the runtime solution is the WSO2 Security and Identity Gateway Solution.  This solution is an implementation pattern that leverages standard SOA components, including an enterprise service bus (ESB), a governance registry, a business activity monitoring tool and an identity and access management (IdAM) component, to deliver:

  • Centralized authentication;
  • Centralized PBAC;
  • Collaboration between different security protocols;
  • Throttling;
  • Standards-based single sign on;
  • Caching;
  • Content based filtering; and
  • Schema based input validation (see the sketch below).
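The last capability on that list, schema-based input validation, can be approximated with nothing more than the JDK, as the sketch below shows.  The schema file name and message content are invented for the example; a gateway product applies the same check to traffic in flight.

  // An illustration of schema-based input validation using only the JDK (javax.xml.validation).
  import java.io.File;
  import java.io.StringReader;
  import javax.xml.XMLConstants;
  import javax.xml.transform.stream.StreamSource;
  import javax.xml.validation.Schema;
  import javax.xml.validation.SchemaFactory;
  import javax.xml.validation.Validator;
  import org.xml.sax.SAXException;

  public class InputValidationSketch {
      public static void main(String[] args) throws Exception {
          SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
          Schema schema = factory.newSchema(new File("request.xsd"));   // assumed schema file
          Validator validator = schema.newValidator();

          String incoming = "<request><trackId>T-100</trackId></request>";
          try {
              validator.validate(new StreamSource(new StringReader(incoming)));
              System.out.println("message accepted");
          } catch (SAXException e) {
              // Malformed or unexpected input never reaches the backend service.
              System.out.println("message rejected: " + e.getMessage());
          }
      }
  }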
An example of the design-time solution can be seen in the WSO2 App Factory.  App Factory is a governed, distributed development environment designed from the ground up to operate in the Cloud.  Effectively, it is a DevOps Platform-as-a-Service (PaaS).  It provides complete application lifecycle management in a manner consistent with organizational policies and governance.  It does so in a completely automated manner, while maintaining man-in-the-loop control.  Specific capabilities include:

  • Product and team management;
  • Software development workflow;
  • Governance and compliance;
  • Development status monitoring and reporting;
  • Code development;
  • Issue tracking;
  • Configuration management;
  • Continuous build;
  • Continuous integration;
  • Continuous automated test; and
  • Continuous deployment.
All of WSO2’s products are 100% open source, and as a result, there are no licensing fees.  The open source promise doesn’t stop there, of course.  For example:  SUSE provides a complete, open source enterprise Linux operating system as well as a Cloud environment.  PostgreSQL provides an enterprise level, spatially enabled database.  The Apache Accumulo project offers a highly scalable, fast and secure NoSQL product.  All of these products are free – as in both beer and speech.

Conclusion

An overall national policy with respect to cyber operations (and cyber warfare) remains a work in progress.  This does not obviate the ongoing threat posed by both nation-states and non-state actors, nor should it prevent proactive members of the defense community and commercial industry from adopting software development and implementation patterns that dramatically improve an organization’s security.  Such patterns can be implemented both rapidly and cost-effectively through the use of readily available open source products.  More importantly, they can be implemented in such a way as to minimize disruption to the user and the organization.