Friday, April 19, 2013

Affordable Public Safety: Leveraging Open Source Software to Support Law Enforcement Surveillance Tools

Surveillance systems are specialized data analytics tools that leverage many of the processes and components found in commercial enterprises, defense organizations and the intelligence community.  The recent tragedy in Boston ensures increased demand for such systems.  Fortunately, many of the systems’ core components can be satisfied by enterprise-grade open source software delivered as part of a unified platform.  By eliminating licensing costs and improving platform productivity, total cost of ownership (TCO) is significantly reduced, extending access to modern security tools and techniques to smaller agencies and jurisdictions.

Commenting on the April 15th Boston Marathon bombing during an interview with MSNBC’s Andrea Mitchell, US Representative Peter King (R-NY) expressed a belief that Americans are going to have to get used to many more surveillance cameras in public spaces:
So, I do think we need more cameras. We have to stay ahead of the terrorists and I do know in New York, the Lower Manhattan Security Initiative, which is based on cameras, the outstanding work that results from that. So yes, I do favor more cameras. They're a great law enforcement method and device. And again, it keeps us ahead of the terrorists, who are constantly trying to kill us.
Questions of domestic policy and civil liberties aside, Representative King’s inclination toward additional surveillance mechanisms has a number of interesting systemic ripple effects.  Understanding these effects requires closer examination of a surveillance system’s constituent components and the nature of the value it provides.

A generic surveillance system consists of four core capabilities (such systems can, of course, be further decomposed).  These include:

Collection:  The acquisition of data about the locale or subject of interest.  Representative King’s cameras are one type of collection mechanism, gathering geospatially and temporally referenced imagery and video data.  Other collection mechanisms might acquire radio-frequency data, such as cell phone conversations, text messages or emails sent over Wi-Fi and mobile data networks, or might be laser spectrometers that collect information about what people have done or eaten based on residues on skin and clothing.
Analysis:  Unanalyzed data, like an unmined vein of gold, is little more than potential value.  Analysis tools, like the crushing and precipitation mechanisms in a gold mining operation, both identify relevant events within the overall set of collected data and make sense of the identified events within an operational context.  Analysis, by transforming data into information and information into knowledge, provides the critical element of “what does this mean to me at this time.”
Decision Support:  Most law enforcement and emergency response organizations have doctrines and policies outlining the expected nature and scope of a response to a given type of incident.  Once analysis has identified the type and magnitude of an event, it’s simply a matter of applying logic consistent with the organization’s business rules to arrive at a doctrinally valid recommended course of action (a minimal sketch of this idea follows this list).
Dissemination:  The best analysis and business rules engines are useless if results and recommendations aren’t placed in the hands of people and organizations with the means to influence events in a timely manner.  Dissemination mechanisms not only ensure timely delivery of critical information, but also preserve the core attributes of information security.  They must ensure that the information being distributed is available only to authorized entities (confidentiality), that it is not altered or corrupted in any way while in transit (integrity), that it can be retrieved when necessary (availability), that both sides of the dissemination transaction have confidence in the identity of the other (authenticity) and that an undeniable audit trail of the transaction exists (non-repudiation).
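To make the decision support idea a bit more concrete, here is a minimal sketch of doctrine expressed as business rules.  Everything in it (the event types, confidence thresholds and recommended actions) is invented for illustration; it is not drawn from any agency’s actual doctrine or any particular rules engine.

```python
# Minimal illustration of doctrine expressed as business rules (all names,
# thresholds and actions are hypothetical). An analysis stage emits typed
# events; the decision support stage maps each event to a recommended,
# doctrinally valid course of action.

from dataclasses import dataclass

@dataclass
class Event:
    event_type: str    # e.g., "unattended_bag"
    confidence: float  # analysis confidence, 0.0 to 1.0
    location: str

# Illustrative rule set: event type -> (minimum confidence, recommended action)
DOCTRINE_RULES = {
    "unattended_bag":     (0.75, "Dispatch patrol unit to investigate"),
    "explosives_residue": (0.60, "Dispatch EOD team and establish a perimeter"),
}

def recommend_action(event: Event) -> str:
    """Apply the business rules to a single analyzed event."""
    rule = DOCTRINE_RULES.get(event.event_type)
    if rule is None:
        return "No doctrinal rule defined; route to an analyst for review"
    threshold, action = rule
    if event.confidence >= threshold:
        return f"{action} at {event.location}"
    return "Below confidence threshold; continue monitoring"

if __name__ == "__main__":
    print(recommend_action(Event("unattended_bag", 0.82, "Boylston Street")))
```

In a real system the rule set would live in a dedicated business rules engine rather than a Python dictionary, but the shape of the logic is the same.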
Using the generic system as a vantage point, it’s easy to see that Representative King’s desire for more cameras exposes only the tip of the security and surveillance iceberg.  An effective surveillance system must solve all four problems concurrently if it is to successfully fulfill its operational requirements.  Having more cameras addresses only the collection issue.  Additionally, fielding a greatly augmented collection capability prior to developing robust analysis, decision support and dissemination capabilities can overwhelm analyst resources and frustrate timely data analysis and dissemination. 

As an illustration, suppose that Representative King gets his way and the number of cameras for a given area is greatly increased, without concomitant improvements to the back end analysis, decision support and dissemination capabilities.  For a system deploying 100 cameras, 2,400 hours of video are collected every day (and 16,800 every week).

Boston’s police department, among the 20 largest in the United States, has about 2,800 uniformed and civilian personnel.  Theoretically, all the video could be reviewed in the course of a single eight hour shift – assuming that the city was willing to withdraw every single police employee from the street and dedicate them to the task, that every employee was a qualified imagery analyst and that only a single analysis pass was necessary.  Realistically, the requirement to manually analyze that much data could overwhelm even the New York Police Department’s much larger forensics investigation division.  (This problem is not unique to law enforcement.  In 2011, US Air Force surveillance systems collected approximately 720 years of video data over Afghanistan.)
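The arithmetic behind those figures is simple enough to sanity-check; the snippet below is just a back-of-the-envelope calculation using the numbers cited above.

```python
# Back-of-the-envelope workload math using the figures cited in the text.

cameras = 100
hours_per_day = cameras * 24        # 2,400 hours of video per day
hours_per_week = hours_per_day * 7  # 16,800 hours per week

personnel = 2800                    # approximate Boston PD headcount
shift_hours = 8

# Video each employee would have to review per day, assuming (unrealistically)
# that everyone is a qualified imagery analyst and one pass is enough.
hours_per_person = hours_per_day / personnel

print(f"{hours_per_day:,} hours/day, {hours_per_week:,} hours/week")
print(f"{hours_per_person:.2f} hours of review per employee per day "
      f"({hours_per_person / shift_hours:.0%} of an {shift_hours}-hour shift)")
```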

However, even a significantly augmented analyst force doesn’t address the fact that current surveillance architectures are inherently reactive. That is, they provide excellent investigative and forensic tools to establish the nucleus of operative facts after an event has taken place, but are not preventative or prophylactic in nature.  Law enforcement’s goal with respect to mass casualty events is to ensure that they remain inchoate; that terrorist plans are never realized.  Based on this, we can safely speculate that what Representative King is really seeking is a significantly improved surveillance architecture, of which the collection hardware is only part.  Such an architecture might include image pattern recognition software capable of identifying backpacks or duffel bags, or a laser spectrometer capable of detecting explosives residue from hundreds of feet away (a rough illustration of the pattern-recognition piece follows the component list below).  Categorized by capability, other architectural components include:

Analysis
  • A pattern recognition tool;
  • A real-time data analytics engine; and
  • A storage mechanism capable of handling large data sets that come in at a very high velocity.

Decision Support
  • A business rules processor capable of storing rule sets representing doctrine and executing rules in the context of analyzed data; and
  • A business process engine capable of implementing processes indicated by the business rules engine.

Dissemination
  • An integration and transport mechanism capable of delivering decision support data to a diverse set of applications and endpoints; and
  • A security mechanism ensuring that information can only be transmitted, stored or acted upon by authenticated and authorized system entities.
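As a rough feel for what the analysis tier does, the sketch below runs OpenCV’s stock pedestrian detector over a recorded feed and flags frames containing people.  It is a stand-in only: a fielded system would use detectors trained for the specific objects of interest (backpacks, duffel bags), and the input file name used here is hypothetical.

```python
# Illustrative analysis-tier sketch: scan a recorded camera feed and flag
# frames containing people, using OpenCV's stock HOG pedestrian detector.
# A fielded system would substitute detectors trained for the objects of
# interest (e.g., backpacks or duffel bags); the input file is hypothetical.

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("camera_feed.mp4")  # hypothetical recorded feed
frame_index = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Returns candidate bounding boxes and their confidence weights.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > 0:
        # In a full pipeline this event would flow to the analytics engine
        # and business rules processor rather than being printed.
        print(f"frame {frame_index}: {len(boxes)} candidate detection(s)")
    frame_index += 1

cap.release()
```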
As can be seen, effective surveillance systems have a number of infrastructural middleware sub-components operating in parallel.  The attendant software development effort isn’t trivial; the sheer volume and variety of components is a significant cost driver.  Each sub-component can require specific expertise, which in turn can require employees with special (and expensive) skills and knowledge.  Additionally, each sub-component  may come with a discrete licensing fee.  Requirements for specialized knowledge and licensing fees combine to create a TCO that may be beyond the budgetary means of many agencies.

Part of the answer lies in building the surveillance system around a highly productive, highly integrated platform that provides dedicated products leveraging a consistent, composable core.  For example, if the integration/transport, security and business rules mechanisms share a common core providing key enterprise service oriented architecture (SOA) functionality (e.g., mechanisms to provide and consume services, mediation, service orchestration, service governance, business process management, service monitoring and support for open standards such as WS-* and REST), less expertise on individual products is required, and fewer expensive experts are needed on the payroll. 

There are additional platform characteristics that can mitigate TCO:

  • By using an open source platform, licensing fees are eliminated;
  • By using a platform based on open standards, expensive vendor lock-in is avoided and innovation, adaptability and flexibility are promoted; and
  • Configurable components offer greater productivity than those requiring custom integration code.
Theory and Practice

Fortunately for law enforcement and the security industry, open source enterprise middleware based on a common, composable and configurable platform exists in practice as well as in theory, and it’s possible to map the requirements outlined above to existing, available and – importantly – supported software products:


Requirement — Example Software Product and Notes

Analysis
  • Pattern recognition: The Open Pattern Recognition project shares algorithms for image processing, computer vision, natural language processing, pattern recognition, machine learning and related fields.  (*Not open source.)
  • Real-time analytics
  • High-volume data storage: Accumulo, a NoSQL database developed by the NSA and open sourced through the Apache Software Foundation; it offers cell-level security.

Decision Support
  • Business rule management
  • Business process management

Dissemination
  • Integration and transport
  • Security & identity


Conclusion

The terrible events in Boston, and the subsequent identification of the suspects, testify to both the need for and the effectiveness of surveillance systems.  Two issues become clear:  the need to process surveillance data in a manner that helps prevent terrorist incidents from taking place, and the need to provide systems that are affordable to agencies of all sizes and budgets.  Fortunately, technical advances, coupled with the proliferation of high-quality open source software, offer the promise of achieving both in the near future.

Sunday, April 14, 2013

DevOps and the Future of Defense and Government Software Development Programs



A frustration inherent to working within the defense industry is the sense that we’re regularly required to design and implement systems using ideas and technologies long since adopted and vetted by the commercial sector.  Defense information and communications practitioners often envy their commercial counterparts’ ability to rapidly integrate new concepts that drive down costs and timelines while improving the quality of delivered products.  Sometimes, however, this pattern doesn’t hold true.  An emergent, and promising, methodology known as “DevOps” appears to be a recombination of defense industry methodologies dating from the mid-1990s.  What sets DevOps apart from earlier process instantiations is the emergence of enabling software tools.  These tools not only significantly improve the likelihood of project success, but also provide a means to automatically integrate certification and accreditation (C&A) requirements, thus promising significant cost and time savings to government and defense programs.

I often joke with my commercial sector colleagues that my corporate title should be “Director of Impenetrable Acronyms, Morphemes and Portmanteaus.”  After all, I come fully equipped with phrases like “BLUF, if we’re going to propose DOD-wide C5I OPSEC components based on SOA for a RSTA Bat to the SES level, our IA, CM and QA stories need to be squared away.  Everybody HOOAH?”  A few glassy-eyed stares and I remember that not everyone is a defense geek.  Recently, however, the tables were turned when I found myself on the receiving end of the phrase “DevOps.”  Despite my initial (and fervent) hopes, DevOps has nothing to do with black helicopters or covert or kinetic action, and everything to do with providing a means to increase cooperation and understanding between business and technology practitioners in a manner that dramatically increases the productivity of both.

Enterprise DevOps is a development methodology that mates operational requirements (i.e., what business purpose is the objective software supposed to accomplish?) and legislative, regulatory, policy and guidance requirements (i.e., what constraints govern the manner in which that business purpose must be accomplished?) with traditional information technology development concerns, including design-time and runtime resources, solution engineering and coding.

The goal is a collaborative environment that is marked by:


  • Membership comprising constituents from both the operational and development communities;
  • Freely flowing communication; and
  • Iterative, incremental and continuous development, test and deployment.


The collaborative environment is expected to result in a more rapid delivery of capability to the operational community with a concomitant reduction in defects and errors.

That’s great.  And to my colleagues in the commercial sector, I say (with a smile!):   What took you so long?

In May 1995, Secretary of Defense William Perry directed "a fundamental change in the way the Department acquires goods and services.  The concepts of Integrated Process and Product Development (IPPD) and Integrated Product Teams (IPTs) shall be applied throughout the acquisition process to the maximum extent practicable."  The tangible artifact resulting from this directive was the DoD Guide to Integrated Product and Process Development (Version 1.0), dated February 5, 1996.

Among other things, the Guide specifies the implementation of Integrated Product Teams (IPTs).  The Guide describes an IPT as follows:


An Integrated Product Team (IPT) is a multidisciplinary group of people who are collectively responsible for delivering a defined product or process.  The IPT is composed of people who plan, execute, and implement life-cycle decisions for the system being acquired.  It includes empowered representatives (stakeholders) from all of the functional areas involved with the product—all who have a stake in the success of the program, such as design, manufacturing, test and evaluation (T&E), and logistics personnel, and, especially, the customer. 


If that sounds a lot like the description of a DevOps team, comprising members across the development and operational communities, it should.  Comparable principles drive the two concepts:  Fostering free and open communication, bridging often parochial disciplinary silos and leveraging the synergies resulting from collective awareness.  The difference is that while IPTs are generally the embodiment of human-centric organizational principles and tenets, DevOps principles assume the integration of automated tools from the beginning. 

DevOps platforms incorporate techniques including self-service configuration, automated provisioning, continuous build, continuous integration, continuous delivery, automated release management, and incremental testing. Like the IPT, DevOps responds to the interdependence of software development and business operations. It then extends the IPT concept with automation capabilities that aid in the rapid production, certification and deployment of software products and services. Flickr, for example, developed a DevOps capability to support a business requirement of ten deployments per day.

The core of a DevOps platform is a standardized development environment that automates, as much as possible, different operational and development processes.  In doing so, these toolkits address and automate product delivery, quality testing, feature development and maintenance releases. It should come as no surprise that core DevOps concepts come from the Enterprise Systems Management and Agile software development models.

DevOps platforms really come into their own with respect to programmatic governance.  The rulesets governing continuous test, continuous build and continuous integration activities are (and must be!) reflections of organizational policies.  With respect to government and military software programs, DevOps platforms offer the promise of automating the C&A process within the context of development activities.  In a nutshell, each time a module is checked in, it is not only tested against functional requirements, but vetted against the organization’s overarching C&A requirements as well.  Modules failing either operational or C&A vetting are simply not accepted, and the developer receives near real-time feedback.  Corrections are made at the time of coding, reducing or eliminating the need for expensive and time-consuming verification, regression and C&A testing activities and shortening the time from development to operational deployment.  Additionally, the platforms incorporate strong man-in-the-loop processes, ensuring that no code is promoted from development to test to deployment without positive control by authorized persons.
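A purely notional sketch of such a check-in gate appears below.  The structure (functional tests first, then automated C&A vetting) follows the description above, but the specific tools invoked are stand-ins; a real program would wire in whatever scanners and rule sets its accreditation authority requires.

```python
# Notional check-in gate for a DevOps pipeline: a module is accepted only if
# it passes its functional test suite AND the organization's automated C&A
# vetting. The tools invoked here (pytest, bandit) are stand-ins; a real
# program would plug in whatever scanners its accreditation authority requires.

import subprocess

def functional_tests_pass() -> bool:
    """Run the module's functional test suite."""
    return subprocess.run(["pytest", "tests/"]).returncode == 0

def ca_vetting_passes() -> bool:
    """Vet the module against automated C&A rules (here, a security linter)."""
    return subprocess.run(["bandit", "-r", "src/"]).returncode == 0

def accept_check_in() -> bool:
    """Executed on every check-in; failing modules are simply not accepted."""
    if not functional_tests_pass():
        print("Rejected: functional tests failed")
        return False
    if not ca_vetting_passes():
        print("Rejected: automated C&A vetting failed")
        return False
    print("Accepted: module promoted to the integration build")
    return True

if __name__ == "__main__":
    accept_check_in()
```

The point is that the gate, not the developer, enforces the C&A rules, and the feedback arrives at check-in rather than months later.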

Speaking at the 2013 International Engagement on Cyber at Georgetown University on 10 April 2013, US Department of Defense (DoD) Chief Information Officer Teresa (Teri) Takai posed the question “Why isn’t information assurance (IA) embedded in the acquisitions process?”  The answer could be that the acquisitions community has not yet fully embraced DevOps tools and platforms.  By certifying that the platform meets C&A requirements before the coding and testing cycles begin, IA becomes a fully embedded, inescapable and transparent part of the process.

This embedded, and therefore less expensive and less time-consuming, IA/C&A process becomes critical as defense and government organizations move ever more rapidly toward the implementation of large scale mobile networks with accompanying smartphone, tablet and app ecosystems.  The current DoD software C&A process can take anywhere from six to eighteen months and may cost a program as much as a million dollars.  This level of effort makes sense when thinking of large and/or monolithic multi-year, multimillion dollar software projects that create applications comprising hundreds of thousands or millions of lines of code.

When it comes to small mobile apps whose deployed size is measured in a few megabytes, whose development time may be a month or less and whose value derives, at least in part, from being available to meet an immediate need, the associated financial and temporal burdens of the current IA and C&A regimes become unduly onerous.  Fortunately, the burden can be significantly ameliorated by adopting a regime in which the certification and accreditation of a program’s DevOps platform is extended to the software products issuing from that platform.

DevOps platforms are, happily, not wishful thinking or vaporware.  An example is the WSO2 App Factory.  The 100% open source App Factory embodies both programmatic governance and application lifecycle management capabilities, including:

  • Project and Team Management;
  • Software Development Workflow;
  • Governance and Compliance;
  • Development Status Dashboarding;
  • Code Development;
  • Issue Tracking;
  • Source Control;
  • Continuous Build;
  • Continuous Integration;
  • Test Automation; and
  • Continuous Deployment.


DevOps repackages the best of earlier defense and government development methodologies and combines it with software platforms that allow for distributed, governed development in a manner that embeds IA and C&A processes.  For the defense and government sector, DevOps offers the promise of a reduction in the time required to field new capabilities and lower program cost profiles.  Importantly, DevOps platforms such as the WSO2 App Factory are situated to be a key enabler for defense and government mobile networks.

Friday, April 5, 2013

Overcoming the Governance Challenge of Providing Mobile Apps to the Warfighter



The US Department of Defense (DoD) is making rapid progress toward the establishment of a department-wide mobile device service that will serve both classified and unclassified communications.  The mobility plan, which is being developed by the Defense Information Systems Agency (DISA), features a converged infrastructure that will transition its classified support components over from the National Security Agency (NSA).  

This is great news for both the warfighters and defense system developers.  DISA’s implementation of a multi-domain mobile network represents a successful balance of new technology adoption and safety and security requirements.  Importantly, it signals a sea change away from an ingrained technological conservatism that has long been the hallmark of the defense acquisitions community. 

The overall vision is breathtaking:  DISA not only wants to expand wireless functionality across the DoD and the services, but also to replace legacy infrastructure such as laptop computers and desktop telephones.  According to Jennifer Carter, DISA’s Component Acquisition Executive:

The goal behind mobility is to establish an integrated infrastructure that can be leveraged to get the mobile device to have the capabilities that the warfighter needs, to bring that capability to them [i.e., the warfighters] – the information they need, the functionality they need – right at their fingertips at the tactical edge.

Unfortunately, the implementation of mobile networks solves only part of the problem. In order to make the networks valuable, two things have to exist:  A strategy for approving devices to operate on the network and an app ecosystem that leverages the power of the devices and the network.   The device strategy seems to be well in hand.  Between October 2012 and September 2013, the new DISA mobile network will support about 5,000 unclassified and 1,500 classified devices.  This number is expected to jump to over 100,000 in FY 14.  Plans for the future include both expanding the number of supported devices (by orders of magnitude) and adding additional types of devices, such as tablets.

The app strategy is less well defined.  While DISA recognizes the need to manage apps (it’s in the middle of a procurement process for an app store), it is still somewhat stymied by the administrative and technical burdens imposed by the DoD Information Assurance Certification and Accreditation Process (DIACAP).  DIACAP is the DoD administrative process that ensures that risk management and mitigation activities are applied to information systems and applications that will run on DoD and component service networks. DIACAP defines a department-wide, formal and standard set of activities and general tasks, as well as a management structure, for the certification and accreditation (C&A) of a system, to ensure that it will maintain the required information assurance (IA) posture throughout the system's life cycle.

DIACAP is an essential and useful security mechanism; a critical part of the overall protection regime that enables vital national security systems to keep functioning.  It’s also very thorough and very detailed, with no fewer than five different phases and fifteen constituent activities.

DIACAP’s high level of scrutiny and detail-oriented approach results in a significant cost and time burden.  How significant?  A development effort to produce a significant version update to a software application might encompass ten developers, five systems engineers and a program management staff.  Once coding is complete, the product is submitted for C&A testing.  This effort can easily take four full-time equivalents (FTEs) from six to eight months, as well as the use of specialized government labs.  After this effort, staffing the completed C&A package can take another four to six months.  And that’s for a program that has well-vetted IA processes in place.  For a program starting from scratch, tack on another four to six months.

For an application with hundreds of thousands or millions of lines of code developed over a long period of time, the standard DIACAP level of scrutiny and effort makes sense.  However, when it comes to a small app for a mobile device that might be developed in a week’s time, it’s harder to see the justification for what appears to be a disproportionate IA administrative and technical burden.

Luckily for app developers, DoD IA mechanisms allow for an abbreviated qualification effort, where appropriate risk mitigations are baked into the software development process, resulting in a dramatically shorter and less expensive C&A effort.  The question for acquisitions program managers in general, and for the managers of DISA’s cloud infrastructure in particular, is how to apply these procedural mitigations – effectively a development governance process – in a manner that is consistent, repeatable and documented in such a way as to satisfy IA requirements.

Fortunately, industry faces similar development governance problems.  Those challenges led to the development of Cloud-based distributed development environments designed from the ground up to ensure that development efforts are framed within the context of the organization’s business rules.  An example of such a distributed development environment is the WSO2 App Factory.

Cloud-based, the WSO2 App Factory operates as a set of pluggable applications on top of a runtime Platform-as-a-Service (PaaS) framework.  It integrates a development forge, enterprise best practices and a Cloud runtime.  Additionally, it ships with open source version control (Subversion, Git), continuous integration (Jenkins, Bamboo), continuous build (Ant, Maven), test automation (Selenium), and project management and bug tracking (Redmine) tools.

Additionally, the WSO2 App Factory provides a customizable, extensible governance and compliance modeling framework, project and portfolio dashboards and an App Store for deploying services and applications built within the WSO2 App Factory framework.  It’s also open source, meaning that there are zero acquisition costs associated with the WSO2 App Factory.

For DISA, numerous positive results stem from using such a tool.   The obvious benefit is that IA requirements can be rolled into the governance framework, ensuring that no app built within the environment gets deployed or published without adhering to the required IA standards.  In addition to this, however, it provides a mechanism to require developer organizations to adhere to a single set of department-wide organizational policies and values when developing apps.  Vagaries and resulting risks associated with service and component interpretations of the IA policies are therefore eliminated.  Additionally, costs and time burdens associated with redundant, service-level implementations of IA mechanisms are eliminated, resulting in a leaner, faster and more cost efficient method for delivering capability to the warfighter.

DISA is to be applauded for implementing a mobile infrastructure for the warfighter.  The next step is to provide an environment in which the creativity and capacity of industry to provide solution apps can be efficiently harnessed, robustly governed and rapidly converted into combat capability.