Wednesday, January 30, 2013

App Factories and the Transformation of Legacy Systems

As government organizations in the defense, intelligence, security and law enforcement sectors take stock of their existing information technology (IT) portfolios, a troubling picture emerges.  Due to design-time architectural decisions, the sustainment burdens associated with many legacy systems are becoming unsupportable.  This may be due to the use of obsolete programming languages, monolithic architectures, tight coupling between components, the extensive use of proprietary interfaces or some combination of all four.  In the end, the reasons matter less than the fact that we live in a time of shrinking budgets.  Program offices that migrate their systems to sustainable architectures will be successful; program offices that do not will fail.

There is general agreement as to the path that this migration will take.  Service oriented architectures (SOA) and Cloud capabilities provide a technical framework supporting the way ahead.  More specifically, it is expected that programs will migrate to a bifurcated environment consisting of an integration platform upon which sets of modular, composable and interoperable domain (or “warfighting”) services will operate.  The integration platforms are well understood, providing generic core functionality such as storage, data access, transport, identity and access management (IdAM), application and service hosting, runtime governance and monitoring and analytics.  At first blush, the domain services effort seems similarly straightforward.  And, from a program management perspective, the basic process is well understood:

  • Business process modeling is conducted to analyze the warfighting capabilities of each legacy system;
  • Atomic functions are identified as candidate services;
  • User stories are written describing each service and attendant acceptance criteria and access control rules; and
  • The service is coded, tested and deployed.   
Unfortunately, the larger problems associated with the migration to domain services are less technical and more programmatic.  Fortunately, there are tools and mechanisms available to program offices that both solve these problems and offer the promise of overall development and sustainment cost reduction. 

The acquisitions environment in which these services will be (and are being) developed is complex:  A single program can have literally dozens of developer organizations, some of which are from industry and some of which are from government.  Each organization may be tasked with the scheduled production of domain services, infrastructure capabilities, the integration of legacy systems or some combination of all three.  At the top of the pyramid is the program office, which struggles to manage the spectrum of lifecycle activities ranging from project and team management to software development, verification, validation and integration, deployment and retirement across a geographically, organizationally and culturally diverse programmatic landscape.  In sum, the challenge facing acquisitions commands seeking to migrate and modernize legacy systems is one of organization and governance.

Unlike the case of the integration platform, the issue with the migration to domain services is less one of WHAT software should be produced and more one of HOW the software production should be managed.  The elements of the software production process are generally common across organizations:  Projects are created and kicked off.  A cycle of continuous build, testing and migration between development, test and operational environments follows.  What is required is a mechanism to ensure that the production process is common across the entire program and subject to the oversight and control of the government program manager.

Fortunately, there are Cloud-based development toolkits available designed to improve distributed production processes and software delivery through the provision of automation and governance structures that promote architectural best practices, encourage collaboration, reduce process friction, and monitor compliance with organizational and regulatory requirements.  Typically these toolkits are deployed in a secure Cloud environment that allows all authorized team members managed access to the development environment. 

Among the features provided by these Cloud tools are on-demand, self-service Cloud provisioning, continuous build, continuous integration, continuous test and continuous delivery to user self-service portals and app stores.  The result is a significantly simplified developer experience leading to increased developer productivity, shorter project cycle times and more efficient resource usage.

The tools also provide for governed, iterative lifecycle management across the scope of the program.  They do this by providing a standardized set of architectural templates and application platform services as well as a common (and customizable) set of software performance metrics and analytics.  The well-governed environment fosters cross-organizational development collaboration.

By imposing the use of a governed distributed development tool, the government program manager can ensure critical programmatic predictability, reducing the risks inherent in assumption-laden communication and, therefore, total programmatic cost.  Among the governance controls supported by these tools are:
  • Enforcement of the use of the standard infrastructure services provided by the integration platform;
  • Enforcement of the use of a common development process (e.g., continuous build, continuous test, code coverage, discrepancy reporting, etc.);
  • Compliance with DoD and service component security and information assurance (IA) instructions;
  • Implementation of a program or service component standard provisioning lifecycle (e.g., deployment to test environments followed by promotion to operational app store); and
  • Enforcement of a man-in-the-loop application approval process including developer submission, review gates and approval checklists and automated test execution.

Similar to the exemplar integration platform, there are open source offerings meeting the requirements for a Cloud-based distributed development toolkit.  An excellent example is the WSO2 App Factory.  The WSO2 App Factory provides complete enterprise-level DevOps support including:

  • Project and team management;
  • Software development workflow;
  • Governance and compliance;
  • Development dashboards;
  • Code development;
  • Issue tracking;
  • Source and version control;
  • Continuous build;
  • Continuous integration;
  • Test automation; and
  • Continuous deployment.

The WSO2 App Factory is Cloud-based, operating as a set of pluggable applications on top of a runtime Platform-as-a-Service (PaaS) framework.  It integrates a development forge, enterprise best practices and a Cloud runtime.  Additionally, it ships with open source version control (Subversion, Git), continuous integration (Jenkins, Bamboo), continuous build (Ant, Maven), test automation (Selenium) and project management and bug tracking (Redmine) tools.

Additionally, the WSO2 App Factory provides a customizable, extensible governance and compliance modeling framework, project and portfolio dashboards and an App Store for deploying services and applications built within the WSO2 App Factory framework.  As with other open source products, there is zero acquisition cost associated with the WSO2 App Factory.

Through the use of such distributed development toolkits, overall control of the project is placed squarely back in the hands of the program manager.  Technical and programmatic governance is automated, freeing up valuable management resources.  Additionally, visibility and situational awareness are dramatically improved across both horizontal and vertical dimensions of the program. The bottom line, however, is the efficiency gains offered by these toolkits:  Well governed programs result in an overall reduction in both development and sustainment costs.

Friday, January 25, 2013

More Than a Better Mousetrap: The Data Core Behind Integrated Air Defense, Part III

In Part One of this three-part series, I discussed the nature of the integrated air defense problem, introduced the associated cognitive processes and situated them within the knowledge management process.

In Part Two, I discussed middleware tools applicable to integrated air defense systems and set out a notional architecture for a middleware-powered integrated air defense system.

Part Three, below, discusses available middleware options, provides illustrations of real-world integrated air defense systems and offers some concluding thoughts.

One More Consideration

Now that we’ve seen what middleware tools can do, and how they might be employed in an IADS scenario, it’s worth discussing WHY middleware and WHICH middleware.  The “why” has been known for years.  Service oriented middleware allows the creation of loosely coupled systems with a separation of concerns between operational (i.e., warfighting) capabilities, system infrastructure and management capabilities and data.  This simplifies system creation, maintenance and sustainment over time and allows for considerable economies.  Middleware allows systems in general, and IADSs in particular, to be created more economically, managed more effectively and, when necessary, modified more efficiently.  As importantly, it creates a transition path by which monolithic legacy systems can be evolved toward modular, composable and reusable architectures.

Which middleware to use is a slightly trickier question – but one that the US Department of Defense (DoD) may be answering for itself.  In 2009, the DoD CIO issued landmark guidance with respect to the use of open source software, declaring it to be equivalent in kind and type to the proprietary offerings previously favored.  Open source software, because of its reliance upon and adherence to open standards, generally provides significant advantages in terms of extensibility, interoperability and adaptability.  This is in marked contrast to proprietary software, which often derives a business advantage from non-standard, proprietary implementations that create vendor lock-in.

Between the drawdown following the war in Iraq, the gradual withdrawal from Afghanistan and the nearly realized threat of sequestration contained in the Budget Control Act of 2011, defense budgets are shrinking across the board.  While modernization of legacy systems along service oriented lines reduces the total cost of ownership, savings are often offset by the prodigious acquisition costs of proprietary software.  Affordability and guidance from the top unequivocally militate toward the adoption and use of open source middleware.

Real World IADS

In November 2012, Palestinian militants fired more than 1,500 surface-to-surface missiles against civilian targets in Israel.  The missiles’ sophistication ranged from 240mm mortar shells and homemade Qassam rockets to former Soviet 122mm Grad and Iranian 333mm Fajr-5 artillery rockets.  In response to the attacks, the Israelis employed their Iron Dome system.  Iron Dome is a mobile all-weather counter-rocket, artillery and mortar (C-RAM)/short-range air defense (SHORAD) system intended to defeat rockets and artillery shells out to a range of 45 miles.  It is an integrated sensor/C5I/weapon system consisting of a combination detection and tracking radar, a battle management and weapon control system and a launcher for the Tamir interceptor missile.  During the Israel Defense Forces (IDF) Operation Pillar of Defense (14 – 21 November 2012), Iron Dome made 421 interceptions, achieving an estimated success rate in excess of 85%.

Iron Dome is only one component of the Israeli IADS.  On 25 November 2012, the IDF announced the successful test of the advanced David’s Sling SAM system.  David’s Sling is intended to operate at ranges between 45 and 150 miles to counter medium- and long-range threats and thus to bridge the gap between Iron Dome and the Arrow 2 Block 4 anti-ballistic missile (ABM) system, which is designed to intercept long range ballistic missiles outside the Earth’s atmosphere.  David’s Sling will replace the older Hawk and Patriot SAM systems currently in Israeli service.

Together with the manned aircraft of the Israeli Air Force (IAF), Iron Dome, David’s Sling and Arrow make up the sensor and shooter components of the Israeli IADS.  Interestingly, each system has its own sensors, weapons and C5I components.  The C5I components are, concurrently,  data consumers, receiving inputs from each system’s organic sensors, data providers to the national level C5I systems at IAF headquarters and dispensers of wisdom to the launchers and weapons. 

If we apply the technology stack for the notional middleware-powered IADS discussed in part II of this series to the Israeli context, we can see application not only at the national level, but also within each system deployment and at regional level headquarters as well.  The benefits of software re-use are obvious.  What is less obvious, but no less critical, are the benefits of using open source software to drive down acquisition costs.  Put another way, licensing fees for proprietary software components could easily equate to the cost of dozens of interceptor missiles.  It’s pretty obvious where the money is better spent.

At the risk of dating myself, I saw Star Wars when it first hit theaters – which is probably why the idea of the bad guys being plucked from the air by beams of light resonates with me.  As a result, I’d really like to believe that directed energy is the key to protecting home and hearth against aerial incursion.  Unfortunately, I just don’t think it’s the case.  As has been proven in many other battlefield domains (Blue Force Tracker, anyone?) it’s the brilliance of bits and bytes that wins battles.  Air defense is no different; it is the ability to identify meaningful patterns in a sea of data, apply operational logic and execute command and control in an efficient manner that will determine success or failure in the field.   That being said, it is the ability of the acquisitions community to deliver and sustain affordable and readily adaptable systems to men and women in uniform that will determine ultimate victory.

Wednesday, January 23, 2013

More Than a Better Mousetrap: The Data Core Behind Integrated Air Defense, Part II

In Part One of this three-part series, I discussed the nature of the integrated air defense problem, introduced the associated cognitive processes and situated them within the knowledge management process.

Part Two, below, discusses middleware tools applicable to integrated air defense systems and sets out a notional architecture for a middleware powered integrated air defense system.

IADS Automation Middleware

A successful IADS must integrate software components that automate the three cognitive processes outlined in Part One (i.e., situational awareness (SA), sensemaking and wisdom application).  Obviously, the components must be high-performing in terms of throughput and bandwidth use, robust and scalable.  Fortunately, the middleware industry has developed knowledge management solutions for commercial industry that readily translate to the IADS problem space:
  • Deriving SA from large volumes of real-time data is achieved by filtering the data in real-time to identify operationally relevant events through the use of event processors;
  • Sensemaking occurs when business rules are applied to relevant events to create operationally relevant knowledge by business rule servers; and
  • Wisdom is applied when experience (in the form of organizational processes and procedures) is applied to knowledge to create and implement optimized courses of action by business process managers. 

Complex Event Processors

  • Esper (Java) / Nesper (.NET) – Open Source; GNU General Public License
  • Oracle Event Processing
  • WebSphere Business Events
  • Open Source; Apache License

Business Rules Servers

  • Corticon BRMS
  • Oracle Business Rules
  • InRule Technology
  • Open Source; Apache License

Business Process Managers

  • Software AG webMethods BPMS
  • Oracle Business Process Management
  • IBM WebSphere Process Server
  • Open Source; Apache License

It’s worth a quick look at how each of these products functions before delving into the information architecture of a notional IADS. 

Event processors listen to events and detect patterns based on predefined rules in near real time.  They do not store all the events.  (Event storage for later exploitation is the province of business activity monitors.)  There are three basic models:
  • Simple event processing, which implements simple filters (“Is this aircraft friendly or hostile?”);
  • Event stream processing (ESP), which aggregates and joins multiple event streams; and
  • Complex event processing, which processes multiple event streams to identify meaningful patterns using complex conditions and temporal windows (“There have been aircraft detected AND the aircraft are of a given type AND the aircraft are in a given area AND the aircraft have failed IFF interrogation”).
Complex event processors (CEP) are designed to process tens of thousands of events (or more) per second with a latency of milliseconds or less.
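The compound AND condition quoted above can be sketched in a few lines of Python.  This is a toy illustration, not a CEP product; a real engine such as Esper expresses the pattern declaratively and sustains far higher event rates, and the event fields ("t", "track", "kind") and window size here are assumptions:

```python
from collections import deque

class HostilePatternDetector:
    """Toy complex-event filter with a temporal window (illustrative only)."""

    def __init__(self, window_seconds=5.0):
        self.window = window_seconds
        self.events = deque()  # events currently inside the temporal window

    def on_event(self, event):
        """Feed one event; return True when, within the window, the same track
        has produced a contact report, an in-area report and a failed IFF
        interrogation (the compound AND condition from the text)."""
        self.events.append(event)
        # Expire events that have aged out of the temporal window.
        while self.events and event["t"] - self.events[0]["t"] > self.window:
            self.events.popleft()
        seen = {e["kind"] for e in self.events if e["track"] == event["track"]}
        return {"contact", "in_area", "iff_failed"} <= seen
```

Simple event processing would be the filter alone; the temporal window and the multi-condition join are what push this into complex event processing territory.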

Business rules provide meaning and guidance for an organization’s activities based on any given set of facts.  Business rules servers (BRS) use a data stream to define the nucleus of facts upon which rules (declaratively defined conditions or situations) operate, and identify the actions to be taken or inferences to be drawn from applying those rules.  The rules server allows operational or business logic to be externalized and abstracted to a higher level, allowing operational users (i.e., uniformed service personnel), rather than programmers, to manage it.
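The externalization idea can be illustrated with a minimal Python sketch: the rules live as plain data that an operational user could maintain in a file, while the evaluation engine stays generic.  The fact fields and action names are hypothetical, not drawn from any particular BRS product:

```python
# Rules expressed as data rather than code: each rule pairs a declarative
# condition (facts that must match) with a named action.
RULES = [
    {"when": {"hostile": True, "sector": "A"}, "then": "alert_sector_a_assets"},
    {"when": {"hostile": True, "sector": "B"}, "then": "alert_sector_b_assets"},
    {"when": {"hostile": False}, "then": "log_only"},
]

def evaluate(facts, rules=RULES):
    """Return the actions of every rule whose condition matches the fact set."""
    return [rule["then"] for rule in rules
            if all(facts.get(key) == value
                   for key, value in rule["when"].items())]
```

Because the rule set is data, changing the operational logic means editing the rule list, not redeploying code – which is the point of abstracting the logic away from programmers.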

Business process managers (BPM) have both design-time and runtime capabilities.  During design-time, they allow operational users to envision, define, control and adapt the courses of action that will be taken in response to any given set of events.  During runtime, these tools execute the operational workflows in a manner compliant with standards such as WS-BPEL and WS-HumanTask.  More importantly, they allow for the coordinated execution of operational workflows with many different enterprise services and human interactions. 
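The design-time/runtime split can be caricatured in a few lines of Python: the course of action is declared as an ordered list of step names (the design-time artifact), and a generic executor binds the names to services at runtime.  A real BPM engine would execute WS-BPEL, not Python, and the step and service names here are invented for illustration:

```python
# Design-time artifact: the course of action as an ordered list of step names,
# something an operational user could edit without touching code.
ENGAGEMENT_PROCESS = ["alert_tracking_radars", "alert_sam_sites", "log_event"]

def run_process(step_names, services, context):
    """Runtime: look up each step's service and execute the steps in order,
    threading the shared context through the workflow."""
    for name in step_names:
        context = services[name](context)
    return context
```

In practice each entry in the service registry would be a web service endpoint rather than a local callable, which is what lets the BPM coordinate many enterprise services and human interactions in one workflow.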

The Middleware Powered IADS –  Notional Architecture 

Now that we’ve collected the pieces and capabilities for an IADS, let’s see how they fit together.  We’ll start by setting out the pieces of our (simplified for illustrative purposes) IADS.  We have:
  • Targets, which are any aircraft that can potentially be engaged by the IADS;
  • Acquisition sensors, which detect, identify and locate potential aerial targets;
  • Tracking sensors, which lock onto and provide precise targeting information on a specific aerial target;
  • Weapons, which span the range from surface-to-air missiles (SAM) to AAA and directed energy weapons; and
  • C5I nodes, which orchestrate, control and manage AAW engagements for a distributed IADS.
IADS Operational View

During normal operations, acquisition sensors are constantly sweeping the skies.  They send a continual series of data streams to the C5I node, including contact, IFF, system health and status and logistics reports.  These reports represent events that are evaluated in real-time by the IADS’ CEP.  Many patterns identified by the CEP are simply discarded.  For example, a biological contact identified as a flock of seagulls is of no interest to air defenders.

Other patterns are passed to the BRS for additional processing.  In peacetime, the only rule analysis results may be those requiring the BPM to invoke a storage process for audit and logging purposes.  A commercial aircraft flying an established airway on a scheduled route may generate contact and IFF events, but it will not trigger any military action; command guidance could, however, require that the fact of the aircraft’s passage be logged.

It is during a hostile event that all the components of the IADS come into play.  The CEP receives contact report events that correlate with negative IFF report events from the acquisition sensors.  The resultant patterns indicate hostile action, and are passed to the business rules server.  The BRS applies logic along the lines of:
  •  Are the incoming aircraft hostile?  YES.
    •  If no, log event.
    • If yes, identify the defense sector to which the aircraft are heading.
  •  Are they heading to sector A?  NO.
    •  If no, evaluate against sector B
    •  If yes, alert tracking radars 1, 3, 5 and 9
    •  If yes, alert SAM sites 22, 34, 18 and 15
    •  If yes, alert AAA sites 345, 928 and 220
    •  If yes, log event
  •  Are they heading to sector B?  YES.
    • If no, evaluate against sector C
    • If yes, alert tracking radars 2, 4, 7 and 8
    • If yes, alert SAM sites 77, 51, 92 and 76
    • If yes, alert AAA sites 999, 226 and 262
    •  If yes, log event
Each time that the BRS indicates that a process action is necessary (e.g., alert, log, etc.), information is passed to the BPM, which invokes the necessary process.  In this case, alerting information is sent to tracking sensors and weapons. 
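Sketched in Python, the sector-routing chain above reduces to a lookup of per-sector alert plans.  The radar, SAM and AAA numbers are the illustrative ones from the list; this is a sketch of the rule logic, not of any rules server's syntax:

```python
# Per-sector alert plans, taken from the illustrative chain above.
SECTOR_PLANS = {
    "A": {"radars": [1, 3, 5, 9], "sam": [22, 34, 18, 15], "aaa": [345, 928, 220]},
    "B": {"radars": [2, 4, 7, 8], "sam": [77, 51, 92, 76], "aaa": [999, 226, 262]},
}

def route_raid(hostile, heading_sector):
    """Apply the rule chain: non-hostile contacts are only logged; hostile
    contacts are logged and the heading sector's assets are alerted."""
    if not hostile:
        return {"log": True}
    plan = SECTOR_PLANS.get(heading_sector)
    if plan is None:
        # In the full chain, evaluation would continue against further sectors.
        return {"log": True}
    return {"log": True, "alert": plan}
```

Each key in the returned dictionary corresponds to a process action the BPM would be asked to invoke.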

IADS System Functionality View
Once actuated, the tracking sensors begin sending a continual stream of data back to the C5I nodes, which, again, employ the CEP, BRS and BPM.  Identified event patterns in this case might include the fact of a successful lock-on by a tracking radar and the arrival of a locked target within range of one or more weapons.  The BRS might employ a logic chain that looks like this: 

  •  Is the target within range of any weapon systems? YES.
    • If no, re-evaluate target in 0.5 seconds
    • If yes, identify weapon systems within range
    • If yes, is target suitable for SAM? YES.
      • If no, evaluate target against AAA 
      • Identify SAM sites within range.
      • Identify SAM sites in ready to launch status
      • Identify number of missiles remaining on each SAM site in ready to launch status 
      • Prioritize SAM sites by number of missiles remaining 
      • Send weapons free command to SAM site with greatest number of missiles remaining
In this case, the BPM was instructed to instantiate logistics processes (e.g., which weapons are in what locations, supply status) as well as health and status processes (e.g., who is in ready to launch status) and operational processes (e.g., send weapons free order).
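The weapon-assignment chain above can be sketched as a filter-and-prioritize function.  The site records and field names are assumptions made for illustration:

```python
# Sketch of the SAM-selection chain: filter sites by range, readiness and
# remaining missiles, then release the site with the deepest magazine.
def select_sam_site(sites, target_range_nm):
    """Return the id of the ready, in-range site with the most missiles
    remaining, or None if the target must be evaluated against AAA instead."""
    candidates = [s for s in sites
                  if s["max_range_nm"] >= target_range_nm
                  and s["status"] == "ready"
                  and s["missiles"] > 0]
    if not candidates:
        return None  # fall through to AAA evaluation in the full chain
    return max(candidates, key=lambda s: s["missiles"])["id"]
```

The range and readiness filters correspond to the logistics and health-and-status processes the BPM instantiates, while the final selection corresponds to the weapons free command.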

Launch events are reported to the C5I node by the weapons, which also report the success or failure of each engagement.  The BRS may play a role by evaluating the results of the engagement with respect to the need to re-engage (i.e., Was the engagement successful? If no, re-engage and log result. If yes, go to weapons tight and log result.), whereas the BPM will be responsible for executing logging processes and sending operational commands.

I hope you enjoyed Part II of this series, and found it informative.  Please come back for Part III on Friday, 25 January 2013, in which I will discuss available middleware options, provide illustrations of real-world integrated air defense systems and offer some concluding thoughts.