Kanban from a Trench

Introduction

After being part of a software project team that got underway with a significant project at BBC Worldwide in 2013, I felt it would be useful to publish a few notes about my experience of Kanban and how it helps to deliver good-quality software to customers.

Here are some notes that I prepared in order to deliver a Kanban motivational speech to an engineering community on a subsequent project outside of the BBC.

Kanban from a Trench

Kanban is a technique for managing the software development process. It doesn’t tell us how to develop software, but it provides techniques that help us enforce what we, as a software development team, agree is the right formula for developing software of acceptable quality for our customers.

As an example, we may determine that an adequate software engineering process requires the following:

  • Well understood and agreed requirements to be the foundation of onward development.
  • Development is sympathetic to an evolving well understood architecture that’s published for the entire team to work from.
  • We practice JIT development/engineering – only do what is required when it’s required.
  • A set of coherent, well planned and engineered tests are built and maintained – from unit tests to UAT.
  • Deployment and release management is actively managed through planning and coordination.
  • Feature development priority adjustments are possible in order that our agile team can respond quickly to change instigated by key stakeholders.
  • Peer review of all team assets – code, documentation, user manuals and tests – as a minimum.
  • Feature designs are documented with lightweight descriptions and pictures – bullet points and photos of whiteboards. Sometimes, formal UML diagrams can be justified, e.g. for state or sequence representations.
  • The development process must evolve with the team, product and organisational aspirations.

The Kanban Board

For the Kanban projects in which I have been involved, our project board typically consists of five key areas:

  • a) a team pool (a collection of names),
  • b) a group of associated columns that represent our software engineering process,
  • c) some streams,
  • d) a set of feature tasks in various stages and
  • e) a calendar of significant events.

a) Team Pool: The team pool is a collection of names/avatars, each one a picture and name of a person who is available to work on tasks – each team member usually has two avatars. You can work on at least one task at a time, and often we have capacity to take on another task in parallel.

These avatars help teach us about how we work as a team. For example, if an attempt is made to assign an avatar to more than two tasks, or an avatar is switched between tasks very frequently before tasks are finished, the board is very likely telling us something that we need to understand and deal with. Maybe we have insufficient resources in the team, or maybe we have knowledge concentrated in one person (exclusive knowledge). Regardless, we need to record this and, if it continues to happen, take some corrective action to resolve it.

b) Group of Associated Columns: These columns represent phases or major milestones in our software development process. Kanban does not prescribe the columns listed below; they are simply a suggested starting point for our team to move forward from – the process will grow and evolve with the team.

As an example, the following columns were used for the BBC Worldwide project in which I was involved:

Inventory – A collection of features, each with a broad-level description so that it can be understood (three-point estimation only at this stage). No further detail is required right now – remember, JIT.

Identify – Features that we’ve agreed to spend some further time on. We need to understand whether the feature is date driven, what the value proposition is, how we will know whether the goal has been achieved (once it’s deployed we need to measure success), any significant dependencies (other work, functions etc.) and which release version/date is targeted.

Analyse and Design – During this stage we will define and refine acceptance criteria – these will be written on the card. We now need to gather sufficient requirements such that the customer will get what they want (and hopefully asked for) and we understand how to build it. We will discuss and record BDD scenarios in preparation for QA. Analysis and design will be peer reviewed by a minimum agreed number of team members who have business, technical and architectural knowledge relevant to the feature undergoing development. If necessary, the feature will be broken down into smaller units/tasks, each consisting of work that will last (approximately) between 0.5 and 5 days. Wireframes (from the whiteboard) will be drawn, described and agreed/approved by the customer. The feature will be reviewed in terms of architecture. The general test approach will be agreed, also considering technical debt, e.g. missing tests for existing code packages. We agree the documentation of the feature: its setup, configuration, user manual entries etc. Finally, we agree to commit to build the feature.

Develop and QA – Each new class, service or technical feature must have been written, have passing tests and, where required, executable documentation. Automated tests will be created and triggered on a schedule. Tasks will not leave this column until they have been merged into ‘develop’ (a Git branch) and all tests are green.

Ready to Deploy – Team agrees that they’re happy for the feature to go into the next release and a demo to the customer is conducted.

One of the key aspects of Kanban is Work In Progress (WIP) limits. Kanban tells us that it’s better not to take on too much and that we should always consider finishing things in preference to starting new ones. This is often quite a challenge for teams new to Kanban because it means that, instead of taking on a new task, you may have to go and pair with someone who’s already working on one. We all know that pairing should be encouraged, and this is one route towards it. Pairing work may be coding, testing, documentation, QA, infrastructure, etc. WIP limits are defined for each column; they may have different values depending on the team size, the columns and our experience of what works. We can increase WIP limits, but beware of consequences such as blocking. Instead of increasing WIP limits, get work finished instead.

It’s often convenient to decompose a column into two parts, one to show in-progress tasks and one to show those that have been completed but are not ready to move into the next column.

c) Some streams: It’s sometimes easier to track and read the board if groups of related tasks are bunched together in a stream. If, during analysis and design, a feature is broken down into, say, 8 sub-tasks, each of these sub-tasks can be grouped into a single stream so we can readily see that they are related.

d) Feature tasks: The focus of the Kanban board is the tasks that represent feature development. The aim is to pull tasks through the system (from the right-hand side) rather than push them in from the left.

Each task card contains a progress history. It’s not unusual for cards to have extensions attached to the reverse of them: design diagrams, JIRA numbers, dependencies and dates for movement along the board.

To further aid recognition, it’s quite useful to use colour coded cards such that feature work, documentation, infrastructure and bugs can be easily recognised.

It’s really important that the Kanban board is kept up to date, so this usually takes place each morning at a stand-up. Whoever is running the stand-up will go through the tasks on the board and ask for an update from each person working on a task.

e) Project Calendar: The entire team needs to know if somebody is sick, on holiday or that a release is imminent.

How do we do maintenance?

We could consider that two classes of bug exist: those directly related to features currently being undertaken and those related to historical feature development. Bugs related to current feature development should be fixed by the feature crew/team working on it, up to and including the User Acceptance Test phase, i.e. post deployment.

The primary difference between product feature development and historical bug fixing seems to be the stability of day-to-day work tasks and their duration – bug fixing brings frequently changing (intra-day) priorities and, generally, a very short develop/test life-cycle. Ideally, we’d expect features to undergo development, in part or whole, during a period of days or weeks. Context switching is expensive, so we want engineers to remain focused on the feature(s) that they’re working on rather than frequently switching between features and historic bugs.

Therefore, having two project boards (or two streams on a single board), one representing core feature development and one representing bugs, could suit quite well. What needs separating in these two regards is the units of work rather than the team members. Therefore, it’s a sensible approach to enable team members to be available to work on both core feature development and historical bugs as work-loads and priorities dictate.

Existing Technical Debt

We need to work out a mechanism that promotes, encourages and tracks necessary refactoring in order that it’s well socialised, publicised and coordinated. We could track this as a stream on the main board or use a distinct Kanban board to track it.

Estimation

For Inventory tasks, three-point estimation will be used: best case, worst case and expected. We’ll use this to help drive which features to take on first given an arbitrary set of features that require development for a given release. This estimation can be further refined when breaking a task down in the Analyse and Design stage. This information will help feed into a release tracking board that represents feature dependencies and release dates.
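
As an illustration of how the three points can be collapsed into a single planning figure, the sketch below uses a PERT-style weighted average; the weighting is one common convention rather than anything Kanban prescribes, and the class and the figures in it are purely hypothetical.

public final class ThreePointEstimate {

    private final double bestCase;   // most optimistic, in days
    private final double expected;   // most likely, in days
    private final double worstCase;  // most pessimistic, in days

    public ThreePointEstimate(double bestCase, double expected, double worstCase) {
        this.bestCase = bestCase;
        this.expected = expected;
        this.worstCase = worstCase;
    }

    // PERT-style weighted average: (best + 4 * expected + worst) / 6.
    public double weightedEstimate() {
        return (bestCase + 4 * expected + worstCase) / 6;
    }

    public static void main(String[] args) {
        // A feature estimated at 2 / 3 / 8 days yields roughly 3.7 days.
        System.out.println(new ThreePointEstimate(2, 3, 8).weightedEstimate());
    }
}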

Architecture

At any stage of any feature or release, it should be possible to view an architecture diagram to use for alignment of feature development. Interesting aspects are current architecture, aspirational next step architecture and final state architecture. Of course all three of these will move over time as the product and its environment matures.

As with all other work, an architectural conjecture will be proposed and presented to the team as part of the general review process. None of this work is hidden; it will all appear as tasks on the Kanban board.

Demos

It should already be clear from the column exit criteria: running a demo for the customer or their representative is a prerequisite for a feature being added to a release.

GitFlow

At what stage does software make it into ‘develop’, release and support branches? The Kanban board needs to be well aligned with our release process, or vice versa.

Following a series of probing and exploratory questions, the team set about understanding what it would take to establish engineering satisfaction with software production.

What does the team consider as adequate product engineering?

Collectively, the team have identified the following criteria as those that are essential to building a product of sufficient quality such that we are happy to give it to our customers.

The groupings are somewhat artificial, but they give us an idea of the type and spread of areas for which we have, collectively, accumulated important engineering criteria.

The Kanban Board

The physical Kanban board will be the master copy of feature development status. We need to build and place a board in an area that’s always accessible, as several team members have observed it would not be a good idea to have this located in a meeting room.

JIRA tickets will be created for the highest-level feature names, but only at the end of the process, i.e. when they hit the last column (UAT). Generally, we will not get bogged down in keeping JIRA up to date with the physical board – communication is key, and it can’t be replaced by an issue tracking system, so let’s not try.

From the previous examples, we will now define the columns that we’re going to use and assign exit criteria to each of them.

Board Management

Although we, as a software development team, will collectively own the process, the Kanban board needs to be actively managed, along with the teams using it. Sometimes we will require arbitration, encouragement, steering and decisions in order to operate. If an agile coach is not available, we need to identify someone to adopt the role of coach or manager.

Acknowledgements

I’d like to say a big thanks to Sabina Kamber Salamanca and Kevin Ryan for all of the knowledge sharing and guidance that led me to discover a better way to engineer software – inspirational.

Spring Framework Property Configuration

Architecting software development projects often requires a mechanism to manage configuration properties in such a way that they can be defined and overridden depending on the environment in which they’re being used. This requirement is often driven by a need to use different resources at each stage of the development life-cycle, i.e. development, test and production. This article describes a scheme that allows properties to be defined and overridden in a simple way within Spring @Configuration classes.

Using the following class as an example, the @PropertySource annotation triggers an attempt to load property values from two properties files. One is a simple literal classpath name entry, the second is also a classpath name entry but uses an embedded expression to allow selection of the appropriate classpath locatable file at runtime:

  1. “classpath:properties/app.properties”
  2. “classpath:properties/app-${spring.profiles.active:default}.properties”.

The @PropertySource annotation triggers property file loading in definition order – consequently "classpath:properties/app.properties" is loaded first and "classpath:properties/app-${spring.profiles.active:default}.properties" second.

package com.greendot.properties;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

/**
 * Created on behalf of GreenDot Software Ltd.
 *
 * @author matt.d.vickery@greendotsoftware.co.uk
 * @since 08/07/2013
 */
@Configuration
@PropertySource(value = {
        "classpath:properties/app.properties",
        "classpath:properties/app-${spring.profiles.active:default}.properties"
})
public class PropertiesConfiguration {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Bean
    public PropertySourcesPlaceholderConfigurer getProperties() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}


Overriding is supported using this scheme: any property value loaded from the properties file ‘app.properties’ will be overridden by a property value loaded from ‘app-${spring.profiles.active:default}.properties’ that uses the same property key.

As for expansion of the variable expression (${spring.profiles.active:default}), the variable value will be populated at runtime according to the value set for the relevant Java system property (e.g. -Dspring.profiles.active=test). You may observe that any value can be used for this property; the following code examples use default, test and production as possible values that make sense for the problem. If no value is set, then default will be used, as determined by the definition ‘..:default}.properties’. This example uses the property key ‘spring.profiles.active’ specifically in order that Spring Framework Profiles can be used through the same configuration mechanism.

Notice that the example also uses a PropertySourcesPlaceholderConfigurer @Bean; this is made available in order that @Value annotations can be used in other Spring bean classes – an example follows.

A suitable mechanism for demonstrating properties configuration is a unit test; the following properties will be used to exercise the PropertiesConfiguration class.

The content of app.properties is:

mongo.db.port=27017
mongo.db.name=catalogue
mongo.db.logon=mvickery
mongo.db.password=sugar

The content of app-default.properties is:

mongo.db.server=localhost

The content of app-test.properties is:

mongo.db.server=testhost.greendotsoftware.co.uk
mongo.db.logon=tester
mongo.db.password=tpassword

The content of app-production.properties is:

mongo.db.server=prodhost.greendotsoftware.co.uk
mongo.db.logon=operations
mongo.db.password=opassword


As an example of how properties management works with this scheme, the following test class loads properties files through the @ContextConfiguration loading of the PropertiesConfiguration class we defined above. The class is then run by the JUnit class runner utility SpringJUnit4ClassRunner; this means that the test runs with a Spring context whose beans are loaded from any referenced @Configuration classes, e.g. PropertiesConfiguration.class. Furthermore, the @Value annotation triggers autowiring of property values into annotated variables such as dbName, dbServer etc.

package com.greendot.properties;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.core.env.Environment;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import static org.hamcrest.CoreMatchers.is;
import static org.springframework.test.util.MatcherAssertionErrors.assertThat;

/**
 * Created on behalf of GreenDot Software Ltd.
 *
 * @author matt.d.vickery@greendotsoftware.co.uk
 * @since 08/07/2013
 */
@ContextConfiguration(classes = {
        PropertiesConfiguration.class
})
@RunWith(SpringJUnit4ClassRunner.class)
public class PropertiesConfigurationTest {

    private static final String MONGO_DB_SERVER = "mongo.db.server";
    private static final String MONGO_DB_NAME = "mongo.db.name";
    private static final String MONGO_DB_LOGON = "mongo.db.logon";
    private static final String MONGO_DB_PASSWORD = "mongo.db.password";

    @Value("${"+MONGO_DB_NAME+"}")
    private String dbName;
    @Value("${"+MONGO_DB_SERVER+"}")
    private String dbServer;
    @Value("${"+MONGO_DB_LOGON+"}")
    private String dbLogon;
    @Value("${"+MONGO_DB_PASSWORD+"}")
    private String dbPassword;

    @Test
    public void defaultProfile() {
        assertThat(dbName, is("catalogue"));
        assertThat(dbServer, is("localhost"));
        assertThat(dbLogon, is("mvickery"));
        assertThat(dbPassword, is("sugar"));
    }

    @Test
    public void productionProfile() {

        System.setProperty("spring.profiles.active", "production");

        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext();
        context.register(PropertiesConfiguration.class);
        context.refresh();
        assertThat(getProperty(context, MONGO_DB_SERVER), is("prodhost.greendotsoftware.co.uk"));
        assertThat(getProperty(context, MONGO_DB_NAME), is("catalogue"));
        assertThat(getProperty(context, MONGO_DB_LOGON), is("operations"));
        assertThat(getProperty(context, MONGO_DB_PASSWORD), is("opassword"));
    }

    @Test
    public void testProfile() {

        System.setProperty("spring.profiles.active", "test");

        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext();
        context.register(PropertiesConfiguration.class);
        context.refresh();
        assertThat(getProperty(context, MONGO_DB_SERVER), is("testhost.greendotsoftware.co.uk"));
        assertThat(getProperty(context, MONGO_DB_NAME), is("catalogue"));
        assertThat(getProperty(context, MONGO_DB_LOGON), is("tester"));
        assertThat(getProperty(context, MONGO_DB_PASSWORD), is("tpassword"));
    }

    private String getProperty(final AnnotationConfigApplicationContext context, final String property) {
        return context.getBean(Environment.class).getProperty(property);
    }
}


There are three test methods in this class. The first method tests loading of ‘default’ properties, as the value of ‘spring.profiles.active’ is null at runtime. We expect properties found in ‘app.properties’ to be loaded first, along with a single additional property loaded from ‘app-default.properties’ immediately afterwards.

C24 and mongoDB – Agile Message Management (Vickery, Porter, Roberts)

AUTHORS

Matt Vickery – C24 Technologies & Incept5

Daniel Roberts – 10gen

Iain Porter – C24 Technologies & Incept5

ABSTRACT

C24 and mongoDB – Agile Message Management. This article presents and demonstrates a natural combination of two enterprise software products – C24 Technologies’ C24 iO Studio and 10gen’s mongoDB. Both technologies will be introduced, key feature sets outlined and a practical demonstration of their combined capability provided through working code. The primary driver for promoting these two technologies is that they form a powerful toolset that has underpinned agile software delivery for several significant enterprise applications that required non-trivial messaging and data storage capability.

INTRODUCTION

Both C24 iO Studio and mongoDB are enterprise software products that lead the way in their respective technology fields. This article will articulate their primary features and the reasons that the technology support they provide has become increasingly significant. It will start to become obvious that a class-leading, robust message parsing and transformation capability coupled with a document-oriented database is a compelling technological combination.

Furthermore, both of these products are highly flexible from an architectural perspective. Application software can make use of both technologies through simple to use APIs, which is shown in the sample project presented in this article. Both C24 iO and mongoDB have wide support amongst the Spring community and so both can be used in a Spring container context. Both products also enjoy support from the enterprise software community and can be easily integrated, as an example, into SpringSource’s Spring Integration, MuleSoft’s Mule, RedHat’s Fuse and Apache’s Camel integration platforms using adapters that those vendors have implemented and provide to their customer base.

This article will present typical challenges facing technology architects who are building applications that need to take advantage of technologies that support gaining a competitive advantage through agile construction and rapid delivery. This article will concentrate on advocating the right tools for that challenge.

TYPICAL DATA INTEGRATION CHALLENGES IN SOFTWARE SYSTEMS

Inherent in many Message Driven Architectures (MDA) is a messaging toolkit that must deal with both simple and complex messaging, whether it be, for example, FIX and SWIFT from the financial industry, ACORD from the insurance industry or SS7 from the telecommunications industry.

Message Parsing & Business Validation Rules

Building software around message standards (whether formal, de facto or custom) has generally proven to be highly error-prone and costly. Building software parsers is error-prone because it is highly complex. Whilst message parsers are necessary to translate raw messages into well-defined message models, without validation capability a message parser has limited use. Therefore, software is also required to support the application of business rules so that they can form a statement of validation for each message.

Building software parsers and validation rule capabilities is costly because many of these standards are frequently updated and you then have to keep pace with that change. Keeping pace generally means updating, testing and releasing code. Furthermore, for some message standards, compliance failure can be very costly, for example SWIFT compliance failure is calculated as a fixed value fine per message.

Message Transformation

A typical MDA use case is one where messages must be transformed from one message type to another. Even using what appears to be direct field mapping, data often has to go through a process of cleansing, enrichment and type change. On a technical level, and as an example, many developers have had experience of trying to resolve differences between values based on different type systems. Because we have different type systems, an XSD date type is different from a JDBC date type, which is different from a Java date type, which is different from an ISO-8601 date type. Further examples of type system variance exist around numeric types; this can be particularly serious for financial applications.

Although source  messages often require some type of validation, an entire set of rules is often required to validate target transformation messages as well. The most agile and efficient approach would be to use a single mechanism.

The primary point made in this section is that data transformation is never as easy as one might imagine. Tooling support, provided by experts in the field, not only provides a competitive advantage through improved time-to-market deliveries  but also saves costs associated  with maintenance and getting parsing and validation wrong.

Key Motivators

With a motive to provide agile, robust and rapid time-to-market software solutions, we are going to advocate using an industry leading messaging toolkit instead of building bespoke parser, transformation and validation rule capability.

Message Storage

Another key technology employed in a typical MDA solution is one that provides message persistence. Most enterprise applications are required to store messages for at least, a short period of time.  That storage requirement may be triggered on message entry to the system or subsequent to having been processed through business logic.

A number of short-term storage technologies and strategies are available for selection. Any database must be able to perform adequately, cope with schema change easily, scale out to commodity server infrastructure and not cost more than the servers on which it runs.

Regarding long-term storage, and as an example, some applications used by the finance industry have requirements to store messages for a number of years in order to meet regulatory requirements. Some fairly typical problems associated with this requirement are the cost of database licenses and servers (a distinct archive DBMS is typically employed), scalability for a growing data storage requirement and having capability to cope with a statically defined schema  being used within an evolving business.

A number of vendors compete in this technology space. An important key differentiator for choosing a technology, and probably the first to consider, is fit-for-purpose. Although it is entirely possible to take a message and build a 4th normal form relational model and then a schema design from it, enterprises are beginning to demand access to technologies that are more suited to agile delivery and storage in a native structure.

Following on from fit-for-purpose requirements, the chosen storage technology must support performance requirements and schema change through business evolution, offer horizontal scalability (scale-out) and have a reasonable license cost.

BREAKING AWAY FROM THE RDBMS – Agile operational data stores with flexible dynamic schemas

Introduction

During the last fifteen to twenty years, the Relational Database Management System (RDBMS) has provided a capability that has seen it become the dominant storage technology in the enterprise application market. Vendor offerings in this space have become rich and plentiful; they include Oracle RDBMS, IBM DB2 and Sybase ASE amongst others. Other lesser known vendors have also appeared within the last 10 years but are founded upon the same roots; a typical relational model implementation along with a standards based Structured Query Language (SQL) provision.

Static Versus Dynamic Schemas

Modern software development requires a highly efficient response to requirements change; this is usually achieved via an agile development process. In order to be most successful, experience has  proven that agile development needs to be supplemented and facilitated with agile products, frameworks and tools.

A great example of this is the significant development advantage that can be leveraged by using dynamic schema capabilities such as those provided by document databases. In order to express entities in a document database, rather than formally specifying tables and attributes in a static Data Definition Language (DDL) script, the document is the schema.

The dynamic schema capability also means that costly data migrations are not mandatory and maintenance burdens are eased.

Scalability

Most relational database vendors provide vertical scaling capability. Vertical scaling, or scale-up, has proven to be very costly as it is usually achieved by purchasing bigger servers with more CPU, more RAM, faster storage and high-speed networking. The subsequent cost of migrating data to the new bigger, faster server is also sometimes substantial. If you have a requirement to scale out to multiple servers because of growing data requirements, it becomes increasingly complex and, consequently, increasingly expensive. Furthermore, and from a technical perspective, RDBMS solutions are table driven, which makes high-performance access along with sensible data distribution very difficult to achieve.

Conversely, document databases naturally support scale-out: data is generally co-located, and partitioning (or sharding) data across distributed nodes is generally far less complex. Furthermore, rather than requiring high-powered servers for scale-up, document databases can be scaled out using commodity hardware.

Performance

Regarding  both  document  and relational  databases, two aspects of performance are interesting. The first aspect can be described by considering a query that must navigate a single object versus one that must traverse tables that need to be joined prior to projection. Reading data from a single relational table is very fast but as soon as joins across tables are required, performance significantly decreases compared to the equivalent operation using a document database.

The second interesting aspect of performance can be described by considering a growing installation of commodity hardware servers that are being used to house big  data sets. Because documents are stored without undergoing  normalization, it’s easy to distribute documents across very large clusters of servers and provide linear scalability for both reads and writes.

ORM Layers

For some types of application, a key challenge that faces the relational model (or at least its vendor implementation) is the much discussed and debated ‘impedance mismatch’. This challenge is typically overcome by the introduction of a new layer that brokers between application objects and relational tables – the Object Relational Mapping (ORM) layer. ORM layers are key in the technology stack but only exist because two technologies don’t naturally fit together. Rather cleverly, some RDBMS vendors extended their business around the necessity for ORM, consider Oracle’s TopLink product for example. As an ORM product, Hibernate became popular but many developers that required reasonable levels of performance discovered that it performed very poorly compared to their application software - it became a bottleneck. Relational tables and joins have become such a key aspect of performance that very specialist knowledge is required to design schemas, scale horizontally, write (distributed and non-distributed) queries and tune databases around them. Performance is further complicated by the introduction of an ORM layer – you certainly can’t ignore it.

Non-relational Data

As an extreme example of an application that does not fit typical relational database facilities, designing a relational schema that would store FpML or SEPA’s ISO 20022 messages would require entities with thousands of attributes spread across a huge number of tables. This is completely impractical unless the chosen storage design treats the message as a complete entity or document.

Costly & Heavyweight Product Sets

As RDBMSs have matured, some large vendors have added extensive capability and built large organisations around their products. The extensive capability now includes not just the core RDBMS but modelling and design tools, management and monitoring tools, BI tools, cluster management tools, analytic tools, public issue tracking and support tools and highly skilled professional services. This software tool capability has to be funded, and that’s typically done through licencing. Some of the RDBMS products have become so complex that vendor professional services are required to design and tune them.

Architectural Results

Many customers requiring data storage for their documents are writing software against a technology that forces the impedance mismatch to be resolved. They resolve this technology mismatch by introducing an ORM layer in their application, merely because they are using two technologies that don’t fit together. Furthermore, customers then purchase an RDBMS from a mainstream vendor that has built a global company around extended capability, much of which is not required for simple document storage. Expensive consultancy is often required to get the performance that customers need. Scalability is often restricted to scale-up  because data has been normalised into tables that can’t be easily distributed. This, in turn, removes potential for using commodity hardware to scale-out.

A New Paradigm

The software industry is undergoing somewhat of an evolution in the thinking around the fundamentals of storage technologies. For projects  that require a different fit between application messaging and the underlying storage technology, document databases look very attractive and are gaining significant traction  in today’s agile driven market. Several very interesting aspects arise from this:

  • Dynamic schemas fit well with agile development and lessen the project development and maintenance burden.
  • Object graph traversal type queries  are computationally cheaper for document databases than the equivalent using relational structures.
  • Scale-out, using commodity hardware is a more cost effective approach than typical RDBMS scale-up.
  • An ORM layer is not necessary; there is no impedance mismatch problem to solve.
  • Large, expensive product sets are not required – purchase and use only what you need.
  • Queries can be expressed in much simpler terms; normalization is not exposed to the query writer through having to join tables together to build documents.

From RDBMS to Document Databases

There’s no doubt that developers coming from an RDBMS & SQL background will be challenged (albeit briefly) attempting to understand this new technology – it is a significant and fundamental paradigm shift. However, learning to use document databases is proving to be very compelling to developers because it delivers results quickly and supports change instantly through dynamic schemas. Installation and setup is usually trivial.

C24 INTEGRATION OBJECTS (IO) – DATA MODELLING AND MANAGEMENT

C24 Technologies is a software house specialising in standards-based messaging and integration solutions aimed at the wholesale financial services markets.  C24 Integration Objects (C24 iO) Studio is a data modelling, meta-data management, transformation and messaging integration toolkit based on Java binding technology.

C24 iO Studio can be found in production use at more than twenty blue chip financial services customers worldwide.

Major features that result in C24 iO Studio being one of the leading players in its market space are:

  • Graphical message model construction using typical XSD-like syntax: complex types, simple types and attributes.
  • Graphical transformation construction using source and target messages. Drag-and-drop mapping links between source and target messages and apply functions on those links to perform updates, enrichment and type conversions on message fields. A large palette of functions is available for use by transformation designers.
  • Out-of-the-box standards library support; this means that you can start processing complex financial messages without writing a single line of custom parser code. Furthermore, financial messaging standards can be enforced using C24 iO validation rules, which are also all included out-of-the-box. C24 Technologies maintain these standards libraries throughout the year: when new standards are published they update the models and release them to the customer base.
  • Rich validation facilities – for standards libraries or custom models, a set of rich validation languages exists that means you can go well beyond the capabilities of technologies such as the XSD constraint language.
  • Each and every one of the standards-based message models is tested to a degree that would surprise even the most test-oriented developer.

10GEN MONGODB

MongoDB (from “humongous”) is a scalable, high-performance, open source NoSQL database. 10gen has an extensive list of customers that use MongoDB across a number of vertical industries,  including, but not limited to: SecondMarket, Athena Capital Research, Equilar, SAP, MTV and craigslist.

Key Features:

  • Document-oriented storage – JSON-style documents with dynamic schemas.
  • Full Index Support – Index on any attribute, just like you’re used to.
  • Replication & High Availability – Mirror across LANs and WANs for scale & reliability.
  • Auto-Sharding – Scale horizontally without compromising functionality.
  • Querying – Rich, document-based queries.
  • Fast In-Place Updates – Atomic modifiers for contention-free performance.

C24 IO AND MONGODB TECHNICAL SAMPLE

Introduction

This section is a deep technical dive into a sample application that demonstrates one potential mechanism for coupling C24 iO and mongoDB. The sample uses a scenario that’s manufactured to resemble a high-level business operation.

Scenario

Before that technical deep dive, it would be useful to understand the scenario from a business perspective. The essence of it is that a client of a brokerage firm places orders, the broker then fills those orders. Each client order (NewOrderSingle) may give rise to one or more execution reports (ExecutionReport); orders can be filled with a single execution or multiple individual executions.

Sample Messages

In order to support that scenario, a set of FIX NewOrderSingle and ExecutionReport messages, which were generated from a front office simulator, will be saved into a mongoDB database. All of the messages used in this sample are provided as static data and are contained in two files that can be found in the sample project source (src/main/java/resources).

Inbound Message Delivery

In a typical production scenario, messages would be received via a JMS queue or a file reader as raw FIX; they then undergo a series of operations:

  1. Message Parsing - C24 iO binds each message to a C24 Java FIX object, this code is provided by the C24 FIX libraries, no custom code is required.
  2. Message Validation - Once parsed (bind), the message is validated to ensure that it is semantically correct.
  3. Message Transformation - Following validation, the message is converted from a C24 Java object to a mongoDB object.
  4. Message Persistence - The mongoDB object is then saved to MongoDB.

Sample Project Distribution

The sample project has been distributed in two forms, the source is available on Github at: https://github.com/C24-Technologies/c24-sample-mongo-trading. The first distribution form is for Internet-enabled environments. Cloning the Github project and running the usual  ’mvn clean test’ will download dependencies, compile all of the application code and run the integration test classes. The second distribution form provides the sample project as a package that can be run in a non-Internet enabled environment. The package is distributed as a zip file that needs to be unpacked and run using the supplied ant build file or shell script. The ant build file contains several interesting targets; they are clean, compile, createNewOrders and  createExecutionReports. Each of these targets needs to be run in turn in order to populate the database with data necessary for the queries to be executed. The default target invokes all targets in the correct order automatically and so running ‘ant’ in the root directory of the project will complete the task. Running the shell script ‘./run.sh’ will also achieve the same result.

Service Dependencies

Whichever distribution is executed, a mongoDB database must be running and available for service. The directory src/main/java/resources contains a database configuration file named mongoDB.properties. Connection parameters for the mongoDB database that you plan to use need to be configured in that file. Although you would always use authentication credentials in a production system, none are necessary for this sample. All that is required is the server (host) name, database name and port number.
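
As an illustration, the contents of mongoDB.properties might look something like the following; the key names and values here are assumptions for the sketch rather than entries copied from the project:

mongo.db.server=localhost
mongo.db.name=fix
mongo.db.port=27017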

Spring Framework C24 iO Configuration Classes

As the Spring Framework is a popular IoC offering, C24 iO objects used in this sample are configured using Spring configuration classes. The class biz.c24.io.mongodb.fix.configuration.C24Configuration contains all of the C24 beans plus a Spring PropertyPlaceholderConfigurer (not shown) that provides access to an external property file (it contains the database details). Within this configuration class are several key bean creation methods. The use of these beans will be discussed in the sections below.

Spring Framework mongoDB Configuration Classes

The Spring Framework creates all of the mongoDB Java objects used in this application during container instantiation. The configuration class is very simple and is as follows.

The key mongoDB properties (database, port & server) have been loaded by the Spring PropertyPlaceholderConfigurer bean defined in the C24Configuration class. The Spring bean created by getMongoDB() creates a connection to the database. The mongoDBTemplate bean provides access to the mongoDB database instance through the normal Spring template mechanism.
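
Based on that description, a minimal sketch of such a configuration class might look like the following. It assumes the Spring Data MongoDB and mongoDB Java driver APIs of that era, and the property keys are illustrative; it is not the sample project’s actual code.

package biz.c24.io.mongodb.fix.configuration;

import com.mongodb.Mongo;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;

/**
 * Illustrative sketch only: bean names follow the description above, but
 * property keys and exact API usage are assumptions.
 */
@Configuration
public class MongoDbConfiguration {

    @Value("${mongo.db.server}")
    private String server;
    @Value("${mongo.db.port}")
    private int port;
    @Value("${mongo.db.name}")
    private String database;

    // Connection to the mongoDB server; the property values are supplied by
    // the PropertyPlaceholderConfigurer defined in C24Configuration.
    @Bean
    public Mongo getMongoDB() throws Exception {
        return new Mongo(server, port);
    }

    // Template used by the data loaders to save and query documents.
    @Bean
    public MongoTemplate mongoDbTemplate() throws Exception {
        return new MongoTemplate(getMongoDB(), database);
    }
}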

Running The Project

Inbound Message Delivery

For the purposes of this demonstration the data is going to be loaded into MongoDB via two data loader classes:

  1. biz.c24.io.mongodb.fix.application.NewOrderSingleDataLoader
  2. biz.c24.io.mongodb.fix.application.ExecutionReportDataLoader

These classes load the data from the files in src/main/resources/data-fixture by reading a single line at a time.

Message Parsing

Parsing the String that represents the FIX message requires two classes:

  1. The source parser responsible for parsing the message
  2. The object class to populate

The C24 Parser

Each C24 iO parser extends the abstract class biz.c24.io.api.presentation.Source; the FIX parser class is FIXSource. A single instance of this class is required, and so the default Spring bean creation options are used.

C24 iO biz.c24.io.api.data.Element objects are used to tell the C24 iO parsers (FIXSource in this sample project) which elements the caller wants the parser to extract from the message during parsing. These two element beans will be used to tell the FIXSource parser that the caller wants to receive an object representing a NewOrderSingleMessage and also an ExecutionReportMessage.
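
A sketch of what those bean definitions might look like follows; FIXSource’s constructor and the generated element classes with their getInstance() factories are assumptions about the C24 FIX library, not code taken from the sample project.

package biz.c24.io.mongodb.fix.configuration;

import biz.c24.io.api.data.Element;
import biz.c24.io.api.presentation.FIXSource;        // assumed class location
import biz.c24.io.fix.NewOrderSingleMessageElement;  // hypothetical generated class
import biz.c24.io.fix.ExecutionReportMessageElement; // hypothetical generated class
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * Illustrative sketch only – not the sample project's actual configuration.
 */
@Configuration
public class C24ParsingConfiguration {

    // A single parser instance, using Spring's default singleton scope.
    @Bean
    public FIXSource fixSource() {
        return new FIXSource();
    }

    // Element beans telling the parser which message types to extract.
    @Bean
    public Element newOrderSingleElement() {
        return NewOrderSingleMessageElement.getInstance();
    }

    @Bean
    public Element executionReportElement() {
        return ExecutionReportMessageElement.getInstance();
    }
}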

Two utility beans have been created that actually perform the role of using C24 iO code to parse and validate raw FIX messages; this is boilerplate code and so has been captured within a Spring-like C24 template class (C24ParseTemplate).

Message Validation

C24 iO validation rules are defined on each message model although, of course, they can be shared or re-used. Validation rules are invoked through use of a validation manager. Again, a single instance is required, which gives rise to the following configuration.
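
As an illustration, a single shared ValidationManager bean might be declared along these lines; the package location and no-arg constructor are assumptions about the C24 iO API rather than the project's actual configuration.

package biz.c24.io.mongodb.fix.configuration;

import biz.c24.io.api.data.ValidationManager;   // assumed C24 iO package
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/** Illustrative sketch only: a single shared ValidationManager bean. */
@Configuration
public class ValidationConfiguration {

    @Bean
    public ValidationManager validationManager() {
        return new ValidationManager();
    }
}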

Message Transformation

Once the message has been parsed into a C24 Java ComplexDataObject, it is transformed into a mongoDB object prior to being persisted. See the method asMongoDbObject() in the class biz.c24.io.mongodb.fix.application.C24ParseTemplateImpl.

Message Persistence

Messages are persisted through use of Spring’s MongoTemplate. 

Application Execution

Application code can be invoked in two different ways: firstly through an application launcher and secondly through an integration test. This section will follow the application launcher code through the new order creation process.

The application invocation sequence is as follows:

  • Through the createNewOrders() method, the CreateNewOrderSingle class code loads the Spring container through specification of a context loader. The only configuration that needs to be loaded is the MongoDbConfiguration class context; the C24Configuration class is loaded as a direct dependency using an @Import({C24Configuration.class}) statement. The Spring context loader reads the two Spring @Configuration classes and creates all of the beans that have been defined.

  • The createNewOrders() method gets the mongoDB template bean [line 35], as well as the C24ParseTemplate bean [line 36], from the Spring container. The same method then loads a file that represents a sample set of FIX NewOrderSingle messages from a source located on the project classpath. The method is now set up for work.

  • The file containing the FIX messages contains multiple messages, one per line. The C24ParseTemplate class is called to perform parsing of each raw FIX message into a C24 iO java object [line 45], this is the bind() method invocation.
  • The validation manager validates that the message is semantically correct [line 46].
  • The C24ParseTemplate converts the C24 iO object into a mongoDB object [line 47].
  • The mongoDB template is invoked with two parameters: the mongoDB object and the name of the collection to which the new document object should be added [line 47].

The interesting section in this last code snippet is the code that parses the raw FIX message [45] and the code that writes it into the mongoDB database [47]. The C24ParseTemplate code takes a raw string type message, performs some basic checks and parses that message into a C24 iO Java object. There are two key methods in this class: the one that binds the string to the Java object and the one that converts the Java object into a mongoDB object.

In the bind(ComplexDataObject) method, lines 21 and 22 show the simplicity of using C24 iO code to parse a raw string into a C24 iO Java object. The parser (or source) has a reader set on it. The reader supplies the raw message as a string. At the moment that readObject(…) is called, the parser looks for an instance of the element in the string format message and passes it back to the caller as a C24 iO Java object. Note that all C24 iO message objects extend the class ComplexDataObject. This provides the significant benefit that all C24 iO messages can be handled as a single type.

The other interesting method in this class is asMongoDBObject(…); this takes a C24 iO Java object (ComplexDataObject) and converts it to a mongoDB document object. C24 iO Java objects can be emitted in any format that’s supported for a message: JSON, XML, CSV, Java source etc. The C24ParseTemplate class above contains an example of how you could format any C24 iO Java object as XML – no transformation necessary, this is merely a formatting option.
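
Pulling those two operations together, a hedged sketch of the template’s core follows. The C24 iO calls (setReader, readObject) mirror the prose description above and should be treated as assumptions rather than verified API; the conversion method assumes the message has already been emitted as JSON using the formatting option just mentioned, and hands it to the mongoDB driver’s JSON parser.

package biz.c24.io.mongodb.fix.application;

import java.io.IOException;
import java.io.StringReader;

import biz.c24.io.api.data.ComplexDataObject;
import biz.c24.io.api.data.Element;
import biz.c24.io.api.presentation.Source;
import com.mongodb.DBObject;
import com.mongodb.util.JSON;

/**
 * Illustrative sketch of the two C24ParseTemplate operations described in
 * the text – not the sample project's actual implementation.
 */
public class C24ParseTemplateSketch {

    private final Source source;

    public C24ParseTemplateSketch(Source source) {
        this.source = source;   // e.g. the FIXSource bean
    }

    // Bind a raw FIX string to a C24 iO Java object for the requested element.
    public ComplexDataObject bind(String rawMessage, Element element) throws IOException {
        source.setReader(new StringReader(rawMessage));
        return (ComplexDataObject) source.readObject(element);
    }

    // Convert a message already emitted as JSON into a mongoDB DBObject,
    // ready to be handed to MongoTemplate.save(document, collectionName).
    public DBObject asMongoDbObject(String messageAsJson) {
        return (DBObject) JSON.parse(messageAsJson);
    }
}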

The save() method of the mongoDB template is compiled code distributed as part of the Spring Data MongoDB library. The MongoTemplate contains all of the methods that you typically need for inserting, updating, deleting and querying a mongoDB database and its collections. The method call in this example accepts a mongoDB object but also the name of a collection; if the collection does not exist, it will be created on behalf of the caller. Once the save(…) method has been called, the FIX documents will be present in the database, and a simple mongoDB findOne() query on the NewOrderSingles collection reveals the newly inserted documents.

Retrieving results through use of a simple query is useful for checking that data is inserted whilst developing an application. However, when it comes to accessing production data, the task will be approached with a different target goal.

SUMMARY

  • This article began by exploring typical challenges for message toolkits and persistence mechanisms in today’s software market, considering the necessity for low-cost, high-performance, scalable, robust and agile tools.
  • Two key enterprise technologies were introduced, C24 iO Studio and mongoDB, that together form a powerful partnership within Message Driven Architectures. Together they provide a fully featured messaging toolkit and document-oriented data storage.
  • Key features of each product were explored along with the driving forces that led to their existence in today’s software market.
  • An example implementation using both technologies has been created, presented in this paper and distributed through two mechanisms: for Internet-enabled and non-Internet-enabled environments.

REFERENCES AND SOURCES

  1. C24 iO Whitepaper – Just How Hard Can It Be Parsing SWIFT Messages?
  2. mongoDB Slides and Video

C24 INTEGRATION OBJECTS (IO)

To learn more about C24 Technologies and C24 Integration Objects including datasheets, customer successes and reference implementations, please visit www.c24.biz.

MONGODB

To learn more about 10gen and mongoDB, please visit www.10gen.com and www.mongodb.org.


Spring Integration – Message Gateway Adapters

Introduction

This article is going to present a technique for handling response types from message sub-systems in a service-based Message Driven Architecture (MDA). It will provide some theory, experience of message sub-system development and some robust working code that’s running in several financial  institution production deployments.

As Spring Integration is underpinned by a well established set of Spring APIs familiar to most developers, I have chosen this as a basis for this article.

If developers only ever had to build integration flows to handle positive business responses, this article would not be necessary. However, real world development, particularly within service integration environments, results in developers having to deal not just with positive business responses but business exceptions, technical exceptions and services that become unresponsive.

Spring Integration Gateways

Spring Integration (SI) gateways are entry points for message sub-systems. Specify a gateway using a Java interface, and a dynamic proxy implementation is automatically generated by the SI framework at runtime such that it can be used by any bean into which it’s injected. Incidentally, SI gateways also offer a number of other benefits such as request and reply timeout behaviour.

Typical Message Driven Architecture (MDA) Applications

As with Java design patterns for application development, similar patterns exist for integration applications. One such common integration pattern is service chaining – a number of services are connected in series or parallel that together perform a business function. If you’ve built and deployed services using MuleSoft’s Mule, SpringSource’s Spring Integration, FuseSource’s ServiceMix (Camel), Oracle’s Service Bus or IBM’s WebSphere ESB, the chances are that you’ve already built an application using chained services. This pattern will become ever more widespread as the software industry moves away from client-server topologies towards service based architectures.

A recent engagement in which I provided architectural consultancy will be used as an example implementation of the service-chain application pattern. The engagement required the team to build a solution that would transition financial messages (SWIFT FIN) from a raw (ISO 15022) state through binding, validation and transformation (into XML), ultimately adding them to a data store for further processing or business exception management.

Using EIP graph notation, the first phase service composition could be represented as follows. A document message arrives into the application domain; it’s parsed and bound to a Java object, semantically validated as a SWIFT message, transformed to XML and then stored in a datastore. As a side note, the binding, semantic validation and transformation are all performed by C24’s iO product. Furthermore, the full solution sample that can be found in GitHub for this project contains dispatcher configurations in order that thread pools can be used for each message sub-system; for the sake of brevity and clarity, such details will be omitted from this article.


Although this configuration specifies a chain of services, it’s not adequate to form the basis of a robust production deployment. An exception thrown by any of the services would be thrown straight back to the entry gateway, thus losing context, i.e. which service threw the exception; any non-responsive code invoked by a service may result in its Java thread getting parked; and null values returned by a service may cause unexpected problems, or even cause the entry gateway to hang (if an entry gateway is used, unlike in this diagram).

Unresponsive Service Invocation

The Spring Integration specific construct for accessing a message sub-system is the gateway. Although apparently simple, the SI gateway is a powerful feature backed by a dynamic proxy generated by Spring’s GatewayProxyFactoryBean. This bean can be injected into new or existing code or services as an implementation of the interface. The SI gateway also provides facilities to deal with timeouts and provides some error handling facility. A typical Spring Integration namespace XML configuration is as follows:

<int:channel id="parsing-gw-request-channel" datatype="java.lang.String">
  <int:queue capacity="${gateway.parse.queue.capacity}"/>
</int:channel>

<int:gateway id="parseGateway"
             service-interface=
                 "com.c24.solution.swift.flatten.gateway.ParseGateway"
             default-request-channel="parsing-gw-request-channel"
             default-reply-channel="parsing-gw-reply-channel"
             default-reply-timeout="${gateway.parse.timeout}"/>
<int:channel id="parsing-gw-reply-channel" 
             datatype="biz.c24.io.api.data.ComplexDataObject"/>

Evolving the design, the next phase of the application architecture needs to take advantage of these gateways; the Spring Integration context can be extended quite simply by creating a Java interface for each service. Building message sub-system gateways leads us towards a design model like the following:

A small amount of additional configuration and some very simple Java interfaces mean that developers can now configure request/reply timeouts on the gateway and avoid any unresponsive code – assuming (as we always should) that code written locally or supplied by 3rd parties is capable of misbehaving.
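
As an example of how simple those interfaces are, a sketch of the ParseGateway referenced in the configuration above might look like this; the method name is illustrative, while the parameter and return types follow the datatype attributes on the request and reply channels.

package com.c24.solution.swift.flatten.gateway;

import biz.c24.io.api.data.ComplexDataObject;

/**
 * Sketch of the message sub-system entry point backing the gateway
 * definition above; Spring Integration supplies the implementation as a
 * dynamic proxy at runtime.
 */
public interface ParseGateway {

    // Accepts the raw SWIFT message and replies with the bound C24 object,
    // or null if the reply timeout configured on the gateway expires.
    ComplexDataObject parse(String rawMessage);
}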

Additionally, the SI gateway allows specification of an error handling channel so that you have the opportunity to handle errors. The design I am presenting here is not going to use error channels but will handle errors in a different way – hopefully that will become more obvious as the design evolves.

Specific semantics and configuration examples for gateways and gateway timeouts can be found in the reference material provided by SpringSource and in various blogs.

The use of Spring Integration gateways has added a significant benefit to the integration solution. However, further improvements need to be made for the following reasons:

  1. A gateway timeout will result in a null value being returned to the calling, or outer, flow. This is a business exception condition that means the payload undergoing processing needs to be pushed into a business exception management process so that it can be progressed through to a conclusion, normally dictated by the business. If a transient technical issue caused the problem, resubmission may resolve it; otherwise further investigation will be required. Whatever the eventual outcome, context about the failure and its location needs to be recorded and made available to the business exception management operative. The context must contain information about the message sub-system that failed to process the message, as well as the message itself.
  2. Exceptions generated by message sub-systems can be handled in a few different ways: through the error channel of the gateway whose message processing failed, or by the calling flow. Again, context needs to be recorded. In this case, the technical exception needs to be added to the context, along with the gateway that was processing the failing message and any additional information that may be used within the business exception management process for resolution (a minimal sketch of these context types follows this list).
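
A minimal sketch of how this context might be modelled is shown below. In the project source these are separate public types in the exception package; they are shown together here for brevity, and any values beyond those used later in this article are assumptions:

// Sketch only: coarse-grained failure categories recorded when a payload
// is dispatched into the business exception management process.
enum ExceptionContext {
    PARSE_FAILURE,
    VALIDATION_FAILURE,    // assumed
    TRANSFORMATION_FAILURE // assumed
}

// Sketch only: finer-grained detail describing how the gateway failed.
enum ExceptionSubContext {
    NULL_GATEWAY_RESPONSE,
    EXCEPTION_GATEWAY_RESPONSE
}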

If transactions were involved in this flow, for example a flow triggered by transactional JMS message consumption, it would be possible to roll back the transaction so that the message is re-queued to a DLQ. However, the design in this software compensates for failures by pushing failing messages to a destination configured directly by the developer; this may of course be a JMS queue if that is required. This avoids transaction rollback and provides the exception context directly, rather than operations staff and developers having to scour logs to locate exception details.
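
A sketch of that wiring might look like the following; the channel id, the JMS destination name and the use of the int-jms namespace (with a connectionFactory bean available) are assumptions for illustration rather than the project's exact configuration:

<!-- Sketch only: failed payloads, enriched with exception context, are
     published to a developer-configured destination instead of relying
     on transaction rollback and a DLQ. -->
<int:channel id="business-exception-channel"/>

<int-jms:outbound-channel-adapter channel="business-exception-channel"
                                  destination-name="swift.business.exception.queue"/>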

The Gateway Adapter

In order to handle all exceptions taking place within a message sub-system, and also handle null values, a Gateway Adapter class can be used. The reason the SI gateway error channel is not used for this purpose is that it would be complex to define and would have to be done in several places. The gateway adapter allows all business exception conditions to be handled in one place and treated in the same way. The Gateway Adapter is a custom-written Spring bean that invokes the injected gateway directly and manages null and exception responses before allowing the invocation request to return to the caller (or outer flow).

The architectural design diagram evolved from the previous design phase includes a service activator backed by a Gateway Adapter bean; this calls the injected gateway (dynamic proxy), which in turn calls the business service.

Nuts and Bolts

The design diagrams are a useful guide for making the point, but perhaps the more interesting part is the configuration and code itself. As with the entire solution, the outer or calling flow can be seen in the project located on GitHub; useful snippets to view are one of the gateway adapter namespace configurations, the gateway, the gateway adapter and the gateway service configuration. As a number of gateways exist in this application, and they all follow the same pattern, the following configuration and code demonstrate just one of them.

Gateway Adapter Namespace Configuration

<int:channel id="message-parse-channel" datatype="java.lang.String"/>
<int:chain input-channel="message-parse-channel" 
           output-channel="message-validate-channel">
    <int:service-activator ref="parseGatewayService" method="service"/>
</int:chain>
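
The parseGatewayService referenced by the service-activator above is the Gateway Adapter bean shown in the next section. Its definition is not repeated in the snippet, but a plain bean definition along the following lines would wire it up; the exceptionGatewayService bean id is an assumption:

<!-- Illustrative wiring: the Gateway Adapter receives the generated
     ParseGateway proxy and the exception gateway service via
     constructor injection. -->
<bean id="parseGatewayService"
      class="com.c24.solution.swift.flatten.gateway.adapter.ParseGatewayService">
    <constructor-arg ref="parseGateway"/>
    <constructor-arg ref="exceptionGatewayService"/>
</bean>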

Gateway

package com.c24.solution.swift.flatten.gateway;

import org.springframework.integration.Message;

/**
 * @author Matt Vickery - matt.vickery@incept5.com
 * @since 17/05/2012
 */
public interface ParseGateway {
    public Message<?> send(Message<String> message);
}

GatewayAdapter

package com.c24.solution.swift.flatten.gateway.adapter;

import biz.c24.io.api.data.ComplexDataObject;
import com.c24.solution.swift.flatten.exception.ExceptionContext;
import com.c24.solution.swift.flatten.exception.ExceptionSubContext;
// assumption: GatewayAdapterException lives in the exception package
import com.c24.solution.swift.flatten.exception.GatewayAdapterException;
import com.c24.solution.swift.flatten.gateway.ExceptionGatewayService;
import com.c24.solution.swift.flatten.gateway.ParseGateway;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.integration.Message;
import org.springframework.util.Assert;

/**
 * @author Matt Vickery - matt.vickery@incept5.com
 * @since 17/05/2012
 */
public class ParseGatewayService extends AbstractGatewayService {

  private static final Logger LOG =
      LoggerFactory.getLogger(ParseGatewayService.class);
  private final ParseGateway parseGateway;
  private final ExceptionGatewayService exceptionGatewayService;

  public ParseGatewayService(
         final ParseGateway parseGateway,
         final ExceptionGatewayService exceptionGatewayService) {
      this.parseGateway = parseGateway;
      this.exceptionGatewayService = exceptionGatewayService;
  }

  public Message<ComplexDataObject> service(Message<String> message) {

      Message<?> response;
      try {
          LOG.debug("Entering parse gateway.");
          response = parseGateway.send(message);
      } catch (RuntimeException e) {
          LOG.error("Exception response .. {}, exception: {}", getClass(), e);
          LOG.error("Invoking ... process because: {}.", e.getCause());
          buildExceptionContextAndDispatch(
            message,
            ExceptionContext.PARSE_FAILURE,
            ExceptionSubContext.EXCEPTION_GATEWAY_RESPONSE,
            exceptionGatewayService);
          throw e;
      }

      if (response != null) {
        if (!(response.getPayload() instanceof ComplexDataObject))
            throw new IllegalStateException(INTERRUPTING_..._EXCEPTION);
      } else {
        LOG.info("Null response received ....", getClass());
        buildExceptionContextAndDispatch(
          message,
          ExceptionContext.PARSE_FAILURE,
          ExceptionSubContext.NULL_GATEWAY_RESPONSE,
          exceptionGatewayService);
        throw new GatewayAdapterException(NULL_GATEWAY_RESPONSE_CAUGHT);
      }

      Assert.state(response.getPayload() instanceof ComplexDataObject);
      return (Message<ComplexDataObject>) response;
  }
}
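
The AbstractGatewayService base class and the ExceptionGatewayService are not reproduced in this article; the full versions are in the GitHub project. A minimal sketch of the buildExceptionContextAndDispatch helper is shown below, assuming the context is carried as message headers and that the exception gateway service exposes a single dispatch method (both are assumptions made purely for illustration):

package com.c24.solution.swift.flatten.gateway.adapter;

import com.c24.solution.swift.flatten.exception.ExceptionContext;
import com.c24.solution.swift.flatten.exception.ExceptionSubContext;
import com.c24.solution.swift.flatten.gateway.ExceptionGatewayService;
import org.springframework.integration.Message;
import org.springframework.integration.support.MessageBuilder;

// Sketch only, not the project source: centralises construction of the
// exception context and dispatch into the business exception flow. Other
// message constants used by the adapters are elided here.
public abstract class AbstractGatewayService {

    protected static final String NULL_GATEWAY_RESPONSE_CAUGHT =
            "Null gateway response caught";

    protected void buildExceptionContextAndDispatch(
            final Message<String> message,
            final ExceptionContext context,
            final ExceptionSubContext subContext,
            final ExceptionGatewayService exceptionGatewayService) {

        // Attach the failure context as headers so that the business
        // exception management operative receives the original payload
        // together with the reason for and location of the failure.
        final Message<String> exceptionMessage = MessageBuilder
                .fromMessage(message)
                .setHeader("exceptionContext", context)
                .setHeader("exceptionSubContext", subContext)
                .build();

        // Assumed API: a single dispatch method on ExceptionGatewayService.
        exceptionGatewayService.dispatch(exceptionMessage);
    }
}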

GatewayService

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:c24="http://schema.c24.biz/spring-integration"
       xmlns:int="http://www.springframework.org/schema/integration"
       xsi:schemaLocation="http://www.springframework.org/schema/integration
           http://www.springframework.org/schema/integration/spring-integration.xsd
           http://schema.c24.biz/spring-integration
           http://schema.c24.biz/spring-integration.xsd
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <int:channel id="parsing-gw-request-channel" 
                 datatype="java.lang.String">
        <int:queue capacity="${gateway.parse.queue.capacity}"/>
    </int:channel>
    <int:gateway
      id="parseGateway"
      service-interface="com.c24.solution.swift.flatten.gateway.ParseGateway"
      default-request-channel="parsing-gw-request-channel"
      default-reply-channel="parsing-gw-reply-channel"
      default-reply-timeout="${gateway.parse.timeout}"/>
    <int:channel id="parsing-gw-reply-channel" 
                 datatype="biz.c24.io.api.data.ComplexDataObject"/>

    <int:chain input-channel="parsing-gw-request-channel" 
               output-channel="parsing-gw-reply-channel">
        <int:poller fixed-delay="50" 
                    task-executor="binding-thread"/>
        <c24:unmarshalling-transformer
          id="c24Mt541UnmarshallingTransformer"
          source-factory-ref="textualSourceFactory"
          model-ref="mt541Model"/>
    </int:chain>

</beans>

Summary

Whatever type of message-based integration system you are designing, once you get beyond using sample code to prototype a technology (and let's face it, there are millions of lines of code out there copied from trivial examples), you need to consider invocation in the face of technical exceptions, business exceptions and non-responsive services. Using Spring Integration gateways to represent entry to message sub-systems means that we must be able to cope with all of these types of behaviour. The Gateway Adapter pattern provides a single, central processing location for such conditions.

The intent behind the Gateway Adapter should be clear from the example configuration and code provided; the full source located on GitHub can also be run and tested. Please leave any feedback or comments as you see fit.