Stateful microservices pattern

What are stateful microservices?

Stateful microservices hold state while performing tasks with longer-than-normal execution times. They have the following characteristics:

  1. They have an API to start a new instance and an API to read the current state of a given instance
  2. They orchestrate a bunch of actions that may be part of a single end-to-end transaction. It is not necessary to have these steps as a single transaction
  3. They have tasks which wrap callouts to external APIs, DBs, messaging systems etc.
  4. Their tasks can define error-handling and rollback conditions
  5. They store their current state and details about completed tasks
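For illustration, the stored state for a registration process might look something like this hypothetical JSON record (the field names are made up, not from any specific product):

{
  "id": "12345",
  "status": "Pending",
  "completedTasks": [
    { "name": "ValidateEmail", "completedAt": "2020-03-13T09:15:00Z", "result": "ok" }
  ],
  "data": { "firstName": "foo", "lastName": "bar", "email": "foo@bar.com" }
}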


Why stateful?

Stateless microservices are generally optimised for short-lived request-response applications. There are scenarios where long-running, one-way request handling is required, along with the ability to give the client the status of the request and to perform distributed transaction handling and rollback (because XA sucked!)

So you need stateful services because:

  • there is a group of tasks that need to be done together as a step that is asynchronous, with no guaranteed response time, or asynchronous one-way with a response notification due later
  • or there is a group of tasks where each step individually may have a short response time but the aggregated response time is large
  • or there is a group of tasks which are part of a single distributed transaction: if one fails you need to roll back all

Stateful microservice API

Microservices implementing this pattern generally provide two endpoints:

  1. An endpoint to initiate: for example, an HTTP POST which responds with a status code of “Created” or “Accepted” (depending on what you do with the request) and returns a Location header pointing to the new process instance
  2. An endpoint to query request state: for example, an HTTP GET using the process id from the initiate response. The response is the current state of the process, with information about past states
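As a minimal sketch (not a production design), these two endpoints might look like this in Node.js/Express, assuming an in-memory store and the /registrations resource from the example below; a real implementation would persist state durably and run the orchestration in a workflow engine:

var express = require('express');
var bodyParser = require('body-parser');
var app = express();
app.use(bodyParser.json());

var processes = {}; // in-memory store for illustration only
var nextId = 1;

// Initiate: accept the request, store the initial state, return a Location header
app.post('/registrations', function (req, res) {
  var id = String(nextId++);
  processes[id] = { id: id, status: 'Pending', data: req.body };
  // the long-running orchestration would be kicked off asynchronously here
  res.status(202).location('/registrations/' + id).send({ id: id, status: 'Pending' });
});

// Query: return the current state of the process instance
app.get('/registrations/:id', function (req, res) {
  var p = processes[req.params.id];
  if (!p) { return res.status(404).send({ error: 'not found' }); }
  res.send(p);
});

app.listen(8080);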

Sample use case: User Signup

  1. The process of signing up or registering a new user requires multiple steps, and the interaction looks like this [Command]
  2. The client can then check the status of the registration periodically [Query]

Command

POST /registrations HTTP/1.1
Content-Type: application/json
Host: myapi.org

{ "firstName": "foo", "lastName": "bar", "email": "foo@bar.com" }

HTTP/1.1 201 Created
Location: /registrations/12345

Query

GET /registrations/12345 HTTP/1.1
Host: myapi.org

HTTP/1.1 200 OK
Content-Type: application/json

{ "id": "12345", "status": "Pending", "data": { "firstName": "foo", "lastName": "bar", "email": "foo@bar.com" } }


Anti-patterns

While the pattern is simple, I have seen implementations vary, with some key anti-patterns. These anti-patterns make the end solution brittle over time, leading to issues with stateful microservice implementation and management:

  1. Enterprise business process orchestration: Makes it complex and couples various contexts. Keep it simple!
  2. Hand-rolling your own orchestration solution: Unlike regular services, operating long-running services requires additional tools for end-to-end observability and handling errors
  3. Implementing via a stateless service platform and bootstrapping a database: The database can become the bottleneck and prevent your stateful services from scaling. Use available services/products, as they have optimised their datastores to be highly scalable and consistent
  4. Leaking internal process ids: Your end consumer should see a mapped id, not the internal id of the stateful microservice. This abstraction is necessary for security (a malicious user cannot guess other ids and query them) and dependency management
  5. Picking a state machine product without “rollback”: Given that distributed transaction rollback and error handling are two big things we are going to need when implementing this pattern, it is important to pick a product that lets you do this. A lightweight BPM engine is great for this; otherwise you may need to hack around to achieve it in other tools
  6. Using stateful process microservices for everything: Just don’t! Use the stateless pattern where it is optimal, i.e. for short-lived request/response use cases. I have, for example, implemented request/response services with a BPEL engine (which holds state) and lived to regret it
  7. Orchestrating when choreography is needed: If the steps do not make sense within a single context, do not require a common transaction boundary/rollback, or have no specific ordering beyond action rules in other microservices, then use event-driven choreography

Summary

Stateful microservices are a thing! Welcome to my world. They let you orchestrate a long-running task or a bunch of short-running tasks, and they provide an abstraction over the process that allows clients to fire-and-forget and then come back to ask for status


Like everything, it is easy to fall into common traps when implementing this pattern, and the best practice is to look for a common boundary where orchestration makes sense


How did we get to Microservices?

If you have struggled with decisions when designing APIs or Microservices, it is best to take a step back and look at how we got here. It not only renews our appreciation for the rapid changes we have seen over the past 10-20 years but also puts into perspective why we do what we do

I still believe a lot of us cling to old ways of thinking when the world has moved on or is moving faster than we can process. APIs and microservices are not just new vendor-driven fads; rather, they are key techniques, processes and practices for businesses to survive in a rapidly evolving eco-system. Without an API strategy, for example, your business might not be able to offer consumers the services your competition can, or the quality of service we have come to expect (in real time)

So, with that in mind, let us take a step back and look at the evolution of technology within the enterprise, remembering that this aligns to the business strategy

Long time ago

There were monolithic applications: mainframe systems, for example, processing Order, Pricing, Shipment etc. You still see this monolith written within startups because it makes sense – no network calls, just inter-process communication, and if you can join a bunch of tables you can make a “broker” happy with a mashup of data

1. First, there was just the legacy monolith application

Circa 2000s MVC app world

No ESB yet. We were exploring the JVM and Java for enterprise application development. JSF was new and the EJB framework was still deciding what type of beans to use. Data flowed via custom connectors from the mainframe to these Java applications, which cached it and allowed viewing and querying of this information

There were also functional foundation applications for enterprise logging, business rules, pricing, rating, identity etc. that we saw emerging, and data standards were often vague. EAI patterns were being observed but not standardised, and we were more focused on individual service design patterns and the MVC model

2. Then we built lots of MVC applications alongside the legacy monolith; integration was point-to-point

Services and Service-Oriented Architecture

The next wave began when the number of in-house custom applications started exploding and there was a need for data standardisation, a common language to describe enterprise objects, and de-coupled services with standardised requests and responses

Some organisations started developing their own XML-based engines around message queues and JMS standards, while others adopted the early service bus products from vendors

Thus Service-Oriented Architecture (SOA) was born, with lofty goals: build canonical enterprise data models, reduce point-to-point services (Java applications had a build-time dependency on the services they consumed, other Java services), add standardised security, build a service registry etc.

We also saw a general adoption and awareness around EAI patterns – we finally understood what a network can do to consistency models and the choice between availability and consistency in a partition. Basically stuff known by those with a Computer Science degree working on distributed computing or collective-communication in a parallel computing cluster

One key observation is that the vendor products supporting SOA were runtime monoliths in their own right. It was a single product (a J2EE EAR) running on one or more application servers, with a single database for stateful processes etc. The web services we developed over this product were mere XML configuration executed by one giant application

Also, the core concerns were “service virtualisation” and “message-based routing” – a purely stateless, transformation-only concept. This worked best when coupled with an in-house practice of building custom services, and failed where there was none and the SOA product had to simply transform and route (i.e. it did not solve problems by itself as an integration layer)

3. We started to make integration standardised and flexible; it succeeded within the Enterprise but failed to scale for the Digital world. Not ready for mobile or cloud

API and Microservices era

While the SOA phase helped us move away from the ugly file-based integrations of the past and really supercharged enterprise application integration, it failed miserably in the digital customer domain. The SOA solutions were not built to scale, they were not built for the web, and the web was scaling and getting jazzier by the day; people were expecting more self-service portals, and XML parsing was eating into response times!

Those of us who were lucky enough to let go of the earlier dogma (vendor Kool-Aid) around the “web services” we were building started realising there was nothing webby about them. After a few failed attempts at getting the clunky web portals working, we realised that the SOA way of serving information was not suited to this class of problems and we needed something better

We have come full circle to custom build teams and custom services for foundation tasks and abstractions over end-systems – we call these “micro-services” and build them not for the MVC architecture but as pure services. These services speak HTTP natively as the language of the web, without the custom standards SOAP had introduced earlier, and use the representational state transfer (REST) style to align with hypermedia best practices; we call them web APIs and standardise around using JSON as the data format (instead of XML)

4. Microservices, DevOps, APIs early on – it was on-prem and scalable

The API and Microservices era comes with changes in how we organise (DevOps), where we host our services (scalable platforms on-prem or available as-a-service) and a fresh look at integration patterns (CQRS, streaming, caching, BFF etc.). The runtime for these new microservices-based integration applications is now broken into smaller chunks, as there is no centralised bus 🚎

5. Microservices, DevOps, APIs on externalised highly scalable platforms (cloud PaaS)

Recap

Enterprise system use has evolved over time: from depending on one thing that did everything, to multiple in-house systems, to in-house and cloud-based services. The theme has been a gradual move from a singular application to a network-partitioned landscape of systems, to an eco-system of modular, value-based services

Microservices serve traditional integration needs between enterprise systems but, more importantly, enable organisations to connect to clients and services on the web (cloud) in a scalable and secure manner – something that SOA products failed to do (since they were built for the enterprise context only). APIs enable microservices to communicate with service consumers and providers in a standard format, and bring with them best practices such as contract-driven development, policies, caching etc. that make developing and operating them at scale easier

De-mystifying the Enterprise Application Integration (EAI) landscape: Actors, terminology, cadence and protocols

Any form of Enterprise Application Integration (EAI) [1] work for data synchronization, digital transformation or customer self-service web implementations involves communication between service providers and service consumers. A web of connections grows over time between systems, facilitated by tools specialising in “system integration”; this article covers how the clients, services and integration tools communicate, and nuances around this observed in the wild

EAI Actors

Depending on the context, a system becomes a data-consumer or data-provider. These consumers and providers can be internal to a business enterprise or external to them. External providers can be pure software-as-a-service or partners with platforms and build teams entrenched in the “client-site”

1. Services, Consumers within and outside a business enterprise

The provider and consumer systems are the key actors within an internal or external system-integration context. Cadences vary: some provider services are stable and mature, while others are developed alongside the client applications

Service/API Contract

Providers and consumers communicate with each other using one of many standard protocols; consumers also have a direct dependency on the service provider’s “service contract” to know about these protocols and service details

A service contract is a document describing one or more services offered by a service provider and covers details such as protocol, methods, data type and structure etc.

2. Consumer, Service provider and Service Contract

A good service contract contains well-documented details of the service as well as “examples” of the request, response and errors. RAML [3] and Swagger/OAS [4] are two of the popular tools used in documenting service contracts

For example, the “Address search” contract by a SaaS vendor below describes the method, URI, query parameters and provides the user with the ability to “try out” the service. This approach allows consumers to iterate faster when developing solutions that use address search without having to engage the SaaS vendor teams (self-service)

3. A Service Contract on an API Portal [2]

Service Contract Cadence: Contract Driven Development

Contract cadence is about when a service contract is available relative to when the client wants to build their application. There are two cadences – mature and in-flight.

For a mature/pre-existing service, a good service contract allows the service consumer to develop a client application without having to engage anyone from the service provider. Meanwhile, for a service being developed alongside the client application, there is an opportunity for both teams to author the service contract together during the elaboration phase of a project

4. Delivery Cadence based on Service Contract Availability

Service contracts are key to consuming a service; when consuming mature services look for descriptive, sandbox-enabled, self-service services to supercharge your delivery. For services being developed in a project (along with the client applications), by internal teams or external vendors, ask for contracts upfront and ask for consumers & service providers to co-author the contracts to remove ambiguities sooner 

Avoid generating service contracts from developed components (Java class to WSDL), as this technique leads to isolated, one-way, ambiguous specifications requiring considerable hand-holding and has the most defects during integration testing (from experience). Use Contract Driven Development [8], which has the Service Provider and the Consumer write the contract together during the Elaboration phase (sign in blood if adventurous)

API Styles: REST, RPC and more

Now that we have services and contracts out of the way, we can dig deeper into how messages get over the wire in real-time services. We will ignore “B2B protocols” for this discussion and leave them for the future

The common real-time service protocols we see are all built over HTTP, use a JSON or XML content type, and differ in their implementation. Below is a short description of each

  • REST
    • The most popular service protocol for front-end applications and modern enterprise APIs. It is stateless, cacheable, uniform, layered etc. and came from Roy Fielding’s dissertation [6]
    • A resource is a key abstraction in REST and a hard concept for integration practitioners to master
    • Mature REST APIs look like hypertext, i.e. you can navigate them like you would the web
    • Uses HTTP methods e.g. GET, PUT, POST, PATCH for queries and commands
    • Uses HTTP status codes for communicating the response e.g. 2xx, 4xx, 5xx
    • Open to custom or standard request/response data-types. Standard hypermedia types include HAL, JSON API, JSON-LD, Siren etc. [5]
    • Requires resource-centric thinking
  • GraphQL
    • A query language for APIs where the client asks for exactly the fields it needs, typically over a single HTTP endpoint
  • Streaming APIs
    • HTTP Streaming
    • Websockets
    • SSE
    • HTTP2/Push
    • gRPC
  • gRPC
    • An RPC framework that uses protocol buffers as the contract and wire format, carried over HTTP/2 [9]
  • SOAP
    • Used to be popular before REST came along
    • Uses HTTP POST with the body containing the request details – method name, request etc. in XML
    • Uses XML schemas for describing data types
  • JSON-RPC
    • Semi-popular with some legacy clients
    • Uses HTTP POST with the body containing the request details – method name, request etc. in JSON
    • Uses a JSON object to define the method and data
  • OData
    • Used in CRM, ERP and other enterprise systems to reduce client-side development through config-driven service consumption
    • Presents the end service as a “data source” to the consumer, allowing SQL-like queries
    • Uses a schema for describing data source objects and Atom/XML for transmitting over the wire
    • Requires custom parsers for parsing the URL, which contains the “query” to be mapped to the back-end service

There is plenty to cover in this space and in a future post I will compare these protocols further and flavour them from experience. The key takeaway is that there are many, many ways in which service providers and service consumers can communicate. Most choose REST or SOAP over HTTP, there are passionate conversations over REST/SOAP, JSON/XML and HAL/JSON-API/Siren, and all the while OData remains a mystery to us – until we need to deal with it
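To make the difference in styles concrete, here is a sketch of the same lookup as a REST call and as a JSON-RPC call (the endpoint and method names are made up for illustration):

// REST style: the resource lives in the URL, the verb is the HTTP method
// GET /addresses/12345
fetch('https://myapi.org/addresses/12345')
  .then(function (res) { return res.json(); })
  .then(function (address) { console.log(address); });

// JSON-RPC style: always POST; the method name and arguments travel in the body
fetch('https://myapi.org/rpc', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ jsonrpc: '2.0', method: 'getAddress', params: { id: '12345' }, id: 1 })
})
  .then(function (res) { return res.json(); })
  .then(function (result) { console.log(result); });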

EAI Patterns

There are heaps to learn about “how” these service providers and consumers communicate in a networked environment, but below is a quick overview of the patterns. These patterns emerge because of the CAP theorem [7] and the network partition between systems looking to exchange data

5. Integration Communication Patterns used by Service Providers and Consumers

Recap

  1. Enterprise Integration involves internal, external, batch and real-time services
  2. Key actors are service providers, consumers and mediators
  3. Service contracts are key documents in integration
  4. The cadence between provider and consumer impacts delivery velocity
  5. Service protocols vary but there are four main types: REST, SOAP, JSON-RPC and OData

References

[1] EAI Patterns https://www.enterpriseintegrationpatterns.com/

[2] Experian REST API https://www.edq.com/documentation/apis/address-validate/rest-verification/#/Endpoints/Search_Address

[3] RAML https://raml.org/

[4] SWAGGER https://swagger.io/

[5] Choosing Hypermedia Formats https://sookocheff.com/post/api/on-choosing-a-hypermedia-format/

[6] Roy Fielding’s dissertation https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm

[7] CAP Theorem or Brewer’s Theorem https://en.wikipedia.org/wiki/CAP_theorem

[8] Contract Driven Development https://link.springer.com/chapter/10.1007/978-3-540-71289-3_2

[9] gRPC https://developers.google.com/protocol-buffers/docs/proto3

 

Raspberry Pi Setup – Part II: IoT API Platform

Intro

This is the second part in a series of posts on setting up a Raspberry Pi to fully utilize the software and hardware functionality of this platform to build interesting internet of things (IoT) applications.

Part-I is here https://techiecook.wordpress.com/2016/10/10/raspberry-pi-ssh-headless-access-from-mac-and-installing-node-processing/ … this covered the following:

  • Enable SSH service on the Pi
  • Connect to Pi without a display or router – headless access via Mac

 

Part II is this blog and it covers the following goals:

  • Install Node, Express on the Pi
  • Write a simple HTTP service
  • Write an API to access static content ( under some /public folder)
  • Write an API to POST data to the Pi
  • Write an API to read GPIO pin information

So let’s jump in …

 

Install Node.js

Be it a static web server or a complex API – you need this framework on the Pi (unless you like doing things in Python and not JavaScript)

How do I install this on the Pi?

wget https://nodejs.org/download/release/v0.10.0/node-v0.10.0-linux-arm-pi.tar.gz
cd /usr/local
sudo tar xzvf ~/node-v0.10.0-linux-arm-pi.tar.gz --strip=1
node -v 
npm -v

 

Install Express

It is easy to build APIs using the Express framework

How do I install this on the Pi?

npm install express --save

Testing Node + Express

Write your first simple Express HTTP API

I modified the code from here http://expressjs.com/en/starter/hello-world.html as shown below to provide a /health endpoint and a JSON response

  • Create a folder called ‘simple-http’
    • Mine is under /projects/node/
  • Initialize your Node Application with
    npm init
  • Install express
    npm install express --save
  • Write the application
    • 2 endpoints – ‘/’ and ‘/health’
      var express = require('express');
      var app = express();
      
      app.get('/', function (req, res) {
       res.send('API Running');
      });
      
      app.get('/health', function (req, res) {
       res.send('{"status":"ok"}');
      });
      
      app.listen(8080, function () {
       console.log('API Running ...');
      });

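You can verify both endpoints with curl from the Pi itself (assuming the app is listening on port 8080 as in the code above):

curl http://localhost:8080/
curl http://localhost:8080/health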

GET Static Content

  • Create a “public” folder and put some content in it
  • Setup the Node/Express app to use “express.static()”
    var express = require('express');
    var app = express();
    var path = require('path');
    app.use('/static', express.static(path.join('/home/pi/projects', 'public')));
    app.get('/health', function (req, res) {
      res.send('{"status":"ok"}');
    });
    app.listen(8080, function () {
      console.log('API Running ...');
    });
  • Test by accessing the “/static” endpoint

POST data

Install body-parser

npm install body-parser --save

Use the body-parser

var bodyParser = require('body-parser');
app.use(bodyParser.json()); 
app.use(bodyParser.urlencoded({ extended: true })); 

Write the POST function … in my case I want to be able to do POST /api/readings

app.post('/api/readings', function(req, res) {
    var d = new Date();
    var value = req.body.value;
    var type = req.body.type;
    res.send('{ "status":"updated"}');
    console.log('Reading | '+d.toLocaleDateString()+'|'+d.toLocaleTimeString()+' | value='+value+' | type='+type);
});

Test It!

To test it we can issue a curl command from a computer connected to the same network as the Raspberry Pi….

curl -X POST -H "Content-Type: application/json"  -d '{
 "value":7.5,
 "type":""
}' "http://192.168.3.2:8080/api/readings"


Note: In the example above, I run the command from the terminal and later in Postman from the Mac hosting the Raspberry Pi … remember the IP for my Pi was obtained via the following command

netstat -rn -f inet

 

Read GPIO Pin Information

What are GPIO Pins?

The Raspberry Pi comes with input/output pins that let you connect to electronic components and read from them or write to them … these general-purpose input/output pins vary in number based on the model of your Pi – A, B, B+ etc.

See  http://pinout.xyz/ for how pins are labelled …


Our Goal: API for accessing GPIO pins

Once we know which pins we want to read/write to, we need to be able to access this from our application … in our example this would be the Node.js application we are writing.

So we would like to be able to do something like

  • GET /api/gpio/pins and list all the pin details (id, type etc)
  • GET /api/gpio/pins/{pinId}  and get the value of a pin
  • PUT /api/gpio/pins/{pinId} and set it to high/low or on/off
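Before wiring in the real GPIO calls, a skeleton of the first route might look like this (the static pin map here is made up purely for illustration; the GET-by-id route is fleshed out further below):

var express = require('express');
var app = express();

// Hypothetical static pin map for illustration only
var pins = { '7': { id: 7, type: 'GPIO' }, '11': { id: 11, type: 'GPIO' } };

// GET /api/gpio/pins – list all the pin details
app.get('/api/gpio/pins', function (req, res) {
  res.send(pins);
});

app.listen(8080);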

Setup Javascript Libraries for GPIO access

  1. Install “gpio-admin” so that you do not have to run your Node application as root to read / write to the gpio pins (Read https://github.com/rakeshpai/pi-gpio)
    git clone git://github.com/quick2wire/quick2wire-gpio-admin.git
    cd quick2wire-gpio-admin
    make
    sudo make install
    sudo adduser $USER gpio
  2. Install the npm gpio package (Read https://www.npmjs.com/package/rpi-gpio )
    npm install rpi-gpio
  3. Use this in your application
    1. Watch out: The GPIO operations are async … this means that capturing the ‘value’ of a pin, for instance, cannot be done like ‘var value = readInput()’; in Node.js we need to use “Promises” (Read https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise – Thanks Alvonn!)
    2. “ReferenceError: Promise is not defined”: It appears the version of Node for the Pi (v0.10.0) does not have this library … ouch! Luckily others have hit this problem with Node at some point and have posted the following solution, which worked …
      1. Install the “es6-promise” library manually
        npm install es6-promise@3.1.2
      2.  Declare Promise as the following in your application

        var Promise = require('es6-promise').Promise;
    3. Finally make the GPIO call
      1. Declare a ‘gpio’ variable and setup (attach to a pin)
        var gpio = require('rpi-gpio');
        gpio.setup(7, gpio.DIR_IN, readInput);
        function readInput() {
           gpio.read(7, function handleGPIORead(err, value) {
             console.log('Read pin 7, value=' + value);
           });
        }
      2. Read from it asynchronously in a HTTP GET /api/gpio/pins/{pinId} call
        var Promise = require('es6-promise').Promise;
        app.get('/api/gpio/pins/:pinId', function (req, res) {
           // use the pin from the URL (it must have been set up via gpio.setup first)
           var pinId = parseInt(req.params.pinId, 10);
           var p1 = new Promise(function(resolve, reject) {
             gpio.read(pinId, function handleGPIORead(err, value) {
               if (err) { return reject(err); }
               console.log('GET /api/gpio/pins/:pinId | Pin ' + pinId + ' | value=' + value);
               resolve(value);
             });
           });

           p1.then(function(val) {
             console.log('Fulfillment value from promise is ' + val);
             res.send('{"pinId": ' + pinId + ', "value": ' + val + '}');
           }, function(err) {
             console.log('Result from promise is', err);
             res.send('{"pinId": ' + pinId + ', "value": "Error"}');
           });
        });
      3. Output

Show me the code

You can find the code for this project here:

git@bitbucket.org:arshimkola/raspi-gpio-demo-app.git

 

Summary

So by now we should have a platform primed to be used as an IoT device, with a simple API running. We could, of course, have done all of the above in Python.

Also, we could have written a scheduled service and “pushed” data from the Pi to some public API … in the next part of the series, we will talk about organizing IoT sensors in a distributed network

References

[1] https://scotch.io/tutorials/use-expressjs-to-get-url-and-post-parameters

[2] https://www.npmjs.com/package/rpi-gpio

[3] https://github.com/rakeshpai/pi-gpio

[4] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise

Lessons from API integration

A general transition has been happening in the nature of the work I do in the integration space … we have been doing fewer SOAP/XML/RPC web services and more RESTful APIs for the “digital enablement” of enterprise services. This brought a paradigm shift, and valuable lessons were learnt (rightly or wrongly) … and of course the process of learning and comparing never stops!

Below are my observations …

It is not about SOAP vs REST … it is about products that we integrate, the processes we use and the stories we tell as we use one architectural style vs the other

  1. Goals: Both architectural styles work to achieve the same goal – integration. The key differences lie in where they are used. SOAP/XML has seen some B2B use, but adoption by web/mobile clients is low
  2. Scale: SOAP/XML is good but will always remain enterprise-scale … REST/JSON is web-scale. What do I mean by that? REST over HTTP with JSON feels easier to communicate, understand and implement
  3. Protocol: REST uses the HTTP protocol better than SOAP does, so the former wins in making it easy to adopt your service
  4. Products: The big monolith integration product vendors are now selling “API Gateways” (similar to a “SOA product suite”) … The gateway should be a lightweight policy layer, IMHO, and traditional vendors like to sell “app server” licenses which will bloat an API gateway product (buyer beware!)
  5. Contract: RAML/YAML is a lot easier to read than a WSDL … which is why contract-first works better when doing REST/JSON
  6. Process: The biggest paradigm change we have experienced is doing Contract Driven Development by writing the API definition in RAML/YAML … Compare this to generating WSDL or RAML from services. Contract-driven, as I am learning, is much more collaborative!
  7. Schemas: XML schemas were great until they started describing restrictions … REST with JSON Schema is good but may be repeating similar problems at web scale
  8. Security: I have used OAuth for APIs and watched it enable token-based authentication and authorization easily with clients outside the enterprise – we are replacing traditional web session ids with OAuth tokens to enable faster integration through 3rd-party clients … all this through simple configuration in an API gateway, compared to some of the struggle with SAML setup within an organisation with the heavy monolithic SOA and security products!

It would be easy to conclude that we ought to rip out all our existing SOAP/XML integrations and replace them with REST APIs, no? Well, not quite … as always, “horses for courses”.

Enterprise-grade integration may require features currently missing in REST/JSON (WS-*, RPC), and legacy systems may not be equipped to do non-XML integration

My goal was to show you my experience in integrating systems using APIs with contract-driven development and gateways with policies, versus traditional web service development using SOA products … I hope to hear about your experience. How do you like APIs?

Microservices with Docker and Spring Boot

This guide is for someone interested in quickly building a microservice using the Spring Boot framework. It works great if you have prior JAX-RS/Jersey experience

Step-by-step guide

The steps involved:

  1. Create a Maven Project
    1. Final Structure
    2. Pom ( Attached here )
  2. Include Spring Boot Dependencies in your pom
    1. Parent Project
      <parent>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-parent</artifactId>
          <version>1.3.1.RELEASE</version>
      </parent>
    2. Dependencies
      <dependencies>
          <dependency>
              <groupId>junit</groupId>
              <artifactId>junit</artifactId>
              <scope>test</scope>
          </dependency>
          <dependency>
              <groupId>org.springframework.boot</groupId>
              <artifactId>spring-boot-starter-web</artifactId>
          </dependency>
          <dependency>
              <groupId>org.springframework.boot</groupId>
              <artifactId>spring-boot-starter-jersey</artifactId>
          </dependency>
      </dependencies>

  3. Create the Java main class – this is the entry point, with the @SpringBootApplication annotation
    1. App.java
      import org.springframework.boot.SpringApplication;
      import org.springframework.boot.autoconfigure.SpringBootApplication;

      @SpringBootApplication
      public class App
      {
          public static void main( String[] args )
          {
              SpringApplication.run(App.class, args);
          }
      }
  4. Create the API Controller class
    1. Add Spring MVC methods for REST in the API controller class
      1. SimpleAPIController.java with SpringMVC only
        @RestController
        public class SimpleAPIController {
            @RequestMapping(value = "/springmvc", produces = "application/json")
            public Map<String, Object> springMvc() {
                Map<String, Object> data = new HashMap<String, Object>();
                data.put("message", "Spring MVC Implementation of API is available");
                return data;
            }
        }
    2. (Alternate/Additional) Add Jersey methods for REST in the API controller class
      1. SimpleAPIController.java with Jersey (Note: You need the spring boot jersey dependency in your pom)
        @Component
        @Path("/api")
        public class SimpleAPIController {
            @GET
            @Produces({ "application/json" })
            public Response getMessage() {
                Map<String, Object> data = new HashMap<String, Object>();
                data.put("message", "Jersey Implementation of API is available");
                Response response = Response.ok().entity(data).build();
                return response;
            }
        }
    3. (Alternate/Additional) Create a JerseyConfig class
      1. JerseyConfig.java – required for the Jersey REST implementation
        @Configuration
        public class JerseyConfig extends ResourceConfig {
            public JerseyConfig() {
                register(SimpleAPIController.class);
            }
        }
  5. Build the Maven project
    1. Add the Spring boot maven build plugin
      	<build>
      		<plugins>
      			<plugin>
      				<groupId>org.springframework.boot</groupId>
      				<artifactId>spring-boot-maven-plugin</artifactId>
      			</plugin>
      		</plugins>
      	</build>
  6. Execute the spring boot application
    1. Run in Eclipse as Java Application
    2. Run as a standalone Java Jar
      java -jar target/SimpleService-0.0.1-SNAPSHOT.jar
    3. Specify a different port and run as a standalone Java Jar

      java -Dserver.port=8090 -jar target/SimpleService-0.0.1-SNAPSHOT.jar
    4. Running in a docker container

      1. See: https://spring.io/guides/gs/spring-boot-docker/
      2. Docker file
        FROM java:8
        VOLUME /tmp
        ADD SimpleService-0.0.1-SNAPSHOT.jar app.jar
        RUN bash -c 'touch /app.jar'
        ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
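
To build the jar and run it in Docker, the commands look roughly like this (the image name is made up; the Dockerfile above expects the jar next to it, so copy it out of target/ first):

mvn clean package
cp target/SimpleService-0.0.1-SNAPSHOT.jar .
docker build -t simpleservice .
docker run -p 8080:8080 simpleservice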

 

The official how-to is here https://spring.io/guides/gs/rest-service/#use-maven and a great blog post is here http://blog.codeleak.pl/2015/01/getting-started-with-jersey-and-spring.html