Better Digital Products using Domain Oriented APIs: The Shopping Mall Metaphor

APIs are abstractions over technical services. Good APIs mirror an organisation's strategic thinking and lead to a better customer experience by enabling a high degree of connectivity through secure mechanisms.

Too much focus goes into protocols and semantics in the desire to design good APIs, and too little into business objectives. Not enough questions are asked early on, and the focus is always on system-to-system integration. I believe thinking about what a business does and aligning services to it leads us to product-centric thinking with reusable services.

History
As an ardent student of software design and engineering principles, I have been keen on Domain Driven Design (DDD) and have had the opportunity to apply its principles in an enterprise business context, building reusable and decoupled microservices. I believe the best way to share this experience is through a metaphor, so I use a "Shopping Mall" metaphor with "Shops" to represent a large enterprise with multiple lines of business and teams.

Like all metaphors, mine breaks down beyond a point, but it helps in reasoning about domains, bounded contexts, APIs, events and microservices. This post does not provide a dogmatic point of view or a "how-to" guide; rather, it aims to help you identify key considerations when designing solutions for an enterprise, whether upfront or during a project.

I have been designing APIs and microservices in the Health and Insurance domains, across multiple lines of business and varying contexts, over the past 5-8 years. Through this period, I have seen architects (especially those without Integration domain knowledge) struggle to deliver strategic, product-centric, business-friendly APIs. The solutions handed to us always dealt with an "enterprise integration" context with little to no consideration for future "digital contexts", leading to brittle, coupled services and frustration from business teams around the cost of doing integration (I reckon this is why IT transformation is hard).

This realisation led me to question some of our solution architecture practices and to support them through a better understanding and application of domain modelling and DDD (especially strategic DDD). Through this practice, I was able to design and deliver platforms for our client which were reusable and yet not tightly coupled.


Domain Queries 

In one implementation, my team delivered around 400 APIs, and after 2 years the client has been able to make continuous changes and add new features without compromising the overall integrity of the connected systems or their data.

Through my journey with DDD in the enterprise, I discovered some fundamental rules about applying these software design principles in a broader enterprise context, but first we had to step into our customer's shoes and ask some fundamental questions about their business and the way they function.

The objective is to identify the key aspects of the API ecosystem you are designing for. Below are some of the questions you need to answer through your domain queries:

  • What are your top-level resources leading to a product centric design?
  • When do you decide what they are? Way up front or in a project scrum?
  • What are the interactions between these domain services?
  • How are the quality and integrity of your data impacted by your design choices?
  • How do you measure all of this "integration entropy" – the complexity introduced by the integration choices between systems?

The Shopping Mall example

Imagine being asked to implement the IT system for a large shopping complex or shopping mall. This complex has a lot of shops which want to use the system to show product information, sell products, ship them and so on.

There are functions common to all the shops but with nuanced differences in the information they capture. For example, the coffee shop does its "Customer Management" function through its staff, while the big clothing retailer needs to sell its own rewards points and store the customer's clothing preferences, and the electronics retailer does its customer management through its own points system.

You have to design the core domains of the mall's IT system to provide services the businesses can use (and reuse) for their shops, and do so while being able to change aspects of one shop/business without impacting the other businesses.

Asking Domain and Context questions

  • What are your top-level "domains" so that you can build APIs to link the Point-of-Sale (POS), CRM, Shipping and other systems?
  • Where do you draw the line? Is a service shared by all businesses, by businesses of a certain type, or not shared at all?
  • Bounded contexts? What contexts do you see as the businesses go about their business?
  • APIs or Events? How do you share information across the networked systems to achieve an optimal flow of information while providing the best customer experience? Do you pick consistency or availability in the networked systems?
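To make this concrete, a product-centric decomposition for the mall might surface top-level API resources along the following lines. This is purely an illustrative sketch; the resource names are hypothetical, not from any real design:

  /customers – the shared "Customer Management" capability, specialised per shop's bounded context
  /products – catalogue items across the shops
  /orders – sales captured from the various POS systems
  /shipments – fulfilment and delivery
  /rewards – loyalty points, for the shops that offer them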

Summary:

Through my journey with DDD in the enterprise, I discovered some fundamental rules about applying these software design principles in a broader enterprise context. I found it useful to apply the Shopping Mall metaphor to a business enterprise when designing system integrations.

It is important to understand the core business lines, capabilities (current and target state), business products, business teams and terminologies, then analyse any polysemy across domains and within domain contexts, leading to well-defined domains, contexts and interactions.

We then use this analysis to design our solution with APIs, events and microservices to maximise reuse and reduce crippling coupling

How did we get to Microservices?

If you have struggled with decisions when designing APIs or microservices, it is best to take a step back and look at how we got here. It not only renews our appreciation for the rapid changes we have seen over the past 10-20 years but also puts into perspective why we do what we do.

I still believe a lot of us cling to old ways of thinking when the world has moved on or is moving faster than we can process. APIs and microservices are not just new vendor-driven fads; rather, they are key techniques, processes and practices for businesses to survive in a rapidly evolving ecosystem. Without an API strategy, for example, your business might not be able to provide the services to consumers that your competition can, or with the quality of service we have come to expect (in real time).

So, with that in mind, let us take a step back and look at the evolution of technology within the enterprise, remembering that this aligns with the business strategy.

Long time ago

There were monolithic applications, for example mainframe systems processing Orders, Pricing, Shipments etc. You still see this kind of monolith written within startups because it makes sense: no network calls, just inter-process communication, and if you can join a bunch of tables you can make a "broker" happy with a mashup of data.

[Image 1: First, there was just the legacy monolith application]

Circa 2000s MVC app world

There was no ESB yet. We were exploring the JVM and Java for enterprise application development; JSF was new and the EJB framework was still deciding what type of beans to use. Data flowed via custom connectors from the mainframe to these Java applications, which cached it and allowed viewing and querying of this information.

We also saw functional foundation applications emerging for enterprise logging, business rules, pricing, rating, identity etc., and data standards were often vague. EAI patterns were being observed but not standardised, and we were more focused on individual service design patterns and the MVC model.

[Image 2: Then we built lots of MVC applications alongside the legacy monolith; integration was point-to-point]

Services and Service-Oriented Architecture

The next wave began when the number of in-house custom applications started exploding and there was a need for data standardisation, a common language to describe enterprise objects, and decoupled services with standard requests and responses.

Some organisations started developing their own XML-based engines around message queues and JMS standards, while others adopted the early service bus products from vendors.

Thus Service-Oriented Architecture (SOA) was born, with lofty goals: build canonical enterprise data models, reduce point-to-point services (Java applications had a build-time dependency on the services they consumed, often other Java services), add standardised security, build a service registry and so on.

We also saw general adoption of and awareness around EAI patterns: we finally understood what a network can do to consistency models and the choice between availability and consistency under a partition. Basically, things long known to those with a Computer Science background working on distributed computing or collective communication in a parallel computing cluster.

One key observation is that the vendor products supporting SOA were runtime monoliths in their own right. Each was a single product (a J2EE EAR) running on one or more application servers, with a single database for stateful processes etc. The web services we developed over this product were mere XML configuration executed by one giant application.

Also, the core concerns were "service virtualisation" and "message-based routing", which were purely stateless, transformation-only concepts. This worked best when coupled with an in-house practice of building custom services, and failed where there was none and the SOA product could only transform and route (i.e. it did not solve problems by itself as an integration layer).

[Image 3: We made integration standardised and flexible; it succeeded within the enterprise but failed to scale for the digital world, not ready for mobile or cloud]

API and Microservices era

While the SOA phase helped us move away from the ugly file-based integrations of the past and really supercharged enterprise application integration, it failed miserably in the digital customer domain. The SOA solutions were not built to scale and were not built for the web, yet the web was scaling and getting jazzier by the day; people were expecting more self-service portals, and XML parsing was dragging response times down!

Those of us who were lucky enough to let go of the earlier dogma (vendor Kool-Aid) around the "web services" we were building started realising there was nothing webby about them. After a few failed attempts at getting clunky web portals working, we realised that the SOA way of serving information was not suited to this class of problems and we needed something better.

We have come back full circle to custom build teams and custom services for foundation tasks and abstractions over end systems; we call these "microservices" and build them not for the MVC architecture but as pure services. These services speak HTTP natively as the language of the web, without the custom standards SOAP had introduced earlier, and use the representational state transfer (REST) style to align with hypermedia best practices; we call them web APIs and standardise on JSON as the data format (instead of XML).

[Image 4: Microservices, DevOps and APIs early on; on-prem and scalable]

The API and microservices era comes with changes in how we organise (DevOps), where we host our services (scalable platforms on-prem or available as-a-service) and a fresh look at integration patterns (CQRS, streaming, caching, BFF etc.). The runtime for these new microservices-based integration applications is now broken into smaller chunks, as there is no centralised bus 🚎

[Image 5: Microservices, DevOps and APIs on externalised, highly scalable platforms (cloud PaaS)]

Recap

Enterprise system use has evolved over time from depending on one thing that did everything, to multiple in-house systems, to a mix of in-house and cloud-based services. The theme has been a gradual move from a singular application, to a network-partitioned landscape of systems, to an ecosystem of modular, value-based services.

Microservices serve traditional integration needs between enterprise systems but, more importantly, enable organisations to connect to clients and services on the web (cloud) in a scalable and secure manner, something that SOA products failed to do (since they were built for the enterprise context only). APIs enable microservices to communicate with service consumers and providers in a standard format, and bring with them best practices such as contract-driven development, policies, caching etc. that make developing and operating them at scale easier.

Raspberry Pi Setup – Part II: IoT API Platform

Intro

This is the second part in a series of posts on setting up a Raspberry Pi to fully utilise the software and hardware capabilities of this platform to build interesting Internet of Things (IoT) applications.

Part I is here: https://techiecook.wordpress.com/2016/10/10/raspberry-pi-ssh-headless-access-from-mac-and-installing-node-processing/ … it covered the following:

  • Enable SSH service on the Pi
  • Connect to Pi without a display or router – headless access via Mac

 

Part II is this blog post, and it covers the following goals:

  • Install Node, Express on the Pi
  • Write a simple HTTP service
  • Write an API to access static content (under a /public folder)
  • Write an API to POST data to the Pi
  • Write an API to read GPIO pin information

So let's jump in…

 

Install Node.js

Be it a static web server or a complex API, you need this runtime on the Pi (unless you like doing things in Python rather than JavaScript).

How do I install this on the Pi?

wget https://nodejs.org/download/release/v0.10.0/node-v0.10.0-linux-arm-pi.tar.gz
cd /usr/local
sudo tar xzvf ~/node-v0.10.0-linux-arm-pi.tar.gz --strip=1
node -v 
npm -v

 

Install Express

It is easy to build APIs using the Express framework.

How do I install this on the Pi?

npm install express --save

Testing Node + Express

Write your first simple express HTTP API

I modified the code from here http://expressjs.com/en/starter/hello-world.html as shown below to provide a /health endpoint and a JSON response

  • Create a folder called ‘simple-http’
    • Mine is under /projects/node/
  • Initialize your Node Application with
    npm init
  • Install express
    npm install express --save
  • Write the application
    • 2 endpoints – ‘/’ and ‘/health’
      var express = require('express');
      var app = express();
      
      app.get('/', function (req, res) {
       res.send('API Running');
      });
      
      app.get('/health', function (req, res) {
       res.send('{"status":"ok"}');
      });
      
      app.listen(8080, function () {
       console.log('API Running ...');
      });
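To try it out (assuming the file is saved as app.js, a name chosen here for illustration), run it on the Pi and hit the health endpoint from another machine on the same network, substituting your Pi's IP:

node app.js
curl http://192.168.3.2:8080/health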


GET Static Content

  • Create a "public" folder and put some content in it
  • Setup the Node/Express app to use “express.static()”
    var express = require('express');
    var app = express();
    var path = require('path');
    app.use('/static', express.static(path.join('/home/pi/projects', 'public')));
    app.get('/health', function (req, res) {
      res.send('{"status":"ok"}');
    });
    app.listen(8080, function () {
      console.log('API Running ...');
    });
  • Test by accessing the "/static" endpoint
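You can also verify from the command line; index.html here is just a stand-in for whatever file you placed in the public folder:

curl http://192.168.3.2:8080/static/index.html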

POST data

Install body-parser

npm install body-parser --save

Use the body-parser

var bodyParser = require('body-parser');
app.use(bodyParser.json()); 
app.use(bodyParser.urlencoded({ extended: true })); 

Write the POST function … in my case I want to be able to do POST /api/readings

app.post('/api/readings', function(req, res) {
    var d = new Date();
    var value = req.body.value;
    var type = req.body.type;
    res.send('{ "status":"updated"}');
    console.log('Reading | '+d.toLocaleDateString()+'|'+d.toLocaleTimeString()+' | value='+value+' | type='+type);
});

Test It!

To test it we can issue a curl command from a computer connected to the same network as the Raspberry Pi….

curl -X POST -H "Content-Type: application/json"  -d '{
 "value":7.5,
 "type":""
}' "http://192.168.3.2:8080/api/readings"


Note: In the example above, I run the command from the terminal and later in Postman from the Mac hosting the Raspberry Pi … remember the IP for my Pi was obtained via the following command

netstat -rn -f inet

 

Read GPIO Pin Information

What are GPIO Pins?

The Raspberry Pi comes with input/output pins that let you connect to electronic components and read from or write to them. These general-purpose input/output (GPIO) pins vary in number based on the model of your Pi (A, B, B+ etc.).

See  http://pinout.xyz/ for how pins are labelled …


Our Goal: API for accessing GPIO pins

Once we know which pins we want to read/write to, we need to be able to access this from our application … in our example this would be the Node.js application we are writing.

So we would like to be able to do something like:

  • GET /api/gpio/pins and list all the pin details (id, type etc)
  • GET /api/gpio/pins/{pinId}  and get the value of a pin
  • PUT /api/gpio/pins/{pinId} and set it to high/low or on/off (a sketch of this endpoint appears after the GET example below)

Setup Javascript Libraries for GPIO access

  1. Install “gpio-admin” so that you do not have to run your Node application as root to read / write to the gpio pins (Read https://github.com/rakeshpai/pi-gpio)
    git clone git://github.com/quick2wire/quick2wire-gpio-admin.git
    cd quick2wire-gpio-admin
    make
    sudo make install
    sudo adduser $USER gpio
  2. Install the npm gpio package (Read https://www.npmjs.com/package/rpi-gpio )
    npm install rpi-gpio
  3. Use this in your application
    1. Watch out: the GPIO operations are async. This means that capturing the 'value' of a pin, for instance, cannot be done like 'var value = readInput()'; in Node.js we need to use "Promises" (read https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise – thanks Alvonn!)
    2. "ReferenceError: Promise is not defined": it appears the version of Node for the Pi (v0.10.0) does not have this library … ouch! Luckily others have hit this problem with Node before and have posted the following solution, which worked …
      1. Install the “es6-promise” library manually
        npm install es6-promise@3.1.2
      2.  Declare Promise as the following in your application

        var Promise = require('es6-promise').Promise;
    3. Finally make the GPIO call
      1. Declare a ‘gpio’ variable and setup (attach to a pin)
        var gpio = require('rpi-gpio');
        gpio.setup(7, gpio.DIR_IN, readInput);
        function readInput() {
           gpio.read(7, function handleGPIORead(err, value) {
             console.log('Read pin 7, value=' + value);
           });
        }
      2. Read from it asynchronously in a HTTP GET /api/gpio/pins/{pinId} call
        var Promise = require('es6-promise').Promise;
        app.get('/api/gpio/pins/:pinId', function (req, res) {
          var pinId = req.params.pinId;
          // Wrap the async gpio.read in a Promise; pin 7 is hard-coded here,
          // matching the gpio.setup(7, ...) call above
          var p1 = new Promise(function (resolve, reject) {
            gpio.read(7, function handleGPIORead(err, value) {
              if (err) { reject(err); return; }
              console.log('GET /api/gpio/pins/:pinId | Pin 7 | value=' + value);
              resolve(value);
            });
          });

          p1.then(function (val) {
            console.log('Fulfillment value from promise is ' + val);
            res.send('{"pinId": ' + pinId + ', "value": ' + val + '}');
          }, function (err) {
            console.log('Error from promise is', err);
            res.send('{"pinId": ' + pinId + ', "value": "Error"}');
          });
        });
      3. Output
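For completeness, here is a minimal sketch of the PUT /api/gpio/pins/{pinId} endpoint described in the goals above, assuming the same Express app, body-parser and rpi-gpio setup. The request body shape ({"value": true/false}) and the inline pin setup are my own choices for illustration, not part of the original code:

app.put('/api/gpio/pins/:pinId', function (req, res) {
  var pinId = parseInt(req.params.pinId, 10);
  var value = !!req.body.value; // truthy -> high, falsy -> low
  // Configure the pin for output, then write the requested value to it
  gpio.setup(pinId, gpio.DIR_OUT, function (err) {
    if (err) { res.send('{"pinId": ' + pinId + ', "status": "Error"}'); return; }
    gpio.write(pinId, value, function (err) {
      if (err) { res.send('{"pinId": ' + pinId + ', "status": "Error"}'); return; }
      res.send('{"pinId": ' + pinId + ', "value": ' + value + '}');
    });
  });
});

A quick test from another machine on the network would then look like:

curl -X PUT -H "Content-Type: application/json" -d '{"value": true}' "http://192.168.3.2:8080/api/gpio/pins/7"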

Show me the code

You can find the code for this project here:

git@bitbucket.org:arshimkola/raspi-gpio-demo-app.git

 

Summary

So by now we should have a platform primed to be used as an IoT device, with a simple API running. We could, of course, have done all of the above in Python.

We could also have written a scheduled service and "pushed" data from the Pi to some public API … in the next part of the series, we will talk about organising IoT sensors in a distributed network.

 

 

 

References

[1] https://scotch.io/tutorials/use-expressjs-to-get-url-and-post-parameters

[2] https://www.npmjs.com/package/rpi-gpio

[3] https://github.com/rakeshpai/pi-gpio

[4] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise

 

 

 

Raspberry Pi Setup – Part I : Boot, SSH and Headless access

Intro

Part I: documenting the workflow/steps to set up my Raspberry Pi (2 B), from installing Raspbian to installing Node, Processing etc.

Goals:

  • Enable SSH service on the Pi
  • Connect to Pi without a display or router – headless access via Mac

[Image: Raspberry Pi with GPIO ribbon cable]

Steps

  1. Install Raspbian
  2. Boot your Pi
  3. Setup SSH
  4. Connect to Mac
  5. Troubleshoot Connection to Mac
  6. Install Node
  7. Install Processing

Install Raspbian

You can download it here https://www.raspberrypi.org/downloads/raspbian/


Boot Pi

Use the Raspbian image to boot up the Pi and you should see a command shell (once you have connected your Pi to a display, of course!).

Setup SSH

Start by typing the following command in your shell:

sudo raspi-config


This launches the config tool UI; follow the screens below to enable the SSH server.

[Images: raspi-config screens for enabling the SSH server]

Test by SSH'ing into the Pi from the Pi itself:

ssh pi@localhost


Copy Network Card’s Address

Do an "ifconfig" in the Pi's shell and note down the "HWaddr" … we will use this later to find the IP assigned by the Mac.

ifconfig


Connect to Mac Headless

Next we will connect the Pi to the Mac in "headless" mode, using the Mac to assign an IP to the Pi, and then SSH into the Pi from a terminal on the Mac …

  1. Ensure your Mac is connected to the network (wifi)
  2. On the Mac, go to System Preferences -> Sharing
  3. Connect Pi to Mac via Ethernet Cable and then power it up
  4. Wait until the “green lights” on the Pi have stopped blinking
  5. Check connection using the following command
    1. See Routing Table with the following command
      netstat -rn -f inet
    2. Your Pi's IP should be in the 1st column, next to its "HWaddr" in the 2nd column
    3. If you do not see your "HWaddr" listed then there is some connection issue … go to "Troubleshoot Connection to Mac" in the next section
    4. Once you can see your IP, set up X11 forwarding with XQuartz
    5. Finally SSH in with the following command
      ssh -X pi@192.168.3.2
      1. If prompted for a password enter “raspberry” (which is your default password and you should change it at some point)
      2. You can setup passwordless login using the tutorial here https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
    6. … finally, this is what it looks like

 

Troubleshoot Connection to Mac

If you are unable to see an IP assigned to your Pi, or unable to ping it or SSH in, there could be several reasons for this:

  • Check your Ethernet cable – I was using a bad cable when connecting to the Mac and a good one when connecting to the router (when testing standalone); this cost me a lot of time!
  • Check your Mac Settings
    • I did not change the System Preferences -> Network -> Ethernet
    • The Mac by default assigns the IP 192.168.2.2 to your Pi; this may not get used if your Pi has a "static" IP setup … validate by doing the following
      • cat /etc/network/interfaces
      • ensure the interface uses "dhcp" instead of "static"
  • Check your Pi
    • while hardware issues are rare, you could have a problem there. Compare with another Pi
    • is it powered on? are the ethernet lights on when you connect the cable?

Software Install

After you are done setting up SSH / headless Pi access, you can finally start building a usable platform.

See Part II of the journey to build services over the Pi to send out and read in data!

 

 

Camunda BPM – Manual Retry

Camunda BPM is a lightweight, open-source BPM platform (see here: https://camunda.com/bpm/features).

The "Cockpit" application within Camunda is the admin dashboard where deployed processes can be viewed at a glance and details of running processes are displayed by process instance id. Clicking on a process instance id reveals runtime details while the process is running – process variables, task variables etc. If there is an error thrown from any of the services in a flow, the Cockpit application allows "retrying" the service manually after the coded automatic retries have been exhausted.

The steps below show how to retry a failed process from the Cockpit administrative console

Manual Retry

From the Process Detail Screen select a failed process (click on the GUID link)

Then, on the right-hand side under "Runtime", you should see a "semi-circle with arrow" icon indicating "recycle" or "retry"; click on this.

This launches a pop-up with checkboxes listing the IDs of the failed service tasks and the option to replay them. Notice these are only the tasks for "this" instance of the "flow".

After "Retry" is clicked, another pop-up indicates the status of the action (as in "was the engine able to process the retry request"; the actual services may have failed again).
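The retry can also be scripted instead of clicked through the Cockpit. As far as I recall, Camunda 7 exposes a REST endpoint for resetting a job's retries; a hedged sketch follows, where the host, base path and job id are placeholders for your installation:

curl -X PUT -H "Content-Type: application/json" \
  -d '{"retries": 3}' \
  "http://localhost:8080/engine-rest/job/<failed-job-id>/retries"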

 

Java Application Memory Usage and Analysis

The Java Virtual Machine (JVM) runs standalone applications and many key enterprise applications, like monolithic application servers, API Gateways and microservices. Understanding and tuning an application begins with understanding the technology running it. Here is a quick overview of JVM memory management.

JVM Memory:

  • Stack and Heap form the memory used by a Java Application
  • The execution thread uses the Stack – it starts with the ‘main’ function and the functions it calls along with primitives they create and the references to objects in these functions
  • All the objects live in the Heap – the heap is bigger
  • Stack memory management (cleaning old unused stuff) is done using a Last In First Out (LIFO) strategy
  • Heap memory management is more complex since this is where objects are created and cleaning them up requires care
  • You use command line (CLI) arguments to control sizes and algorithms for managing the Java Memory

Java Memory Management:

  • Memory management is the process of cleaning up unused items in Stack and Heap
  • Not cleaning up will eventually halt the application as the fixed memory is used up and an out-of-memory or a stack-overflow exception occurs
  • The process of cleaning JVM memory is called “Garbage Collection” a.k.a “GC”
  • The Stack is managed using a simple LIFO strategy
  • The Heap is managed using one or more algorithms which can be specified by Command Line arguments
  • The heap collection algorithms include Serial, ParNew, Parallel Scavenge, CMS, Serial Old (MSC), Parallel Old and G1GC

I like to use the "fridge" analogy. Think about how leftovers go into the fridge and how last week's leftovers, fresh veggies and that weird stuff from long, long ago get cleaned out … what is your strategy? JVM garbage collection algorithms follow a similar concept while working with a few more constraints (how do you clean the fridge while your roommate/partner/husband/wife is taking food out?).

 

Java Heap Memory:

  • The GC Algorithms divide the heap into age based partitions
  • The process of cleanup or collection uses age as a factor
  • The two main partitions are “Young or New Generation” and “Old or Tenured Generation” space
  • The "Young or New Generation" space is further divided into an Eden space and 2 Survivor partitions
  • Each survivor partition is further divided into multiple sections based on command line arguments
  • We can control the GC Algorithm performance based on our use-case and performance requirements through command line arguments that specify the size of the Young, Old and Survivor spaces.

For example, consider applications that do not have long-lived objects – stateless web applications, API Gateway products etc. These need a larger Young or New Generation space with a strategy to age objects slowly (long tenuring). They would use either the ParNew or CMS collector for garbage collection; if there are more than 2 CPU cores available (i.e. extra threads for the collector), the application can benefit more from the CMS algorithm.
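As a hedged illustration only (the heap sizes and the jar name are placeholders, not recommendations), such a setup could be expressed with standard HotSpot flags along these lines:

java -Xms2g -Xmx2g -Xmn1g \
     -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15 \
     -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
     -jar my-api-gateway.jar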

The picture below gives a view of how the heap section of the Java application's memory is divided. There are partitions based on the age of "objects", and stuff that is very old and unused eventually gets cleaned up.

[Image: JVM heap partitions – Young (Eden and Survivor) and Old generations]

The heap and stack memory can be viewed at runtime using tools like Oracle's JRockit Mission Control (now part of the JDK); we can also do very long-term analysis of memory and the garbage collection process using Garbage Collection (GC) logs and free tools to parse them.

One of the key resources to analyse in the JVM is memory usage and garbage collection. The process of cleaning unused objects from JVM memory is called "Garbage Collection (GC)"; details of how this works are provided here: Java Garbage Collectors
Tools:

Oracle JRockit Mission Control is now free with the JDK!

  • OSX : “/Library/Java/JavaVirtualMachines/{JDK}/Contents/Home/bin/”
  • Windows: “JDK_HOME/bin/”

The GC Viewer tool can be downloaded from here: https://github.com/chewiebug/GCViewer

Memory Analysis Tools:

  • Runtime memory analysis with Oracle JRockit Mission Control (JRMC)
  • Garbage Collection (GC) logs and “GC Viewer Tool” analyser tool

Oracle JRockit Mission Control (JRMC)

  • Available for SPARC, Linux and Windows only
  • Download here: http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-jrockit-2192437.html
  • Usage Examples:
  • High Level CPU  and Heap usage, along with details on memory Fragmentation
  • Detailed view of Heap – Tenured, Young Eden, Young Survivor etc
  • The “Flight Recorder” tool helps start a recording session over a duration and is used for deep analysis at a later time
  • Thread Level Socket Read/Write details
  • GC Pause – Young and Tenured map
  • Detailed Thread analysis

 

Garbage Collection log analysis

  1. Problem: Often it is not possible to run a profiler at runtime because
    1. Running on the same host uses up resources
    2. Running on a remote resource produces JMX connectivity issues due to firewall etc
  2. Solution:
    1. Enable GC Logging using the following JVM command line arguments
      -XX:+PrintGCDetails
      -XX:+PrintGCDateStamps
      -XX:+PrintTenuringDistribution
      -Xloggc:%GC_LOG_HOME%/logs/gc.log
    2. Ensure the GC_LOG_HOME is not on a remote host (i.e. no overhead in writing GC logs)
    3. Analyse logs using a tool
      1. Example: GC Viewer https://github.com/chewiebug/GCViewer
  3. Using the GC Viewer Tool to analyse GC logs
    1. Download the tool
      1. https://github.com/chewiebug/GCViewer
    2. Import a GC log file
    3. View Summary
      1. Gives a summary of the Garbage Collection
      2. Total number of concurrent collections, minor and major collections
      3. Usage Scenario:
        "Your application runs fine but occasionally you see slowed performance … it is possible that a serial (stop-the-world) garbage collection is running, which stops all processing (threads) and cleans memory. Use the GC Viewer summary view to look for long pauses in the Full GC Pause figures."
    4. View Detailed Heap Info
      1. Gives a view of the growing heap sizes in Young and Old generation over time (horizontal axis)
      2. Moving the slider moves time along, and the "zig-zag" patterns rise and fall with rising memory usage and clearing
      3. Vertical lines show instances when “collections” of the “garbage” take place
        1. Too many close lines indicate frequent collection (and interruption to the application if collections are not parallel) – bad, this means frequent minor collections
        2. Tall and Thick lines indicate longer collection (usually Full GC) – bad, this means longer full GC pauses

 

API Performance Testing with Gatling

The Problem

“Our API Gateway is falling over and we expect a 6-fold increase in our client base and a 10-fold increase in requests, our backend service is scaling and performing well. Please help us understand and fix the API Gateway” 

Tasks

It was pretty clear we had to run a series of performance tests simulating the current and new user load, then apply fixes to the product (the API Gateway), rinse and repeat until we had met the client's objectives.

So the key tasks were:

  1. Do some Baseline tests to compare the API Gateway Proxy to the Backend API
  2. Use tools to monitor the API Gateway under load
  3. Tune the API Gateway to Scale under Increasing Load
  4. Repeat until finely tuned

My experience with API performance testing was limited (Java Clients) so I reached out to my colleagues and got a leg up from our co-founder (smart techie) who created a simple “Maven-Gatling-Scala Project” in Eclipse to enable me to run these tests.

Gatling + Eclipse + Maven

  • What is Gatling? Read more here: http://gatling.io/docs/2.1.7/quickstart.html
  • Creating an Eclipse Project Structure
    • Create a Maven Project
    • Create a Scala Test under /src/test/scala
    • Create gatling.conf, recorder.conf and a logback.xml under /src/test/resources
    • Begin writing your Performance Test Scenario
  • How do we create Performance Test Scenarios?
    • Engine.scala – Must be a Scala App extension
    • Recorder.scala – Must be a Scala App
    • Test – Must extend Gatling Simulation, we created an AbstractClass and extended it in our test cases so we can reuse some common variables
    • gatling.conf – This file contains important configuration which determines where the test results go, how to make the HTTP calls etc. See more details here: http://gatling.io/docs/2.0.0-RC2/general/configuration.html

Here is a screenshot of my workspace

[Image: Eclipse workspace for the Maven/Gatling/Scala project]

Executing the Gatling Scala Test

  • Use the Maven Gatling Plugin; see https://github.com/gatling/gatling-maven-plugin-demo
  • Make sure your pom file has the gatling-maven-plugin artifact as shown in the screen shot below
  • Use the option “-DSimulationClass” to specify the Test you want to run (For Example: -DSimulationClass=MyAPIOneTest)
  • Use the option “-Dgatling.core.outputDirectoryBaseName” to output your reports to this folder
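Putting those two options together, a run from the command line might look like the following (assuming the pom wires these properties through to the gatling-maven-plugin; the output folder name is a placeholder):

mvn gatling:execute -DSimulationClass=MyAPIOneTest -Dgatling.core.outputDirectoryBaseName=target/gatling-reports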

[Image: pom.xml showing the gatling-maven-plugin configuration]


Creating A Performance Test Scenario

Our starting point for any technical analysis of the API Gateway was to study what it did under varying load and build a baseline. This had to be done by invoking a few of the APIs during the load to simulate varying requests per second (for example, one API is invoked every 5 seconds while another is invoked every 10 seconds).

After reading the documentation for Gatling and trying out a couple of simple tests, we were able to replicate our requirement of 'n' users by building 2 scenarios executing HTTP invocations against two different API calls, with different 'pause' times calculated from the 'once every x seconds' requirement. Each scenario took 'n/2' users to simulate the 'agents'/'connections' making different calls, and the scenarios were run in parallel.

  • We used a simple spreadsheet to put down our test ranges, write down what we will vary, what we will keep constant and our expectations for the total number of Requests during the Test duration and expectations for the Requests Per Second we expect Gatling to achieve per test
  • We used the first few rows on the spreadsheet to do a dry run and then matched the Gatling Report results to our expectations in the spreadsheet and confirmed we were on the right track – that is, as we increased the user base our number for RPS and how the APIs were invoked matched our expectation
  • We then coded the “mvn gatling:execute” calls into a run script which would take in User count, Test Duration and Hold times as arguments and also coded a Test Suite Run Script to run the first script using values from the Spreadsheet Rows
  • We assimilated the Gatling reports into a reports folder and served them via a simple HTTP server
  • We did extensive analysis of the Baseline Results, monitored our target product using tools (VisualVM, Oracle JRockit), did various tuning (that is another blog post) and re-ran the tests until we were able to see the product scale better under increasing load

Performance Test Setup

The setup has 3 AWS instances created to replicate the Backend API and the API Gateway and to host the performance tester (the Gatling code). We also use Node.js to serve the Gatling reports through a static folder mapping; that is, during each run the Gatling reports are generated into a 'reports' folder mapped under the 'public' folder of a simple HTTP server written with the Node/Express framework.

Perf Test Setup

API Performance Testing Toolkit

The Scala code for the Gatling tests is as follows – notice that "testScenario" and "testScenario2" pause for 3.40 seconds and 6.80 seconds respectively, and the Gatling setup runs these two scenarios in parallel. See http://gatling.io/docs/2.0.0/general/simulation_setup.html

[Image: Gatling simulation Scala code with the two scenarios]

Results

  • The 2-user tests reveal we coded the scenario correctly and calls are made to the different APIs at the expected intervals, as shown below

[Images: Gatling results for the 2-user test]

  • As we build out our scenarios, we watch the users ramp up as expected, and performance of the backend and gateway degrades as expected with more and more users
    • Baseline testing


    • Before tuning


  • Finally, we use this information to tune the components (change the garbage collector, add a load balancer etc.) and retest until we meet and exceed the desired results
    • After tuning


Conclusion

  • Scala/Gatling

Oracle SOA Suite 11g BPEL – FTP Adapter: What’s my filename?

I was writing an FTP adapter for a client recently, for a legacy integration project, when a couple of requirements came up:

1) When reading the file from a remote location, the client wanted to use the filename as a data element.

2) When writing the file to a remote location, the client wanted the filename to be based on one of the elements from the inbound data (in this case a primary key from an Oracle table).

 

Part I: Reading the filename from the inbound FTP Adapter

The solution, in short, is this: when you create the FTP Adapter Receive, go to the properties and assign jca.ftp.FileName to a variable. For example, I created a simple String variable in my BPEL process called "FileName" and then assigned jca.ftp.FileName to the "FileName" BPEL variable. The end result was this:

<receive name="ReceiveFTPFile" createInstance="yes"
         variable="FTPAdapterOutput"
         partnerLink="ReadFileFromRemoteLocation"
         portType="ns1:Get_ptt" operation="Get">
  <bpelx:property name="jca.ftp.FileName" variable="FileName"/>
</receive>

 

Here’s a visual guide on how to do this:

Create a Variable

 

Assign the Variable to the jca.ftp.FileName property on the Receive …

 

Part II: Assigning a special element name instead of YYYY-MM-DD etc for FTP Outbound filename:

You can use this same process, as shown above, in the outbound FTP Adapter. That is, read the value from the element you want the filename to be (either create a new String BPEL variable or reuse something in your schema) and assign it to the Invoke property's jca.ftp.FileName.

 

BPEL Error with Receive/Pick

Error: “Error(81): There is not an activity (receive/pick) to start the process”

Fix:  Check the “Create Instance” checkbox on your Receive or Pick activity.

 

When do you see these errors?

When you create a BPEL process and remove the default Receive/Reply components to receive/pick events from a queue or an FTP adapter for example.

For example: I have a BPEL flow below with an FTP adapter which receives a file and calls out to a Java/Spring Bean (to parse the file etc)

 

Oracle SOA Suite 11g – Configure Resource Adapters on Weblogic Server [AQAdapter]

AQAdapters

AQ is Oracle's Advanced Queuing, a database-backed channel. We use AQ queues a lot when doing integration projects, and it always helps to have a local install of SOA Suite with AQ capabilities (i.e. your own DB with AQ queues etc.).

It was hard to find any documentation on configuring adapters for Oracle SOA Suite on a Weblogic server, so I thought I would put together a little doco explaining how I configured this. It is the same for an Apps Adapter config. Initially this looked a bit different from the old OC4J way of configuring adapters, but it really is not all that different.

Weblogic requires a "weblogic-ra.xml" along with the "ra.xml" file in the "META-INF" folder of the RAR file for the adapter. The trickiest part is getting the web console to apply changes … what I mean is that initially I tried to "Update" an existing "Deployment" of AQAdapter from the Weblogic Admin Console and it blew up … later I found out it was because the AQAdapter was packaged up in a RAR file (and not exploded on the filesystem); as a result my changes from the console were not making it through.

The steps below show how I extracted AQAdapter.rar into an AQAdapter folder I created under the $SOA_HOME/soa/connectors/ folder. You can use these steps to configure any adapter (I have personally tested the Oracle Apps Adapter – screenshots later).

Before we begin though, read through Oracle’s documentation on AQ and how to create queues and configure etc

Oracle AQ Documentation: http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10231/adptr_aq.htm

Oracle AQ Adapter Properties: http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10231/adptr_propertys.htm#CIHCHGJJ

Here is a good post that explain how to create a user and then create an AQ Queue: http://ora-soa.blogspot.com/2008/06/steps-to-create-aq-queuetopic.html

Steps for Creating an AQ Producer/Consumer and configuring the AQ Adapter:

Let's start from JDeveloper: I created a simple composite that uses an AQAdapter to enqueue an XML message.

Have a look at the JCA properties for the queue; the JNDI name can be changed to anything you like but must be consistent with what you configure later. For example, I use "eis/AQ/MyAQDatasource" in this example to match the name of the datasource I will be using. Because the AQ queue is database-backed, the JNDI for the AQ queue refers to a configuration that contains the JNDI for an XA datasource. In my example it is "jdbc/MyAQDatasource".

Steps:

First configure the Adapter weblogic-ra.xml

Go to your Weblogic server host's file system and navigate to the connectors directory [$SOA_HOME/soa/connectors/ … in my case it was "g:\Oracle\mw_10.3.5\Oracle_SOA1\soa\connectors\"]

Back-up your AqAdapter.rar

Create a AqAdapter directory and copy the original AqAqapter.rar here

Extract the AqAdapter.rar file in the new directory by doing “jar xf AqAdapter.rar”

Remove the AqAdapter.rar file (so that your directory now has AqAdapter.jar and META-INF folder)


Navigate to the META-INF folder and add your “connection-instance” to the “Weblogic-ra.xml” … basically you are saying the “eis/AQ/MyAQDatasource” JNDI name is configured to have these properties (this is where you put in your XADatasource JNDI)

Add your changes based on what you have in JDeveloper

Save the Weblogic-ra.xml


To configure the AQ Adapter in Weblogic Admin Console

Start the Admin Console

Goto the Deployments

Locate and Uninstall/Delete the “AqAdapter” deployment (don’t worry it will not remove it from your filesystem)

Install the new AQAdapter as shown below. Note, the "Deployment Plan" is not created right away … there is a trick to creating it. After the initial Install wizard, you need to go to your adapter configuration, first hit Enter on a property in the configuration and then click the SAVE button (see the red comments in the images below).

This is the tricky part … click on a property, then press the “ENTER” key, then click on the “SAVE” button
Make sure your DataSource Exists

 

Here is a screen shot explaining how the Datasource is used ….