Better Digital Products using Domain Oriented APIs: The Shopping Mall Metaphor

APIs are abstractions over technical services. Good APIs mirror the strategic thinking of an organisation and lead to a better customer experience by enabling a high degree of connectivity through secure mechanisms.

Too much focus goes into protocols and semantics in the pursuit of good API design, and too little into business objectives. Not enough questions are asked early on, and the focus is always on system-to-system integration. I believe that thinking about what a business does, and aligning services to that, leads us to product-centric thinking and reusable services.

History
As an ardent student of software design and engineering principles, I have long been keen on Domain-Driven Design (DDD) and have had the opportunity to apply its principles in an enterprise business context, building reusable and decoupled microservices. I believe the best way to share this experience is through a metaphor, so I use a “shopping mall” with its “shops” to represent a large enterprise with multiple lines of business and teams.

Like all metaphors, mine breaks down beyond a point, but it helps in reasoning about domains, bounded contexts, APIs, events and microservices. This post does not offer a dogmatic point of view or a “how-to” guide; rather, it aims to help you identify key considerations when designing solutions for an enterprise, whether up front or during a project.

I have been designing APIs and microservices in the health and insurance domains, across multiple lines of business and varying contexts, for the past five to eight years. Through this period I have seen architects (especially those without integration domain knowledge) struggle to deliver strategic, product-centric, business-friendly APIs. The solutions handed to us always dealt with an “enterprise integration” context, with little to no consideration for future “digital” contexts, leading to brittle, coupled services and frustration from business teams over the cost of integration (I reckon this is why IT transformation is hard).

This realisation led me to question some of our solution architecture practices and to support them through a better understanding and application of domain modelling and DDD (especially strategic DDD). Through this practice, I was able to design and deliver platforms for our client that were reusable and yet not coupled.


Domain Queries 

In one implementation, my team delivered around 400 APIs, and two years on the client has been able to make continuous changes and add new features without compromising the overall integrity of the connected systems or their data.

Through my journey with DDD in the enterprise, I discovered some fundamental rules about applying these software design principles in a broader enterprise context. But first we had to step into our customer’s shoes and ask some fundamental questions about their business and the way they function.

The objective is to understand the key aspects of the API ecosystem you are designing for. Below are some of the questions you need to answer through your domain queries:

  • What are your top-level resources leading to a product-centric design? (A sketch follows this list.)
  • When do you decide what they are? Way up front, or in a project scrum?
  • What are the interactions between these domain services?
  • How are the quality and integrity of your data impacted by your design choices?
  • How do you measure all of this “integration entropy” – the complexity introduced by your integration choices between systems?
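To make the first question concrete, here is a minimal sketch of what a product-centric, top-level resource can look like. It assumes a hypothetical insurance line of business; the resource name, fields and values are illustrative only and not taken from any real implementation.

```java
// Illustrative sketch: "Policy" as a product-centric, top-level API resource for a
// hypothetical insurance line of business. Names and fields are assumptions.
// Standard JAX-RS annotations are used; a JAX-RS runtime is assumed to host the class.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/policies")                      // named after the business product, not the backend system
@Produces(MediaType.APPLICATION_JSON)
public class PolicyResource {

    @GET
    @Path("/{policyId}")
    public Policy getPolicy(@PathParam("policyId") String policyId) {
        // The consumer sees a domain resource; the policy-admin system behind it can change freely.
        Policy policy = new Policy();
        policy.policyId = policyId;
        policy.productCode = "HOME-01"; // placeholder value
        return policy;
    }

    public static class Policy {
        public String policyId;
        public String productCode;
    }
}
```

The point of the sketch is the naming: the resource is the business product (“policies”), not the system of record, which keeps the API stable as backend systems change.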

The Shopping Mall example

Imagine being asked to implement the IT system for a large shopping complex or shopping mall. The complex has a lot of shops that want to use the system for showing product information, selling products, shipping them and so on.

There are functions common to all the shops, but with nuanced differences in the information they capture. For example, the coffee shop does its “customer management” through its staff, while the big clothing retailer sells its own rewards points and stores customers’ clothing preferences, and the electronics retailer does its customer management through its own points system.

You have to design the core domains of the mall’s IT system to provide services the businesses can use (and reuse) in their shops, and do so while being able to change aspects of one shop or business without impacting the others.
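Reading the example above in DDD terms, “Customer” is a polysemous term: each shop type gets its own model inside its own bounded context rather than sharing one “enterprise Customer”. The sketch below is illustrative only; package names and fields are assumptions.

```java
// Illustrative sketch: the same business term ("Customer") modelled differently in two
// bounded contexts. Package names (shown as comments) and fields are assumptions.

// Bounded context: coffee shop  (package mall.coffeeshop.customer)
class CoffeeShopCustomer {
    String name;
    String favouriteOrder;          // all the coffee shop cares about
}

// Bounded context: clothing retailer  (package mall.clothing.customer)
class ClothingCustomer {
    String name;
    int rewardsPoints;              // the retailer's own rewards scheme
    String clothingSizePreference;
}

// Integration between the contexts happens through published APIs or events,
// not by sharing one "enterprise Customer" table or class.
```

Because each context owns its own model, the clothing retailer can add fields to its customer without a ripple effect on the coffee shop.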

Asking Domain and Context questions

  • What are your top-level “domains”, so that you can build APIs to link the point-of-sale (POS), CRM, shipping and other systems?
  • Where do you draw the line? Is a service shared by all businesses, shared by businesses of a certain type, or not shared at all?
  • Bounded contexts: what contexts do you see as the businesses go about their business?
  • APIs or events? How do you share information across the networked systems to achieve an optimal flow of information while providing the best customer experience? In a networked system, do you pick consistency or availability? (A sketch of this trade-off follows the list.)
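For the last question, here is a hedged illustration of the trade-off. A synchronous API call gives the caller an answer now but couples it to the provider being available, while a published event lets other contexts react when they can, favouring availability over immediate consistency. The interfaces and names below are assumptions for illustration only.

```java
// Illustrative sketch of the two interaction styles; names are assumptions.

// Style 1: synchronous API call - the POS waits for shipping to answer, so it is
// coupled to the shipping service being up at that moment.
interface ShippingApi {
    ShipmentQuote quoteShipment(String orderId);
}

// Style 2: event - the POS publishes a fact and moves on; shipping (and any other
// interested context) consumes it later. Availability over immediate consistency.
interface EventPublisher {
    void publish(OrderPaidEvent event);
}

record OrderPaidEvent(String orderId, String shopId) {}
record ShipmentQuote(String orderId, double cost) {}
```

Neither style is “right”; the choice per interaction is exactly what the domain and bounded-context analysis is meant to inform.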

Summary:

Through my journey with DDD in the enterprise, I discovered some fundamental rules about applying these software design principles in a broader enterprise context. I found it useful to apply the shopping mall metaphor to a business enterprise when designing system integrations.

It is important to understand the core business lines, capabilities (current and target state), business products, business teams and terminology, then analyse any polysemy across domains and within domain contexts, leading to the domains, contexts and interactions you build.

We then use this analysis to design our solution with APIs, events and microservices, maximising reuse and avoiding crippling coupling.

How did we get to Microservices?

If you have struggled with decisions when designing APIs or microservices, it is best to take a step back and look at how we got here. It not only renews our appreciation for the rapid changes we have seen over the past 10-20 years but also puts into perspective why we do what we do.

I still believe a lot of us cling to old ways of thinking when the world has moved on, or is moving faster than we can process. APIs and microservices are not just new vendor-driven fads; they are key techniques, processes and practices for businesses to survive in a rapidly evolving ecosystem. Without an API strategy, for example, your business might not be able to provide the services to consumers that your competition can, or with the quality of service we have come to expect (in real time).

So, with that in mind, let us take a step back and look at the evolution of technology within the enterprise, remembering that it aligns with business strategy.

A long time ago

There were monolithic applications: mainframe systems, for example, processing Orders, Pricing, Shipments and so on. You still see this kind of monolith written within startups because it makes sense – no network calls, just inter-process communication, and if you can join a bunch of tables you can make a “broker” happy with a mashup of data.

[Figure 1: First, there was just the legacy monolith application]

Circa 2000s: the MVC app world

There was no ESB yet. We were exploring the JVM and Java for enterprise application development; JSF was new and the EJB framework was still deciding what type of beans to use. Data flowed via custom connectors from the mainframe to these Java applications, which cached it and allowed the information to be viewed and queried.

There were also functional foundation applications emerging for enterprise logging, business rules, pricing, rating, identity and so on, and data standards were often vague. EAI patterns were being observed but not standardised, and we were more focused on individual service design patterns and the MVC model.

[Figure 2: Then we built lots of MVC applications alongside the legacy monolith; integration was point-to-point]

Services and Service-Oriented Architecture

The next wave began when the number of in-house custom applications started exploding and there was a need for data standardisation: a common language to describe enterprise objects, and decoupled services with standard requests and responses.

Some organisations started developing their own XML-based engines around message queues and JMS standards, while others adopted the early service bus products from vendors.

Thus Service-Oriented Architecture (SOA) was born, with lofty goals: build canonical enterprise data models, reduce point-to-point services (Java applications had a build-time dependency on the services they consumed, other Java services), add standardised security, build a service registry and so on.

We also saw general adoption of and awareness around EAI patterns – we finally understood what a network does to consistency models, and the choice between availability and consistency under a partition. Basically, things already known to those with a computer science background in distributed computing or collective communication on parallel computing clusters.

One key observation is that the vendor products supporting SOA were runtime monoliths in their own right. Each was a single product (a J2EE EAR) running on one or more application servers, with a single database for stateful processes and so on. The web services we developed on top of this product were mere XML configuration executed by one giant application.

Also, the core concerns were “service virtualisation” and “message-based routing” – a purely stateless, transformation-only concept. This worked best when coupled with an in-house practice of building custom services, and failed where there was none and the SOA product had to simply transform and route (i.e. it did not solve problems by itself as an integration layer).

[Figure 3: We started to make integration standardised and flexible; it succeeded within the enterprise but failed to scale for the digital world. Not ready for mobile or cloud]

API and Microservices era

While the SOA phase helped us move away from the ugly file-based integrations of the past and really supercharged enterprise application integration, it failed miserably in the digital customer domain. The SOA solutions were not built to scale and were not built for the web, while the web was scaling and getting jazzier by the day; people were expecting more self-service portals, and XML parsing was eating into response times!

Those of us who were lucky enough to let go of the earlier dogma (vendor Kool-Aid) around the “web services” we were building started realising there was nothing webby about them. After a few failed attempts at getting clunky web portals working, we realised that the SOA way of serving information was not suited to this class of problems and we needed something better.

We have come full circle, back to custom build teams and custom services for foundation tasks and abstractions over end systems – we call these “microservices” and build them not for the MVC architecture but as pure services. These services speak HTTP natively as the language of the web, without the custom standards SOAP had introduced earlier, and use the representational state transfer (REST) style to align with hypermedia best practices; we call them web APIs and standardise on JSON as the data format (instead of XML).
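As a hedged sketch of what “speaking HTTP natively and returning JSON” looks like, here is a tiny service using only the JDK’s built-in HTTP server; the /orders resource and the payload are illustrative, not from any real system.

```java
// Minimal sketch of a JSON-over-HTTP microservice using only the JDK's built-in
// HTTP server. The /orders resource and payload are illustrative assumptions.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class OrderServiceSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/orders", exchange -> {
            // A real service would look the order up in its own data store.
            String json = "{\"orderId\":\"42\",\"status\":\"SHIPPED\"}";
            byte[] body = json.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // GET http://localhost:8080/orders now returns JSON
    }
}
```

No SOAP envelope, no generated stubs: plain HTTP verbs, resource paths and JSON, which is what makes these services consumable by web and mobile clients.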

[Figure 4: Microservices, DevOps, APIs early on – it was on-prem and scalable]

The API and microservices era comes with changes in how we organise (DevOps), changes in where we host our services (scalable platforms on-premises or available as-a-service) and a fresh look at integration patterns (CQRS, streaming, caching, BFF etc.). The runtime for these new microservices-based integration applications is now broken into smaller chunks, as there is no centralised bus 🚎
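As one hedged example of the patterns named above, a backend-for-frontend (BFF) is simply a small service that aggregates a few domain APIs into the shape one channel needs. The service names and types below are assumptions for illustration.

```java
// Illustrative BFF sketch: aggregate two hypothetical domain APIs into the single
// response a mobile app wants. Interface names and types are assumptions.
import java.util.List;

interface CustomerApi {
    CustomerSummary getCustomer(String customerId);
}

interface OrderApi {
    List<OrderSummary> recentOrders(String customerId);
}

record CustomerSummary(String customerId, String name) {}
record OrderSummary(String orderId, String status) {}
record MobileHomeScreen(CustomerSummary customer, List<OrderSummary> recentOrders) {}

class MobileBff {
    private final CustomerApi customers;
    private final OrderApi orders;

    MobileBff(CustomerApi customers, OrderApi orders) {
        this.customers = customers;
        this.orders = orders;
    }

    // One call for the mobile channel instead of several chatty calls from the device.
    MobileHomeScreen homeScreen(String customerId) {
        return new MobileHomeScreen(customers.getCustomer(customerId),
                                    orders.recentOrders(customerId));
    }
}
```

The BFF stays thin: it composes domain APIs for one channel and owns no business data of its own.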

[Figure 5: Microservices, DevOps, APIs on externalised highly scalable platforms (cloud PaaS)]

Recap

Enterprise systems have evolved over time from depending on one thing that did everything, to multiple in-house systems, to a mix of in-house and cloud-based services. The theme has been a gradual move from a singular application, to a network-partitioned landscape of systems, to an ecosystem of modular, value-based services.

Microservices serve traditional integration needs between enterprise systems, but more importantly they enable organisations to connect to clients and services on the web (cloud) in a scalable and secure manner – something SOA products failed to do (since they were built for the enterprise context only). APIs enable microservices to communicate with service consumers and providers in a standard format, and they bring with them best practices such as contract-driven development, policies and caching that make developing and operating them at scale easier.

Oracle SOA Suite 11g – Configure Resource Adapters on Weblogic Server [AQAdapter]

AQAdapters

AQ is Oracle’s Advanced Queuing – a database-backed channel. We use AQ queues a lot on integration projects, and it always helps to have a local install of SOA Suite with AQ capabilities (i.e. your own database with AQ queues etc.).

It was hard to find any documentation on configuring adapters for Oracle SOA Suite on a Weblogic server, so I thought I would put together a little doco explaining how I configured this. It is the same for an Apps Adapter config. Initially this looked a bit different from the old OC4J way of configuring adapters, but it really is not all that different.

Weblogic requires a “weblogic-ra.xml” along with the “ra.xml” file in the “META-INF” folder of the adapter’s RAR file. The trickiest part is getting the web console to apply changes: initially I tried to “Update” an existing “Deployment” of the AQAdapter from the Weblogic Admin Console and it blew up. I later found out this was because the AQAdapter was packaged up in a RAR file (and not exploded on the filesystem), and as a result my changes from the console were not making it through.

The steps below show how I extracted AqAdapter.rar into an AqAdapter folder I created under the $SOA_HOME/soa/connectors/ folder. You can use these steps to configure any adapter (I have personally tested the Oracle Apps Adapter – screenshots later).

Before we begin, though, read through Oracle’s documentation on AQ and how to create and configure queues.

Oracle AQ Documentation: http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10231/adptr_aq.htm

Oracle AQ Adapter Properties: http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10231/adptr_propertys.htm#CIHCHGJJ

Here is a good post that explains how to create a user and then create an AQ queue: http://ora-soa.blogspot.com/2008/06/steps-to-create-aq-queuetopic.html

Steps for Creating an AQ Producer/Consumer and configuring the AQ Adapter:

Let’s start from JDeveloper. I created a simple composite that uses an AQAdapter to enqueue an XML message.

Have a look at the JCA properties for the queue. The JNDI name can be changed to anything you like, but it must be consistent with what you configure later. For example, I use “eis/AQ/MyAQDatasource” in this example to match the name of the datasource I will be using. Because the AQ queue is database-backed, the JNDI for the AQ queue refers to a configuration that contains the JNDI for an XA datasource; in my example it is “jdbc/MyAQDatasource”.

Steps:

First, configure the adapter’s weblogic-ra.xml.

Go to your Weblogic server host’s file system and navigate to the connectors directory [$SOA_HOME/soa/connectors/ – in my case it was “g:\Oracle\mw_10.3.5\Oracle_SOA1\soa\connectors\”].

Back up your AqAdapter.rar.

Create an AqAdapter directory and copy the original AqAdapter.rar there.

Extract the AqAdapter.rar file in the new directory by running “jar xf AqAdapter.rar”.

Remove the AqAdapter.rar file (so that your directory now contains AqAdapter.jar and the META-INF folder).


Navigate to the META-INF folder and add your “connection-instance” to the weblogic-ra.xml. Essentially, you are saying that the “eis/AQ/MyAQDatasource” JNDI name is configured with these properties (this is where you put in your XA datasource JNDI); a sketch of such an entry follows these steps.

Add your changes based on what you have in JDeveloper.

Save the weblogic-ra.xml.
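For reference, the connection-instance entry looks roughly like the sketch below. Treat it as a guide only: check the exact element names and connection-factory property names against the entries already present in the AqAdapter’s shipped weblogic-ra.xml; the JNDI names here match the example in this post, and XADataSourceName is my assumption of the relevant AQ adapter property name.

```xml
<!-- Sketch only: verify element and property names against the weblogic-ra.xml
     shipped inside AqAdapter.rar. This fragment sits inside the existing
     outbound-resource-adapter / connection-definition-group section. -->
<connection-instance>
  <jndi-name>eis/AQ/MyAQDatasource</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>XADataSourceName</name>
        <value>jdbc/MyAQDatasource</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>
```

The key point is the mapping: the adapter-level JNDI name your composite references (eis/AQ/MyAQDatasource) resolves here to the XA datasource JNDI (jdbc/MyAQDatasource) you configure on the server.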


To configure the AQ Adapter in Weblogic Admin Console

Start the Admin Console

Go to Deployments.

Locate and Uninstall/Delete the “AqAdapter” deployment (don’t worry, this will not remove it from your filesystem).

Install the new AqAdapter as shown below. Note that the “Deployment Plan” is not created right away; there is a trick to creating it. After the initial Install wizard, you need to go to your adapter configuration, first hit Enter on a property in the configuration and then click the SAVE button (see the red comments in the images below).

This is the tricky part: click on a property, then press the ENTER key, then click on the SAVE button.
Make sure your DataSource Exists

 

Here is a screenshot explaining how the datasource is used.



Enterprise Integration – Using Heterogeneous Namespace Design For Messaging Schema

When integrating with legacy systems, especially ones that rely on flat files, it is often the case that there is no XSD definition that can be used in BPEL/ESB processes. This happened recently when I was using Oracle’s AIA framework to build Application Business Connector Services (ABCS) for a legacy system that has a file-poll-based integration.

The very first step, after developing the EAI picture for the client’s ERP-to-legacy-system integration, was to begin hashing out the data mapping and business logic details in the ABCSs. I used Oracle JDeveloper to build the schemas and used namespace standards, as shown below, for organising the ERP and legacy system entity schemas and the schemas used to do request/reply on these entities.

Let’s take, for example, the Order entity in Legacy System 1. The endpoint expects a list of Orders (for milk runs), and the ABCS takes a request that has a list of Orders, creates the file that represents the endpoint’s data file, and finally uses a File Adapter to put the file there.

I have shown below how to create the schema for the Order entity (OrderType) and how to wrap it in an Order request type. Due to time constraints, I will simply upload the images now and come back to this post to detail the steps.
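Until the detailed steps are added, here is a hedged sketch of the heterogeneous namespace idea: the legacy Order entity lives in its own target namespace, and a separate request schema imports that namespace and wraps a list of Orders. The namespace URIs, file names and element names below are illustrative, not the client’s actual schemas.

```xml
<!-- Sketch only: namespaces, file names and element names are illustrative. -->

<!-- LegacyOrder.xsd : the entity schema, in its own target namespace -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://example.com/legacy/order/v1"
            elementFormDefault="qualified">
  <xsd:complexType name="OrderType">
    <xsd:sequence>
      <xsd:element name="OrderNumber" type="xsd:string"/>
      <xsd:element name="Quantity" type="xsd:decimal"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

<!-- LegacyOrderRequest.xsd : the message schema, in a different namespace,
     importing the entity namespace and wrapping a list of Orders -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:ord="http://example.com/legacy/order/v1"
            targetNamespace="http://example.com/legacy/order/request/v1"
            elementFormDefault="qualified">
  <xsd:import namespace="http://example.com/legacy/order/v1"
              schemaLocation="LegacyOrder.xsd"/>
  <xsd:element name="CreateOrderListRequest">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="Order" type="ord:OrderType" maxOccurs="unbounded"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>
```

Keeping the entity and message schemas in separate namespaces is what makes the design “heterogeneous”: each system’s entities keep their own namespace, and the request/reply wrappers import them rather than redefining them.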