How did we get to Microservices?

If you have struggled with decisions when designing APIs or microservices, it is best to take a step back and look at how we got here. It not only renews our appreciation for the rapid changes we have seen over the past 10-20 years but also puts into perspective why we do what we do.

I still believe a lot of us cling to old ways of thinking when the world has moved on or is moving faster than we can process. APIs and microservices are not just new vendor-driven fads; rather, they are key techniques, processes and practices for businesses to survive in a rapidly evolving ecosystem. Without an API strategy, for example, your business might not be able to provide the services to consumers that your competition can, or with the quality of service we have come to expect (in real time).

So, with that in mind, let us take a step back and look at the evolution of technology within the enterprise, remembering that this evolution aligns with business strategy.

Long time ago

There were monolithic applications, mainframe systems for example, processing Orders, Pricing, Shipments etc. You still see this kind of monolith being written within startups because it makes sense: no network calls, just inter-process communication, and if you can join a bunch of tables you can make a “broker” happy with a mashup of data.

1. First, there was just the legacy monolith application

Circa 2000s: the MVC app world

There was no ESB yet. We were exploring the JVM and Java for enterprise application development. JSF was new, and the EJB framework was still deciding what type of beans to use. Data flowed via custom connectors from the mainframe to these Java applications, which cached it and allowed the information to be viewed and queried.

We also saw functional foundation applications emerging for enterprise logging, business rules, pricing, rating, identity etc., and data standards were often vague. EAI patterns were being observed but not standardised, and we were more focused on individual service design patterns and the MVC model.

2. Then we built lots of MVC applications alongside the legacy monolith; integration was point-to-point

Services and Service-Oriented Architecture

The next wave began when the number of in-house custom applications started exploding and there was a need for data standardisation, a common language to describe enterprise objects, and de-coupled services with standard requests and responses.

Some organisations started developing their own XML-based engines around message queues and JMS standards, while others adopted the early service bus products from vendors.

Thus Service-Oriented Architecture (SOA) was born, with lofty goals: build canonical enterprise data models, reduce point-to-point services (Java applications had a build-time dependency on the services they consumed, which were other Java services), add standardised security, build a service registry etc.

We also saw general adoption of and awareness around EAI patterns – we finally understood what a network can do to consistency models and the choice between availability and consistency during a partition. Basically, the stuff already known to those with a Computer Science background working on distributed computing or collective communication in a parallel computing cluster.

One key observation is that the vendor products supporting SOA were runtime monoliths in their own right. It was a single product (a J2EE EAR) running on one or more application servers with a single database for stateful processes etc. The web services we developed on top of this product were mere XML configuration executed by one giant application.

Also, the core concerns were “service virtualisation” and “message-based routing” – a purely stateless, transformation-only concept. This worked best when coupled with an in-house practice of building custom services, and failed where there was none and the SOA product had to simply transform and route (i.e. it did not solve problems by itself as an integration layer).

3. We started to make integration standardised and flexible; it succeeded within the enterprise but failed to scale for the digital world. Not ready for mobile or cloud

API and Microservices era

While the SOA phase helped us move away from the ugly file-based integrations of the past and really supercharged enterprise application integration, it failed miserably in the digital customer domain. The SOA solutions were not built to scale and were not built for the web, while the web was scaling and getting jazzier by the day; people were expecting more self-service portals, and XML parsing was dragging response times down!

Those of us who were lucky enough to let go of the earlier dogma (vendor Kool-Aid) around the “web services” we were building started realising there was nothing webby about them. After a few failed attempts at getting clunky web portals working, we realised that the SOA way of serving information was not suited to this class of problems and we needed something better.

We have come full circle, back to custom build teams and custom services for foundation tasks and abstractions over end-systems – we call these “microservices” and build them not for the MVC architecture but as pure services. These services speak HTTP natively as the language of the web, without the custom standards SOAP introduced earlier, and use the representational state transfer (REST) style to align with hypermedia best practices; we call them web APIs and standardise on JSON as the data format (instead of XML).

4. Microservices, DevOps, APIs early on – it was on-prem and scalable

The API and microservices era comes with changes in how we organise (DevOps), where we host our services (scalable platforms on-prem or available as-a-service) and a fresh look at integration patterns (CQRS, streaming, caching, BFF etc.). The runtime for these new microservices-based integration applications is now broken into smaller chunks, as there is no centralised bus 🚎.

5. Microservices, DevOps, APIs on externalised, highly scalable platforms (cloud PaaS)

Recap

Enterprise system use has evolved over time from depending on one thing that did everything, to multiple in-house systems, to a mix of in-house and cloud-based services. The theme has been a gradual move from a singular application, to a network-partitioned landscape of systems, to an ecosystem of modular value-based services.

Microservices serve traditional integration needs between enterprise systems but, more importantly, enable organisations to connect to clients and services on the web (cloud) in a scalable and secure manner – something that SOA products failed to do (since they were built for the enterprise context only). APIs enable microservices to communicate with service consumers and providers in a standard format, and bring with them best practices such as contract-driven development, policies, caching etc. that make developing and operating them at scale easier.

Oracle SOA Suite 11g BPEL – FTP Adapter: What’s my filename?

I was writing an FTP adapter for a client recently on a legacy integration project when a couple of requirements came up:

1) When reading the file from a remote location, the client wanted to use the filename as a data element.

2) When writing the file to a remote location, the client wanted the filename to be based on one of the elements from the inbound data (in this case a Primary Key from an Oracle table).


Part I: Reading the filename from the inbound FTP Adapter

The solution, in short, is this: when you create the FTP Adapter Receive, go to its properties and assign jca.ftp.FileName to a variable. For example, I created a simple String variable in my BPEL process called “FileName” and then assigned the jca.ftp.FileName property to the “FileName” BPEL variable. The end result was this:

<receive name="ReceiveFTPFile" createInstance="yes"
         variable="FTPAdapterOutput"
         partnerLink="ReadFileFromRemoteLocation"
         portType="ns1:Get_ptt" operation="Get">
  <bpelx:property name="jca.ftp.FileName" variable="FileName"/>
</receive>


Here’s a visual guide on how to do this:

Create a Variable
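In the .bpel source, that variable declaration ends up looking something like this (a minimal sketch, assuming a plain xsd:string variable named “FileName”):

<variables>
  <!-- holds the inbound file name captured from the jca.ftp.FileName header -->
  <variable name="FileName" type="xsd:string"/>
</variables>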


Assign the Variable to the jca.ftp.FileName property on the Receive …


Part II: Assigning a specific element value instead of YYYY-MM-DD etc. for the outbound FTP filename:

You can use this same process as shown above in the outbound FTP Adapter. That is, read the value from the element you want the filename to be (either create a new String BPEL variable or reuse something in your schema) and assign it to the Invoke’s jca.ftp.FileName property.
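As a rough sketch (the partner link and variable names here are illustrative, not from a real project), the Invoke ends up carrying the jca.ftp.FileName property – on an Invoke this is set with bpelx:inputProperty rather than bpelx:property:

<invoke name="WriteFTPFile" partnerLink="WriteFileToRemoteLocation"
        portType="ns2:Put_ptt" operation="Put"
        inputVariable="FTPAdapterInput">
  <!-- the outbound file is named after the value held in the "FileName" variable -->
  <bpelx:inputProperty name="jca.ftp.FileName" variable="FileName"/>
</invoke>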


BPEL Error with Receive/Pick

Error: “Error(81): There is not an activity (receive/pick) to start the process”

Fix:  Check the “Create Instance” checkbox on your Receive or Pick activity.
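In the BPEL source this maps to the createInstance attribute on the initial Receive (or Pick); the names below are illustrative only:

<receive name="ReceiveFile" partnerLink="ReadFileFromRemoteLocation"
         portType="ns1:Get_ptt" operation="Get"
         variable="FTPAdapterOutput" createInstance="yes"/>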


When do you see these errors?

You see them when you create a BPEL process and remove the default Receive/Reply activities so that you can receive/pick events from, say, a queue or an FTP adapter.

For example, I have a BPEL flow below with an FTP adapter which receives a file and calls out to a Java/Spring Bean (to parse the file etc.).


Oracle SOA Suite 11g – Configure Resource Adapters on Weblogic Server [AQAdapter]

AQAdapters

AQ is Oracle’s Advanced Queuing – a database-backed channel. We use AQ queues a lot on integration projects, and it always helps to have a local install of SOA Suite with AQ capabilities (i.e. your own database with AQ queues etc.).

It was hard to find any documentation on configuring adapters for Oracle SOA Suite on a Weblogic server, so I thought I would put together a little doco explaining how I configured this. It is the same for an Apps Adapter config. Initially this looked a bit different from the old OC4J way of configuring adapters, but it really is not all that different.

Weblogic requires a “weblogic-ra.xml” along with the “ra.xml” file in the “META-INF” folder of the adapter’s RAR file. The trickiest part is getting the web console to apply changes. What I mean is that initially I tried to “Update” an existing “Deployment” of the AQ Adapter from the Weblogic Admin Console and it blew up; later I found out this was because the AQ Adapter was packaged up in a RAR file (and not exploded on the filesystem), and as a result my changes from the console were not making it through.

The steps below show how I extracted AqAdapter.rar into an AqAdapter folder I created under the $SOA_HOME/soa/connectors/ folder. You can use these steps to configure any adapter (I have personally tested the Oracle Apps Adapter – screenshots later).

Before we begin though, read through Oracle’s documentation on AQ and on how to create and configure queues.

Oracle AQ Documentation: http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10231/adptr_aq.htm

Oracle AQ Adapter Properties: http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10231/adptr_propertys.htm#CIHCHGJJ

Here is a good post that explains how to create a user and then create an AQ queue: http://ora-soa.blogspot.com/2008/06/steps-to-create-aq-queuetopic.html

Steps for Creating an AQ Producer/Consumer and configuring the AQ Adapter:

Let’s start from JDeveloper. I created a simple composite that uses an AQ Adapter to enqueue an XML message.

Have a look at the JCA properties for the queue: the JNDI name can be changed to anything you like but must be consistent with what you configure later. For example, I use “eis/AQ/MyAQDatasource” in this example to match the name of the datasource I will be using. Because the AQ queue is database-backed, the JNDI name for the AQ queue refers to a configuration that contains the JNDI name of an XA datasource – in my example, “jdbc/MyAQDatasource”.
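For reference, the wizard-generated JCA file looks roughly like the trimmed sketch below – the part that matters here is the connection-factory location (file and composite names are made up; the interaction-spec details the wizard generates are omitted):

<!-- EnqueueOrder_aq.jca (hypothetical, trimmed) -->
<adapter-config name="EnqueueOrder" adapter="AQ Adapter"
                xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <!-- this JNDI name must match a connection-instance in the adapter's weblogic-ra.xml -->
  <connection-factory location="eis/AQ/MyAQDatasource"/>
  <!-- endpoint-interaction / interaction-spec elements generated by the wizard omitted -->
</adapter-config>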

Steps:

First, configure the adapter’s weblogic-ra.xml

Go to your Weblogic Server host’s file system and navigate to the connectors directory [$SOA_HOME/soa/connectors/ … in my case it was “g:\Oracle\mw_10.3.5\Oracle_SOA1\soa\connectors\”]

Back up your AqAdapter.rar

Create an AqAdapter directory and copy the original AqAdapter.rar there

Extract the AqAdapter.rar file in the new directory by doing “jar xf AqAdapter.rar”

Remove the AqAdapter.rar file (so that your directory now has AqAdapter.jar and META-INF folder)


Navigate to the META-INF folder and add your “connection-instance” to the weblogic-ra.xml … basically you are saying that the “eis/AQ/MyAQDatasource” JNDI name is configured to have these properties (this is where you put in your XA datasource JNDI name) – see the sketch after these steps

Add your changes based on what you have in JDeveloper

Save the weblogic-ra.xml
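Putting that together, the connection-instance entry for this example would look roughly like the sketch below (I am assuming the XA datasource property is named XADataSourceName – confirm the exact property names against the adapter’s ra.xml or the AQ Adapter documentation linked above):

<!-- inside <outbound-resource-adapter> / <connection-definition-group> in weblogic-ra.xml -->
<connection-instance>
  <jndi-name>eis/AQ/MyAQDatasource</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>XADataSourceName</name>
        <value>jdbc/MyAQDatasource</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>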


To configure the AQ Adapter in the Weblogic Admin Console

Start the Admin Console

Go to the Deployments

Locate and Uninstall/Delete the “AqAdapter” deployment (don’t worry, this will not remove it from your filesystem)

Install the new AqAdapter as shown below. Note: the “Deployment Plan” is not created right away … there is a trick to creating it. After the initial Install wizard you need to go to your adapter configuration, first hit Enter on a property in the configuration, and then click the SAVE button (see the red comments in the images below).

This is the tricky part … click on a property, then press the “ENTER” key, then click on the “SAVE” button
Make sure your DataSource Exists


Here is a screen shot explaining how the Datasource is used ….



Enterprise Integration – Using Heterogeneous Namespace Design For Messaging Schema

When integrating with legacy systems, especially ones that rely on flat files, it is often the case that there is no XSD definition that can be used in BPEL/ESB processes. This happened recently when I was using Oracle’s AIA framework to build Application Business Connector Services (ABCS) for a legacy system that has a file-polling based integration.

The very first step, after developing the EAI picture for the client’s ERP to legacy system, was to begin hashing out the data mapping and business logic details in the ABCSs. I used Oracle JDeveloper to build schemas and used namespace standards, as shown below, for organizing the ERP and Legacy System’s entity schemas and the schemas used to do Request/Reply on these entities.

Let’s take, for example, the Order entity in Legacy System 1. The endpoint expects a list of Orders (for milk runs), and the ABCS takes a Request containing a List of Orders, creates the file that the endpoint expects, and finally uses a File Adapter to put the file there.

I have shown below how to create the schema for the Order entity (OrderType) and how to wrap it in an Order Request type. Due to time constraints, I will simply upload the images now and come back to this post to detail the steps.
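In the meantime, here is a minimal sketch of the namespace layout I am describing (all namespace URIs and element names below are made-up placeholders, not the client’s actual ones): the Order entity schema gets its own target namespace, and the request message schema sits in a separate namespace and imports it.

<!-- LegacyOrder.xsd (hypothetical): the Order entity in its own "entity" namespace -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://example.com/legacysystem1/order/v1"
            xmlns:ord="http://example.com/legacysystem1/order/v1"
            elementFormDefault="qualified">
  <xsd:complexType name="OrderType">
    <xsd:sequence>
      <xsd:element name="OrderId" type="xsd:string"/>
      <xsd:element name="ShipTo" type="xsd:string"/>
      <xsd:element name="Quantity" type="xsd:int"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

<!-- LegacyOrderMessages.xsd (hypothetical): request wrapper in a separate "message" namespace -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://example.com/legacysystem1/order/messages/v1"
            xmlns:ord="http://example.com/legacysystem1/order/v1"
            elementFormDefault="qualified">
  <xsd:import namespace="http://example.com/legacysystem1/order/v1"
              schemaLocation="LegacyOrder.xsd"/>
  <!-- the ABCS request carries a list of Orders (the "milk run") -->
  <xsd:element name="CreateOrderListRequest">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="Order" type="ord:OrderType" maxOccurs="unbounded"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>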


Oracle SOA Suite 10g – Database Integration. Resolving “ORA-01017: invalid username/password” partner link error

Database adapters are used to perform CRUD operations on tables from BPEL processes in Oracle SOA Suite. I came across an error recently which is quite easy to resolve but requires a bit of an idea of how the DB Adapters are configured.

When you create a DB Adapter in JDeveloper and use it in a Partner Link in a BPEL (or ESB) process, the WSDL file for the adapter contains the JNDI name for the service. At development time you can use the JDeveloper-configured database connection; however, in other environments you need to add a level of indirection through the use of “datasources”. This is done to prevent hard-coding of username/password details within some XML file packaged and deployed to the app server.

So how does the datasource – say “jdbc/myappDS” – get tied to a JNDI for the Database Adapter?

If you have Oracle SOA Suite 10g installed, navigate to your application server home directory and then, under “j2ee/oc4j_soa/application-deployments/default/DbAdapter”, look for the deployment descriptor file “oc4j-ra.xml”. This file maps the DB Adapter JNDI name to the datasource JNDI name.
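As a rough sketch (the eis/DB location below is a made-up example – it must match the JNDI name in your partner link’s WSDL, and the property name should be confirmed against your DbAdapter’s ra.xml), the mapping looks something like:

<!-- oc4j-ra.xml (trimmed): ties the DB Adapter JNDI name to the app server datasource -->
<oc4j-connector-factories>
  <connector-factory location="eis/DB/myappDS" connector-name="Database Adapter">
    <config-property name="xADataSourceName" value="jdbc/myappDS"/>
  </connector-factory>
</oc4j-connector-factories>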

Now, when you define your datasource you should have the username/password configured in the DataSourceConnectionPool in an OC4J instance.


BPEL Error and Fix:

Error message

[2011/06/08 19:21:17] "{http://schemas.oracle.com/bpel/extension}remoteFault" has been thrown.

<remoteFault xmlns="http://schemas.oracle.com/bpel/extension">
  <part name="code">
    <code>1017</code>
  </part>
  <part name="summary">
    <summary>file:/opt/oracle/soa/bpel/domains/default_dev/tmp/.bpel_ebsflow_1.0_96b622dbeda228ce786e206d103b0eae.tmp/assign_id.wsdl [ assign_id_ptt::assign_id(InputParameters) ] - WSIF JCA Execute of operation 'assign_wagn' failed due to: Could not create/access the TopLink Session. This session is used to connect to the datastore. [Caused by: ORA-01017: invalid username/password; logon denied]; nested exception is: ORABPEL-11622 Could not create/access the TopLink Session. This session is used to connect to the datastore. [Caused by: ORA-01017: invalid username/password; logon denied]
    See root exception for the specific exception. You may need to configure the connection settings in the deployment descriptor (i.e. $J2EE_HOME/application-deployments/default/DbAdapter/oc4j-ra.xml) and restart the server. Caused by Exception [TOPLINK-4002] (Oracle TopLink - 10g Release 3 (10.1.3.5.0) (Build 090715)): oracle.toplink.exceptions.DatabaseException Internal Exception: java.sql.SQLException: ORA-01017: invalid username/password; logon denied Error Code: 1017.</summary>
  </part>
  <part name="detail">
    <detail>Internal Exception: java.sql.SQLException: ORA-01017: invalid username/password; logon denied Error Code: 1017</detail>
  </part>
</remoteFault>

Datasource configuration

High Performance Computing Comes to the Enterprise – Oracle’s Exalogic

Oracle’s Exalogic….

is a hardware platform that outperforms the competition with features like a 40 Gb/sec Infiniband network link, 30 x86 compute nodes, 360 Xeon cores (2.93 GHz), 2.8 TB DRAM and 960 GB SSD in a full rack. Phew!

Ref: Oracle’s Whitepaper on Exalogic

You can “google” it … search for “Oracle Exalogic” and learn more about the beast, but in short this is a platform that is not only optimized to perform well but also designed to use fewer resources. So, for example, the power consumption is really low and this is a very green option. Or so says the “Kool-Aid label”.

Application architects have always fretted over network latency, I/O bottlenecks and general hardware issues over the years. While classical “Computer Science” recommends/insists that optimization lies in application and algorithmic efficiency, the reality in enterprise environments is that “information processing” applications are often (let’s assume) optimized, but it is hardware issues that cause more problems. Sure, there is no replacement for SQL tuning, code instrumentation etc., but if you are an enterprise invested in a lot of COTS applications – you just want the damn thing to run! Often the “damn thing” does want to run, but it has limited resources and the “scaling” of these resources is not optimized.

This is especially true for 3-tier applications which, despite being optimized (no “select *” queries or bad sort loops), have to run on hardware that performs great in isolation but, when clustered, does not scale as well as High Performance Computing applications do. Why is that?

The problem lies…

in the common protocols used to move data around. Ethernet, and TCP/IP over it, has been the standard way to make computers and the applications on them “talk”. Let’s just say that this hardware and protocol stack can be optimized quite a bit! Well, that’s what has happened with Exalogic and Exadata.

Thanks to some fresh thinking on Oracle’s part, their acquisition of Sun Microsystems, improvements in the Java language (some of the New I/O stuff) and high-performance Infiniband network switches, there is a new hardware platform in town which is juiced up (can I say “pimped out”?) to perform!

My joy stems from the fact that Oracle is bringing optimizations employed in High Performance Computing to enterprise hardware – for example, the use of collective communication APIs like scatter/gather to improve application I/O throughput and latency. (Fact: 10 Gbps Ethernet has an MTU of 1.5K, while Infiniband uses a 64K MTU for the IP-over-Infiniband protocol and a 32K MTU or more for the Sockets Direct Protocol.)


Personally, all this ties in very well with my background in High Performance Computing (see my Master’s in Computer Science report on High Performance Unified Parallel C (UPC) Collectives for Linux/Myrinet Platforms, done at Michigan Tech with Dr. Steve Seidel) … and my experience in enterprise application development/architecture.


…here’s my description of Scatter and Gather collective communication written in 2004:

Broadcast using memory pinning and remote DMA access in Linux (basically, the network card can access user space directly and do gets and puts)