Microservices with Docker and Spring Boot

This guide is for anyone interested in quickly building a microservice using the Spring Boot framework. It works especially well if you have prior JAX-RS / Jersey experience.

Step-by-step guide

  1. Create a Maven Project
    1. Final Structure
    2. pom.xml (key snippets shown below)
  2. Include Spring Boot Dependencies in your pom
    1. Parent Project
      <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.1.RELEASE</version>
      </parent>
    2. Dependencies
      <dependencies>
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <scope>test</scope>
        </dependency>

        <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-jersey</artifactId>
        </dependency>
      </dependencies>

  3. Create the Java main class – this is the entry point, marked with the @SpringBootApplication annotation
    1. App.java
      import org.springframework.boot.SpringApplication;
      import org.springframework.boot.autoconfigure.SpringBootApplication;

      @SpringBootApplication
      public class App
      {
          public static void main( String[] args )
          {
              SpringApplication.run(App.class, args);
          }
      }
  4. Create the API Controller class
    1. Add Spring MVC methods for REST in the API controller class
      1. SimpleAPIController.java with SpringMVC only
        import java.util.HashMap;
        import java.util.Map;

        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RestController;

        @RestController
        public class SimpleAPIController {
        	@RequestMapping(value = "/springmvc", produces = "application/json")
        	public Map<String, Object> springMvc() {
        		Map<String, Object> data = new HashMap<String, Object>();
        		data.put("message", "Spring MVC Implementation of API is available");
        		return data;
        	}
        }
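        With the application running, a quick check with curl (assuming the default port 8080) should return the JSON message:

          curl http://localhost:8080/springmvc
          {"message":"Spring MVC Implementation of API is available"}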
    2. (Alternate/Additional) Add Jersey methods for REST in the API controller class
      1. SimpleAPIController.java with Jersey (Note: You need the spring boot jersey dependency in your pom)
        import java.util.HashMap;
        import java.util.Map;

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.Response;

        import org.springframework.stereotype.Component;

        @Component
        @Path("/api")
        public class SimpleAPIController {
        	@GET
        	@Produces({ "application/json" })
        	public Response getMessage() {
        		Map<String, Object> data = new HashMap<String, Object>();
        		data.put("message", "Jersey Implementation of API is available");
        		return Response.ok().entity(data).build();
        	}
        }
    3. (Alternate/Additional) Create a JerseyConfig class
      1. JerseyConfig.java – required for the Jersey REST implementation
        import org.glassfish.jersey.server.ResourceConfig;
        import org.springframework.context.annotation.Configuration;

        @Configuration
        public class JerseyConfig extends ResourceConfig {
            public JerseyConfig() {
                register(SimpleAPIController.class);
            }
        }
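        With the Jersey config registered, the resource should be reachable under /api (Spring Boot maps Jersey's servlet to /* by default):

          curl http://localhost:8080/api
          {"message":"Jersey Implementation of API is available"}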
  5. Build the Maven project
    1. Add the Spring Boot Maven build plugin
      	<build>
      		<plugins>
      			<plugin>
      				<groupId>org.springframework.boot</groupId>
      				<artifactId>spring-boot-maven-plugin</artifactId>
      			</plugin>
      		</plugins>
      	</build>
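    2. Run the build – the plugin repackages the jar into an executable jar during the package phase:
      mvn clean package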
  6. Execute the Spring Boot application
    1. Run in Eclipse as Java Application
    2. Run as a standalone Java Jar
      java -jar target/SimpleService-0.0.1-SNAPSHOT.jar
    3. Specify a different port and run as a standalone Java Jar
      java -Dserver.port=8090 -jar target/SimpleService-0.0.1-SNAPSHOT.jar
    4. Running in a Docker container

      1. See: https://spring.io/guides/gs/spring-boot-docker/
      2. Dockerfile (assumes the jar has been copied into the Docker build context)
        FROM java:8
        VOLUME /tmp
        ADD SimpleService-0.0.1-SNAPSHOT.jar app.jar
        RUN bash -c 'touch /app.jar'
        ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
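      3. Build and run the image (the image name here is illustrative):
        docker build -t simpleservice .
        docker run -p 8080:8080 simpleservice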

 

The official how-to is here: https://spring.io/guides/gs/rest-service/#use-maven and a great blog post is here: http://blog.codeleak.pl/2015/01/getting-started-with-jersey-and-spring.html

Java Application Memory Usage and Analysis

The Java Virtual Machine (JVM) runs standalone applications and many key enterprise applications like monolithic application servers, API Gateways and microservices. Understanding and tuning an application begins with understanding the technology running it. Here is a quick overview of JVM memory management.

JVM Memory:

  • Stack and Heap form the memory used by a Java application
  • The execution thread uses the Stack – it starts with the ‘main’ function and the functions it calls, along with the primitives they create and the references to objects in these functions
  • All the objects live in the Heap – the heap is bigger
  • Stack memory management (cleaning up old, unused entries) uses a Last In First Out (LIFO) strategy
  • Heap memory management is more complex, since this is where objects are created and cleaning them up requires care
  • You use command line (CLI) arguments to control the sizes and algorithms used for managing Java memory

Java Memory Management:

  • Memory management is the process of cleaning up unused items in the Stack and Heap
  • Not cleaning up will eventually halt the application as the fixed memory is used up and an out-of-memory or stack-overflow error occurs
  • The process of cleaning JVM memory is called “Garbage Collection”, a.k.a. “GC”
  • The Stack is managed using a simple LIFO strategy
  • The Heap is managed using one or more algorithms which can be specified by command line arguments
  • The Heap collection algorithms include Serial, ParNew, Parallel Scavenge, CMS, Serial Old (MSC), Parallel Old and G1GC

I like to use the “Fridge” analogy – think about how leftovers go into the fridge and how last week’s leftovers, fresh veggies and that weird stuff from long, long ago get cleaned out … what is your strategy? JVM Garbage Collection algorithms follow a similar concept while working with a few more constraints (how do you clean the fridge while your roommate or partner is taking food out?)

 

Java Heap Memory:

  • The GC algorithms divide the heap into age-based partitions
  • The process of cleanup or collection uses age as a factor
  • The two main partitions are the “Young or New Generation” and the “Old or Tenured Generation” spaces
  • The “Young or New Generation” space is further divided into Eden and two Survivor partitions
  • The Survivor partitions can be further sized and subdivided based on command line arguments
  • We can control GC algorithm performance for our use-case and performance requirements through command line arguments that specify the sizes of the Young, Old and Survivor spaces.

For example, consider applications that do not have long-lived objects – stateless web applications, API Gateway products, etc. These need a larger Young or New Generation space with a strategy to age objects slowly (long tenuring). These would use either the ParNew or CMS algorithm to do Garbage Collection – if there are more than 2 CPU cores available (i.e. extra threads for the collector), then the application can benefit more from the CMS algorithm
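As an illustrative sketch (the sizes and thresholds here are assumptions, not recommendations), such an application might be started with flags along these lines:

  java -Xms2g -Xmx2g -XX:NewSize=1g -XX:SurvivorRatio=8 \
       -XX:MaxTenuringThreshold=15 -XX:+UseConcMarkSweepGC \
       -jar app.jar

-Xms/-Xmx fix the total heap, -XX:NewSize enlarges the Young Generation, -XX:MaxTenuringThreshold=15 ages objects slowly, and -XX:+UseConcMarkSweepGC selects the CMS collector (which pairs with ParNew for the young generation).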

The picture below gives a view of how the Heap section of the Java application’s memory is divided. There are partitions based on the age of “Objects”, and stuff that is very old and unused eventually gets cleaned up

[Image: JVM Heap partitions]

The Heap and Stack memory can be viewed at runtime using various tools like Oracle’s JRockit Mission Control (now part of the JDK). We can also do very long term analysis of the memory and the garbage collection process using Garbage Collection (GC) logs and free tools that parse them

One of the key resources to analyse in the JVM is memory usage and Garbage Collection. The process of cleaning unused objects from JVM memory is called “Garbage Collection (GC)”; details of how this works are provided here: Java Garbage Collectors
Tools:

Oracle JRockit Mission Control is now free with the JDK!

  • OSX : “/Library/Java/JavaVirtualMachines/{JDK}/Contents/Home/bin/”
  • Windows: “JDK_HOME/bin/”

The GC Viewer tool can be downloaded from here: https://github.com/chewiebug/GCViewer

Memory Analysis Tools:

  • Runtime memory analysis with Oracle JRockit Mission Control (JMC)
  • Garbage Collection (GC) logs and the “GC Viewer” analyser tool

Oracle JRockit Mission Control (JMC)

  • Available for SPARC, Linux and Windows only
  • Download here: http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-jrockit-2192437.html
  • Usage Examples:
  • High level CPU and Heap usage, along with details on memory fragmentation
  • Detailed view of Heap – Tenured, Young Eden, Young Survivor etc
  • The “Flight Recorder” tool helps start a recording session over a duration and is used for deep analysis at a later time
  • Thread Level Socket Read/Write details
  • GC Pause – Young and Tenured map
  • Detailed Thread analysis

 

Garbage Collection log analysis

  1. Problem: Often it is not possible to run a profiler at runtime because
    1. Running a profiler on the same host uses up resources
    2. Running from a remote host produces JMX connectivity issues due to firewalls etc.
  2. Solution:
    1. Enable GC Logging using the following JVM command line arguments
      -XX:+PrintGCDetails
      -XX:+PrintGCDateStamps
      -XX:+PrintTenuringDistribution
      -Xloggc:%GC_LOG_HOME%/logs/gc.log
    2. Ensure the GC_LOG_HOME is not on a remote host (i.e. no network overhead in writing GC logs)
    3. Analyse logs using a tool
      1. Example: GC Viewer https://github.com/chewiebug/GCViewer
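    4. A complete example command line combining the flags above (the log path is illustrative):
      java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution \
           -Xloggc:/var/log/myapp/gc.log -jar app.jar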
  3. Using the GC Viewer Tool to analyse GC Logs
    1. Download the tool
      1. https://github.com/chewiebug/GCViewer
    2. Import a GC log file
    3. View Summary
      1. Gives a summary of the Garbage Collection
      2. Total number of concurrent collections, minor and major collections
      3. Usage Scenario:
        “Your application runs fine but occasionally you see slowed performance … it is possible that a Serial Garbage Collector is running, which stops all processing (threads) while it cleans memory. Use the Summary view in GC Viewer to look for long pauses in the Full GC Pause figures.”
    4. View Detailed Heap Info
      1. Gives a view of the growing heap sizes in Young and Old generation over time (horizontal axis)
      2. Moving the slider moves time along, and the “zig-zag” patterns rise and fall with memory usage rising and being cleared
      3. Vertical lines show instances when “collections” of the “garbage” take place
        1. Too many closely spaced lines indicate frequent collection (and interruption to the application if collections are not parallel) – bad: this means frequent minor collections
        2. Tall and thick lines indicate longer collections (usually Full GC) – bad: this means longer Full GC pauses

 

Dockerized Java Application Performance Analysis

This example uses JMX to analyse a Java Virtual Machine (JVM) running within a local or remote Docker container. It explores how to analyse a Tomcat Server (version 7) running in a Docker instance using Oracle JRockit Mission Control (JMC)

[Screenshot]

Your JRockit Mission Control is here:

  • OSX: “/Library/Java/JavaVirtualMachines/{JDK}/Contents/Home/bin/”
  • Windows: “JDK_HOME/bin/”

 

Step-by-step guide

  1. Determine your Docker machine's IP; for example, on our machine we did:
    1. Run `docker-machine active` to see the active machines
      1. On my machine this returned `default` as the machine name
    2. Run `docker-machine ip default` to see the IP of the machine named `default`
      1. On my machine this returned `192.168.99.100`
        [Screenshot: docker-machine output]

  2. Create a setenv.sh script with the JMX options
    1. Set the `CATALINA_OPTS` env variable as shown below
    2. Set “java.rmi.server.hostname” to the IP obtained in step (1.2) above
    3. Please note that we are not using SSL or JMX Remote Authentication – so the config below is for DEV only
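      The original script was attached rather than inlined; a minimal sketch that matches the ports used in steps 4 and 5 and the IP from step 1 would look like this:

        # setenv.sh – DEV only: SSL and authentication are disabled
        export CATALINA_OPTS="-Dcom.sun.management.jmxremote \
          -Dcom.sun.management.jmxremote.port=1898 \
          -Dcom.sun.management.jmxremote.rmi.port=62911 \
          -Dcom.sun.management.jmxremote.ssl=false \
          -Dcom.sun.management.jmxremote.authenticate=false \
          -Djava.rmi.server.hostname=192.168.99.100"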
  3. Use the setenv.sh script within the tomcat runtime
    1. In your Dockerfile you can do
      1. `ADD setenv.sh /apache-tomcat-7.0.62/bin/setenv.sh`  when using an Apache Tomcat image
      2. or `ADD setenv.sh /camunda/bin/setenv.sh` when using Camunda (embedded or standalone)

    2. You can add this at runtime as well using
      1. `-v ${PWD}/setenv.sh:/apache-tomcat-7.0.62/bin/setenv.sh` when using an Apache Tomcat image
      2. or `-v ${PWD}/setenv.sh:/camunda/bin/setenv.sh` when using Camunda (embedded or standalone)
  4. Start the Docker container and map JMX ports
    1. To the `docker run` command add the following ports
      1. The JMX Remote and RMI Port  `-p 1898:1898`
      2. and  TCP Transport Port `-p 62911:62911`
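    2. For example (the image and container names are illustrative):
      docker run -d --name tomcat7 -p 8080:8080 -p 1898:1898 -p 62911:62911 my-tomcat-image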
  5. Validate the Docker container is exposing the ports
    1. Do a `docker ps` and examine your image’s ports
    2. Do a `docker exec -i -t` and check the contents of your setenv.sh file to confirm the CATALINA_OPTS are set
      1. NOT doing this will throw you off if the setenv.sh was not copied! Simply exposing the Docker container ports does not mean the JVM allows JMX connections
      2. Example: Docker Exec command to view the contents of the file for apache-tomcat-7.0.62 
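        The command would look something like this (the container name is illustrative):
        docker exec -i -t tomcat7 cat /apache-tomcat-7.0.62/bin/setenv.sh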
  6. Run the JRockit Mission Control
    1. Comes with the JDK
    2. On a Mac you can find it here “/Library/Java/JavaVirtualMachines/{JDK}/Contents/Home/bin/”  and the executable is called “jmc”



  7. Add a Remote connection in JMC

    1. File -> Connect
    2. Provide the JMX Connection properties
      1. Host is the Docker Machine IP we obtained in Step 1 above
      2. Port is the JMX Remote Port, for example: 1898 in Step 2 above
      3. Name the Connection “Docker-<image-name>”
      4. Test the Connection

       

    3. Your JVM shows up in JRockit
  8. Start JMX Console and view JVM metrics

 

See more details on how to use the JRockit JVM tool here

 

API Performance Testing with Gatling

The Problem

“Our API Gateway is falling over and we expect a 6-fold increase in our client base and a 10-fold increase in requests, our backend service is scaling and performing well. Please help us understand and fix the API Gateway” 

Tasks

It was pretty clear we had to run a series of performance tests simulating the current and new user load, then apply fixes to the product (API Gateway), rinse and repeat until we met the client’s objectives.

So the key tasks were:

  1. Do some Baseline tests to compare the API Gateway Proxy to the Backend API
  2. Use tools to monitor the API Gateway under load
  3. Tune the API Gateway to Scale under Increasing Load
  4. Repeat until finely tuned

My experience with API performance testing was limited (Java clients), so I reached out to my colleagues and got a leg up from our co-founder (a smart techie) who created a simple “Maven-Gatling-Scala Project” in Eclipse to enable me to run these tests.

Gatling + Eclipse + Maven

  • What is Gatling? Read more here: http://gatling.io/docs/2.1.7/quickstart.html
  • Creating an Eclipse Project Structure
    • Create a Maven Project
    • Create a Scala Test under /src/test/scala
    • Create gatling.conf, recorder.conf and a logback.xml under /src/test/resources
    • Begin writing your Performance Test Scenario
  • How do we create Performance Test Scenarios?
    • Engine.scala – must be a Scala App extension
    • Recorder.scala – must be a Scala App
    • Test – must extend Gatling’s Simulation; we created an abstract class and extended it in our test cases so we can reuse some common variables
    • gatling.conf – this file contains important configuration that determines where the test results go, how to make the HTTP calls, etc. See more details here: http://gatling.io/docs/2.0.0-RC2/general/configuration.html

Here is a screenshot of my workspace

[Screenshot: Eclipse workspace]

Executing the Gatling Scala Test

  • Use Maven Gatling Plugin See https://github.com/gatling/gatling-maven-plugin-demo
  • Make sure your pom file has the gatling-maven-plugin artifact as shown in the screen shot below
  • Use the option “-DSimulationClass” to specify the Test you want to run (For Example: -DSimulationClass=MyAPIOneTest)
  • Use the option “-Dgatling.core.outputDirectoryBaseName” to output your reports to this folder

[Screenshot: pom.xml with the gatling-maven-plugin]
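Since the pom appears only as a screenshot, the relevant plugin entry would look roughly like this (the version should match the Gatling version in use):

  <plugin>
    <groupId>io.gatling</groupId>
    <artifactId>gatling-maven-plugin</artifactId>
    <version>2.1.7</version>
  </plugin>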


Creating A Performance Test Scenario

Our starting point for any sort of technical analysis of the API Gateway was to study what it did over a varying load and build a baseline. This had to be done by invoking a few APIs during the load to simulate varying requests per second (for example: one API is invoked every 5 seconds while another is invoked every 10 seconds).

After reading the Gatling documentation and trying out a couple of simple tests, we were able to replicate our requirement of ‘n’ users with 2 scenarios executing HTTP invocations against two different API calls, with different ‘pause’ times calculated from the ‘once every x seconds’ requirement. Each scenario took ‘n/2’ users to simulate the ‘agents’ / ‘connections’ making different calls, and the scenarios were run in parallel.

  • We used a simple spreadsheet to put down our test ranges, writing down what we would vary, what we would keep constant, our expectations for the total number of requests during the test duration and the requests per second we expected Gatling to achieve per test
  • We used the first few rows of the spreadsheet to do a dry run and then matched the Gatling report results to our expectations in the spreadsheet, confirming we were on the right track – that is, as we increased the user base, our numbers for RPS and how the APIs were invoked matched our expectations
  • We then coded the “mvn gatling:execute” calls into a run script which takes the user count, test duration and hold times as arguments, and also coded a test suite run script to drive the first script using values from the spreadsheet rows (a sketch follows this list)
  • We collected the Gatling reports into a Reports folder and served it via a simple HTTP server
  • We did extensive analysis of the baseline results, monitored our target product using tools (VisualVM, Oracle JRockit), did various tuning (that is another blog post) and re-ran the tests until we were able to see the product scale better under increasing load
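A hedged sketch of such a run script (the -Dusers/-Dduration property names are assumptions – the simulation would need to read them as Java system properties):

  #!/bin/bash
  # run-test.sh <users> <duration-seconds> – wraps the Gatling Maven plugin
  USERS=$1
  DURATION=$2
  mvn gatling:execute -DSimulationClass=MyAPIOneTest \
      -Dusers="$USERS" -Dduration="$DURATION" \
      -Dgatling.core.outputDirectoryBaseName="report-${USERS}users"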

Performance Test Setup

The setup has 3 AWS instances, created to replicate the Backend API and the API Gateway and to host the performance tester (the Gatling code). We also use Node.js to serve the Gatling reports through a static folder mapping; that is, during each run the Gatling reports are generated into a ‘reports’ folder mapped under the ‘public’ folder of a simple HTTP server written with the Node/Express framework.

[Diagram: performance test setup]
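A minimal sketch of such a report server (the folder layout matches the description above; the port is illustrative):

  // server.js – serves Gatling reports generated into public/reports
  var express = require('express');
  var app = express();
  app.use(express.static('public'));
  app.listen(3000, function () {
    console.log('Gatling reports at http://localhost:3000/reports');
  });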

API Performance Testing Toolkit

The Scala code for the Gatling tests is as follows – notice that “testScenario” and “testScenario2” pause for 3.40 seconds and 6.80 seconds respectively, and the Gatling setup runs these two scenarios in parallel. See http://gatling.io/docs/2.0.0/general/simulation_setup.html

[Screenshot: Gatling Scala simulation code]
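Since the code appears only as a screenshot, here is a hedged reconstruction (the host, paths, user counts and duration are illustrative; the pauses match the 3.40s / 6.80s described above):

  import io.gatling.core.Predef._
  import io.gatling.http.Predef._
  import scala.concurrent.duration._

  class GatewayBaselineTest extends Simulation {

    val httpConf = http.baseURL("http://api-gateway.example.com") // illustrative gateway host

    // scenario 1: hit API one roughly once every 3.40 seconds
    val testScenario = scenario("API One")
      .during(10.minutes) {
        exec(http("api-one").get("/api/one")).pause(3400.milliseconds)
      }

    // scenario 2: hit API two roughly once every 6.80 seconds
    val testScenario2 = scenario("API Two")
      .during(10.minutes) {
        exec(http("api-two").get("/api/two")).pause(6800.milliseconds)
      }

    // run both scenarios in parallel, n/2 users each
    setUp(
      testScenario.inject(atOnceUsers(50)),
      testScenario2.inject(atOnceUsers(50))
    ).protocols(httpConf)
  }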

Results

  • 2-user tests reveal we coded the scenario right, and calls are made to the different APIs during differing time periods as shown below

[Screenshots: 2-user test results]

  • As we built our scenarios, we watched the users ramp up as expected, and the performance of the backend and gateway degrade as expected with more and more users
    • Baseline testing

[Screenshot: baseline test results]

    • Before tuning

[Screenshot: results before tuning]

  • Finally we use this information to tune the components (change the garbage collector, add a load balancer etc.) and retest until we meet and exceed the desired results
    • After tuning

[Screenshot: results after tuning]

Conclusion

  • Scala/Gatling

Groovy Grape Turns Sour – java.lang.RuntimeException: Error grabbing Grapes — [download failed:

Issue:

Trying to run a Ratpack Groovy script [ @Grab('io.ratpack:ratpack-groovy:1.0.0') ] with:

groovy -Dgroovy.grape.report.downloads=true -Dratpack.port=8081 server.groovy

The Error:

General Error:  

java.lang.RuntimeException: Error grabbing Grapes -- [download failed:

Specific Error:

java.lang.RuntimeException: Error grabbing Grapes -- [download failed: org.yaml#snakeyaml;1.12!snakeyaml.jar(bundle), download failed: com.google.guava#guava;18.0!guava.jar(bundle)]

The fix:

Delete the dependency’s folders from the local repositories of repo managers like Maven (and from the Grapes cache).

Why? Because the ~/.groovy/grapes cache for the dependency has its property file configured to read from the local Maven repo instead of the remote repo (the download fails because the location is a local file path, not an HTTP link)

Detailed Steps:

  1. Step 1:  Use Grape Resolve to load the dependency and show the error

    “grape resolve com.google.guava guava 18.0”

  2. Step 2:  Find the dependency property and figure out where this is pulling from

    cat ~/.groovy/grapes/com.google.guava/guava/ivydata-18.0.properties

    #ivy cached data file for com.google.guava#guava;18.0
    #Thu Oct 01 15:49:19 AEST 2015
    artifact\:ivy\#ivy\#xml\#1163610380.original=artifact\:guava\#pom.original\#pom\#361716139
    artifact\:guava\#pom.original\#pom\#361716139.original=artifact\:guava\#pom.original\#pom\#361716139
    resolver=localm2
    artifact\:guava\#pom.original\#pom\#361716139.exists=true
    artifact\:ivy\#ivy\#xml\#1163610380.exists=true
    artifact\:ivy\#ivy\#xml\#1163610380.location=file\:/Users/alokmishra/.m2/repository/com/google/guava/guava/18.0/guava-18.0.pom
    artifact.resolver=localm2
    artifact\:guava\#pom.original\#pom\#361716139.location=file\:/Users/alokmishra/.m2/repository/com/google/guava/guava/18.0/guava-18.0.pom
    artifact\:ivy\#ivy\#xml\#1163610380.is-local=true
    artifact\:guava\#pom.original\#pom\#361716139.is-local=true
  3. Step 3: Delete from Maven
    1. Locate the Dependency in your ~/.m2/repository
    2. Delete the repository for this Dependency
      1. rm -r -f /Users/alokmishra/.m2/repository/com/google/guava/
    3. Delete the repository from Groovy
      1.  rm -r -f /Users/alokmishra/.groovy/grapes/com.google.guava/guava/
  4. Step 4: Retest
    1. grape resolve  com.google.guava guava 18.0

      /Users/alokmishra/.groovy/grapes/com.google.guava/guava/jars/guava-18.0.jar
    2. groovy -Dgroovy.grape.report.downloads=true -Dratpack.port=8081 server.groovy

      [SUCCESSFUL ] com.google.guava#guava;18.0!guava.jar(bundle) (4353ms)

Oracle AS Adapter for Peoplesoft: IWAY Error Resolution

ERROR com.ibi.bse.ConfigWorker:java.lang.NoClassDefFoundError: oracle/tip/adapter/api/exception/PCResourceException

Solution:

It appears that Oracle took the IWay Servlet and built a Java Swing application around it that allows you to create a Web service (BSE) or J2CA based connection to the Enterprise Applications (Siebel, JDE, PeopleSoft).

This Swing application is launched in a Unix/Linux shell using the iwae.sh script (in Windows, the ae.bat batch file) and uses the libraries in the install folder’s lib sub-folder.

After deploying the “iwafjca” RAR and WAR and the “ibse” WAR that came with the installation to my Weblogic server (SOA domain), I tried to connect to the servlets, create a BSE or J2CA config and then connect to an Enterprise Application … however, the “Adapter” node in the Swing tree would not expand. After setting Option -> Debug on in the Swing app, I was able to see in the logs that I had exceptions due to missing libraries.

I found that the SOA adapter libraries for 11g in the SOA Home / modules path need to be copied as shown below. Hope this helps!

Copy Missing Libraries from 

${middleware.home}/Oracle_SOA1/soa/modules/oracle.soa.adapter_11.1.1/

To

IWay Install Path

Example: ${middleware.home}/Oracle_SOA1/soa/thirdparty/ApplicationAdapters/lib
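On the shell this amounts to something like the following (a sketch – the exact jars needed depend on the missing classes reported in the logs):

cp ${middleware.home}/Oracle_SOA1/soa/modules/oracle.soa.adapter_11.1.1/*.jar \
   ${middleware.home}/Oracle_SOA1/soa/thirdparty/ApplicationAdapters/lib/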

 

 

Oracle SOA Suite 11g BPEL – FTP Adapter: What’s my filename?

I was writing an FTP adapter for a client recently for a legacy integration project when a couple of requirements came up:

1) When reading the file from a remote location, the client wanted to use the filename as a data element.

2) When writing the file to a remote location, the client wanted the filename to be based on one of the elements from the inbound data (in this case a Primary Key from an Oracle table).

 

Part I: Reading the filename from the inbound FTP Adapter

The solution in short is this – when you create the FTP Adapter Receive, go to the properties and assign the jca.ftp.FileName to a variable. For example, I created a simple String variable in my BPEL process called “FileName” and then assigned the jca.ftp.FileName to the “FileName” BPEL variable. The end result was this:

<receive name="ReceiveFTPFile" createInstance="yes"
         variable="FTPAdapterOutput"
         partnerLink="ReadFileFromRemoteLocation"
         portType="ns1:Get_ptt" operation="Get">
  <bpelx:property name="jca.ftp.FileName" variable="FileName"/>
</receive>

 

Here’s a visual guide on how to do this:

Create a Variable

 

Assign the Variable to the jca.ftp.FileName property on the Receive …

 

Part II: Assigning a special element name instead of YYYY-MM-DD etc for FTP Outbound filename:

You can use this same process as shown above in the Outbound FTP Adapter. That is, read the value from the element you want the filename to be (either create a new String BPEL variable or reuse something in your schema) and assign it to the Invoke property’s jca.ftp.FileName.
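A sketch of what the resulting Invoke would look like (the names here are illustrative, and depending on your BPEL version the property element may be bpelx:inputProperty rather than bpelx:property):

<invoke name="WriteFTPFile"
        partnerLink="WriteFileToRemoteLocation"
        portType="ns2:Put_ptt" operation="Put"
        inputVariable="FTPAdapterInput">
  <bpelx:inputProperty name="jca.ftp.FileName" variable="FileName"/>
</invoke>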

 

BPEL Error with Receive/Pick

Error: “Error(81): There is not an activity (receive/pick) to start the process”

Fix:  Check the “Create Instance” checkbox on your Receive or Pick activity.

 

When do you see these errors?

When you create a BPEL process and remove the default Receive/Reply components in order to receive/pick events from a queue or an FTP adapter, for example.

For example: I have a BPEL flow below with an FTP adapter which receives a file and calls out to a Java/Spring Bean (to parse the file etc)

 

Oracle SOA Suite 11g – Configure Resource Adapters on Weblogic Server [AQAdapter]

AQAdapters

AQ is Oracle’s Advanced Queuing – a database-backed channel. We use AQ queues a lot when doing integration projects, and it always helps to have a local install of SOA Suite with AQ capabilities (i.e. your own DB with AQ queues etc)

It was hard finding any documentation on configuring adapters for Oracle SOA Suite on a Weblogic server, so I thought I would put together a little doco explaining how I configured this. It is the same for an Apps Adapter config. Initially this looked a bit different from the old OC4J way of configuring adapters, but it really is not all that different.

Weblogic requires a “weblogic-ra.xml” along with the “ra.xml” file in the “META-INF” folder of the adapter’s RAR file. The trickiest part is getting the Web console to apply changes … what I mean is that initially I tried to “Update” an existing “Deployment” of AQAdapter from the Weblogic Admin Console and it blew up … later I found out this was because the AQAdapter was packaged up in a RAR file (and not exploded on the filesystem), and as a result my changes from the console were not making it through.

The steps below show how I extracted AqAdapter.rar into an AqAdapter folder I created under the $SOA_HOME/soa/connectors/ folder. You can use these steps to configure any adapter (I have personally tested the Oracle Apps Adapter – screen shots later)

Before we begin though, read through Oracle’s documentation on AQ and how to create queues and configure etc

Oracle AQ Documentation: http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10231/adptr_aq.htm

Oracle AQ Adapter Properties: http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10231/adptr_propertys.htm#CIHCHGJJ

Here is a good post that explains how to create a user and then create an AQ queue: http://ora-soa.blogspot.com/2008/06/steps-to-create-aq-queuetopic.html

Steps for Creating an AQ Producer/Consumer and configuring the AQ Adapter:

Let’s start from JDeveloper. I created a simple composite that uses an AQAdapter to enqueue an XML message

Have a look at the JCA properties for the queue. The JNDI name can be changed to anything you like but must be consistent with what you configure later. For example, I use “eis/AQ/MyAQDatasource” in this example to match the name of the datasource I will be using. Because the AQ queue is database-backed, the JNDI for the AQ queue refers to a configuration that contains the JNDI for an XA datasource. In my example it is “jdbc/MyAQDatasource”

Steps:

First configure the Adapter weblogic-ra.xml

Go to your Weblogic Server host’s file-system and navigate to the connectors directory [$SOA_HOME/soa/connectors/ … in my case it was “g:\Oracle\mw_10.3.5\Oracle_SOA1\soa\connectors\”]

Back-up your AqAdapter.rar

Create an AqAdapter directory and copy the original AqAdapter.rar here

Extract the AqAdapter.rar file in the new directory by doing “jar xf AqAdapter.rar”

Remove the AqAdapter.rar file (so that your directory now has AqAdapter.jar and META-INF folder)


Navigate to the META-INF folder and add your “connection-instance” to the “weblogic-ra.xml” … basically you are saying the “eis/AQ/MyAQDatasource” JNDI name is configured to have these properties (this is where you put in your XA Datasource JNDI)
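A sketch of the connection-instance entry (the property name may vary by adapter version; the JNDI names match the JDeveloper example above):

<connection-instance>
  <jndi-name>eis/AQ/MyAQDatasource</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>XADataSourceName</name>
        <value>jdbc/MyAQDatasource</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>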

Add your changes based on what you have in JDeveloper

Save the Weblogic-ra.xml


To configure the AQ Adapter in Weblogic Admin Console

Start the Admin Console

Goto the Deployments

Locate and Uninstall/Delete the “AqAdapter” deployment (don’t worry it will not remove it from your filesystem)

Install the new AqAdapter as shown below. Note, the “Deployment Plan” is not created right away … there is a trick to creating it: after the initial Install wizard, you need to go to your adapter configurations, first hit Enter on a property in the configuration and then click the SAVE button. (see the red comments in the images below)

This is the tricky part … click on a property, then press the “ENTER” key, then click on the “SAVE” button
Make sure your DataSource Exists

 

Here is a screen shot explaining how the Datasource is used ….



Enterprise Integration – Using Heterogeneous Namespace Design For Messaging Schema

When integrating with legacy systems, especially ones that rely on flat files, it is often the case that there is no XSD definition that can be used in BPEL/ESB processes. This happened recently when I was using Oracle’s AIA framework to build Application Business Connector Services (ABCS) for a legacy system that has a file-poll based integration.

The very first step, after developing the EAI picture for the client’s ERP-to-legacy-system integration, was to begin hashing out the data mapping and business logic details in the ABCSs. I used Oracle JDeveloper to build schemas and used namespace standards, as shown below, for organizing the ERP and legacy system’s entity schemas and the schemas used to do Request/Reply on these entities.

Let’s take for example the Order entity in Legacy System1. The endpoint expects a list of Orders (for milk runs), and the ABCS takes a Request that has a List of Orders, then creates the file that represents the endpoint datafile and finally uses a File Adapter to put the file there.

I have shown below how to create the schema for the Order entity (OrderType) and how to wrap it in an OrderRequestType. Due to time constraints, I will simply upload the images now and come back to this post to detail the steps.
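Pending those images, a hedged sketch of the shape described above (every name and namespace here is illustrative):

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:ord="http://example.com/legacy1/v1/order"
            targetNamespace="http://example.com/legacy1/v1/order"
            elementFormDefault="qualified">

  <!-- the entity -->
  <xsd:complexType name="OrderType">
    <xsd:sequence>
      <xsd:element name="OrderId" type="xsd:string"/>
      <xsd:element name="OrderDate" type="xsd:date"/>
    </xsd:sequence>
  </xsd:complexType>

  <!-- the request wrapper: a list of Orders -->
  <xsd:complexType name="OrderRequestType">
    <xsd:sequence>
      <xsd:element name="Order" type="ord:OrderType" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>

  <xsd:element name="OrderRequest" type="ord:OrderRequestType"/>
</xsd:schema>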