How did we get to Microservices?

If you have struggled with decisions when designing APIs or microservices, it is best to take a step back and look at how we got here. It not only renews our appreciation for the rapid changes we have seen over the past 10-20 years but also puts into perspective why we do what we do.

I still believe a lot of us cling to old ways of thinking when the world has moved on, or is moving faster than we can process. APIs and microservices are not just new vendor-driven fads; they are key techniques, processes and practices for businesses to survive in a rapidly evolving ecosystem. Without an API strategy, for example, your business might not be able to provide the services to consumers that your competition can provide, or with the quality of service we have come to expect (in real time).

So, with that in mind, let us take a step back and look at the evolution of technology within the enterprise, remembering that this evolution aligns with the business strategy.

Long time ago

There were monolithic applications – mainframe systems, for example, processing Orders, Pricing, Shipments etc. You still see this kind of monolith being written within startups because it makes sense: no network calls, just inter-process communication, and if you can join a bunch of tables you can make a “broker” happy with a mashup of data.

1. First, there was just the legacy monolith application

Circa 2000s: the MVC app world

No ESB yet. We were exploring the JVM and Java for enterprise application development. JSF was new, and the EJB framework was still deciding what type of beans to use. Data flowed via custom connectors from the mainframe to these Java applications, which cached it and allowed the information to be viewed and queried.

There were also functional foundation applications emerging for enterprise logging, business rules, pricing, rating, identity etc., and data standards were often vague. EAI patterns were being observed but not standardised, and we were more focused on individual service design patterns and the MVC model.

2. Then we built lots of MVC applications alongside the legacy monolith; integration was point-to-point

Services and Service-Oriented Architecture

The next wave began when the number of in-house custom applications started exploding and there was a need for data standardisation, a common language to describe enterprise objects, and de-coupled services with standardised requests and responses.

Some organisations started developing their own XML-based engines around message queues and JMS standards, while others adopted the early service bus products from vendors.

Thus Service-Oriented Architecture (SOA) was born, with lofty goals: build canonical enterprise data models, reduce point-to-point services (Java applications had a build-time dependency on the other Java services they consumed), add standardised security, build a service registry and so on.

We also saw general adoption of and awareness around EAI patterns – we finally understood what a network does to consistency models and the choice between availability and consistency in a partition. Basically, stuff already known to those with a Computer Science degree working on distributed computing or collective communication in a parallel computing cluster.

One key observation is that the vendor products supporting SOA were runtime monoliths in their own right. Each was a single product (a J2EE EAR) running on one or more application servers, with a single database for stateful processes etc. The web services we developed over this product were mere XML configuration executed by one giant application.

Also, the core concerns were “service virtualisation” and “message-based routing” – a purely stateless, transformation-only concept. This worked best when coupled with an in-house practice of building custom services, and failed where there was none and the SOA product had to simply transform and route (i.e. it did not solve problems by itself as an integration layer).

3. We started to make integration standardised and flexible; it succeeded within the enterprise but failed to scale for the digital world – not ready for mobile or cloud

API and Microservices era

While the SOA phase helped us move away from the ugly file-based integrations of the past and really supercharged enterprise application integration, it failed miserably in the digital customer domain. The SOA solutions were not built to scale and were not built for the web, while the web was scaling and getting jazzier by the day; people were expecting more self-service portals, and XML parsing was dragging response times down!

Those of us lucky enough to let go of the earlier dogma (vendor kool-aid) around the “web services” we were building started realising there was nothing webby about them. After a few failed attempts at getting the clunky web portals working, we realised that the SOA way of serving information was not suited to this class of problems and we needed something better.

We have come full circle, back to custom build teams and custom services for foundation tasks and abstractions over end systems – we call these “microservices” and build them not for the MVC architecture but as pure services. These services speak HTTP natively as the language of the web, without the custom standards that SOAP had introduced earlier, and use the representational state transfer (REST) style to align with hypermedia best practices; we call them web APIs and standardise around JSON as the data format (instead of XML).

4. Microservices, DevOps, APIs early on – it was on-prem and scalable

The API and Microservices era comes with changes in how we organise (DevOps), where we host our services (scalable platforms on-prem or available as-a-service), and a fresh look at integration patterns (CQRS, streaming, caching, BFF etc.). The runtime for these new microservices-based integration applications is now broken into smaller chunks, as there is no centralised bus 🚎

5. Microservices, DevOps, APIs on externalised highly scalable platforms (cloud PaaS)

Recap

Enterprise system use has evolved over time: from depending on one thing that did everything, to multiple in-house systems, to a mix of in-house and cloud-based services. The theme has been a gradual move from a singular application, to a network-partitioned landscape of systems, to an ecosystem of modular, value-based services.

Microservices serve traditional integration needs between enterprise systems, but more importantly they enable organisations to connect to clients and services on the web (cloud) in a scalable and secure manner – something that SOA products failed to do (since they were built for the enterprise context only). APIs enable microservices to communicate with service consumers and providers in a standard format, and bring with them best practices such as contract-driven development, policies, caching etc. that make developing and operating them at scale easier.

Java Application Memory Usage and Analysis

The Java Virtual Machine (JVM) runs standalone applications and many key enterprise applications like monolithic application servers, API Gateways and microservices. Understanding and tuning an application begins with understanding the technology running it. Here is a quick overview of JVM memory management.

JVM Memory:

  • Stack and Heap form the memory used by a Java Application
  • The execution thread uses the Stack – it starts with the ‘main’ function and grows with the functions it calls, along with the primitives they create and the references to objects created in those functions
  • All the objects live in the Heap – the heap is bigger
  • Stack memory management (cleaning old unused stuff) is done using a Last In First Out (LIFO) strategy
  • Heap memory management is more complex since this is where objects are created and cleaning them up requires care
  • You use command line (CLI) arguments to control sizes and algorithms for managing the Java Memory
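For example (a minimal sketch – the flag values and the application jar name are just placeholders, not recommendations), the stack and heap sizes are set on the java command line:

    java -Xss512k -Xms512m -Xmx2g -jar my-service.jar

Here -Xss sets the stack size per thread, -Xms the initial heap size and -Xmx the maximum heap size.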

Java Memory Management:

  • Memory management is the process of cleaning up unused items in Stack and Heap
  • Not cleaning up will eventually halt the application as the fixed memory is used up and an OutOfMemoryError or StackOverflowError occurs
  • The process of cleaning JVM memory is called “Garbage Collection” a.k.a “GC”
  • The Stack is managed using a simple LIFO strategy
  • The Heap is managed using one or more algorithms which can be specified by Command Line arguments
  • The Heap Collection Algorithms include Serial, ParNew, Parallel Scavenge, CMS (Concurrent Mark Sweep), Serial Old (MSC), Parallel Old and G1 (Garbage First)
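For reference, on HotSpot JVMs of this era these algorithms are selected with command line flags – a quick sketch (check your JVM version’s documentation for the exact set it supports):

    -XX:+UseSerialGC            Serial (young) + Serial Old (MSC)
    -XX:+UseParallelGC          Parallel Scavenge (young); pair with -XX:+UseParallelOldGC for Parallel Old
    -XX:+UseConcMarkSweepGC     CMS for the old generation, with ParNew for the young generation
    -XX:+UseG1GC                G1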

I like to use the “Fridge” analogy – think about how leftovers go into the fridge and how last week’s leftovers, fresh veggies and that weird stuff from long, long ago get cleaned out … what is your strategy? JVM Garbage Collection algorithms follow a similar concept while working with a few more constraints (how do you clean the fridge while your roommate/partner/spouse is taking food out?)

 

Java Heap Memory:

  • The GC Algorithms divide the heap into age based partitions
  • The process of cleanup or collection uses age as a factor
  • The two main partitions are “Young or New Generation” and “Old or Tenured Generation” space
  • The “Young or New Generation” space is further divided into an Eden space and two Survivor partitions
  • Each survivor partition is further divided into multiple sections based on command line arguments
  • We can control the GC Algorithm performance based on our use-case and performance requirements through command line arguments that specify the size of the Young, Old and Survivor spaces.
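A few of the common sizing flags (a sketch, not an exhaustive list – these are standard HotSpot options):

    -Xmn<size> (or -XX:NewSize / -XX:MaxNewSize)    size of the Young generation
    -XX:NewRatio=<n>                                ratio of Old to Young generation sizes
    -XX:SurvivorRatio=<n>                           ratio of Eden to each Survivor space
    -XX:MaxTenuringThreshold=<n>                    how many collections an object survives before promotion to Old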

For example, consider applications that do not have long-lived objects – stateless web applications, API Gateway products etc. These need a larger Young or New Generation space with a strategy to age objects slowly (long tenuring). They would use either the ParNew or CMS algorithm to do garbage collection – if there are more than 2 CPU cores available (i.e. extra threads for the collector), the application can benefit more from the CMS algorithm.
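Putting that together, a hypothetical launch command for such a stateless service might look like this (the sizes and jar name are placeholders, not recommendations):

    java -Xms4g -Xmx4g -Xmn2g -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15 \
         -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -jar api-gateway.jar

The large -Xmn gives objects room to die young, the high tenuring threshold ages survivors slowly, and CMS (with ParNew for the young collections) lets old-generation cleanup run concurrently on the spare cores.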

The picture below gives a view of how the Heap section of a Java application’s memory is divided. There are partitions based on the age of objects, and stuff that is very old and unused eventually gets cleaned up.

[Image: JVM Heap]

The Heap and Stack memory can be viewed at runtime using tools like Oracle’s JRockit Mission Control (now part of the JDK). We can also do very long-term analysis of the memory and the garbage collection process using Garbage Collection (GC) logs and free tools that parse them.

One of the key resources to analyse in the JVM is memory usage and Garbage Collection. The process of cleaning unused objects from JVM memory is called “Garbage Collection” (GC); details of how this works are provided here: Java Garbage Collectors
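Besides the tools below, you can also get a quick programmatic view of heap usage and collector activity from inside the application via the standard java.lang.management API – a minimal sketch (the class name is arbitrary):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapSnapshot {
        public static void main(String[] args) {
            // heap usage as the JVM sees it right now
            MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = memoryBean.getHeapMemoryUsage();
            System.out.println("Heap used: " + (heap.getUsed() / (1024 * 1024)) + " MB"
                    + " of max " + (heap.getMax() / (1024 * 1024)) + " MB");

            // one MXBean per collector (e.g. young and old generation collectors)
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                        + " collections, " + gc.getCollectionTime() + " ms total");
            }
        }
    }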
Tools:

Oracle JRockit Mission Control is now free with the JDK!

  • OSX : “/Library/Java/JavaVirtualMachines/{JDK}/Contents/Home/bin/”
  • Windows: “JDK_HOME/bin/”

The GC Viewer tool can be downloaded from GitHub (link in the analysis steps below)

Memory Analysis Tools:

  • Runtime memory analysis with Oracle JRockit Mission Control (JRMC)
  • Garbage Collection (GC) logs and the “GC Viewer” analyser tool

Oracle JRockit Mission Control (JRMC)

  • Available for SPARC, Linux and Windows only
  • Download here: http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-jrockit-2192437.html
  • Usage Examples:
    • High-level CPU and Heap usage, along with details on memory fragmentation
    • Detailed view of the Heap – Tenured, Young Eden, Young Survivor etc.
    • The “Flight Recorder” tool starts a recording session over a duration, used for deep analysis at a later time
    • Thread-level socket read/write details
    • GC pause – Young and Tenured map
    • Detailed thread analysis

 

Garbage Collection log analysis

  1. Problem: Often it is not possible to run a profiler at runtime because
    1. Running on the same host uses up resources
    2. Connecting a profiler to a remote host often runs into JMX connectivity issues due to firewalls etc.
  2. Solution:
    1. Enable GC Logging using the following JVM command line arguments (a full launch command combining these is sketched after this list)
      -XX:+PrintGCDetails
      -XX:+PrintGCDateStamps
      -XX:+PrintTenuringDistribution
      -Xloggc:%GC_LOG_HOME%/logs/gc.log
    2. Ensure the GC_LOG_HOME is not on a remote host (i.e. there is no network overhead in writing the GC logs)
    3. Analyse logs using a tool
      1. Example: GC Viewer https://github.com/chewiebug/GCViewer
  3. Using the GC Viewer Tool to analyse GC Logs
    1. Download the tool
      1. https://github.com/chewiebug/GCViewer
    2. Import a GC log file
    3. View Summary
      1. Gives a summary of the Garbage Collection
      2. Total number of concurrent collections, minor and major collections
      3. Usage Scenario:
        "Your application runs fine, but occasionally you see slowed performance … it could be that a stop-the-world (serial) garbage collection is running, which pauses all application threads while memory is cleaned. Use the summary view in GC Viewer to look for long pauses under Full GC Pause."
    4. View Detailed Heap Info
      1. Gives a view of the growing heap sizes in Young and Old generation over time (horizontal axis)
      2. Moving the slide bar moves time along, and the “zig-zag” patterns rise and fall as memory usage grows and is cleared
      3. Vertical lines show instances when “collections” of the “garbage” take place
        1. Too many close lines indicate frequent collection (and interruption to the application if collections are not parallel) – bad, this means frequent minor collections
        2. Tall and Thick lines indicate longer collection (usually Full GC) – bad, this means longer full GC pauses
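To tie the pieces together, the flags from the GC-logging step simply go on the application’s launch command – a sketch (the log path and jar name are placeholders):

    java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution \
         -Xloggc:/var/log/myapp/gc.log -jar my-service.jar

The resulting gc.log is the file you import into GC Viewer.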

 

Oracle SOA Suite 11g BPEL – FTP Adapter: What’s my filename?

I was recently writing an FTP adapter for a client’s legacy integration project when a couple of requirements came up:

1) When reading the file from a remote location, the client wanted to use the filename as a data element.

2) When writing the file to a remote location, the client wanted the filename to be based on one of the elements from the inbound data (in this case a Primary Key from an Oracle table).

 

Part I: Reading the filename from the inbound FTP Adapter

The solution, in short, is this – when you create the FTP Adapter Receive, go to its properties and assign jca.ftp.FileName to a variable. For example, I created a simple String variable in my BPEL process called “FileName” and then assigned jca.ftp.FileName to the “FileName” BPEL variable. The end result was this …..

<receive name="ReceiveFTPFile" createInstance="yes"
         variable="FTPAdapterOutput"
         partnerLink="ReadFileFromRemoteLocation"
         portType="ns1:Get_ptt" operation="Get">
    <bpelx:property name="jca.ftp.FileName" variable="FileName"/>
</receive>

 

Here’s a visual guide on how to do this:

Create a Variable

 

Assign the Variable to the jca.ftp.FileName property on the Receive …

 

Part II: Using a specific element value instead of YYYY-MM-DD etc. for the outbound FTP filename:

You can use the same process shown above on the Outbound FTP Adapter. That is, read the value from the element you want the filename to be (either create a new String BPEL variable or reuse something in your schema) and assign it to the Invoke’s jca.ftp.FileName property.
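As a rough sketch (the partner link, port type and variable names are placeholders, and the exact property element can vary between BPEL versions), the outbound Invoke ends up looking something like this:

<invoke name="WriteFTPFile"
        partnerLink="WriteFileToRemoteLocation"
        portType="ns1:Put_ptt" operation="Put"
        inputVariable="FTPAdapterInput">
    <bpelx:inputProperty name="jca.ftp.FileName" variable="FileName"/>
</invoke>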

 

BPEL Error with Receive/Pick

Error: “Error(81): There is not an activity (receive/pick) to start the process”

Fix:  Check the “Create Instance” checkbox on your Receive or Pick activity.

 

When do you see these errors?

When you create a BPEL process and remove the default Receive/Reply components so you can instead receive/pick events from, say, a queue or an FTP adapter – and forget to tick “Create Instance” on the new activity.

For example: I have a BPEL flow below with an FTP adapter which receives a file and calls out to a Java/Spring Bean (to parse the file etc)

 

Setup ADF Session UserData in Application Module’s Prepare Session – HowTo

Here’s a useful bit of code that uses the information in the security context to populate the ADF Application Module session’s user data. This snippet goes in your ADF Fusion project’s Application Module “prepareSession” method, as shown below, and it uses “session.getUserData()” to get a handle to the session’s user data and populate the user info.

Here’s the code:

    protected void prepareSession(Session session) {
        .....
        ....
        // grab the current security context and the authenticated Subject from it
        java.security.AccessControlContext context = java.security.AccessController.getContext();
        javax.security.auth.Subject subject = javax.security.auth.Subject.getSubject(context);

        if (subject != null && subject.getPrincipals() != null) {
            // the first Principal is the authenticated user
            Iterator<Principal> iteratorOverPrincipals = subject.getPrincipals().iterator();
            String user = null;
            if (iteratorOverPrincipals.hasNext())
                user = iteratorOverPrincipals.next().getName();

            if (user != null)
                log.log(Level.INFO, "PrepareSession:" + user);

            // stash the user info in the Application Module session's user data
            Hashtable userData = session.getUserData();
            userData.put(UserSession.USER_SESSION_KEY, new UserSession(user));
        } else {
            // no authenticated subject - store an anonymous UserSession
            Hashtable userData = session.getUserData();
            userData.put(UserSession.USER_SESSION_KEY, new UserSession(null));
            ....
            ...
        }
      ....
      ....
    }

High Performance Computing Comes to the Enterprise – Oracle’s Exalogic

Oracle’s Exalogic….

is a hardware platform that outperforms the competition with features like a 40 Gb/sec InfiniBand network link, 30 x86 compute nodes, 360 Xeon cores (2.93 GHz), 2.8 TB DRAM and 960 GB SSD in a full rack. Phew!

Ref: Oracle’s Whitepaper on Exalogic

You can “google” it … search for “Oracle Exalogic” and learn more about the beast, but in short this is a platform that is not only optimised to perform well but also designed to use fewer resources. So, for example, the power consumption is really low and this is a very green option. Or so says the “kool-aid label”.

Application Architects have fretted over network latency, I/O bottlenecks and general hardware issues over the years. While classical “Computer Science” recommends/insists that optimisation lies in application and algorithmic efficiency, the reality in enterprise environments is that “information processing” applications are often (let’s assume) optimised, yet it is hardware issues that cause more problems. Sure, there is no replacement for SQL tuning, code instrumentation etc., but if you are an enterprise invested in a lot of COTS applications – you just want the damn thing to run! Often the “damn thing” does want to run, but it has limited resources and the “scaling” of those resources is not optimised.

This is especially true for 3-tier applications which, despite being optimised (no “select *” queries or bad sort loops), have to run on hardware that performs great in isolation but, when clustered, does not scale as well as High Performance Computing applications do. Why is that?

The problem lies…

in the common protocols used to move data around. Ethernet, and TCP/IP over it, has been the standard for making computers and the applications on them “talk”. Let’s just say that this hardware and protocol stack can be optimised quite a bit! Well, that’s what has happened with Exalogic and Exadata.

Thanks to some fresh thinking on Oracle’s part, their acquisition of Sun Microsystems, improvements in the Java language (some of the New I/O stuff) and high-performance InfiniBand network switches … there is a new hardware platform in town which is juiced up (can I say “pimped out”?) to perform!

My joy stems from the fact that Oracle is bringing optimisations employed in High Performance Computing to enterprise hardware – the use of collective communication APIs like Scatter/Gather to improve application I/O throughput and latency (fact: 10 Gbps Ethernet has an MTU of 1.5 KB, while InfiniBand uses a 64 KB MTU for IP-over-InfiniBand and a 32 KB MTU or more for the Sockets Direct Protocol).
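The same scatter/gather idea shows up at the application level in Java’s NIO channels – a minimal gathering-write sketch (the file name and payloads are placeholders), where the header and body buffers are handed to the channel in a single call instead of two separate writes:

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class GatheringWriteExample {
        public static void main(String[] args) throws IOException {
            ByteBuffer header = ByteBuffer.wrap("HEADER|".getBytes());
            ByteBuffer body = ByteBuffer.wrap("payload bytes ...".getBytes());

            FileChannel channel = new FileOutputStream("out.bin").getChannel();
            try {
                // one gathering write pushes both buffers to the channel together
                channel.write(new ByteBuffer[] { header, body });
            } finally {
                channel.close();
            }
        }
    }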


Personally all this ties very well with my background in High Performance Computing (see my Master’s in Computer Science report on High Performance Unified Parallel C (UPC) Collectives For Linux/Myrinet Platforms   done at Michigan Tech with Dr. Steve Seidel) …and my experience in Enterprise Application development/architecture.


…here’s my description of Scatter and Gather collective communication written in 2004:

Broadcast using memory pinning and remote DMA access in Linux (basically, the network card can access user space directly and do gets and puts)



Notes on Webcenter PS4 Install – Part II

Installing Webcenter ….

…okay, I got a little stuck here because of an incorrect JDK version. I extracted the “ofm_wc_generic_11.1.1.5.0_disk1_1of1.zip” file using “unzip <filename>” and it created 3 “DISK” folders. I went under “Disk1/bin/” and ran:

[oracle@xxxxxDisk1]$ ./runInstaller
Starting Oracle Universal Installer...

Checking if CPU speed is above 300 MHz.    Actual 2660 MHz    Passed
Checking Temp space: must be greater than 150 MB.   Actual 23647 MB    Passed
Checking swap space: must be greater than 512 MB.   Actual 4031 MB    Passed

Continue? (y/n) [n] y

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-05-27_03-33-20AM. Please wait ...
Please specify JRE/JDK location ( Ex. /home/jre ), <location>/bin/java should exist :/opt/app/Oracle/Middleware/jdk160_24

After a little while it displayed the following error in the Xterm window ….

[oracle@xxxxxxDisk1]$ java.lang.UnsatisfiedLinkError: /tmp/OraInstall2011-05-27_03-46-01AM/oui/lib/linux64/liboraInstaller.so: /tmp/OraInstall2011-05-27_03-46-01AM/oui/lib /linux64/liboraInstaller.so: wrong ELF class: ELFCLASS64 (Possible cause: architecture word width mismatch)
 at java.lang.ClassLoader$NativeLibrary.load(Native Method)

The reason for this was an incorrect JDK version (using a 32-bit JDK when I needed a 64-bit one – “architecture word width mismatch”).

Check your JDK build in the following manner:

Go to your Java bin folder, say /…/jdk160_24/bin

Run the “file” command on the java binary in that folder, i.e. “file java”.

[oracle@pdemora140rhv bin]$ file java
 java: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped

The “ELF 32-bit LSB executable” part tells you the build of the “java” binary.

So now that we know the version is incorrect, we download the right WebLogic installer and a 64-bit JDK.

Next Steps…

  • Install 64bit JDK
  • Install WebLogic using the 64-bit JDK … use “java -jar” (see the command sketch after this list)
  • Install the Webcenter Product (11.1.1.5)
  • Run config.sh to create a Domain – goto ORACLE_HOME/oracle_common/common/bin  & make sure display is exported (see earlier post)
  • Create the Webcenter Spaces weblogic domain
    • keep JDBC url handy (details coming soon on how to configure the different schema logins with the same JDBC url and password)
    • make sure you select the Administration Server settings and select the machine for Adminserver and the Webcenter managed server (this is so that the node manager can start it)
  • Change the managed server startup script under DOMAIN_HOME/bin folder to use a different tmp folder or run Admin server and select “No Stage” for Webcenter spaces server
  • Run the SetupNM script
  • Start the NodeManager
  • Start the Weblogic Admin Server
  • Goto http://<adminserver ip>:7001/console to deploy the Admin Console
  • Navigate to Servers -> Webcenter Spaces server and configure the options for Deployment -> No Stage
  • Start Webcenter Spaces server
  • Go to http://<webcenter spaces ip>:8888/webcenter to launch the application. If the WebCenter Spaces startup logs show “127.0.0.1” or “localhost”, go to the Spaces server settings in the Admin Server console – you will find “localhost” configured as the server’s listen address. Remove this (make the field blank) to have the Spaces server listen on the machine’s configured IP
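For the “Install WebLogic using the 64-bit JDK” step above, the generic installer is launched with the 64-bit JDK found earlier – a sketch (the installer jar name will vary with your WebLogic version):

    /opt/app/Oracle/Middleware/jdk160_24/bin/java -jar wls1035_generic.jar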

And when you are done this is the result …

 

Oracle Fusion ADF – JSF Rich Text and Table Component (Data Input and Refresh)

Goal:

The ADF framework is quite powerful and you should be able to quickly create a page that lets you do Partial Page Rendering (PPR) … there are plenty of good examples online about how to do this, and you should be able to work things out on your own in no time.

However, my problem was that with the out-of-the-box “Partial Triggers” property on the table component I was unable to “re-query” the underlying table model. Instead, I used an explicit call to the underlying UI controls and ended up learning quite a bit in the process.

Build a very simple example with a basic Java “Map” data model that accepts “key/value” pairs as inputs on a page and displays the growing “list”/“map” as a table below. Use framework-based or custom-built Partial Page Rendering (PPR) to refresh the table, and tie this to the “Submit” button on the page.

The example below should walk you through a very basic ADF web application build using plain java classes to demonstrate MVC interaction in this framework.

Please Note: In an effort to quickly pen down my observations, the first draft might be not-so-polished.  I will work on refining this post over time.

Here’s what I was thinking of building/experimenting with:

[Image: Sample Page – Table from a List]

Outline

The application is a very simple, two-project Fusion web app. It has a Model and a View project – the Model was started from a POJO service class (I don’t use any patterns to ensure that multiple instances of the service class refer to the same data model etc. – it is very basic).

Here’s how you might build a service class … and after you are done – right click and create a Data Control
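A minimal sketch of such a service class, assuming a made-up class name (StringTableService) and using the method name and parameters (“insertIntoStringTable”, “key”, “value”) that the page bean code further down refers to:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class StringTableService {

        // the backing "table" - a plain key/value map, deliberately kept basic (no singleton or other patterns)
        private Map<String, String> stringTable = new LinkedHashMap<String, String>();

        // exposed through the Data Control and invoked from the page bean via an OperationBinding
        public void insertIntoStringTable(String key, String value) {
            stringTable.put(key, value);
        }

        // the collection the table component binds to, returned as simple key/value rows
        public List<StringRow> getStringRows() {
            List<StringRow> rows = new ArrayList<StringRow>();
            for (Map.Entry<String, String> entry : stringTable.entrySet()) {
                rows.add(new StringRow(entry.getKey(), entry.getValue()));
            }
            return rows;
        }

        // simple row bean so the Data Control can expose "key" and "value" attributes
        public static class StringRow {
            private String key;
            private String value;

            public StringRow(String key, String value) {
                this.key = key;
                this.value = value;
            }

            public String getKey() { return key; }

            public String getValue() { return value; }
        }
    }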

…. right click on the Service Java Class, to create/update the DataControl,  and select “Create Data Control”

The overall project will look something like this: Data Controls built from the POJO service will be available to the UI via data bindings. The page will consist of 3-4 components (two Rich Input Texts, one Button and one Table). The page (and its components) will be backed by a custom Managed Bean – that is, a Java class with RichInputText fields, a RichTable field and methods that accept ValueChangeEvent etc. A definition in the ADF config file ties this class to the JSF page, so that details of the fields are available to the class (as you change them on the screen) and clicking the button component on the screen invokes a method on the class.

PPR – Using out-of-box features

As explained above, the page bean has the meat of the logic. We could have avoided this entirely by using out-of-the-box PPR: click on the Rich component you want refreshed and, under “Properties -> Behavior”, select “Partial Triggers” to specify the trigger that will cause a partial page refresh for this component.

Once you click on Edit – the wizard opens and you can use it to select “which component causes the trigger” …

PPR – Using custom method in page bean

The bean class – notice:

  • the fields and  their accessors/mutators
  • the methods with ValueChangeEvent – these are tied to the RichTextInput components changing
  • The “cb3_action” method – this one is tied to the “button” on the page and calls the “refresh table” method. Details are in the method’s comments.

Code to handle the button event:

public String cb3_action() {
    // Should be final Strings in a shared class
    String insertMethodInPOJO = "insertIntoStringTable";
    String paramNameForKey = "key";
    String paramNameForValue = "value";

    // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // Steps to execute a method as defined in a Data Control
    // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    BindingContainer bindings =
        BindingContext.getCurrent().getCurrentBindingsEntry();
    OperationBinding operationBinding =
        bindings.getOperationBinding(insertMethodInPOJO);
    // mKey and mValue are instances of the RichInputText class
    operationBinding.getParamsMap().put(paramNameForKey, mKey.getValue());
    operationBinding.getParamsMap().put(paramNameForValue, mValue.getValue());
    Object result = operationBinding.execute();

    if (!operationBinding.getErrors().isEmpty()) {
        // do something with the errors
        return null;
    }

    // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // REFRESH the table
    // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    refreshTable();

    return null;
}

The table refresh code

/***
 * REFRESH Table Model:
 *    Grab the CollectionModel, get the JUCtrlHierBinding, execute the query to force a re-fetch
 */
public void refreshTable() {
    // richTableBinding is the RichTable field bound to the page's table component
    CollectionModel cm = (CollectionModel) richTableBinding.getValue();
    JUCtrlHierBinding tableBinding = (JUCtrlHierBinding) cm.getWrappedData();
    tableBinding.executeQuery();
}

POJO Business Objects and UML Diagrams in JDeveloper 11g

Step 1) Right Click on the Project and select “New …”

Step 2) Select from the “Java” Category in Categories on Left …. and then select “Java Class Diagram” as shown below

Step 3) Drag Java classes to the Diagram or create new UML Model and generate objects.

Additionally … it looks like you can take your Business Objects in POJO form and create DB entities out of them too. I have yet to test this; I will post an update when I do!