Wednesday, June 15, 2011

Internship @ I8C about Integration-As-A-Service

For the past 10 weeks we - Siebe Le Duc and Stijn Waegmans - did our internship at I8C, doing research on Cloud Integration tools. We investigated the IaaS tools Babelway, Boomi and Cast Iron and the PaaS tool Windows Azure. We concluded that:
  • Babelway is a good B2B point-to-point solution,
  • Boomi has many B2B SaaS integration possibilities, for both on-premises-to-SaaS and SaaS-to-SaaS communication,
  • Cast Iron offers application-to-application integration solutions, both on-premises and in the cloud,
  • Windows Azure is a good platform for hosting applications, but its integration side, the AppFabric Service Bus, is still in full flux.
During our internship we discovered that the integration world is huge and that a lot is happening in it at this very moment.
One area that can become an important advantage for Integration-as-a-Service is the social platform. Because all processes are developed and deployed in the cloud, the service providers can form a good picture of the processes their customers build. This becomes interesting when they use that information to help other users.
Boomi already does this with its ‘Boomi Suggest’ option in the mapping. Cast Iron does it by providing ‘TIPs’. In Babelway, companies can make their message profiles or transfer protocol specifications available for others to use for free. Pervasive does it with ‘Pervasive Galaxy’, a platform to buy and sell apps and even arrange cooperation between Pervasive users. So the social platform already exists, but it can be developed further and offers a huge advantage over non-cloud competition.
We are sure Cloud Integration will continue to grow and will take an increasingly big chunk of the integration market, provided the use of different SaaS solutions keeps growing. This growth won't come without growing pains for the Cloud Integration tools, and there will be more big changes in the future.
You can find our final internship presentation here (slideshare).
We had a great and instructive time at I8C and would like to thank Guy Crets and the rest of the I8C team for supporting us throughout the entire internship.

Authors: Stijn and Siebe

Monday, June 13, 2011

JVM Performance Tuning Part 2: Garbage Collection Theory

Garbage collection is often the most misunderstood feature of the Java Virtual Machine. It's often advertised as moving the responsibility of memory management from the application developer to the Java Virtual Machine. This just isn't the case. On the other hand, the developer doesn't need to put too much effort into pleasing the garbage collector.
A good understanding of garbage collection theory is necessary for writing high-performance applications for the Java platform. Part 2 of the JVM performance tuning blog entries discusses the theoretical process of garbage collection in detail.

An important question we could ask ourselves is: why should we care about garbage collection?
There is a cost related to the allocation and collection of memory. It can play an important role in how the software performs, especially when the application requires large amounts of RAM and forces the OS to use virtual memory. This behaviour often occurs when programs have memory leaks, meaning that memory is allocated but never properly released. Although the JVM is responsible for freeing unused memory, the developer has to make clear what is unused.
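As a hedged sketch (class and method names are ours), a typical Java "leak" is memory that stays reachable: a static collection keeps strong references to its entries, so the GC can never reclaim them until the code explicitly drops them.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // A static collection keeps strong references for the lifetime of the
    // class: everything added here stays reachable and cannot be collected.
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    static int store(byte[] data) {
        CACHE.add(data);
        return CACHE.size();
    }

    // "Making clear what is unused": dropping the strong reference
    // makes the entry eligible for garbage collection.
    static void evict(int index) {
        CACHE.remove(index);
    }

    static int size() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        System.out.println(store(new byte[1024])); // prints 1
        evict(0);
        System.out.println(size());                // prints 0
    }
}
```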

Garbage collection is the process of cleaning up unreachable Java objects. An object is said to be unreachable when no strong references to it exist anymore. As soon as an object is unreachable it can be collected by the GC: the object is a candidate for collection, but this doesn't mean it will be collected immediately.
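A minimal sketch of reachability (the names and the use of a WeakReference for observation are ours): dropping the last strong reference makes the object a candidate, and a later GC cycle may reclaim it.

```java
import java.lang.ref.WeakReference;

public class Reachability {
    // True while the referent is still reachable through the weak reference.
    static boolean alive(WeakReference<Object> ref) {
        return ref.get() != null;
    }

    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<Object>(strong);

        System.out.println(alive(weak)); // true: a strong reference still exists

        strong = null; // last strong reference gone: candidate for collection
        System.gc();   // only a request; the JVM decides when to collect

        System.out.println(alive(weak)); // usually false after a GC cycle,
                                         // but there is no guarantee when
    }
}
```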

The global working of garbage collection consists of three phases:
  • Lock it down: objects participating in garbage collection first need to be locked.
  • Mark: the iteration phase. All reachable (live) objects are marked.
  • Sweep: the sweeping phase, in which all unmarked (dead) objects are reclaimed.
No matter what type of garbage collector is used, these three phases can be found in all of them. What really distinguishes the different types of garbage collectors are the strategies they use:
  • Serial versus Parallel: a serial GC uses only one thread to perform garbage collection, even when multiple CPUs are available. A parallel GC uses multiple threads to execute GC in parallel. This introduces a little more overhead, but the use of multiple threads decreases the total GC time.
  • Concurrent versus Stop the World: a stop-the-world GC stops all currently running application threads the moment the garbage collector starts cleaning up dead objects. At that moment the application seems frozen. With a concurrent GC only a small part of the overall GC process stops the application threads; the largest part runs concurrently with the application. Since the state of an object can change during concurrent GC, this type of GC introduces extra overhead and memory requirements.
  • Compacting versus non-compacting versus copying: once the GC has removed the unreachable objects, it can perform a compacting operation: all remaining objects are placed next to each other in the Java heap to avoid fragmentation. Compacting introduces extra overhead during GC, but makes allocation of new objects faster, since the JVM doesn't need to search the fragmented heap for a free spot. A copying collector instead copies the live objects to another area in memory, leaving the original area free.
  • CMS: concurrent mark-sweep (explained in detail later)
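On the Sun/Oracle HotSpot JVM these strategies map onto command-line flags; a few examples from the current (pre-Java-9) releases, where app.jar is a placeholder for your own application:

```shell
java -XX:+UseSerialGC        -jar app.jar   # serial, stop-the-world collector
java -XX:+UseParallelGC      -jar app.jar   # parallel young-generation collection
java -XX:+UseParallelOldGC   -jar app.jar   # adds parallel old-generation compaction
java -XX:+UseConcMarkSweepGC -jar app.jar   # CMS: mostly concurrent, non-compacting
java -verbose:gc -XX:+PrintGCDetails -jar app.jar  # log what the collector is doing
```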

GC performance can be measured using the following metrics:
  • Throughput: the percentage of total time not spent on GC
  • Garbage Collection Overhead: the percentage of total time spent on GC
  • Pause Time: the time application threads are stopped to perform GC
  • Frequency of Collection: how often GC occurs
  • Footprint: how many resources (memory and CPU) the GC needs
  • Promptness: the time between an object becoming garbage and the moment it is actually reclaimed
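Several of these metrics can be observed at runtime through the standard java.lang.management API. A hedged sketch (the class name and the allocation loop are ours) that derives collection frequency and GC overhead:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    // Sums collection counts and accumulated collection time (ms)
    // over all garbage collectors of the running JVM.
    static long[] totals() {
        long count = 0, time = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // getCollectionCount()/getCollectionTime() return -1 when
            // the collector does not support the metric, so guard them.
            if (gc.getCollectionCount() > 0) count += gc.getCollectionCount();
            if (gc.getCollectionTime() > 0)  time  += gc.getCollectionTime();
        }
        return new long[] { count, time };
    }

    public static void main(String[] args) {
        // Allocate some short-lived garbage so the collector has work to do.
        for (int i = 0; i < 200000; i++) {
            byte[] garbage = new byte[1024];
        }
        long[] t = totals();
        long uptime = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.println("collections=" + t[0] + ", gcTimeMs=" + t[1]);
        // GC overhead = gcTime / elapsed time; throughput = 100% - overhead.
        System.out.println("overhead=" + (100.0 * t[1] / uptime) + "%");
    }
}
```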

Thursday, June 9, 2011

SoftwareAG ProcessWorld 2011 Day2

Perhaps I picked the wrong presentations, or perhaps it's because I had already passed by most of the information booths on the first day, but I found this second day of ProcessWorld a bit less interesting.
It started nicely though, with a very good presentation by Alexander Osterwalder on high-level (strategic) business modelling, namely his Business Model Canvas. For me it was interesting to see how a business model can be shaped by the decisions made at a strategic level. As a side note I would like to mention that, for me, this was the best-delivered presentation of the whole event.

Other interesting things learned:

Mobile
I must say the technology seems sound, with its "write once, deploy anywhere" idea for the new mobile acquisition. I was a bit disappointed by the tech demo though, as it was a most simple voting app. I would have liked to see something that differentiates a mobile app from a web app (like GPS, phonebook, ...). I'm still not convinced that this acquisition fits in the portfolio of a middleware company like SoftwareAG, but I'm sure they'll come up with some good use cases that convince me otherwise.

Complex Events
It's a disadvantage when you have already spoken to a person and then attend his/her presentation: you get a lot of the same information again... Nonetheless, there was a nice use case explained. The key to using CEP is to determine whether you have events or real data.

Product roadmap
The next major release (release K) is planned for the end of next year. A major focus of this release will be the manageability of the servers. They already made major advances with the 8.2 release (fixes through the installer, a new deployment procedure), but release K will improve this even more. As it stands now, the MWS will be renamed in the next version and offer more management capabilities. There were also a whole number of other changes that were too small to read, and which they didn't bother to read out, so we'll have to wait for the actual presentation to be released for that info.
In the meantime there could be minor releases that incorporate the newly acquired technologies (Terracotta and mobile).

Cloud Ready
When they use this term, they actually mean certified for the Amazon cloud service. They are currently working on certifying their software for running on the Amazon cloud. As a matter of fact, all the VMs on the floor were running on that cloud (until they got some serious lag and switched to local machines: hurray for the cloud! ;))
Otherwise they also talk about cloud when speaking of collaboration. One of the ARIS products offers nice (video) collaboration out of the box.

Overall I was pleasantly surprised by the wealth of information available at the conference. As this was my first visit, it is difficult to say whether this was due to the new products and acquisitions. Perhaps I can tell you next year...

Author: Stefan

Saturday, June 4, 2011

How to do NTLMv2 authentication in TIBCO BusinessWorks

As a proof of concept I had to test if TIBCO could perform authentication from its BusinessWorks suite to a Microsoft Dynamics CRM web service using ‘Integrated Windows Authentication’.
TIBCO BusinessWorks has all the necessary tools for connectivity, transformation and orchestration of processes, but unfortunately it has no support for Integrated Windows Authentication. I don't consider this a flaw of TIBCO BusinessWorks, though: Integrated Windows Authentication is specific to Microsoft products, and the protocol currently in scope for the POC, NTLM, is a proprietary protocol.

What is the goal of the POC?

Authenticate TIBCO when calling the Microsoft Dynamics CRM web service. The authentication needs to be done using the NTLMv2 protocol. The account I use is a designated system account for TIBCO, which has received the correct access.

How did I start?

A lot of developers think: ‘what I do, I do better’. Well, I am more in favor of ‘use instead of build’. So I first searched the internet for existing solutions that might do the trick for us. Since that didn't work out well, I started using libraries that implement NTLM to see if they would work with TIBCO BusinessWorks.

I also wanted to find a solution as fast as possible. So instead of investigating further why something doesn't work by the book, I just tried a different library/application.

So here is a summary of things I’ve tried:

Proxy solutions:

NTLMAPS: this is a tool that was used at a client's site but stopped working for them after they switched to a new Active Directory domain. For my POC, using the latest NTLMAPS version, I constantly received a 401 error back, so I quickly had to give up on this.

CNTLM: a rewrite of NTLMAPS. I managed to get authenticated when trying it from a non-MS browser like Chrome or Firefox. However, the tool prompted me for credentials, which were then used for the NTLM authentication. I quickly tried to configure my SOAP Request-Reply activity using HTTP Authentication and a correctly set Identity, but unfortunately it didn't work. I didn't investigate this further.

Since the proxy solutions did not work out well, I moved on to a Java Code activity and libraries implementing the NTLM protocol.

Client solutions:

HttpClient: according to the Apache documentation it should support NTLMv2, but I didn't manage to get it to work. Although I followed the guidelines, authentication always failed with a 401 error. Maybe I was doing something wrong, but since TIBCO BusinessWorks also ships (an older) HttpClient in its third-party library repository, I decided not to investigate further, just to be sure an upgrade wouldn't cause a nasty side effect.

I found an interesting article about configuring Apache HttpClient 3.x with the JCIFS library to get NTLM support.

I didn't try this one, because on the JCIFS site they themselves recommend using the Jespa library if you're looking for full NTLM support.

Unfortunately the Jespa library is not open source and has some limitations when you integrate directly with Active Directory. However, in my situation I only needed a small portion of this library: I needed to establish a connection and a provider that authenticates using the NTLMv2 protocol. So for my POC there was no impact.

Proxy setup

I've made a small TIBCO BW project that acts as a proxy between TIBCO BusinessWorks and the MS Dynamics CRM web services. This service works identically to the NTLMAPS application: it sits as a proxy between the Soap Request-Reply activities and the endpoint.
What does the Forward request activity look like?

1)     First I defined some input parameters so I could dynamically configure my process:

2)     Configuring the Java Code: Updating the import statements
import java.util.*;
import jespa.http.HttpURLConnection;
3)     Configuring the Java Code: Add an inner class

This class will perform the POST action and return the soap reply.
public class HttpPost implements PrivilegedExceptionAction {

       private URL url = null;
       private HttpURLConnection conn = null;
       private OutputStreamWriter wout = null;
       private BufferedReader rd = null;
       private StringBuilder sb = null;
       private String line = null;
       private String responseMessage = null;
       private int responseCode = 0;
       private String responseBody = null;
       private String endpoint;

       public HttpPost(String endpoint) {
              this.endpoint = endpoint;
       }

       public Object run() throws Exception {
              url = new URL(endpoint);
              conn = new HttpURLConnection(url);
              try {
                     conn.addRequestProperty("SOAPAction", soapAction);
                     conn.addRequestProperty("Content-Type", contentType);

                     // Write the soap request (soapRequest is one of the
                     // input parameters of the Java Code activity)
                     wout = new OutputStreamWriter(conn.getOutputStream());
                     wout.write(soapRequest);
                     wout.flush(); // this triggers the POST

                     // Get the response
                     rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                     sb = new StringBuilder();
                     while ((line = rd.readLine()) != null) {
                            sb.append(line).append("\n");
                     }
              } catch (IOException ioe) {
                     System.err.println(ioe.getMessage()); // such as '404 Not Found'
                     // On an HTTP error the body must be read from the error stream
                     rd = new BufferedReader(new InputStreamReader(conn.getErrorStream()));
                     sb = new StringBuilder();
                     while ((line = rd.readLine()) != null) {
                            sb.append(line).append("\n");
                     }
              } finally {
                     responseCode = conn.getResponseCode();
                     responseMessage = conn.getResponseMessage();
                     responseBody = sb.toString();
                     rd = null;
                     sb = null;
                     conn = null;
              }
              return null;
       }

       public int getResponseCode() {
              return this.responseCode;
       }

       public String getResponseMessage() {
              return this.responseMessage;
       }

       public String getResponseBody() {
              return this.responseBody;
       }
}

4)     Implement the invoke function
org.apache.log4j.Logger logger = org.apache.log4j.Logger.getLogger("bw.logger");
HttpPost t = new HttpPost(endpoint);
RunAs.runAs(t, new PasswordCredential(domain + "\\" + userName, password.toCharArray()));"Server replied with HTTP status code: " + t.getResponseCode() + " " + t.getResponseMessage());
soapReply = t.getResponseBody();

Using the proxy class

When configuring my Soap Request-Reply activity, I only need to set a Proxy Configuration that points to my HTTP Receiver. My HTTP Receiver will forward the request and return the correct response.

Update: As some readers have commented, there seems to be a bug inside the above code. 

The updated project can be downloaded here. This project has updated Java code that improves the handling of the soap request/response. You'll have to change the global variables so that the authentication group is updated with your login credentials.
Also note that since BusinessWorks version 5.10, TIBCO has added NTLM authentication support. See the release notes at

Author: G√ľnther