Thursday, November 27, 2014

SAP PI to Oracle with batch insert - Improved performance

We were implementing a bulk-data transfer from SAP into an Oracle database. Easily upwards of 15 000 materials were being uploaded to the database via a stored procedure (company policy). Normally we like this approach because it decouples PI from the database’s underlying table structure, but we were getting terrible performance.

In testing, the entire workflow took almost 2 hours. Whilst this in itself wasn’t an issue (the process runs in the middle of the night), it was unnecessary load on both systems, and the extended duration put the process at increased risk of failure (e.g. due to network issues).

Keen to improve this, we looked at PI’s “batch insert” capabilities. To maintain the decoupling and to protect the destination tables, we created an interface table to temporarily hold the material data, plus a procedure that safely updates the destination table.
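To illustrate the difference, here is a minimal sketch of the batch-insert pattern via Python's DB-API, using SQLite so it is self-contained. The table and column names are invented, and PI itself does this through its database adapter rather than Python; this only shows the row-by-row versus batched pattern.

```python
import sqlite3

# Illustrative schema: an interface (staging) table that temporarily
# holds the uploaded material data, decoupled from the real tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE material_iface (matnr TEXT, descr TEXT)")

materials = [("MAT-%05d" % i, "Material %d" % i) for i in range(15000)]

# Row-by-row: one statement execution per record (the slow original approach).
# for m in materials:
#     conn.execute("INSERT INTO material_iface VALUES (?, ?)", m)

# Batch insert: the driver processes the records in large batches.
conn.executemany("INSERT INTO material_iface VALUES (?, ?)", materials)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM material_iface").fetchone()[0]
print(count)  # 15000
```

A separate stored procedure would then move the staged rows into the destination table, keeping PI unaware of that table's structure.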


Testing showed a 30-to-60-fold performance improvement during the PI-DB exchange, and the entire process ended up taking just 10 minutes.

Author: Edwin

Thursday, November 20, 2014

ESB = Erroneous Spaghetti Box?

While re-reading the Microservices article by Martin Fowler, I was triggered by footnote #7: "We can't resist mentioning Jim Webber's statement that ESB stands for 'Egregious Spaghetti Box'." I viewed the presentation - from 2008 - in which Jim Webber and Martin Fowler bash the Enterprise Service Bus and translate the acronym ESB into "Erroneous Spaghetti Box".

http://www.slideshare.net/deimos/jim-webber-martin-fowler-does-my-bus-look-big-in-this

I do agree that the integration platform often simply contains a spaghetti of point-to-point integrations. But that's good! It is way better than having all that integration logic dispersed over many systems, with a wide variety of integration techniques, protocols and message formats. And "spaghetti in a box" is exactly what I say when explaining what an integration platform is. Only by taking the next step of careful service and message design can one arrive at a true Service Oriented Architecture.

Let's sum up the main advantages of an integration platform:
  • A standardized way to have applications talk to one another
  • No coding in a 3GL such as Java or C# but configuration in an application specifically built for the task of integrating systems
  • Support for applications of different kinds and ages, including packaged applications
  • Strongly reduced diversity in the tools and techniques used to integrate applications
  • Support for reliable, asynchronous communication using queuing and message persistence (which Fowler doesn't seem to like either)
  • Trivial connectivity through adapters
  • Central place to monitor and manage the communication between systems, in particular the reliable message exchange
  • Help turn packaged or older applications into services if desired (not everything is developed in-house)
With the disadvantages:
  • It is a central, separate platform
  • It requires some specific skills (e.g. XML)
  • It makes the cost of integration development and support truly visible
Where Webber and Fowler do have a point is that middleware vendors come with a whole slew of products. Obviously one should only pick the parts that are useful. And the ESB will definitely not create a Service Oriented Architecture for you.
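To make the queuing-and-persistence advantage listed above concrete, here is a minimal sketch of a persistent message queue. It uses SQLite as the backing store and all names are invented for illustration; a real integration platform would use a product such as IBM MQ for this.

```python
import sqlite3

# Reliable, asynchronous exchange: the producer enqueues and moves on;
# the consumer picks messages up later. With a file-backed database,
# messages survive a consumer crash.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE queue (id INTEGER PRIMARY KEY, body TEXT, done INTEGER DEFAULT 0)"
)

def enqueue(body):
    db.execute("INSERT INTO queue (body) VALUES (?)", (body,))
    db.commit()  # message is persisted before the producer continues

def dequeue():
    row = db.execute(
        "SELECT id, body FROM queue WHERE done = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    db.execute("UPDATE queue SET done = 1 WHERE id = ?", (row[0],))
    db.commit()
    return row[1]

enqueue("order-123")
enqueue("order-124")
first = dequeue()
print(first)  # order-123
```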

Author: Guy

Thursday, November 13, 2014

Micro Services - Conway's Law and Application Integration teams

"Micro Services" is a new buzzword in the world of IT architects. As it talks about application components communicating over a network and contains the word "services", it probably has something to do with SOA and integration. So I had to look into it.


Let's bulletize the description from the article by Martin Fowler and James Lewis:
  • The Microservices architectural style is an approach to
  • developing a single application < application architecture
  • as a suite of small services, < componentization, no libraries
  • each (service) running in its own process
  • and communicating with lightweight mechanisms, < over the network
  • often an HTTP resource API. < REST, message bus
  • These services are built around business capabilities < Domain Driven Design
  • and independently deployable by fully automated deployment machinery.
  • There is a bare minimum of centralized management of these services, 
  • which may be written in different programming languages and
  • use different data storage technologies < eventual consistency
Microservices are an architectural style used by very large, modern IT systems such as LinkedIn, Netflix, Amazon and eBay. There are all sorts of interesting aspects to Micro Services, e.g. the GUI part, security, transactional integrity, versioning etc.
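To make "each service running in its own process, communicating via an HTTP resource API" concrete, here is a minimal sketch using only the Python standard library. The /products resource and its payload are invented for illustration; real microservices would of course use a proper framework and deployment pipeline.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProductHandler(BaseHTTPRequestHandler):
    """A tiny HTTP resource API: one resource, one representation."""

    def do_GET(self):
        if self.path == "/products/1":
            body = json.dumps({"id": 1, "name": "widget"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# The "service in its own process" is simulated with a thread here.
server = HTTPServer(("127.0.0.1", 0), ProductHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consumer talks to the service over plain HTTP, nothing more.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/products/1") as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)
```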

Conway's Law - Integration Competence Center
But there was one aspect that triggered me in particular when learning about Microservices: Conway's Law: "any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure".

So this law states that an application architecture will reflect the way an IT department is organized. Microservices advocates refer to it a lot.

Service boundaries reinforced by team boundaries (picture from article by Martin Fowler)


For Microservices to focus and align with business functionality, the teams developing (and maintaining) the Microservices should therefore be cross-functional, including the full range of skills required for the development: user-experience, database, and project management.

Orthogonal to the view of the Microservices architects, Conway's Law confirms my personal opinion that any IT organization wishing to leverage a central integration platform to a great extent requires a separate team developing on and managing that integration platform.


How did I learn about MicroServices?

PS: when searching for the term "micro service", I also found the term in the book "Java Web Services Architecture" from back in 2003!

Author: Guy

Thursday, November 6, 2014

Message modeling and XSD generation

As an integration consultant I work with XML messages almost daily. In my opinion, in order to work efficiently with XML you need XML schemas. XML schemas make it possible to validate your messages (including those hard-to-find typos in mappings), they can be used to generate documentation, they define your service contracts and they can be used to generate a skeleton of your code. If and when validation should be enabled is a different discussion; perhaps I will write another article about it in the future.

In order to benefit from XML schemas they need to be clear, precise, flexible and interoperable with the different technologies you are going to use on your project. Among colleagues we regularly have lively discussions on how to achieve this. We all share the same general guidelines, but sometimes we disagree about details. Mostly it boils down to the technology we are used to working with. But I am fairly sure I can work with the schemas created by my peers.

One major downside of XML schemas is that they are very technical, and functional analysts don't always understand them very well - and why should they? They want to model the messages in their favorite modelling tool. In a perfect world you can generate the XSDs from the model. This way you can enforce the policies you have defined to which the XSDs should conform.

So what is wrong with this? Nothing! I even encourage you to do it, provided it is done correctly and you keep in mind that, in the end, a developer has to import the XSDs into a tool and work with them.

On a recent project I had to import an XSD from a third party in order to interact with them. In their documentation they were very proud of their UML model and of how cleverly they were modelling their data. With the generated XSD they were less concerned. Of what an XSD should be - simple, flexible, easy to understand - nothing was achieved. I spent 2 days trying to import them into my tool (IBM Integration Toolkit). In the end I gave up, as I could no longer justify the time spent to my client. I wrote my own (very simple) XSDs that conform to the messages we need to send and receive and used those within our tools.

For those thinking "then don't use IBM Integration Toolkit": I have quite some experience with IBM tooling, and in my career I have never before had so many problems importing XSDs. I find the XML support of IBM tools excellent. We tried to import the XSDs into different tools and they all failed.

So to conclude I want to give you some advice:

  • Pay attention to your XML Schemas
  • Define guidelines/rules to which your XML schemas should adhere within your organization
  • For a public interface make sure the XML schema does not use too advanced schema features (UN/CEFACT Naming and Design Rules may help you there)
  • Model your data and generate the XML Schemas from the (UML) model, but let your developers validate the generated XSDs
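As an illustration of the "keep it simple" advice: a schema in the style below - plain elements, no exotic schema features - imports cleanly into virtually any toolkit. The element names are invented for illustration.

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:order"
           xmlns="urn:example:order"
           elementFormDefault="qualified">
  <xs:element name="Order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="OrderNumber" type="xs:string"/>
        <xs:element name="OrderDate" type="xs:date"/>
        <xs:element name="Line" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="Item" type="xs:string"/>
              <xs:element name="Quantity" type="xs:positiveInteger"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```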

XML Schemas should be an aid, not a burden! Keep it simple!

Author: Jef Jansen

Thursday, October 30, 2014

Websphere Technical University Düsseldorf 2014 - Part 2

I started day 2 of the WTU conference with a session by Michael Hamann about some of the new features in IBM Websphere Datapower (V7) concerning the Datapower Policy Framework and Service Patterns.
The Datapower policy framework is managed in Websphere Service Registry and Repository and enforced on Datapower. This setup isn’t new; it has existed since Datapower firmware version 5.0.0. What’s new since version 7 is the possibility to use variables in the policy config, a feature called dynamic policy variability.
Another new feature in V7 is Service Patterns: templates that you can create from existing services in a new GUI, the Blueprint Console.
I have seen myself that many of our customers have already created their own scripts to work with some sort of template for the common integration scenarios, so service patterns will be great for them. They will have a supported way of working that brings more features than what they have right now.

Of course not all sessions that I attended involved my field of work, but there were still some interesting things that caught my attention:
(Photo: Twitter @bluemonki)
In the session about Cloud Integration by John Hosie, he mentioned ‘Chef’, which is a tool to automate the setup and maintenance of your infrastructure in the cloud. Check https://www.getchef.com/chef/ if you want to know more.
Of course, something that came up in half of the sessions I attended is IBM’s answer to Platform as a Service (PaaS): Bluemix. One of the more impressive examples came from the same ‘Cloud Integration’ session: after syncing your local database with a cloud DB in Bluemix, you can generate REST APIs to expose the data you want in just a few clicks.
Another hot topic at the conference was discussed by Bernard Kufluk and Bryan Boyd in their presentation about the Internet of Things (previously known as Smarter Planet). They gave us a glimpse of what the future might look like when all of our stuff is connected to the internet using the MQTT protocol. In contrast to most existing applications, which nowadays use HTTP to send data to the server, MQTT makes it possible to send commands from the server to the client application (for example, to stop a car remotely, as shown in the demo). The appliance that takes care of all this MQTT traffic is IBM MessageSight. My first impression is that this appliance is to bidirectional MQTT traffic what Datapower is to HTTP traffic.
The session about Blueworks Live by Roland Peisl presented another product that I likely won’t be working with in the near future, but it was nevertheless interesting to see how the product has evolved since the last time I used it, back when it was called Lombardi Blueprint. While obviously a lot has changed since then, the conclusion remains the same: it’s a great tool to help the business with process discovery sessions. If you’re looking for a tool that supports a full business process round trip, you should rather use Business Process Manager.

Author: Tim


Wednesday, October 29, 2014

Websphere Technical University Düsseldorf 2014 - First impressions

Impressions of the first day of the Websphere Technical University 2014 in Düsseldorf
The Websphere Technical University and Digital Experience conference is being held in Düsseldorf from the 28th of October until the 31st. With over 16 rooms for each timeslot, there is something to each person’s liking. My main interest at this conference is the integration track, and even though this narrows the immense choice of presentations, there are still some hard choices to be made.
I started the first day with the general opening session. This featured a great demo that showed the power of Bluemix.
(Photo: Twitter @reynoutvab)
In the afternoon the conference really started for me with a presentation about the trends and directions of the IBM Integration Bus. Speaker Jens Diedrichsen (@JensDiedrichsen) introduced us to the new features that will be present in IIB V10.0.
(Photo: Twitter @bluemonki)
Personal highlights for me were:
  • smaller install footprint (download size < 1 GB)
  • MQ is no longer a prerequisite. Not all IIB options will work without MQ yet, but in the future that is the goal.
  • Unit testing is improved with a built-in test service and CI capabilities
  • GitHub will provide extra samples, best practices and also connectors.
The IIB V10.0 Open Beta is now available at http://ibm.biz/iibopenbeta to discover all the new features yourself.

The next interesting session that I attended was the presentation by Klaus Bonnert about API management. In an existing Datapower environment, the API management software can add some useful advantages without you having to rewrite your APIs:
  • Analytics view
  • API manager can become your single console for all deployments
  • Self service for user creation

DFDL
My last session of the day was the one about DFDL (Data Format Description Language) by Alex Wood. Despite it being present in Message Broker since V8, I never really looked at it until now. Much like XSD is for XML, DFDL is a way to describe flat-file and binary data. It is a standard owned by the OGF (https://www.ogf.org) and is the way to go for those who want to be able to validate or serialize general text and binary data formats.
Some of the features of DFDL:
  • based on XML Schema (a DFDL schema is valid XML)
  • human readable
  • high performance, since you can choose the data format that suits you best
  • a GitHub repository with many existing schemas that describe file formats like EDIFACT
  • currently used by Message Broker / IIB, Rational and Master Data Management
  • IBM DFDL is also available as an embeddable component (latest release V1.1.1)
(Photo Twitter @hosie31504)
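For a rough idea of what the features above look like in practice, here is an abridged, hypothetical DFDL sketch. It is not a complete, runnable schema (a real one needs a much fuller set of dfdl:format properties), and the record layout is invented.

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:dfdl="http://www.ogf.org/dfdl/dfdl-1.0/">
  <xs:annotation>
    <xs:appinfo source="http://www.ogf.org/dfdl/">
      <!-- Default physical properties: delimited UTF-8 text -->
      <dfdl:format representation="text" encoding="UTF-8"
                   lengthKind="delimited"/>
    </xs:appinfo>
  </xs:annotation>
  <!-- Describes a flat-file record such as: MAT-00001,Widget,42 -->
  <xs:element name="record">
    <xs:complexType>
      <xs:sequence dfdl:separator=",">
        <xs:element name="id" type="xs:string"/>
        <xs:element name="description" type="xs:string"/>
        <xs:element name="quantity" type="xs:int"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

Note how it really is just an XML Schema with physical-format annotations layered on top, which is what makes it human readable.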

Author: Tim

Saturday, October 18, 2014

webMethods Integration Server StatsViewer


When running a lot of assets, or when running with a small footprint, memory consumption can become a problem. Luckily the webMethods IS logs memory usage in the stats logs (for those who don't have more advanced monitoring such as JMX). However, these log files contain the usage as hex values, which doesn't make it easy to browse through the logs.

Excel can come in handy, but it is cumbersome to set up conversion functions for every file you open. I remembered that there used to be a utility on wmusers to view the log files graphically, and I was able to find the thread on the Tech Community... unfortunately without the source. I eventually did find the utility on the site, but it just feels outdated. As this use case is fairly simple, I started looking around the web and found a great JavaScript library for creating graphs, namely Chart.js. After a few lines of code (thanks to HTML5) I already had my first view of some stats... great! After some tweaking I now have a quite stable tool for reading stats files.
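The conversion step itself is small; here is a Python sketch of the hex-to-decimal parsing. Note that the column layout assumed here is hypothetical - check the actual format of your own stats.log files.

```python
def parse_stats_line(line):
    """Parse one hypothetical stats-log line of hex-encoded fields.

    Assumed layout: a timestamp followed by space-separated hex values
    for total memory, free memory and used threads. Real stats.log
    files may order their columns differently.
    """
    fields = line.split()
    timestamp = fields[0]
    total_mem, free_mem, threads = (int(v, 16) for v in fields[1:4])
    return timestamp, total_mem, free_mem, threads

ts, total, free, threads = parse_stats_line("23:59:01 3e8 1f4 10")
print(total, free, threads)  # 1000 500 16
```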

You can find the sources here. For now it only parses the memory and thread values (on a scale to 1000), but you can easily extend it, so hopefully it will come in handy for you.

Author: Stefan