Wednesday, December 31, 2014

Devops (2): Vagrant


After looking a bit into Docker, I went on to look into Vagrant, another well-known tool to provision and configure virtual machines.



To learn about Vagrant, I read the book "Vagrant: Up and Running". This O'Reilly book is well-written and the examples all work. While doing the exercises on a Windows Server 2012R2 VM, I hardly encountered any problems, starting the Ubuntu VM with the vagrant up command and removing it with vagrant destroy.

The Vagrant command line tool allows you to create a virtual machine in a reproducible and neutral manner. Vagrant was initially developed around the free VM product Oracle VirtualBox, but it now comes with many other providers, e.g. AWS, Rackspace, IBM SoftLayer and Microsoft Azure. Support for VMware however is not free ($79).
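A typical session looks like this (using the publicly available hashicorp/precise64 Ubuntu box as an example):

   vagrant init hashicorp/precise64   # writes a Vagrantfile in the current directory
   vagrant up                         # downloads the box (first time) and boots the VM
   vagrant ssh                        # opens a shell inside the VM
   vagrant destroy                    # removes the VM again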

Vagrant focuses on the creation of virtual machines in a neutral manner. Unlike Docker, it uses an actual virtualization solution to provision the virtual machines. This allows Vagrant to support multiple operating systems in parallel, and it offers support for automating the creation of Windows-based virtual machines.

When Vagrant is used in combination with Oracle VirtualBox, Vagrant will drive VirtualBox through its VBoxManage.exe command line tool. To create machines with a cloud provider, the respective Vagrant provider will leverage the API and tools of the specific Infrastructure-as-a-Service solution. Vagrant configures all sorts of attributes of the virtual machine, including networking (and port forwarding).

For the actual provisioning of the machines, Vagrant supports many options, including plain shell scripts. But most often, Vagrant will be used in combination with Chef or Puppet. E.g. the Chef development kit uses Vagrant as its default "driver".
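A minimal Vagrantfile sketch showing both the networking configuration and a shell provisioner (box name, port numbers and install script are just examples):

   # Vagrantfile - a minimal sketch
   Vagrant.configure("2") do |config|
     config.vm.box = "hashicorp/precise64"
     # forward guest port 80 to port 8080 on the host
     config.vm.network "forwarded_port", guest: 80, host: 8080
     # provision with an inline shell script
     config.vm.provision "shell",
       inline: "apt-get update && apt-get install -y apache2"
   end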

Boxes
Vagrant does not start from an ISO image, but from an already prepared "box". The more such a box is pre-configured, the less configuration needs to be done afterwards. Vagrant uses its own packaging format for the virtual machines that are taken as a starting point (comparable to Amazon Machine Images). Vagrantbox.es and many others make pre-packaged Vagrant boxes available.
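Adding and using such a box takes two commands (the box name and URL are placeholders):

   vagrant box add win2012r2 http://example.com/boxes/win2012r2.box
   vagrant init win2012r2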

Windows specific
  • Vagrant Manager makes Vagrant accessible from the Windows (or OS X) taskbar
  • Microsoft's modern.IE initiative makes Windows boxes with all sorts of IE versions available.
  • Interesting blog on how to create Vagrant Windows boxes
  • Vagrant can directly access the command line of Linux boxes over SSH (secure shell). For Windows boxes this can also be arranged when Cygwin (or another SSH server) is installed. But Vagrant can also use WinRM to access the Windows command line (see the sketch after this list)
  • Where the installation of software on Linux boxes leverages apt-get or yum to install software packages, Chocolatey wants to bring a similar solution to the Microsoft world; many packages are available for quick and easy installation
  • Boxstarter leverages Chocolatey packages to automate the installation of software and create repeatable, scripted Windows environments.
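Telling Vagrant to use WinRM instead of SSH takes only a few lines in the Vagrantfile (a sketch for Vagrant 1.6 or later; the box name and credentials are hypothetical):

   Vagrant.configure("2") do |config|
     config.vm.box          = "my-windows-2012r2"  # hypothetical Windows box
     config.vm.communicator = "winrm"
     config.winrm.username  = "vagrant"
     config.winrm.password  = "vagrant"
   end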
Vagrant and Integration Tools
In my own domain of Application Integration and SOA, I expect that both vendors and customers will pick up tools such as Vagrant for creating and provisioning (virtual) machines, combined with Chef or Puppet to actually install and configure the software on those machines.

Author: Guy

Monday, December 29, 2014

Devops and Docker

The holiday period between Christmas and New Year is an ideal period to catch up on some reading and experimenting. Devops and tools such as Docker, Vagrant, Chef, Puppet and Ansible had been on my radar for a while. So finally some time to dive into these topics.


Nested VMs
To avoid messing up my machine, I use VMware Workstation to spin up some test machines. As these Devops tools are all about creating and provisioning virtual machines, one must enable "Nested VMs" support. This allows one virtual machine to run inside another.
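In VMware Workstation this is the "Virtualize Intel VT-x/EPT or AMD-V/RVI" checkbox in the VM's processor settings; as far as I know it corresponds to a single entry in the .vmx file:

   vhv.enable = "TRUE"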

Docker


Docker appeared on my radar while learning about Micro Services. Docker focuses on the creation of light-weight containers in which applications are configured in an automated manner.



Linux containers are very small because they leverage the OS-level virtualization of Linux; think of it as a kind of "chroot on steroids". The chroot system call on Unix/Linux changes the root directory for a program and all of its children. chroot allows programs - e.g. a web server - to run in a more protected mode. The OS-level virtualization can limit all the resources used by child processes: CPU, memory, disk space, ... Because containers are so light-weight, many of them can be run on a single machine. This mechanism allows each application to run in its own container, its own virtualized OS.
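In practice, starting a container is just running a process with its own root filesystem and resource limits; a quick sketch (the image name and limits are arbitrary):

   # run an interactive shell in an Ubuntu container,
   # capped at 256 MB of memory with a relative CPU share of 512
   docker run -it -m 256m -c 512 ubuntu:14.04 /bin/bash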


To give Docker a quick try, there is a great Online Tutorial consisting of 10 steps. Recommended!


As there aren't any books available on Docker, I watched the brand new training material from LiveLessons. As I couldn't find the text material, I had to type over the instructions from the paused video. After wasting some time trying to get access to the Fedora Atomic container on the Fedora 21 host, I decided to switch to another topic, Vagrant. If I have some more time, I'll come back and retry with RHEL as used in the video training. Or switch to Windows and take a look at boot2docker.

Author: Guy

Thursday, December 11, 2014

Datapower XQuery replace


One of the clients that I’m working for discovered a problem with a SOAP web service querying an LDAP directory. The message can contain a ‘*’ as plain text in various fields. When the service is called, the ‘*’ is treated as a wildcard, but the system should handle the ‘*’ as plain text, so we need to escape the character with ‘\2a’ (the escape sequence for an LDAP filter query). The team looked at the complete web service chain to see where a fix would have the least impact, and decided that an update in the DataPower configuration was the best option.

This is a small message example, but the ‘*’ can occur in a couple of different WSDL operations and in different fields.
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:tem="http://tempuri.org/">
   <soap:Header/>
   <soap:Body>
      <tem:FindUser>
        <tem:UserName>KMe_*</tem:UserName>
      </tem:FindUser>
   </soap:Body>
</soap:Envelope>

I immediately thought of using the function str:replace(). But unfortunately it is not supported on DataPower, which brought me to XQuery as an alternative to XSLT. So this is the solution that I developed.

Because the replacement is only necessary for 3 operations from the WSDL, I defined the policy rule at the WSDL operation level.


Below is the XQuery code used to replace the ‘*’ with ‘\2a’. The XQuery can be extended to handle other values that need to be escaped, for example:  ( ) \ / NUL

xquery version "1.0";
declare namespace local = "http://example.org";
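(: recursively copy an element, escaping '*' in the text nodes of its subtree :)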
declare function local:copy-replace($element as element()) {
  element {node-name($element)}
               {$element/@*,
                for $child in $element/node()
                return if ($child instance of element())
                       then local:copy-replace($child)
                       else replace($child,'\*','\\2a')
               }
};
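(: apply the function to the document's root element :)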
local:copy-replace(/*)

The total number of requests that have a ‘*’ or other wildcards in the username is limited. To improve performance, I adapted the standard SQL injection filter to search for ‘*’ and output the number of hits. That way, when the hit count is 0, I can skip the XQuery transform action.
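The exact change depends on the filter style sheet, but the test itself can be as small as an XPath count over the message’s text nodes (a sketch, not the actual filter):

   count(//text()[contains(., '*')])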

Author: Kim

Thursday, November 27, 2014

SAP PI to Oracle with batch insert - Improved performance

We were implementing a bulk-data transfer from SAP into an Oracle database. Easily upwards of 15 000 materials were being uploaded to the database via a stored procedure (company policy). Normally we like this approach because it decouples PI from the database’s underlying table structure, but we were getting terrible performance.

In testing, the entire workflow took almost 2 hours. Whilst this in itself wasn’t an issue (the process runs in the middle of the night), it was unnecessary load on both systems, and the extended duration put the process at increased risk of failure (e.g. due to network issues).

Keen to improve this, we looked at PI’s “batch insert” capabilities. In order to maintain the decoupling, and to protect the destination tables, we created an interface table to temporarily contain the material data, and a procedure that safely updated the destination table.
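As an illustration of the pattern (table, column and procedure names are hypothetical, not the actual customer schema): PI batch-inserts the rows into a plain interface table, after which a procedure merges the staged rows into the destination table.

   -- interface (staging) table filled by PI's batch insert
   CREATE TABLE material_iface (
     material_id  VARCHAR2(18),
     description  VARCHAR2(40)
   );

   -- procedure that safely applies the staged rows to the destination table
   CREATE OR REPLACE PROCEDURE apply_material_iface AS
   BEGIN
     MERGE INTO material dst
     USING material_iface src
        ON (dst.material_id = src.material_id)
     WHEN MATCHED THEN
       UPDATE SET dst.description = src.description
     WHEN NOT MATCHED THEN
       INSERT (material_id, description)
       VALUES (src.material_id, src.description);
     DELETE FROM material_iface;
   END;
   /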


Testing showed a 30-to-60-fold performance improvement during the PI-DB exchange, and the entire process ended up taking just 10 minutes.

Author: Edwin

Thursday, November 20, 2014

ESB = Erroneous Spaghetti Box?

While re-reading the Microservices article by Martin Fowler, I was triggered by footnote #7: "We can't resist mentioning Jim Webber's statement that ESB stands for 'Egregious Spaghetti Box'". I viewed the presentation - from 2008 - in which Jim Webber and Martin Fowler bash the Enterprise Service Bus and translate the acronym ESB into Erroneous Spaghetti Box.

http://www.slideshare.net/deimos/jim-webber-martin-fowler-does-my-bus-look-big-in-this

I do agree that often, the integration platform simply contains a spaghetti of point-to-point integrations. But that's good! It is way better than having all that integration logic dispersed over many systems, with a wide variety of integration techniques, protocols and message formats. And spaghetti in a box is exactly the image I use when explaining what an integration platform is. Only by taking the next step of careful service and message design can one arrive at a true Service Oriented Architecture.

Let's sum up the main advantages of an integration platform:
  • A standardized way to have applications talk to one another
  • No coding in a 3GL such as Java or C# but configuration in an application specifically built for the task of integrating systems
  • Support for applications of different kinds and ages, including packaged applications
  • Strongly reduced diversity in the tools and techniques used to integrate applications
  • Support for reliable, asynchronous communication using queuing and message persistence (which Fowler doesn't seem to like either)
  • Trivial connectivity through adapters
  • Central place to monitor and manage the communication between systems, in particular the reliable message exchange
  • Help turn packaged or older applications into services if desired (not everything is developed in-house)
With the disadvantages:
  • It is a central, separate platform
  • It requires some specific skills (XML)
  • The cost of the integration development and support becomes truly visible
Where Webber and Fowler do have a point, is that middleware vendors come with a whole slew of products. Obviously one should only pick the parts that are useful. And the ESB will definitely not create the Service Oriented Architecture for you.

Author: Guy

Thursday, November 13, 2014

Micro Services - Conway Law and Application Integration teams

"Micro Services" is a new buzzword in world of IT architects. As it talks about application components communicating over a network and contains "services", it probably has something to do with SOA and integration. So I had to look into it.


Let's bulletize the description from the article by Martin Fowler and James Lewis:
  • The Microservices architectural style is an approach to
  • developing a single application < application architecture
  • as a suite of small services, < componentization, no libraries
  • each (service) running in its own process
  • and communicating with lightweight mechanisms, < over the network
  • often an HTTP resource API. < REST, message bus
  • These services are built around business capabilities < Domain Driven Design
  • and independently deployable by fully automated deployment machinery.
  • There is a bare minimum of centralized management of these services, 
  • which may be written in different programming languages and
  • use different data storage technologies < eventual consistency
Microservices are an architectural style used by very large, modern IT systems such as LinkedIn, Netflix, Amazon and eBay. There are all sorts of interesting aspects to Microservices, e.g. the GUI part, security, transactional integrity, versioning etc.

Conway's Law - Integration Competence Center
But there was one aspect that triggered me in particular when learning about Microservices: Conway's Law: "any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure".

So this law states that an application architecture will reflect the way an IT department is organized. Microservices advocates refer to it a lot.

Service boundaries reinforced by team boundaries (picture from article by Martin Fowler)


For Microservices to focus and align with business functionality, the teams developing (and maintaining) the Microservices should therefore be cross-functional, including the full range of skills required for the development: user-experience, database, and project management.

Orthogonal to the view of the Microservices architects, Conway's Law confirms my personal view and opinion that any IT organization that wishes to leverage a central integration platform to a great extent requires a separate team developing on and managing that integration platform.


How did I learn about MicroServices?

PS: when searching for the term "micro service", I also found it in the book "Java Web Services Architecture", dating back to 2003!

Author: Guy

Thursday, November 6, 2014

Message modeling and XSD generation

As an integration consultant I work almost daily with XML messages. In my opinion, in order to work efficiently with XML you need to have XML schemas. XML schemas make it possible to validate your messages (including those hard-to-find typos in mappings), they can be used to generate documentation, they define your service contracts and they can be used to generate a skeleton of your code. If and when validation should be enabled is a different discussion; perhaps in the future I will write another article about it.

In order to benefit from XML schemas, they need to be clear, precise, flexible and interoperable with the different technologies you are going to use on your project. Amongst us colleagues we regularly have lively discussions on how to achieve this. We all have the same ideas on the general guidelines, but sometimes we disagree about some details. Mostly it boils down to the technology we are used to working with. But I am relatively sure I can work with the schemas created by my peers.

One major downside of XML schemas is that they are very technical, and functional analysts don't always understand them very well - and why should they? They want to model the messages in their favorite modelling tool. In a perfect world you can generate the XSDs from the model. This way you can enforce the policy you have defined, to which the XSDs should conform.

So what is wrong with this? Nothing! I even encourage you to do it, provided it is done correctly and you keep in mind that in the end a developer has to import the XSDs into his tool and work with them.

On a recent project I had to import an XSD from a third party in order to interact with them. In their documentation they were very proud of their UML model and how cleverly they were modelling their data. With the generated XSD they were less concerned. Of what an XSD should be - simple, flexible, easy to understand ... - nothing was achieved. I spent 2 days trying to import them into my tool (IBM Integration Toolkit). In the end I gave up, as I could no longer justify the time spent to my client. I wrote my own (very simple) XSDs that conform to the messages we need to send and receive and used those within our tools.

For those thinking: then don't use IBM Integration Toolkit. I have quite some experience with IBM tooling, and in my career I never before had so many problems importing XSDs. I find the XML support of IBM tools excellent. We tried to import the XSDs in different tools and they all failed.

So to conclude I want to give you some advice:

  • Pay attention to your XML Schemas
  • Define guidelines/rules to which your XML schemas should adhere within your organization
  • For a public interface make sure the XML schema does not use too advanced schema features (UN/CEFACT Naming and Design Rules may help you there)
  • Model your data and generate the XML Schemas from the (UML) model but let your developers validate the generated XSD’s

XML Schemas should be an aid, not a burden! Keep it simple!

Author: Jef Jansen

Thursday, October 30, 2014

Websphere Technical University Düsseldorf 2014 - Part 2

I started day 2 of the WTU conference with a session from Michael Hamann about some of the new features in IBM WebSphere DataPower (V7) concerning the DataPower Policy Framework and Service Patterns.
The DataPower policy framework is managed in WebSphere Service Registry and Repository and enforced on DataPower. This setup isn’t new, but has existed since DataPower firmware version 5.0.0. What’s new since version 7 is the possibility to use variables in the policy config, a feature called dynamic policy variability.
Another new feature in V7 is Service Patterns: templates that you can create from existing services in a new GUI, the Blueprint Console.
I have seen myself that many of our customers have already created their own scripts to work with some sort of templates for the common integration scenarios, so the use of service patterns will be great for them. They will have a supported way of working that brings more features than what they have right now.

Of course not all sessions that I attended involved my own field of work, but there were still some interesting things that caught my attention:
(Photo: Twitter @bluemonki)
In the session about Cloud Integration by John Hosie, he mentioned ‘Chef’, which is a tool to automate the setup and maintenance of your infrastructure in the cloud. Check https://www.getchef.com/chef/ if you want to know more.
Of course something that came up in half of the sessions I attended is IBM’s answer to Platform as a Service (PaaS): Bluemix. One of the more impressive examples came from the same ‘Cloud Integration’ session: after syncing your local database with a cloud DB in Bluemix, you can generate REST APIs to retrieve the data you want to expose in just a few clicks.
Another hot topic at the conference was discussed by Bernard Kufluk and Bryan Boyd in their presentation about the Internet of Things (previously known as Smarter Planet). They gave us a glimpse of what the future might look like when all of our stuff is connected to the internet using the MQTT protocol. In contrast to most existing applications that nowadays use HTTP to send data to the server, MQTT makes it possible to send commands from the server to the client application (for example to stop a car remotely, as shown in the demo). The appliance that takes care of all this MQTT traffic is IBM MessageSight. My first impression is that this appliance is for bidirectional MQTT traffic what DataPower is for HTTP traffic.
The session about Blueworks Live from Roland Peisl presented another product that I likely won’t be working with in the near future, but nevertheless it was interesting to see how the product has evolved since the last time I used it, back in the days when it was called Lombardi Blueprint. While obviously a lot has changed since then, the conclusion remains the same: it’s a great tool to help the business with process discovery sessions. If you’re looking for a tool that supports a full business process round-trip, you should rather use Business Process Manager.

Author: Tim


Wednesday, October 29, 2014

Websphere Technical University Düsseldorf 2014 - First impressions

Impressions of the first day of the Websphere Technical University 2014 in Düsseldorf
The Websphere Technical University and Digital Experience conference is held in Düsseldorf from the 28th of October till the 31st. With over 16 rooms for each timeslot, there is something to each person’s liking. My main interest at this conference is the integration track, and even though this limits the immense choice of presentations, there are still some hard choices to be made.
I started the first day with the general opening session, which featured a great demo that showed the power of Bluemix.
(Photo: Twitter @reynoutvab)
In the afternoon the conference really started for me with a presentation about the trends and directions of IBM Integration Bus. Speaker Jens Diedrichsen (@JensDiedrichsen) introduced us to the new features that will be present in IIB V10.0.
(Photo: Twitter @bluemonki)
Personal highlights for me were:
  • smaller installation footprint (download size < 1GB)
  • MQ is no longer a prerequisite. Not all IIB options will work without MQ yet, but in the future this is the goal.
  • Unit testing is improved with a built-in test service and CI capabilities
  • GitHub will provide extra samples, best practices and also connectors.
The IIB V10.0 Open Beta is now available at http://ibm.biz/iibopenbeta to discover all the new features yourself.

The next interesting session that I attended was the presentation by Klaus Bonnert about API management. In an existing DataPower environment, the API management software can add some useful advantages without having to rewrite your APIs:
  • Analytics view
  • API manager can become your single console for all deployments
  • Self service for user creation

DFDL
My last session of the day was the one about DFDL (Data Format Description Language) by Alex Wood. Despite it being present in Message Broker since V8, I never really looked at it until now. Much like XSD is for XML, DFDL is a way to describe flat-file and binary data. It is a standard owned by the OGF (https://www.ogf.org) and is the way to go for those who want to be able to validate or serialize general text and binary data formats.
Some of the features of DFDL:
  • based on XML Schema (a DFDL schema is valid XML)
  • human readable
  • high performance, since you can choose the data format that suits you best
  • a GitHub repository with many existing schemas that describe file formats like EDIFACT
  • currently used by Message Broker / IIB, Rational and Master Data Management
  • IBM DFDL is also available as an embeddable component (latest release V1.1.1)
(Photo Twitter @hosie31504)

Author: Tim