Monday, February 18, 2013

Looking up Tasks faster and more efficiently in webMethods

If you want to locate tasks (that use standard business data) on the Task Engine to which the webMethods Integration Server is connected, you invoke the built-in service "pub.task.taskclient:searchTasks". When working with indexed business data fields, the service "pub.task.taskclient:searchTasksIndexed" is called instead.

In a past project, an invoke of service "pub.task.taskclient:searchTasks" resulted in a long-running SQL query on the underlying database. This eventually led to a "Read timed out" scenario between the Integration Server and the Task Engine: a result set of 140 active tasks took more than 90 seconds to be returned, while the Integration Server by default disconnects an ongoing Task Engine query after 60 seconds.

One option is to set the Integration Server extended setting "WSClientConfig.SocketTimeout" and increase the timeout value (in milliseconds).
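For example, the setting could be raised to two minutes (the value below is only an illustration; pick a timeout that fits your environment):

```
# Integration Server > Settings > Extended
# Socket timeout in milliseconds (120000 = 2 minutes; example value)
WSClientConfig.SocketTimeout=120000
```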
A better, more elegant and formal solution is to replace the invoke of "pub.task.taskclient:searchTasks" with "pub.task.taskclient:searchTasksIndexed" and set the boolean businessData to false. The replacement service takes advantage of the built-in index capabilities, whereas "pub.task.taskclient:searchTasks" does not.

When using service "pub.task.taskclient:searchTasksIndexed" instead of "pub.task.taskclient:searchTasks", a similar result set was returned within a couple of seconds, instead of causing an operational problem for the integration.

Author: Johan

Monday, February 4, 2013

Turn Machine Data into Real-time Visibility, Insight and Intelligence

Ever needed to analyze your system? To look at what's going on? Always faced the insanity of huge logs? Then I may have a working solution for you... and yes, it is partly free, and yes, it is cloud based. The magical product is SplunkStorm from Splunk.

What is SplunkStorm?

From the documentation:

Splunk Enterprise is the platform for machine data. It's the easy, fast and resilient way to collect, analyze and secure the massive streams of machine data generated by all your IT systems and technology infrastructure.
Troubleshoot problems and investigate security incidents in minutes (not hours or days). Monitor your end-to-end infrastructure to avoid service degradation or outages. Gain real-time visibility and critical insights into customer experience, transactions and behavior. Make your data accessible, usable and valuable to everyone.

How to get started?

  • First create an account
  • Next add your first project
  • Choose the plan you wish to use; in this case I can live with the Free plan (1 GB storage).

You then land on the SplunkStorm main dashboard.

What's next? 

Let's import our first log file. For the purpose of this post I only use a file-based log. Forwarders seem a great approach, but that goes too far for an introduction.

Press the File menu item and upload a log file. I use the log file from an Oracle Service Bus installation running on top of WebLogic.

After pressing the upload button, the Splunk magic starts: Splunk parses the log file and splits it into separate events based on the timestamps.
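To get a feel for what that event extraction does, here is a minimal sketch in Python (my own illustration, not Splunk's actual parser) that groups a WebLogic-style log into events whenever a line starts with a timestamp marker:

```python
import re

# Lines that start a new event begin with a timestamp marker such as
# "####<Feb 4, 2013 10:15:01 AM CET>" (WebLogic server log style).
TIMESTAMP = re.compile(r"^####<")

def split_events(lines):
    """Group raw log lines into events: every timestamped line starts
    a new event; continuation lines (stack traces, ...) are appended
    to the current one."""
    events, current = [], []
    for line in lines:
        if TIMESTAMP.match(line) and current:
            events.append("".join(current))
            current = []
        current.append(line)
    if current:
        events.append("".join(current))
    return events

log = [
    "####<Feb 4, 2013 10:15:01 AM CET> <Error> <OSB> something failed\n",
    "java.lang.RuntimeException: boom\n",
    "\tat com.example.Foo.bar(Foo.java:42)\n",
    "####<Feb 4, 2013 10:15:02 AM CET> <Info> <OSB> recovered\n",
]
print(len(split_events(log)))  # the four lines collapse into 2 events
```

The multi-line stack trace stays attached to the error that produced it, which is exactly why searching per event beats grepping per line.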

Viewing the data

Go to the Project home, then press explore data.



As soon as SplunkStorm has finished indexing your log files, you can drill down into issues, follow what's going on, ...

Not Cloud minded?

Splunk also has a local installer available for the different platforms: Linux, Mac, Windows, ... Should I have more time, I'll drill further into the reporting capabilities of this tool in future posts.

Tutorial:

Author: A.Reper