Sometimes you may suddenly notice that your BizTalk DTA database has grown from a small tracking database into a huge one, taking up all the disk space on your server. I had this issue at a client and noticed that it can have quite some consequences: BizTalk Server stopped processing messages, backups of the databases could no longer be taken, etc. Not really something you want to happen to a critical environment.
Therefore, here is a little guideline you can use as a precaution, as well as a fix when dealing with a huge DTA database.
1. Make sure you have enough disk space
The DTA database can grow quite large from one moment to the next, so it's best to plan a fairly large disk for the database to live on. According to Microsoft guidelines, a DTA database stays healthy up to a size of 15 GB. Everything above 15 GB is considered problematic and needs to be dealt with.
Adding up the size of the DTA database and the other BizTalk databases, make sure you have around 30 GB of disk space allocated for the databases themselves. If the database backups are stored on the same disk as well, take at least 40 GB, and keep in mind that this is the absolute minimum!
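To see where you stand against that 15 GB guideline, a quick look at the current size of the DTA database from a query window is enough; a minimal sketch:
USE BizTalkDTADb
GO
-- reports the total database size and the unallocated (free) space inside it
EXEC sp_spaceused
GO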
2. Enable the DTA Purge and Archive job
The DTA Purge and Archive job in SQL Server clears completed and failed instances after a given retention period. We distinguish between two situations here: one where there is no problem with the database yet, and one where we are already struggling with a huge DTA database and disk space issues:
No db problem, just precautionary
By default, the DTA Purge and Archive job calls the dtasp_BackupAndPurgeTrackingDatabase stored procedure. The call takes the following parameters:
exec dtasp_BackupAndPurgeTrackingDatabase
    1,    -- @nLiveHours tinyint: any completed instance older than live hours + live days
    0,    -- @nLiveDays tinyint:  will be deleted along with all associated data
    1,    -- @nHardDeleteDays tinyint: all data older than this will be deleted
    '[path to the backup location]',    -- @nvcFolder nvarchar(1024): folder for the backup files
    null, -- @nvcValidatingServer sysname
    0     -- @fForceBackup int
The first three parameters mean the following:
- the number of hours a completed instance is kept in the database;
- the number of days a completed instance is kept in the database;
- the number of days a failed instance is kept in the database.
The first two (both for completed instances) are added up, so you can, for example, configure completed instances to be kept for 2 days and 3 hours. Everything older than that is removed.
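For example, a configuration that keeps completed instances for 2 days and 3 hours and removes everything older than 5 days could look roughly like this (the values and the backup folder are placeholders to adapt to your own retention policy):
exec dtasp_BackupAndPurgeTrackingDatabase
    3,    -- @nLiveHours: together with @nLiveDays, completed instances are kept 2 days and 3 hours
    2,    -- @nLiveDays
    5,    -- @nHardDeleteDays: everything older than 5 days is removed
    '[path to the backup location]',
    null, -- @nvcValidatingServer
    0     -- @fForceBackup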
DB problem, disk space issues
When struggling with a huge DTA database and disk space issues, the stored procedure above may start to fail, because there is no space left on the disk to write the database backup to.
To be better safe than sorry, stop all BizTalk services while the script below is running and enable them again after step 3.
In this case, we need to change the DTA Purge and Archive job: instead of calling the backup-and-purge procedure, we call the purge-only procedure, so no backup of the database is needed.
Replace the original procedure call in the job step of the DTA Purge and Archive job with the following:
declare @dtLastBackup datetime
set @dtLastBackup = GetUTCDate()

-- parameters: @nHours, @nDays, @nHardDays, @dtLastBackup
exec dtasp_PurgeTrackingDatabase 1, 0, 1, @dtLastBackup
After making this change, run the job. BEWARE! This can take quite some time to finish if the job has not run for a while. Give it time to complete (in my case it took about 3 minutes, and it had not been that long since the job last ran).
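If you prefer to start the job from a query window instead of the SQL Server Agent GUI, something like this should do it (assuming the job still has its default name):
exec msdb.dbo.sp_start_job @job_name = N'DTA Purge and Archive (BizTalkDTADb)'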
3. Shrink the database
This is a step that is often forgotten, but after purging the database we are not finished yet. The purge deletes the tracking data from the DTA database, but it does not release the freed space inside the database files. So you might see a small change in size, but not the big change we are looking for.
What really matters here is shrinking the DTA database. The shrink releases the unused space in the database files back to the operating system, which frees up a significant amount of disk space and reduces the database size considerably.
Just right-click the BizTalkDTAdb database and choose Tasks > Shrink > Database.
You will be presented with a window that shows exactly how much space will be freed. In my case only 158 MB will be cleared, since I have already optimized my database.
Just click OK and let it do its work. BEWARE again: this too can take a lot of time. In my scenario 2.5 GB had to be cleaned, which took up to 10 minutes to execute, so be patient; it is worth the wait.
After this is done, the window will just disappear.
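If you prefer T-SQL over the Management Studio dialog, the same shrink can be scripted; a minimal sketch, where 10 is the target percentage of free space to leave in the files:
-- shrink the DTA database, leaving roughly 10% free space in the data files
DBCC SHRINKDATABASE (BizTalkDTADb, 10)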
In case you’ve disabled your BizTalk services/Host Instances, you can restart them here again.
4. Finish it up
Just do a quick check on the size of your database. Normally you should see a huge difference in database size, depending of course on how badly your database was affected.
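A quick way to do that check from a query window (a small sketch; sizes are reported in 8 KB pages, hence the division by 128):
USE BizTalkDTADb
GO
SELECT name, size/128 AS size_in_mb
FROM sys.database_files
GO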
If you have any problems or questions, shoot in the comments.
Author: Andrew De Bruyne
Tuesday, December 27, 2011
Saturday, December 10, 2011
Introduction to Android App development (Devoxx 2011)
This session started with an explanation of what Android is and how it was built.
Android is a platform developed by the Open Handset Alliance.
The platform is built in different layers.
- The lowest layer, based on Linux, takes care of the hardware
- On top of Linux: the libraries and the Android runtime
- Application framework
As you all know, Android development is done in Java, but underneath this layer lies a bytecode format called “dex”.
When you want to run your app on an Android phone, the class files are converted into a dex file, so it can run on any Android phone (depending on the app's requirements).
For each app we can reuse existing functionality; therefore a subset of Java SE is available, as well as an Android-specific API.
Taking this into account we can do amazing things, but remember it’s just a phone.
So we cannot develop apps that need a massive amount of memory or storage, as these are limited on a phone. We also need to think about garbage collection.
How to start developing?
There are plugins for Eclipse and emulators for testing (see http://www.android.com/ for more information on how to install the plugin).
What does an Android project look like?
Some important files
main.xml: describes the layout, i.e. what content is shown on screen.
R.java: the class file that is automatically generated when compiling your project. This file will be converted into dex code so your app can run on an Android device.
AndroidManifest.xml: this file is best described as the properties of the app. It is read at installation time and can prompt the user for permissions such as internet access, the current location, ...
This file is also used on the android market to filter apps.
strings.xml: for static text you define a key-value pair here and refer to that key wherever the text needs to be displayed.
When creating a new project, you have to create an Activity.java file. This is the root class that is called when the app is opened.
Layouts: you can create specific layouts for portrait or landscape mode.
Lifecycle of apps
Android has a set of rules for killing apps to free up memory. So when developing apps we need to keep in mind to save our state, for example before the device is rotated. When you rotate the device, Android runs through the activity lifecycle, so your current state is destroyed; if you have not saved that state, it cannot be restored.
SQLite
This is the database engine used in Android. Each app gets its own database.
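To give an idea, this is the kind of plain SQL you would run against such a per-app SQLite database; the notes table and its columns are purely a hypothetical example:
-- hypothetical table an app might keep in its private SQLite database
CREATE TABLE notes (
    _id        INTEGER PRIMARY KEY AUTOINCREMENT,
    title      TEXT NOT NULL,
    body       TEXT,
    created_at INTEGER            -- unix timestamp
);

INSERT INTO notes (title, body, created_at)
VALUES ('First note', 'Hello Android', strftime('%s', 'now'));

SELECT _id, title FROM notes ORDER BY created_at DESC;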
Hope this helps you on your way to developing your first Android app.
Author : Jeroen W.
Monday, December 5, 2011
SPDY protocol
The new Google SPDY protocol is another attempt to make the web more efficient and reliable. The SPDY protocol introduces an extra layer between HTTP and TCP/IP (actually SSL/TLS) that primarily allows for multiplexing and parallelizing multiple HTTP requests over a single SSL connection.
The SPDY protocol is not some lab exercise but is used in production! The Google Chrome browser uses the SPDY protocol (or should we say extension?) to communicate with most of Google's applications. SPDY remains mostly a Google thing, with no adoption by other big names (except for Amazon EC2, it seems).
With SPDY, the Chromium browser needs to establish fewer SSL connections. But more importantly, the Chrome browser can launch many HTTP requests in parallel, no longer restricted by a maximum number of TCP/IP connections.
But this made me think: could this have a positive impact on how service consumers are implemented? Similarly to a browser parallelizing the retrieval of web content, (web) service consumers should also try to parallelize as much as possible.
The HTTP request/response model that underlies web services should not lock us into a synchronous RPC paradigm whereby a requestor blocks waiting for a response. To fully leverage this potential for parallelism, we must move to a non-blocking, AJAX-like programming model.
While reading about SPDY, I encountered 2 links worth looking into:
- Recent blog entry by F5 that is a critical review of the SPDY protocol.
- Article that starts with SPDY but goes much further, into wild (?) and interesting ideas to re-engineer the workings of the Internet in a more dramatic way.
Author: Guy