COM+, SQL Server, Serializable isolation level and the issues
Last updated: March 2nd '03


Performance problems accessing SQL Server via COM+

In this article, I'll share with you how the SERIALIZABLE transaction isolation level enforced by COM+ affected the performance of an application. At the end of the article, I've provided links to SQL Server performance tuning books for your reference.

Recently I was asked to improve the performance of an application that was about to go into production. Testers were not happy with its response times, and the application sometimes failed with unexpected errors. I agree this is not a very detailed problem description, but it is all I had to start with. My immediate task was to narrow down the problem areas and fix them.

I used Performance Monitor to see if any bottlenecks were having an impact on overall system performance. I monitored the following counters to get a basic overview of system activity:

Object / Counter: Description

Memory / Pages/Sec: Gives an idea of the number of page faults occurring.
PhysicalDisk / Avg. Disk Queue Length: High queue lengths indicate that the disk subsystem is being pushed to its limits.
Processor / % Processor Time: Indicates how busy the CPU is. Sustained values above 70% indicate the need for more processing capacity.
SQLServer:Access Methods / Page Splits/Sec: Too many page splits will impact write performance on the database. The index fill factor may need to be looked at.
SQLServer:Buffer Manager / Buffer Cache Hit Ratio: Indicates the percentage of data being fetched directly from memory instead of going to disk. This counter is more accurate on SQL Server 2000 with Service Pack 3 (SP3).
SQLServer:Databases / Transactions/Sec: Indicates the number of transactions per second.
SQLServer:General Statistics / User Connections: Shows the number of user connections currently accessing the SQL Server. This may not be an accurate number if connection pooling is enabled, since a pool of connections is shared across different users.
SQLServer:Memory Manager / Total Server Memory (KB): Total amount of dynamic memory the server is currently consuming.
SQLServer:Memory Manager / Target Server Memory (KB): Total amount of dynamic memory the server is willing to consume. A huge difference between this and the previous counter indicates that SQL Server is starving for memory, and that the RAM needs upgrading.
System / Processor Queue Length: Should be as close to 0 as possible. A higher number indicates contention for CPU resources and a need to upgrade the CPUs.
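As a convenience, most of the SQL Server counters above can also be read from Query Analyzer via the master.dbo.sysperfinfo system table in SQL Server 2000, without opening Performance Monitor at all. A minimal sketch, here comparing total against target server memory:

```sql
-- Compare total vs. target server memory straight from sysperfinfo.
-- A large gap between the two values suggests SQL Server is memory starved.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name = 'SQLServer:Memory Manager'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)')
```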

After monitoring for a while, it all seemed normal. The server was sized properly, and there was no problem with the hardware resources. So my next task was to look at the log files generated by the application. I saw a lot of timeouts, which explains why users complained about 'unexpected errors': the application was handling the timeout error and displaying the message 'Unexpected error occurred. Please try again'. I saw another problem in the log as well: frequent deadlocks. Now I had something to concentrate and work on.

I immediately enabled deadlock logging, using the procedure explained here (using the trace flags 1204 and 3605).
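For reference, those trace flags can be turned on from Query Analyzer (or added as -T startup parameters to the SQL Server service):

```sql
-- 1204: report the participants and statements involved in each deadlock
-- 3605: redirect the trace output to the SQL Server error log
-- -1:   apply the flags to all connections, not just the current one
DBCC TRACEON (1204, 3605, -1)
```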

Then I used Profiler, along with my scripts from this article, to determine the long-running SQL statements and procedures.

I asked the test team to start a load test (using LoadRunner). After the test had run for a while, sure enough, I found numerous deadlocks in the SQL Server error log when I checked it with sp_readerrorlog. I then queried the Profiler output tables, and there were indeed a bunch of long-running stored procedures.
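With the trace saved to a table, picking out the worst offenders is a simple query. This is only a sketch: 'trace_output' is a placeholder for whatever name you saved the trace under, and in SQL Server 2000 the Duration column is in milliseconds:

```sql
-- Top 20 statements by duration, from a Profiler trace saved to a table.
-- 'trace_output' is a hypothetical name; substitute your own trace table.
SELECT TOP 20 TextData, Duration, CPU, Reads, Writes
FROM trace_output
ORDER BY Duration DESC
```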

Well, my immediate thought was to look at those stored procedures and see if there was any scope for optimization, and to prevent deadlocks by altering the way the tables were accessed. To my surprise, all the stored procedures were well written, and there was not much I could do to optimize them. I did find some unnecessary locking hints in the code and removed them. By now I was curious, and decided to investigate the exact reason for those deadlocks.

I took a closer look at the deadlocks logged in the error log and saw some strange locks. Yes, I saw range locks. It is unusual to see range locks with a default installation of SQL Server: by default, SQL Server runs under the READ COMMITTED isolation level, and range locks occur only under a higher isolation level called SERIALIZABLE. SERIALIZABLE is the strictest of the four isolation levels (the others being READ UNCOMMITTED, READ COMMITTED and REPEATABLE READ), and it can badly degrade the concurrency of a database in a heavy OLTP environment. During the load test, I did run sp_lock to verify that range locks were indeed being held. Here's the definition of the SERIALIZABLE isolation level, from SQL Server Books Online:

"Places a range lock on the data set, preventing other users from updating or inserting rows into the data set until the transaction is complete. This is the most restrictive of the four isolation levels. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction."

To find out more about key-range locks, read the topic "Key-Range Locking" in SQL Server 2000 Books Online. That page covers all the types of range locks (RangeS_S, RangeS_U, RangeI_N, RangeX_X).
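You can reproduce these range locks yourself from Query Analyzer. A sketch, using a made-up table and column purely for illustration:

```sql
-- Hold key-range locks under SERIALIZABLE and inspect them with sp_lock.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
    -- 'Orders'/'OrderID' are hypothetical; use any indexed table of your own
    SELECT * FROM Orders WHERE OrderID BETWEEN 100 AND 200

    -- While the transaction is open, sp_lock shows RangeS-S locks on the keys;
    -- an INSERT into this key range from another connection will now block
    EXEC sp_lock @@SPID
ROLLBACK TRAN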

Well, now that I knew the application was running under the SERIALIZABLE isolation level, I was no longer surprised by all those timeouts, deadlocks and slow response times. My next move was to find out why on earth we needed SERIALIZABLE in the first place. After questioning the developers about this, it turned out that all the data access was happening through COM+ 1.0, and COM+ 1.0 supports only one isolation level: SERIALIZABLE. Without much thought, the developers went ahead and implemented the application at that isolation level. At first I didn't believe them when they told me COM+ supports only SERIALIZABLE, so I did my own research, and they were right.

Why on earth would Microsoft ship COM+ with only the SERIALIZABLE isolation level, when the default isolation level for their flagship RDBMS (SQL Server) is READ COMMITTED? I posed that question to Microsoft, and the response I received went like this:

"This was done to enforce correctness of data. To make sure the model always works with regards to transactional correctness the decision was made to only use SERIALIZABLE transaction isolation."

While that makes sense, it still seems to me like an oversight on Microsoft's part. Books and articles about COM+ do talk about the isolation level it enforces, and the correctness benefits of SERIALIZABLE, but they don't say enough about the problems and side effects of this isolation level on the scalability of the database. As a result, many development teams don't pay enough attention to the serializable isolation level under which their application will be running once deployed. That's exactly what happened in this case.

Further investigation revealed that this particular application doesn't actually need the SERIALIZABLE isolation level. So I picked the heavy-duty stored procedures and added the following command at the beginning:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED

This overrides the isolation level set by COM+ with SQL Server's default. After this change, response times improved, and there's no sign of any timeouts or deadlocks :-)
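In a stored procedure, the override goes right at the top, before any data access, and conveniently reverts to the caller's isolation level when the procedure completes. A minimal sketch (the procedure, table and columns are hypothetical):

```sql
CREATE PROC dbo.usp_GetOrderDetails
    @OrderID int
AS
    -- Override the SERIALIZABLE level inherited from the COM+ context;
    -- the setting reverts automatically when the procedure returns
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED

    SELECT OrderID, ProductID, Quantity
    FROM dbo.OrderDetails
    WHERE OrderID = @OrderID
GO
```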

Here's some good news: the next version of COM+ (version 1.5) is going to support other isolation levels as well. But if you have serious problems with the SERIALIZABLE isolation level and cannot wait until you upgrade to COM+ 1.5, here are some articles that will help you change the isolation level:

295570: INFO: Transactions and Isolation Levels in COM+

215520: INFO: COM+ and MTS Always Use SERIALIZABLE Transaction Isolation Level

Configuring Transaction Isolation Levels

Please don't get me wrong: I am not saying COM+ is bad, nor that one should never use the SERIALIZABLE isolation level. Everything has its place. Read Books Online or other text books to understand what the different isolation levels mean and which one your application needs. Most applications run fine with SQL Server's default isolation level, READ COMMITTED.

Before we end, here is a list of SQL Server performance tuning books. Hope you find them as useful as I did.

The SQL Server 2000 Performance Optimization and Tuning Handbook
Microsoft SQL Server 2000 Performance Tuning Technical Reference (review available: click here to read)
SQL Performance Tuning
The Guru's Guide to Stored Procedures, XML and HTML
Inside Microsoft SQL Server 2000 (review available: click here to read)
Microsoft SQL Server 2000 Resource Kit (review available: click here to read)