A little while back I wrote about Why Indexes Reduce Locks for Update and Delete Queries.
I got a great question on the post from Zac:
What’s not super clear is why it takes out a lock on the whole table, is this because it does a lock escalation as a result of the Full Scan? Will this always happen, or is there a threshold of record update counts where this will occur?
This was tough to answer in just a comment, so I promised a full post on the topic.

It can be tricky for SQL Server to manage a lot of little fine-grained locks. If I take out 50,000 row locks on a single table, it would be easier for SQL Server to manage that as a single table-level lock. But that may not be possible if others are using the table.
When you take out a lot of modification locks, SQL Server will attempt to “escalate” them. If it can’t escalate and I keep taking out locks, it will keep trying.
Books Online has a good article about this, which explains a lot of the details about how many locks you need to take out to trigger lock escalation. Here are the (simplified) basics:
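As an aside, escalation behavior can be viewed (and changed) per table with the LOCK_ESCALATION table option. This isn't part of the test below, just a quick reference sketch against the same Sales.Orders table:

```sql
-- See the current lock escalation setting for the table
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = 'Orders';

-- Change it if you like: TABLE is the default, AUTO allows partition-level
-- escalation on partitioned tables, DISABLE mostly prevents escalation
ALTER TABLE Sales.Orders SET (LOCK_ESCALATION = TABLE);
```

Be careful with DISABLE on busy systems: lots of un-escalated fine-grained locks consume memory.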
No, the update locks do NOT cause escalation. As a reminder, "update" locks are weirdly named: they are a special kind of lock in their own right, not just a type of lock associated with an "update" statement. Read more in my post on update locks.
Let’s look at a simple test I ran on a VM to show update locks not triggering the “escalator”.
I’m using the WideWorldImporters sample database again. To make sure I get the most update locks possible for my test, I dropped the index on CustomerID:
DROP INDEX [FK_Sales_Orders_CustomerID] ON [Sales].[Orders]
GO
This forces a clustered index scan on all the sample queries I’m going to run below.
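If you're following along and want to put things back afterward, the index can be recreated. I'm assuming the standard single-column definition here; check the WideWorldImporters source scripts if your copy differs:

```sql
-- Recreate the foreign key support index dropped above (assumed definition)
CREATE NONCLUSTERED INDEX [FK_Sales_Orders_CustomerID]
ON [Sales].[Orders] ([CustomerID]);
GO
```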
The first trace looks at the sqlserver.lock_escalation event. Note that I'm using NO_EVENT_LOSS and MAX_DISPATCH_LATENCY=5 SECONDS in this trace. That's because I'm running this against a totally private test instance, and it doesn't matter if I impact performance.
CREATE EVENT SESSION [LockEscalation] ON SERVER
ADD EVENT sqlserver.lock_escalation(SET collect_database_name=(1),collect_statement=(1)
ACTION(sqlserver.session_id))
ADD TARGET package0.event_file(SET filename=N'S:\XEvents\Lock_Escalation.xel')
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=NO_EVENT_LOSS,
MAX_DISPATCH_LATENCY=5 SECONDS,MAX_EVENT_SIZE=0 KB,
MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO
The second trace counts the locks taken out by session_id 56. It's looking for locks against a particular object (I don't care about metadata locks), and it's putting the output in a histogram target bucketed by the lock mode:
CREATE EVENT SESSION [locks_count_spid_56] ON SERVER
ADD EVENT sqlserver.lock_acquired(
WHERE ([package0].[equal_uint64]([sqlserver].[session_id],(56))
AND [associated_object_id]=(72057594047234048.)))
ADD TARGET package0.histogram(SET filtering_event_name=N'sqlserver.lock_acquired',
source=N'mode',source_type=(0))
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_MULTIPLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,
TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO
Looking at this, I’m embarrassed by how inconsistent I am at naming traces. But I’m consistently bad at that, so… yeah.
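One more setup note: both sessions are created with STARTUP_STATE=OFF, so neither is running yet. Before executing the test queries, start them:

```sql
-- Start both trace sessions before running the test queries
ALTER EVENT SESSION [LockEscalation] ON SERVER STATE = START;
ALTER EVENT SESSION [locks_count_spid_56] ON SERVER STATE = START;
GO
```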
Here’s our first contender…
BEGIN TRAN
UPDATE Sales.Orders
SET InternalComments = N'Hiya'
WHERE CustomerID = 2
OPTION (RECOMPILE);
ROLLBACK
GO
This does a clustered index scan, but modifies only 165 rows. I roll back the transaction so the data is unchanged for further testing.
What do the traces have to say?
We had way more than 5K update locks on this object, but they don’t count toward lock escalation.
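If you'd rather pull the histogram counts with a query than use the live data viewer, a sketch like this returns the raw target data as XML (the bucket values are lock-mode map keys, which you can shred further if you want friendly names):

```sql
-- Read the histogram target for the lock-counting session
SELECT xt.target_name,
       CAST(xt.target_data AS xml) AS histogram_data
FROM sys.dm_xe_sessions AS xs
JOIN sys.dm_xe_session_targets AS xt
    ON xs.address = xt.event_session_address
WHERE xs.name = 'locks_count_spid_56';
```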
Before starting this test, I restarted my locks_count_spid_56 trace to reset it. Then I ran this query:
BEGIN TRAN
UPDATE Sales.Orders
SET InternalComments = N'Hoya'
WHERE CustomerID > 1013
OPTION (RECOMPILE);
ROLLBACK
GO
This also does a clustered index scan, and modifies just under 5,000 rows. So it would seem like this wouldn’t escalate. Let’s see!
4,475 exclusive locks + 1,772 intent exclusive locks = 6,247 locks at the time of escalation.
The update and intent update locks don’t count toward escalation.
Good indexing can reduce the number of update locks that queries take out – that can reduce blocking, because update locks block one another. And besides, good indexing can make queries faster.
Lock escalation converts exclusive and intent exclusive locks. The initial threshold to trigger lock escalation is 5,000 locks taken in a single table reference, but you might hit that threshold even if you're modifying fewer than 5,000 rows.
If you have to modify a lot of rows in a table that’s being used by others where performance matters, lock escalation is one of multiple factors that makes it desirable to break the modifications up into smaller transactions.
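A common batching pattern for this looks something like the sketch below. The batch size of 1,000 is a hypothetical starting point (tune it for your workload), and each loop iteration is its own small transaction that stays well under the escalation threshold:

```sql
-- Hypothetical batching sketch: modify rows in chunks so no single
-- transaction accumulates enough locks to trigger escalation
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (1000) Sales.Orders
    SET InternalComments = N'Hiya'
    WHERE CustomerID = 2
      AND (InternalComments IS NULL OR InternalComments <> N'Hiya');
    SET @rows = @@ROWCOUNT;  -- loop ends when no rows remain to update
END
GO
```

The extra predicate on InternalComments is there so each pass makes progress instead of re-updating the same rows forever.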
Lock escalation isn't necessarily a bad thing. If escalation succeeds, the query with escalated locks may not end up blocking anyone else at all. You need to monitor your SQL Server instance to know whether or not you've got a blocking problem, and who is blocking whom.
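For an in-the-moment look at blocking, sys.dm_exec_requests is a minimal starting point (dedicated tools like sp_WhoIsActive give much richer detail):

```sql
-- Quick check: who is currently blocked, and by whom?
SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time,
       command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```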
Copyright (c) 2025, Catalyze SQL, LLC; all rights reserved. Opinions expressed on this site are solely those of Kendra Little of Catalyze SQL, LLC. Content policy: Short excerpts of blog posts (3 sentences) may be republished, but longer excerpts and artwork cannot be shared without explicit permission.