Following on from my Learner's Guide to SQL Server Performance Triage, I'm tackling Query Tuning. In this guide, I'm experimenting with an outline style rather than expanding each point into full paragraphs.
We’ve seen incredible improvements over the last 15 years:
Yet there are still professionals who make a good living tuning queries and training others to tune queries. Query tuning means finding specific slow queries that are key to the performance of an application and making strategic changes, whether in the code, the database structures, the instance configuration, or something else, to ensure that these specific queries consistently execute at a given speed or to a required standard of performance.
Why?
Query tuning is done by:
Full-stack developers don't generally do query tuning unless they have a specific interest or relevant work experience. This is a specialization rather than something you can pick up quickly, so most full-stack developers simply don't have time for it and need to engage a more specialized person to help. Teams without a readily available specialist may periodically bring in consultants for this.
There are also many database administrators who manage databases where only "basic" availability and performance are required. These databases belong to cost-conscious organizations that don't need every database tuned like a race car: most of them serve internal users who are accustomed to moderate performance.
I am not a wizard at T-SQL, but I still became pretty good at query tuning. A very unscientific estimate of the skills that make someone good at query tuning:
These are a “map” of how the query is run behind the scenes.
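A common way to look at these plans for queries that have already run is to pull them from the plan cache. This is a minimal sketch, not from the original post; the TOP (20) and the ordering by total elapsed time are arbitrary choices:

```sql
-- Pull cached plans and basic runtime stats from the plan cache.
-- sys.dm_exec_query_stats, sys.dm_exec_sql_text, and sys.dm_exec_query_plan are standard DMVs/DMFs.
SELECT TOP (20)
    qs.execution_count,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_elapsed_time DESC;
```

The query_plan column returns the XML plan, which SSMS will open graphically when you click it.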
SQL Server 2016+, all editions
These are tricky to use for query tuning because it's easy to slow down your workload and cause performance problems when tracing (a lightweight, filtered example follows this list):
Execution plans (filtering doesn't help in this case; the plans are all collected and examined, and the filter is applied too late to avoid the overhead)
Wait statistics (filtering can help here, but the data collected is so massive that you have to be very careful, and sorting through and querying the collected data is also quite cumbersome)
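As one illustration of keeping a trace lightweight, here is a hedged Extended Events sketch that captures only completed statements over a duration threshold and deliberately avoids collecting execution plans. The session name, threshold, and file name are placeholders, not from the original post:

```sql
-- Hypothetical lightweight Extended Events session: captures only completed
-- statements slower than 5 seconds, written to a file target.
-- Collecting plans (query_post_execution_showplan) is avoided on purpose,
-- because that event is expensive even when filtered.
CREATE EVENT SESSION SlowStatements ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE (duration > 5000000)  -- duration is in microseconds
)
ADD TARGET package0.event_file
(
    SET filename = N'SlowStatements.xel', max_file_size = (256)
)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF);
GO
ALTER EVENT SESSION SlowStatements ON SERVER STATE = START;
```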
If a query is “sometimes fast, sometimes slow”, this may be the cause
For parameterized queries, SQL Server "sniffs" the values supplied for the parameters on first execution and compiles a plan for them. That plan is reused on subsequent executions, even when different values are provided, unless something happens to cause recompilation.
An example…
The CustomerOrderDetails procedure is initially compiled for @CompanyId = 1001, a tiny customer with one order. A plan expecting a very small number of orders is generated, which allocates very little memory for sorts and joins.
The CustomerOrderDetails procedure is then run for @CompanyId = 128, our biggest customer with a million rows of details about their orders. As the query runs, all the estimates are too small: there are inefficient one-by-one reads, and memory spills happen all over the place.
Someone restarts the SQL Server because "it's slow", which empties the plan cache and causes all queries to compile fresh plans. CustomerOrderDetails runs for the first time with @CompanyId = 128, compiles a plan suited to many rows, and runs fast. Nobody understands what happened. A T-SQL sketch of this scenario follows.
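Here is a rough sketch of that scenario. The table and column names are invented for illustration, and OPTION (RECOMPILE) is only one of several possible mitigations, not necessarily the right one:

```sql
-- Hypothetical procedure illustrating parameter sniffing.
-- dbo.OrderDetails and its columns are invented for this sketch.
CREATE OR ALTER PROCEDURE dbo.CustomerOrderDetails
    @CompanyId int
AS
BEGIN
    SELECT OrderId, OrderDate, LineTotal
    FROM dbo.OrderDetails
    WHERE CompanyId = @CompanyId
    ORDER BY OrderDate;
END;
GO

-- First execution: the plan is compiled ("sniffed") for a tiny customer.
EXEC dbo.CustomerOrderDetails @CompanyId = 1001;

-- Later execution: the small-customer plan is reused for the biggest customer,
-- so memory grants are far too small and spills occur.
EXEC dbo.CustomerOrderDetails @CompanyId = 128;

-- One possible (not universal) mitigation: add OPTION (RECOMPILE) to the SELECT
-- so a fresh plan is compiled for each execution, at the cost of extra CPU.
```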
It’s difficult to predict how queries will interact with one another in a live workload
Shared resources:
Query workspace memory – a certain amount of memory must be granted for sorts, joins, and moving data around within a query. Many queries executing at once that each need significant workspace memory can cause a problem here (see the DMV sketch after this list).
The number of queries doing modifications and the approach to locking are difficult to predict outside of a live workload
Changes in server resources – even improvements – can cause blocking when it wasn’t present before
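For watching these shared resources in a live workload, here is a minimal sketch using standard DMVs; the column selection is an arbitrary choice on my part:

```sql
-- Queries currently holding or waiting for workspace memory grants.
SELECT session_id, requested_memory_kb, granted_memory_kb, wait_time_ms
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;

-- Requests currently blocked by another session.
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```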
Often, yes. One example of this is parallelism.
Tuning the degree of parallelism for a workload, and for specific queries in that workload, tends to be quite hardware-specific, and you need a live environment to do it well.
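A minimal sketch of the knobs usually involved; the specific values are placeholders, not recommendations, and dbo.OrderDetails is the hypothetical table from the earlier example:

```sql
-- Server-wide cost threshold for parallelism (value is a placeholder).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;

-- Database-scoped MAXDOP, SQL Server 2016+ (value is a placeholder).
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;

-- Per-query override for a single problematic statement.
SELECT COUNT(*) FROM dbo.OrderDetails OPTION (MAXDOP 1);
```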
Workload “replays” are available within SQL Server’s toolkit but they are:
SQL Server 2017, Enterprise Edition
Intelligent Query Processing (IQP) features have been rolling out over the last few versions of SQL Server
Several of these features address common query tuning issues in SQL Server
Examples:
Scalar function inlining: scalar functions have historically been very poorly optimized in SQL Server, causing slow row-by-row performance (see the sketch after this list)
Adaptive joins: in some cases, adaptive logic is built into the plan so that execution isn't locked into a single path; the join strategy can be chosen at runtime based on the rows actually encountered
Batch mode execution:
Optimizes how SQL Server handles scans of data by processing rows in batches rather than one at a time
This originally was available only for queries which reference columnstore indexes.
Batch mode for rowstore indexes was introduced in SQL Server 2019
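A small sketch of checking and toggling a couple of these features at the database level. This uses SQL Server 2019+ syntax; whether to enable or disable anything is workload-specific, and the decision to flip these switches is mine for illustration, not from the original post:

```sql
-- Which scalar functions in this database are eligible for inlining (SQL Server 2019+).
SELECT OBJECT_NAME(m.object_id) AS function_name, m.is_inlineable
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'FN';  -- scalar user-defined functions only

-- IQP features can be toggled per database if one of them misbehaves for a workload.
ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;
```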
Lack of connection between DBAs and Development teams
Over-reliance on traces
Lack of knowledge of execution plans in the team
Overuse of hints
Lack of understanding of SQL Server isolation levels and “optimistic” options
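On that last point, the "optimistic" options generally refer to the row-versioning based isolation settings. A minimal sketch of enabling them, assuming a placeholder database name; these changes add tempdb version store overhead and need testing before use in production:

```sql
-- Enable optimistic (row-versioning) isolation options for a database.
-- "YourDatabase" is a placeholder name. WITH ROLLBACK IMMEDIATE kicks out
-- other sessions so the change can complete; use with care on a live system.
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
```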