You finally got approval to move to new hardware and a fresher version of SQL Server. After months of work, you do the migration and then… performance gets worse. What can cause this, and what do you look for?
Prefer a podcast? Find it at kendralittle.com/dear-sql-dba-podcast.
Show notes with clickable links are below the video. A poster of the concepts is here.
Here’s this week’s question:
I recently went through a process of migrating databases on a standalone SQL Server 2008 R2 instance to a SQL Server 2012 SP2 cluster.
The cluster’s servers and SQL Server configurations were built to be as close to identical as possible to the previous instance (memory, cores, disk, maxdop, CTP, etc).
After the migration, I noticed that CPU utilization jumped from the normal 25% to a consistent 75%.
I did several other migrations with similar server loads with no issues, so I’m a bit puzzled as to what might be going on here. Could the upgrade from SQL Server 2008 R2 to SQL Server 2012 simply be exposing bad queries that 2008 was handling differently?
I love this topic! I’m going to go a bit broader and talk about a variety of issues I’ve seen with migrations and upgrades. Let’s get to it!

You’ve already done this one. I’m listing it for others who might hit the same situation, because it’s an important step.
SQL Server defaults “max degree of parallelism” to 0, which means when a query goes parallel, it can use all the cores. If you accidentally left that set at 0 on the new instance, your workload could have queries fighting over all CPUs all the time.
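If you want to double-check, sp_configure will show you both the configured and running values. A minimal sketch (run it on each instance and compare):

```sql
-- 'max degree of parallelism' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Compare the configured value and the value in use on old and new instances
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'cost threshold for parallelism';
```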
For more information on how to determine these two settings, check out my last episode, “Max Degree of Confusion.”
While you’re at it, compare all the settings in sys.configurations between the instances to check for differences, and look into each one. I’ve seen weird accidents cause performance issues, like people mistakenly setting minimum memory for a query on one instance.
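One simple way to do that comparison: run a query like this on both instances, save the results, and diff them side by side.

```sql
-- Pull every instance-level setting; run on old and new, then diff the output
SELECT name,
       value,          -- configured value
       value_in_use,   -- running value
       is_dynamic,
       is_advanced
FROM sys.configurations
ORDER BY name;
```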
I’ve run into a couple of issues with hardware configuration that weren’t caught until after migration.
In one case, the server's power cables weren't completely plugged in. This left the CPUs running at lower power, and they actually showed higher percent utilization in Windows Task Manager. They were simply capable of doing less!
You can detect this by doing two things, both described below: check the power settings in the BIOS, and check the manufacturer's performance configuration guide for your server model.
In another case, following a migration we had one single core that was always at 100%! It turned out that our new NIC was pegging one core under load.
I wouldn’t expect this to make the entire difference between 25% and 75% usage, but it could be one of a few things contributing to the problem. It’s easy to fix, so it’s worth checking for.
Look at the BIOS on the new servers and make sure that power savings has been disabled.
Just changing the power savings settings in Windows Server does not always disable power savings; you need to check a server management tool like Dell OpenManage or HP Systems Insight Manager.
Look up the manufacturer's performance configuration guide for your server model. These guides list the power settings needed to get “high performance.”
Most servers these days ship with default settings that clock the processors down, and they can be very sluggish to clock up.
This typically takes a planned outage to fix: you need to make a change in the BIOS and then restart the server.
Stack dumps typically cause periodic “freeze-ups” that are followed by high load. 15-second storage errors mean the storage literally isn't responding for 15 seconds.
Look in the SQL Server error log. Read everything closely for a few days' worth of entries, then filter for terms like “stack dump” and “15 seconds”.
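You can also search the log from T-SQL. xp_readerrorlog is undocumented but widely used; this sketch assumes the usual parameter order (log number, log type, search string):

```sql
-- Search the current SQL Server error log (log 0, type 1 = SQL Server log)
EXEC xp_readerrorlog 0, 1, N'15 seconds';
EXEC xp_readerrorlog 0, 1, N'stack dump';
```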
For more on 15 second errors, check out my previous episode, “Outside the big SAN Box: Identifying Storage Latency in SQL Server.”
If SQL Server can't use all your CPUs, it can cause oddly high usage on the cores it can use, and in some cases it can prevent SQL Server from using all the memory on the server, which causes unexpectedly slow performance patterns.
I've seen this happen in a couple of different ways, most often because of edition licensing limits on cores or an odd virtual machine socket configuration.
Look in the SQL Server Error log after the last startup for the message telling you how many sockets and cores SQL Server detected and make sure it’s using all the cores it should be.
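You can also count schedulers directly. A minimal sketch; if any schedulers show up as VISIBLE OFFLINE, SQL Server can see cores it isn't allowed to use:

```sql
-- Schedulers SQL Server can run queries on, vs. cores it can see but not use
SELECT status, COUNT(*) AS scheduler_count
FROM sys.dm_os_schedulers
WHERE status IN ('VISIBLE ONLINE', 'VISIBLE OFFLINE')
GROUP BY status;
```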
You’ve probably checked for this.
It never hurts to double-check, because I've found surprises on servers that I manage. Stuff just creeps in. Someone decides to use the server for testing before you go live, forgets to tell you, and also forgets to turn stuff off.
Or if you’re me, you did something in the middle of the night before the instance went live that you forgot about.
Check Windows Task Manager for processes using CPU and memory, and make sure there aren't scheduled tasks or Agent jobs running things that weren't running on the old server.
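On the SQL Server Agent side, a quick look at which jobs are enabled and when they last ran can surface surprises. A sketch against msdb:

```sql
-- Enabled Agent jobs and their most recent run date (run_date is YYYYMMDD)
SELECT j.name,
       j.enabled,
       MAX(h.run_date) AS last_run_date
FROM msdb.dbo.sysjobs AS j
LEFT JOIN msdb.dbo.sysjobhistory AS h ON j.job_id = h.job_id
GROUP BY j.name, j.enabled
ORDER BY j.name;
```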
This is specifically for folks who have upgraded to SQL Server 2014 or higher.
Check your database compatibility level and related settings.
The new cardinality estimator is a good thing, but some folks have found it causing performance regressions, so it’s worth checking if this is part of your issue.
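Checking the level is a one-liner, and temporarily dialing one database back to the pre-2014 level is a simple way to test whether the new estimator is involved. A sketch; the database name is a placeholder:

```sql
-- Check compatibility level for every database
SELECT name, compatibility_level
FROM sys.databases;

-- Placeholder example: set one database back to the 2012 level (110),
-- which uses the legacy cardinality estimator on SQL Server 2014+
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 110;
```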
Instead of moving the files and attaching, I prefer backup and restore (or log shipping for a fast cutover).
You should be taking a final backup pre-migration no matter what, so even if you moved the database files to do the migration, you can restore that last backup and compare.
In any case, pull the top 10 queries by total CPU usage. Look for queries that are new to the list and queries whose plans or resource usage have changed.
I like to pull this list prior to migration as well, so I can tell if the top queries afterward are the same or different.
There are free tools that will pull top queries from the plan cache for you.
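If you'd rather not install anything, sys.dm_exec_query_stats has the raw numbers. A minimal sketch of the top 10 by total CPU (it only sees queries still in the plan cache):

```sql
-- Top 10 cached queries by total CPU; worker time is in microseconds
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```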
Even if you're matching things like the number of CPUs, the CPUs themselves are hopefully faster, because newer hardware generally is.
And since memory is cheaper, we often have a bit more memory in the new SQL Server.
When you make one thing faster, sometimes that can lead to a new bottleneck. I've hit situations where speeding up one resource just moved the pressure to another, and a wait type that barely registered before suddenly dominated.
For migrations, I like to capture wait statistics samples for a couple of weeks prior to the migration. That way, after the migration I can compare waits and see if anything new pops up, and track it from there.
There are free tools that will sample waits for you, or you can snapshot the wait stats DMV yourself.
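If you want to roll your own, the numbers come from sys.dm_os_wait_stats, which is cumulative since startup, so you need two snapshots. A minimal sketch (the temp table name is arbitrary):

```sql
-- Snapshot waits, let the workload run for a minute, then diff
SELECT wait_type, wait_time_ms, waiting_tasks_count
INTO #waits_before
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:01:00';

SELECT w.wait_type,
       w.wait_time_ms - b.wait_time_ms AS wait_ms_delta,
       w.waiting_tasks_count - b.waiting_tasks_count AS tasks_delta
FROM sys.dm_os_wait_stats AS w
JOIN #waits_before AS b ON w.wait_type = b.wait_type
WHERE w.wait_time_ms - b.wait_time_ms > 0
ORDER BY wait_ms_delta DESC;

DROP TABLE #waits_before;
```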
Good news: the wait stats check in the previous section will find this, too!
Sometimes you just get unlucky, and your code hits a weird issue on a new version of SQL Server that you didn't have on the old one. In one case I had very high CMEMTHREAD waits under load on one specific version of SQL Server.
This is a pretty rare wait type, so I had to do a bunch of research on it; we eventually found a fix for our version.
I always look for weird wait types, and want to compare waits pre and post migration when possible.
Sometimes people migrate right before the busy season hits. They know some load is coming, so they buy new hardware.
And sometimes the person doing the DBA work doesn’t get to talk to the people who know when it’s a high or low period for load.
You might be able to see this by looking at top queries by execution count, but another simple way to check is to collect a few performance counters before and after migration.
The “SQL Statistics” performance object has three counters that help measure how much work your instance is doing: Batch Requests/sec, SQL Compilations/sec, and SQL Re-Compilations/sec.
Compilations particularly burn CPU, so whenever CPU goes up I’m always interested to see if compilation rates have risen.
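You can sample these counters from T-SQL instead of PerfMon. The values are cumulative since startup, so take two samples a known interval apart and divide to get per-second rates. A sketch:

```sql
-- Cumulative since startup; sample twice and diff to compute rates
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN (N'Batch Requests/sec',
                       N'SQL Compilations/sec',
                       N'SQL Re-Compilations/sec');
```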