Introducing the SQL Server 2016 Query Store


Audio podcast:

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS
Leave us a review in iTunes

Target Audience:

DBAs and developers that are interested in analyzing query performance.

Abstract:

With the release of SQL Server 2016, we were finally able to play with what is, in my opinion, one of its most exciting new features: the Query Store!

The Query Store serves as a flight recorder for your query workload and provides valuable insights into the performance of your queries. It doesn’t stop there, however: using the performance metrics the Query Store records, we can decide which execution plan SQL Server should use when executing a specific query. If those two features aren’t enough, the Query Store surfaces all this information through easy-to-use reports and Dynamic Management Views (DMVs), removing a great deal of the complexity of query performance analysis.

During this session, we will take a thorough look at the Query Store, its architecture, the performance impact of enabling the Query Store, DMVs, the built-in reporting and the custom Query Store Dashboard.

No matter if you are a DBA or developer, the Query Store has all the data you need to make your time analyzing query performance a whole lot more fun!

Why I Want to Present This Session:

I absolutely love the Query Store feature and I believe it is THE feature that makes performance analysis and tuning available for everyone working with SQL Server.

Additional Resources:

https://www.simple-talk.com/author/enrico-van-de-laar/

Session Transcript:

Introductions and Agenda

Brent Ozar: Everybody, this is Enrico van de Laar; he’s going to be talking to us about the SQL Server 2016 Query Store, probably the one biggest feature why you should jump up and down about SQL Server 2016 and get that thing implemented. So take it away, Enrico.

Enrico van de Laar: Perfect, thank you. I can probably just open my PowerPoint I think, I’m taking a guess here.

Brent Ozar: Oh yes.

Enrico van de Laar: Perfect. So Brent did a really short introduction; a little bit more about myself. I’m a project manager at a Dutch company called PinkRoccade Healthcare. We specialize in doing all kinds of IT work in the healthcare sector in the Netherlands. I’m a Data Platform MVP, I blog regularly on my blog dotnine.net, and I wrote a book about wait statistics, which is my second favorite thing to talk and read about, next to the Query Store of course. I speak at various SQL Server events and I’m very happy to be a contributor to dbatools, which is a PowerShell solution that will make your jaw drop to the floor if you’re a DBA. The picture on the right is a little bit of an older picture; the parrot isn’t here today, so he wasn’t allowed to come into this session. And this is what we are going to talk about in this session.

We are going to see what the Query Store is; I’m going to show you what it is, talk a little bit about it, what it does, and why it’s there.

Then we’re going to take a look at its underlying architecture because I always like to know how things work to understand them better. I’m going to show you where you can set all kinds of options for the query store, how you can configure it. We’ll briefly talk about some of the things – the options, what disadvantages they have, but we’ll get into those a little bit more later. I’ve added an extra short chapter about understanding query performance metrics because even though the thing is called the query store, there are some things you need to understand when you look at the performance metrics the query store captures, so you don’t draw the wrong conclusions when looking at all those metrics. Then we’re going to get our hands a little bit dirty and look inside the query store, how we can get that data out, how the query store stores its information, its query run time performance metrics, how we can view it through the dynamic management views or the built-in reports.

And then we’re going to look at the second magic trick of the query store, which is plan forcing. Using the query store, we can force execution plans, and whether that is a good thing or a bad thing, we’ll go into in a little bit more detail later on. And of course, everything has its price: even though it’s in Standard and every other edition of SQL Server, there are some performance considerations when enabling the query store.

What Is the Query Store?

So what is this query store? Well, unlike what you might think, it isn’t a physical store where you buy queries; like we hinted already, the query store is basically SQL Server’s flight recorder. That’s basically the way Microsoft marketing is promoting it. The query store captures the performance of queries executed against a specific database. It can record all kinds of information about that, like the duration of the query, how many rows it processed, all those kinds of things. So sadly, it isn’t the place where you can buy queries, even though that would be a pretty decent business idea. It is a place where you can look at the performance of your queries.

What Is the Query Store

So it was introduced in 2016 and, like I said, what it basically does is capture and store query information on a per-database level. This means you will have to enable it on every database you want to use it on, and it will store that information on disk. That’s one of the really big advantages of the query store: all the data it captures is hardened to your storage subsystem. In the past, without the query store, you had to go to the plan cache to get all kinds of query performance information, and a failover, or anything else that clears your cache, would wipe it out. With the query store it will still be there, so even after a failover you can still look at what happened before the failover, why the performance of this query was worse and why we did the failover, without losing the information. The query store allows you to quickly and relatively easily analyze your query performance using built-in reports and DMVs.

So they built in a few reports, and they add new ones all the time with new releases of Management Studio, and they added a lot of DMVs you can use to just get that information out. Like I said, it retains query history for as long as you want, so if you want to store all kinds of runtime performance metrics for two or three years, that’s perfectly fine. You can do that if you want to; you just need enough disk space to store all that information. Another really big plus is that it’s integrated directly into the SQL Server engine. This means that when SQL Server executes a query, it will grab all that information from the query and store it directly, without having to run a thread in the background that has to capture the information after query processing. So it doesn’t cost as much performance as grabbing everything from the plan cache every time a query is executed. That’s really great.

Like we said in the introduction, what Brent mentioned is that it’s available in every SQL Server edition. So this is not Enterprise-only; this is available in Standard, in every edition you can think of. And a little funny bit about the query store is that originally it was built for Azure, for Azure SQL Database, so if you’re in Azure and you’re looking at your query performance, that’s basically the query store. It was introduced in 2016, but if you look at certain types of wait statistics, you can see that they were building it into SQL Server 2014 already. There are certain wait types that start with QDS, for Query Data Store, which was the name for the query store in the past and still is in those wait types. So they have been building this for quite some time now.

Why the Query Store is Powerful

So personally, I think, like I said, this one is in the top ten of my best features in SQL Server. And why do I think this is one of the best features? Because this is going to save you a lot of time, and if things save you time, they save you money. Maybe a lot of you are familiar with the queries you need to run against the plan cache: they can take a lot of time because you have to use the plan hash keys, you have to do a cross apply to get the query text, you have to do all kinds of complex querying. What the query store allows you to do is just instantly look at that information, usually in a very graphical manner. So when I have a performance issue with a certain query, I have to go and visit developers, and developers are really nice guys unless you try to convince them they wrote a bad query. So I spend a lot of time preparing why this query is running badly: I make screenshots of the query performance, how it runs with many rows or a couple of rows, copy it into Excel, and make a nice graph of it so they can visually understand what’s going on. With the query store, there’s just one report you can open, and you can give them that information directly. As a matter of fact, when our developers have an issue with a query, I basically ask them, have you looked in the query store already? Because all the information you want to know is there. You can just easily go there; you don’t need me to find it. It also makes performance tuning available for everyone, and there is a little asterisk behind “everyone”, because even though the query store makes seeing things a lot easier, the plan forcing part especially is not for everyone. You really need to understand what you are doing when you are forcing an execution plan.
We are going to talk a little bit about this later, but forcing an execution plan isn’t necessarily a solution for a performance issue. So even though it’s easier, and you’re not confronted with very long, complex queries and having to understand where all the information comes from, you still have to think for yourself. So, a quick look at the query store architecture.

Query Store Architecture

Like the name says, the query store, everything starts with a query and when a query gets processed, a number of things are going to happen. When it’s being executed, the query store is going to grab three pieces of information from that query.

Query Store Architecture

It’s going to grab the query plan, it’s going to grab the query text and it’s going to get the query runtime statistics. The query plan is basically the estimated execution plan. This is very important to keep in mind: every plan I’ll show you through the query store is the estimated plan, never the actual execution plan. It will extract the query text from the query, and if you have multiple queries in one batch, it will actually store every query statement separately and gather the query runtime statistics for every statement. Then some magic stuff happens, in which the query plan and the query text are stored in a little place in memory. There is a bit of a buffer there where the query store stores that information for queries. The query runtime statistics all get their own bucket to store their information, and everything here is still in memory. Query plans and query texts, if they are new, if they are unique plans and texts being executed for the first time, will directly be passed on to a process called the async writer. The async writer is basically a collection of jobs that SQL Server has to write to disk. So think of it as a list where SQL Server places a bit of a work order, like, okay, please write this query plan to disk, and the async writer picks it up and writes it to disk when it’s ready.

The query runtime statistics are not directly handed to the async writer. They are written at a specific interval, which depends on a setting called the data flush interval, and we’ll look at that setting a little bit later because it plays a huge role in the performance impact of the query store. It basically means that if you have a pretty high data flush interval setting and your SQL Server crashes, you will lose the query runtime statistics that haven’t been written to disk yet. Your query plans and texts will still be inside the query store, because those are flushed to disk practically immediately, but your query runtime statistics follow a bit later, depending on the interval setting. The async writer will eventually write them to disk, and that’s when your stuff is hardened. The nice thing about the query store is that everything we do with the DMVs and the reporting gathers the information together. It doesn’t matter if the data is in memory or already on disk; it will gather it and show it to you in the built-in reports or in the DMVs, so we don’t have to check what’s on disk already and what’s in memory and write a separate query or a separate report. Everything is combined, which is really helpful.
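As a sketch of how you can peek at this machinery yourself, the catalog view sys.database_query_store_options exposes the in-memory versus desired state and the flush and interval settings described above (run it in the context of a query-store-enabled database):

```sql
-- Inspect the Query Store state and settings for the current database.
-- actual_state_desc can differ from desired_state_desc, for example when
-- the store went read-only after filling up.
SELECT actual_state_desc,
       desired_state_desc,
       flush_interval_seconds,    -- data flush interval (runtime stats -> disk)
       interval_length_minutes,   -- statistics collection interval
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;
```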

Demo Time

Alright, so this indicates I need to show you something, so let’s do that. I have a virtual machine here with some databases. This is the new Management Studio, so we’re going to see a couple more built-in query store reports than usual. Basically, the way the query store works, if you want to use it, is you go to the properties of the database and, as you can see, in 2016 they’ve added this nice little new option called Query Store, which you can click to configure all kinds of things. So basically, as soon as you want to use the query store, you set it to read write, and it’s on.

Query Store configuration

So if I press okay, the query store will start collecting information for me. Now, like I said, there are a couple of settings that influence the behavior of the query store. The first one, which I mentioned on the previous slide, is the data flush interval, which by default is 15 minutes. It basically means that if you have a SQL Server crash or a failover, you will lose a maximum of 15 minutes of query runtime statistics. So of all that performance information about query execution, you can lose 15 minutes. The statistics collection interval basically dictates how long data should be aggregated together, and I’ll get to how this works in a separate section, because it really dictates the granularity at which you can analyze query performance.

Then some more general options. Max size sets the maximum size the query store can grow to, and one thing to keep in mind is that the query store does not reside in a separate database; it doesn’t reside in master or msdb or any other system database. It is stored in the primary file of your user database. So this is something you need to keep in mind: if this is my AdventureWorks database, I’m basically saying here that I will allow the query store to grow to 250MB inside my user database. So I need to make sure I have that space on disk, because the database can grow because of the query store. The query store capture mode lets you choose which kinds of queries SQL Server should capture. You have None, which means it doesn’t capture anything; you have All, which means it captures all queries; and you have Auto, which captures queries based on resource consumption. Interestingly enough, even though I’ve set it to Auto with all kinds of settings, it still seems to capture every type of query. Normally, the way it should work is that when you have a query that’s executed very frequently with little difference in execution times, it shouldn’t necessarily capture it, or if you have queries that are really heavy continuously next to one or two really small queries, it should skip capturing the small ones. But, like I said, I haven’t really seen it work that way. There’s another option, size based cleanup mode, which is pretty handy: if your query store, which is configured to 250MB here, reaches, I think, 90%, it will automatically start cleaning up old data.

So it will start cleaning up all kinds of run time statistics that are like really old, or it will start up cleaning information about queries that haven’t been executed in a specific amount of days, and those days you can configure here, which is 30 in this case, one month. If a query hasn’t been executed for 30 days, it will basically clean them from the query store. So that’s one option you can use to keep the size of the query store under control.

At the bottom are some nice pie charts, which basically show you the size of your database, how much the query store is using, and how much space is still available inside the query store. There’s a really handy button, Purge Query Data, which will empty your query store. So we’ll turn on the query store so we can work with it a little bit more later.

As with everything, you can configure the whole query store through T-SQL, using an extended ALTER DATABASE command. They added a new SET QUERY_STORE option you can run against a database to configure all the options I’ve just shown you, which can be really handy if you want to configure this on multiple servers or multiple databases: you can just write a T-SQL script that does it. So that’s simply how you enable and configure the query store; it’s really easy to use. And it is collecting data now: as soon as you click okay, it immediately starts collecting everything that’s going on. That includes some internal queries running in the background, but also everything you do in Management Studio.
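A minimal T-SQL sketch of that ALTER DATABASE command, using the settings discussed above (the database name is just an example from the demo):

```sql
-- Enable and configure the Query Store without touching the GUI.
ALTER DATABASE [AdventureWorks]
SET QUERY_STORE = ON
    (
      OPERATION_MODE              = READ_WRITE,
      DATA_FLUSH_INTERVAL_SECONDS = 900,  -- 15-minute data flush interval
      INTERVAL_LENGTH_MINUTES     = 60,   -- statistics collection interval
      MAX_STORAGE_SIZE_MB         = 250,
      QUERY_CAPTURE_MODE          = AUTO,
      SIZE_BASED_CLEANUP_MODE     = AUTO,
      CLEANUP_POLICY              = (STALE_QUERY_THRESHOLD_DAYS = 30)
    );

-- The Purge Query Data button has a T-SQL equivalent as well:
ALTER DATABASE [AdventureWorks] SET QUERY_STORE CLEAR;
```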

Configuring Query Store Across Many Servers and Databases

Alright, now, like I said, the query store is enabled and configured on a per-database level. So what do you do when you are the lucky DBA who has one thousand instances and tens of thousands of databases? How are you going to configure the query store on all those servers and databases? Well, dbatools to the rescue. If you haven’t heard about dbatools yet, you should absolutely look it up.

DBAtools commandlets

It’s a PowerShell-based module that will make your life as a database administrator a lot easier, and I mean a lot easier. Recently I was at a client who had to migrate a very large clustered environment to new hardware. There were like 80 databases, and they had an entire plan written out about how they were going to do that: take a manual backup of every database, capture which users were present inside the instance, what rights they had, what kinds of permissions, all kinds of functions. They spent weeks planning that migration, and dbatools will help you do that migration in one single line of PowerShell code. If you do not believe me, you should look up dbatools.io and look at the video there. When I showed that video to those guys, they were literally dancing through the room and cursing because they hadn’t found this earlier. It made their migration like ten times easier. They just had to execute a command, I want to migrate from server A to server B, I want you to back up every database and store the databases there, and just press enter, and it migrates everything you configured. It’s really great.

As a contributor to dbatools, and with my love for the query store, I’ve added a couple of commands to dbatools. One is Get-DbaQueryStoreConfig, a PowerShell command you can use to get the query store configuration of many query-store-enabled databases inside an instance, or get it for all your servers and all your databases in your environment. So you don’t have to write a query to get that information and run it on all those instances; you can just execute one command and get the information from all those servers and databases. Using Set-DbaQueryStoreConfig, you can then configure the query store with your specific settings, like your data flush interval, against as many servers or databases as you want. So basically, if you have one thousand databases on a single server, you can just say, okay, I want you to set these settings. Alright, is everyone able to hear me now? Because I think the audio went off.

Brent Ozar: It only went for like five seconds tops.

Enrico van de Laar: It should be back.

Brent Ozar: Yes, you’re good.

Enrico van de Laar: Alright, let’s see. Another option that we’ve built in is Copy-DbaQueryStoreConfig, with which you can basically say, okay, I have this one database that has a query store configuration that’s perfect. I configured this database optimally with all the query store settings I want, and I want you to push this configuration to all my other databases. So it will grab the configuration of that single database and push it to all the other databases on a specific server, or on multiple servers. This is incredibly helpful if you want to configure the query store on all the servers or all the databases in your environment.

How Query Store Records and Aggregates Statistics

One thing that I always love to explain, let me click this bit away, is how the query store records and aggregates those runtime statistics, because the way it does that is by using time intervals. In the settings I’ve shown you, one of those options, the statistics collection interval, dictates the size of the interval, which by default is one hour, and when you’re querying the query store or looking at the reports inside it, you are basically looking at aggregated data.

Aggregating metrics in intervals

If you have a query that’s executed multiple times in an interval, the query store will aggregate that information. So you have to keep in mind that you are not looking at the performance of a single query execution, but rather at the aggregation of multiple executions of the same query. The statistics collection interval setting plays a huge role in that, because the lower you set your interval, like one minute in the screenshot below, the more granular your data is. If you collect everything at one minute, the aggregation will give you more detail about the performance of the query, because it doesn’t have to aggregate a whole hour of executions of that specific query. The downside to setting this really low is that it costs you more performance, because more intervals need to be maintained, and it can cost you a bit more storage, depending on how many times your queries are executed. The way I always demonstrate this is with a very simple example, where you have two query store intervals: interval A, which goes from zero minutes to one minute, and interval B, which starts at one minute and ends at two minutes. Say our specific query is executed one time during interval A, and its execution time was one hundred milliseconds. If we go to the DMVs or the reports, we will see that query A has been executed one time and its average duration is one hundred milliseconds. In this case, we are looking at the performance of a single execution of that query.

Frequently, when you’re dealing with specific applications, they will execute certain queries more often. So say we have our same query, query A, and in one interval we execute it two times: the first time it takes one hundred milliseconds and the second time it takes two hundred milliseconds. If we grab the query performance information for this interval, the query store will show us an execution count of two, because it has been executed two times, but it will also show a different average duration, in this case one hundred and fifty milliseconds. So you’ll have to keep this in mind: a query can be executed multiple times during an interval, and that will affect your runtime performance statistics.
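The aggregation above can be seen directly in the runtime statistics DMV: one row per plan per interval, never one row per execution. A minimal sketch (note that Query Store reports durations in microseconds, so the 150 ms average from the example would show as 150000):

```sql
-- Aggregated runtime statistics: for the example above you would see
-- count_executions = 2, min_duration = 100000, max_duration = 200000
-- and avg_duration = 150000 for that plan's interval.
SELECT rs.plan_id,
       rs.runtime_stats_interval_id,
       rs.count_executions,
       rs.avg_duration,
       rs.min_duration,
       rs.max_duration,
       rs.stdev_duration
FROM sys.query_store_runtime_stats AS rs
ORDER BY rs.runtime_stats_interval_id, rs.plan_id;
```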

Demoing Aggregation

Alright, now I’ve got a short demo of that. Let’s take a look. So what I’m going to do is clear the query store using this command, which is basically the same as pressing the button in the database properties, and I’m going to run a query. The query itself doesn’t really matter, because the query store will capture every query, and we have to grab the plan ID. Now, this is a bit of a silly query because I already know my plan ID is going to be one, since this is the first query executed after I cleared the query store. What’s interesting to mention is that the query store has its own IDs for tracking queries and plans. It does use a hash, but it has its own numbering for the queries it captures. So an ID of one means this was the first query executed, and it will just keep counting from there. Alright, so I’m going to look at the runtime statistics using this DMV. We’ll get into the DMVs later, but I’m basically asking, okay, give me all the performance information for this specific plan ID, in this case one. As you can see, we’ve executed the query once, but notice that certain performance indicators, like the standard deviation, are zero. This is pretty normal, because a standard deviation cannot be calculated from a single execution. It also means that things like the last, minimum and maximum durations are all identical, because the query has only been executed once in this specific interval.

If we execute the query again and we look at the performance metrics again, you’ll see it has been executed two times, and now things like the standard deviation have been included. A maximum duration has been calculated, and a minimum duration, and we also have an average duration in between those two, since it’s only been executed twice. So how can we look at those intervals? There is another DMV, sys.query_store_runtime_stats_interval, that shows you all the intervals you have, and as you can see, they have a pretty logical start time and end time. In this case my interval is one hour, so it starts at 1400 and ends at 1500. I think this is stored in the Central European time zone, so you may have to do some calculations to translate the start and end times to your own time zone. There’s also a comment column here that’s always null; I don’t know why it’s there. Probably they’ll build some functionality where you can put a comment on a certain interval, like, okay, something happened here, we need to check this out, but right now it isn’t doing anything.
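To get from those interval rows back to readable queries, the Query Store DMVs join together on their IDs. A sketch of the typical chain, from statement text through plan to per-interval statistics:

```sql
-- Stitch the Query Store DMVs together: statement text -> query -> plan
-- -> runtime stats, with the interval boundaries for each stats row.
SELECT qt.query_sql_text,
       p.plan_id,
       i.start_time,
       i.end_time,
       rs.count_executions,
       rs.avg_duration      -- microseconds, aggregated per interval
FROM sys.query_store_query_text             AS qt
JOIN sys.query_store_query                  AS q
     ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan                   AS p
     ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats          AS rs
     ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS i
     ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
ORDER BY rs.avg_duration DESC;
```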

Alright, so let’s start this one again. I’ve already shown you a couple of DMVs when we looked at how the intervals work, but basically there are two methods available out of the box to analyze the data inside the query store: the built-in reporting and the DMVs. The built-in reports are, in my opinion, perfectly fine. I know some people have issues with them because they aren’t really perfect, but they are pretty helpful. They are easily accessible directly from Management Studio, and they give you access to all kinds of features, like comparing multiple plans and forcing execution plans, all things like that.

If you want to access the query store information in a more programmatic way, like through SQL scripts or T-SQL scripts, you’ll probably use the DMVs. Alright, so first I’m going to show you where you can find the reports and how they look by default. Basically, as soon as you enable the query store and you expand your database, you get an extra little folder here called Query Store. If you open it up, you will see all the different reports that are available. Right now, I’m using the standard 2016 Management Studio; if you use the RC of Management Studio, you’ll also notice an additional report. They are adding reports whenever a new version of SSMS is released, so you can expect multiple new reports here.

Permissions Required to See Query Store Data

Brent Ozar: Do you have a second for a question? Crash Dan wants to know what’s the minimum permissions required in order to see these reports in the DMVs? Do you know?

Enrico van de Laar: That’s a really good question but I forgot.

Brent Ozar: I didn’t know it either.

Enrico van de Laar: No, I think it’s on MSDN, it’s written somewhere what you need but I don’t know it from the top of my head, sorry.

Brent Ozar: Okay, cool.

Query Store Reports

So these are basically the four you get. The Regressed Queries report shows you queries that the query store has seen perform really well and then all of a sudden perform worse. So it’s an overview for finding those queries where a user calls and says, it was really fast one hour ago and it’s really slow now; you can use that report for those. Overall Resource Consumption is an overview of the resources your queries are consuming, but the one I use basically all the time is Top Resource Consuming Queries. If we open that one up, and I’ve had the query store enabled for some time now, you get this interactive little report.

What it does for you, by default, is immediately show you the top 25 resource consuming queries during the last hour. So here I can immediately identify that my SELECT star was my top resource consumer during the last hour. What you see in this bar chart you can configure yourself: you can also say, okay, I want to see it based on number of executions, or on the number of execution plans. Here we have an interesting one: this query already has two execution plans, and as you can see, this is an internal query, something generated by Management Studio. Apparently it generated two different plans, and you can see the different plans and their execution times to the right. This is really helpful, because I can immediately identify that this query has two different execution plans, one with an ID of 11 and one with an ID of 8, and they are probably a little bit different somewhere; okay, I see a hash match going on here and something different here. This will help you find those queries when you have, for instance, a parameter sniffing problem, where a plan is compiled for a value that only has two rows and a different plan is generated for something that has many rows. You can identify that a query generated multiple plans, and you can basically select a plan and click the button here to force it. We’ll look into this a little bit later, but this is how you can force specific execution plans.
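The force button in the report maps to a pair of stored procedures, so the same thing can be scripted. A hedged sketch (the query_id and plan_id values here are illustrative placeholders, not the IDs from this demo; look yours up in the report or in sys.query_store_plan):

```sql
-- Force a specific Query Store plan for a query. Both IDs are Query Store's
-- own identifiers, not plan handles from the plan cache.
EXEC sp_query_store_force_plan   @query_id = 11, @plan_id = 8;

-- Undo the forcing when the data (and the best plan) changes:
EXEC sp_query_store_unforce_plan @query_id = 11, @plan_id = 8;
```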

If you don’t like graphs, you can always switch to a grid format, or one with additional details where you can immediately see the query text and just scroll through it, basically getting the same information as from the DMVs. So I usually just leave it at this. Depending on your interval, so the granularity you set, in this case one hour, you will see how it aggregates the information, so these queries, the little bubbles, can go up and down depending on the number of executions.

Analyzing a query’s history

At the bottom you will see the execution plan, depending on the little ball you clicked on here. So this is really helpful. You can immediately see all kinds of information from this interactive report. If you mouse over it, you’ll get more information directly from the statistics, like the total execution count, the minimum duration, maximum duration, all that kind of information. So this one is really helpful and you can configure and arrange this basically any way you want. By using the configuration button, I can say, okay, I want it based on duration, or logical reads, logical writes or physical reads, I want it based on the average value and I want it for, well, okay, let’s say the last month, and I only want the top ten queries and I’m only interested in queries with more than two plans, for instance. And I can click okay and it will present me this information. So this is really, really easy to use and way friendlier than writing a lot of T-SQL code to get the same information. I’m going to skip the track queries report here because it isn’t really that interesting. What it basically does is, you click on a query and you click the track query button, and then you can see live how the query performs. If you set it to auto refresh, it will just query the Query Store every now and then and see how the query is running. You can just follow that one specific query if you know it has issues – like the Live Query Statistics, only here you track the query’s executions, basically live.

What’s the Difference Between Estimated and Actual Plan?

Brent Ozar: We’re getting a lot of questions around – what’s the difference between an estimated plan and an actual plan? Why can’t it cache actual plans?

Enrico van de Laar: Alright, well, that’s a question I get a lot. The difference is: the estimated plan is, okay, this is what I think I’m going to do, and the actual plan is what happened after the execution. The main reason they are storing the estimated plan is basically very simple. If you execute a query with an actual plan, you’ll also see how many rows it actually processed and how many things it actually did, and because those numbers can change with every execution, it would mean you have to store a plan for every single execution of a specific query – one time you’ll get a hundred rows back, then 150, then 101, and one time it took 2.3 seconds and the other time 2.1. That basically means that for every execution you would need to add an additional query performance row inside the Query Store, and with the estimated plan you don’t need to do that.

Does Query Store Work with Log Shipping and AGs?

Brent Ozar: Another – a lot of people have asked, Ritesh was one of them, “If I do things like log shipping or always on availability groups, what’s it look like over on the replica?”

Enrico van de Laar: I believe – you configure it on a per database level, and I think, if I’m correct, when you use it in an Always On configuration, on the secondary it will be set to read-only. For log shipping it will probably be read-only as well, because log shipped databases are read-only or in standby by default. So I think it will be – I’m guessing it will be read-only, but I’m not really sure about that, I haven’t tested it myself.

Brent Ozar: Okay.

Enrico van de Laar: So I’m not really sure about those.

Brent Ozar: Cool.

Back to the Query Store Reports

Enrico van de Laar: Alright, so these are basically the reports you can use to get an idea of what’s going on, and personally, I think these are really great because they immediately show you, okay, what’s going on on my server, what are those expensive queries. Another way to do it is of course by using the DMVs, and I’m going to scroll over a couple of slides and immediately show you those. Basically, the Query Store adds like five or six different DMVs, which store different kinds of information. This is the most obvious one, which stores all the options for your Query Store. So you can basically get some information: okay, what state is the Query Store in, what is my flush interval in seconds, what is my interval length in minutes, what is my storage size. This gives you all kinds of information about the Query Store on this database, so on AdventureWorks. It will only show you the information for the database you’re connected to, so if I switch to something else that doesn’t have the Query Store enabled, it doesn’t return anything. This next one is one I showed a little bit earlier, sys.query_store_runtime_stats, and this is basically your gateway to all your query performance. The runtime statistics are all the metrics that record how your query was executed: when was the last time it was executed, what was the first time it was executed, what was its average duration, its minimum duration, its maximum duration, its average CPU time – and it does this for all kinds of counters.
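A minimal sketch of the kind of query he is running here, using the documented sys.database_query_store_options columns:

```sql
-- Query Store configuration for the current database
SELECT actual_state_desc,        -- READ_WRITE, READ_ONLY or OFF
       flush_interval_seconds,   -- how often collected data is persisted to disk
       interval_length_minutes,  -- granularity of the runtime statistics
       max_storage_size_mb,
       current_storage_size_mb
FROM sys.database_query_store_options;
```

Run it while connected to the database you are inspecting; against a database without the Query Store enabled it returns nothing useful, as he demonstrates.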

So what you’ll see here, we have an average duration, last, minimum, maximum and a standard deviation, and it does this for all the different areas of interest. For instance, here we have the CPU time. We can see how much time on average this specific query spends on CPU when you execute it. I can see what the maximum was and the standard deviation. I can do the same for logical I/O reads in this case, or writes if I’m interested – we’ve only been doing SELECT queries, so there are no writes here. We can also do it for other cool stuff, like physical I/O reads, so how many physical I/O reads did this query do, or CLR time, if you’re using CLR. One of my favorites is the average degree of parallelism, so we can immediately identify what degree of parallelism this query used during its execution, which is really helpful. For instance, I can use this DMV for cases like this: okay, I want the top ten queries based on average duration, and I also want the execution plan, so for that I need to go to sys.query_store_plan, because that’s where it stores its execution plans. It does this in the same XML format we all know, so we can just cast the column and get the execution plan as clickable XML.
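A sketch of pulling those counters from the runtime statistics DMV (column names per the SQL Server 2016 documentation; durations are in microseconds):

```sql
-- Runtime statistics per plan, per time interval
SELECT plan_id,
       count_executions,
       avg_duration, min_duration, max_duration, stdev_duration,
       avg_cpu_time,
       avg_logical_io_reads,
       avg_physical_io_reads,
       avg_dop                    -- average degree of parallelism
FROM sys.query_store_runtime_stats;
```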

Query store DMV output

I’m joining some more DMVs here, like sys.query_store_query, which records information about the query itself, the query statement. sys.query_store_query_text records the text of the statement and, finally, sys.query_store_runtime_stats records all the performance metrics. So if I execute this query, I can immediately see, okay, these are the top ten queries that have been executed, with their execution plans, based on average duration. And you can go crazy with this, you can do all kinds of things: you can order on the last duration, or you can find queries that have been executed more than ten thousand times in the last hour, for instance. You can immediately grab this, and before we had the Query Store, we had to get all this information from the plan cache, which took way more time. In this example, like I said, it also records the degree of parallelism. I’m going to execute a query which is going to run in parallel if everything goes as planned – yes it does – and I can immediately identify parallel execution plans.
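The join he describes can be sketched like this; the TOP 10 ordering by average duration is just an example:

```sql
-- Top 10 queries by average duration, with a clickable plan
SELECT TOP (10)
       qt.query_sql_text,
       CAST(p.query_plan AS XML) AS query_plan,  -- stored as nvarchar, cast to XML
       rs.avg_duration
FROM sys.query_store_query         AS q
JOIN sys.query_store_query_text    AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan          AS p  ON p.query_id       = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id       = p.plan_id
ORDER BY rs.avg_duration DESC;
```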

So in this filter I said, okay, I only want the queries that were executed with a degree of parallelism higher than one, and here we are. I still have the regular query I use for querying the plan cache, and that one is pretty horrible on servers that have a lot of parallelism going on, because it has to shred the XML and it can take a long time to process all the plans in cache – and this is just instant. So when I’m tuning things like the cost threshold for parallelism, this query saves me so much time it’s silly. It’s absolutely perfect.
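The parallelism filter he mentions can be sketched as a simple predicate on avg_dop, with no XML shredding required:

```sql
-- Find plans that ran in parallel
SELECT qt.query_sql_text,
       rs.avg_dop,
       CAST(p.query_plan AS XML) AS query_plan
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan          AS p  ON p.plan_id        = rs.plan_id
JOIN sys.query_store_query         AS q  ON q.query_id       = p.query_id
JOIN sys.query_store_query_text    AS qt ON qt.query_text_id = q.query_text_id
WHERE rs.avg_dop > 1;
```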

Alright, so I’ve shown you basically all the DMVs that are in there, starting with sys.database_query_store_options, which stores all our options. Basically, the two most important DMVs are sys.query_store_query and sys.query_store_plan. On one side it stores the query, in sys.query_store_query, and on the other side it stores your execution plan, and on practically all the DMVs you can use a certain key to join them together. Most of the time you’ll only use the query and plan DMVs together with the runtime statistics, but you can see how they are interconnected here on this little slide. The one I haven’t talked about is sys.query_context_settings, which is kind of a special DMV. What it basically does is record all kinds of SET options for your query statement – for instance, was ANSI_NULLS set to zero or one – and it will store that in this DMV.

So you can also see what kind of SET options were used when that query was executed. Alright, so we’ve gone through all kinds of ways we can use the Query Store as it was intended, like a flight recorder, and how we can grab all kinds of performance information. Well, one other magic thing the Query Store can do is force a specific execution plan, and what I wrote here in the slides is “for a regressed query”. What I basically mean by that is, for instance, say you have a query that generates a certain plan, the plan executes pretty quickly and everything is fine, and all of a sudden the optimizer decides, okay, I need another plan, and that plan isn’t optimal for the query, so it’s actually performing worse now than it did in the past. That’s what we mean by regressed query performance, and that’s the situation we’re dealing with: a new execution plan has been generated which performs worse than the original plan. And this one is really funny, because I’m going to show you what you had to do to force a specific execution plan without the Query Store, and then I’m going to show you how to do it with the Query Store. I’m going to be honest here: I’ve only had to force a specific execution plan twice in my entire career.

You don’t necessarily want to force execution plans. You only want to do it if you’re absolutely sure, and the reason for this is, when you force a specific execution plan, SQL Server goes into stupid mode. It will not generate a new execution plan for that query anymore, it will not generate plans that are better than the one you forced, so if you forced a plan and you forgot about it, it will keep using that plan until the plan guide is removed. It will always stay there, forcing that plan, so you have to be really careful and you have to be really sure this is the thing you want to do.

Doing a Plan Guide the Old-School Way

So what I’m going to do in this demo, I’m going to show you how we did it the old-fashioned way. I’m going to make sure I don’t have any plan guides created, and I’m going to execute these two queries, and as you can see – if I can get them on one screen, that would be nice – these are identical queries. The only difference is a different parameter in this query. There’s also a reason I’m doing this using sp_executesql: it’s the most reliable way I can create a plan guide that’s actually going to work. Plan guides are notoriously difficult to use on queries that are created dynamically or executed by some kind of application, because when you create a plan guide you have to include every character in the query. If there is a space in the query, it also has to be in your plan guide. If there’s some kind of funky Unicode character in there, you need to include it in the plan guide or it will not work. So this is, for me, the most reliable way to make it work. What I’m going to do is clear the plan cache, because otherwise the second query would reuse the plan from the first execution – SQL Server is lazy – and now it has to generate a new plan, and if everything goes well, we should see two different execution plans here.

And as you can see, the top one, based on that specific ID, generated a clustered index seek here and another index seek here. When we supplied the different parameter and it had to go through more rows – you can see in the results there’s a big difference in the number of rows – it chose an index seek but also an index scan. Now, say I’m in a certain mood and I’m like, okay, you know what SQL Server, I’m going to be smarter than you, I’m going to tell you to always use the first execution plan. I don’t want you to think for yourself, I know better than that, I want you to always use this plan because I know it’s good. What we had to do then, in this case, is grab the execution plan XML from the cache, so to simulate this, I’m going to drop my cache again and I’m going to execute my query again.

This is the same query as this one, only this time I’ve put it on one line. Again, I’m doing that so it’s easier for me to create the plan guide, because I have to include all the spaces, all the line breaks, all the tabs, anything that’s in there, even comments – I have to include everything. So I’m going to execute this query and I’m going to look in the plan cache.

So here we already see some things we had to do in the past, like a CROSS APPLY to get information about queries and execution plans, and I had to eyeball my query text: okay, I think it’s this query, with this query plan. I had to copy this query plan and create a plan guide using sp_create_plan_guide. What you do here is give it a name, in this case ForcedPlan, and enter your query, including all the Unicode characters, spaces, line breaks, tabs, comments your developers have put in – everything – and you run this. Now, if I execute this second query, which would normally result in the index seek plus the index scan, it should instead go for the two index seek operations, which it does. It works. Alright, and if you look at the properties, you can see it used a plan guide: PlanGuideName, ForcedPlan.
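A rough sketch of the plan guide he is creating; the statement text and parameter are hypothetical, and the plan XML placeholder must be replaced with the full plan copied out of the cache:

```sql
-- Old-school plan forcing with a plan guide. The @stmt text must match what
-- the application sends character for character: spaces, line breaks, tabs,
-- comments, everything.
EXEC sp_create_plan_guide
    @name            = N'ForcedPlan',
    @stmt            = N'SELECT * FROM Person.Person WHERE BusinessEntityID = @ID',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = N'@ID int',
    -- paste the full plan XML grabbed from the plan cache between the quotes
    @hints           = N'OPTION (USE PLAN N''...'')';
```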

So this is a demo and I think this made it look really easy to force this plan, but to prepare this demo for the first time actually took me two and a half hours to make sure this execution plan was being forced because I did some spacing here like these spaces and things like that, and somehow it didn’t want to force this execution plan. I’m going to drop them and I’m going to show you how to do this in the query store. So in this case, I’m usually filtering on execution plans with for instance, two plans, let’s see if we can find a query here, yes, this is the one.

Forcing a Plan the Easy Way

Notice that we have our index seek plan here, and we have our index seek plus index scan here – this isn’t the right one. This should be the right one, yes. Notice the difference in plans. Now, without all the plan guide work we had to do in the past, in the Query Store I can basically do this – yes, and it’s forced. So just by clicking that, you can save multiple hours of work when forcing a plan.

Because it’s this easy, it’s also very, very easy to forget. I can very easily go into these reports and see, okay, look at that, there are two plans, this one performed way better, and I’ll just force that plan. If you force a plan with the Query Store, like I said in the beginning, the same rules apply: SQL Server will keep using this plan until the end of time. It’s even worse with the Query Store, because if you forced a plan with plan guides and you rebooted your server or had a failover, you had to create the plan guide again, while the Query Store hardens this information to disk. So if for some reason you forgot to keep track of all the plans you’ve forced, it will keep using that specific execution plan until the end of days. So be very careful with that. Be really sure you know what you’re doing.

Brent Ozar: We got one question, your forcing the plan went so quickly and easily that people are like can you go back and show that again, just because it was so easy?

Enrico van de Laar: Yes, of course. There is another great option, unforce plan, and then it’s unforced again. So like I said, you basically click on the plan you want – in this case there are only two – and you press the force plan button. It will give you some kind of warning: okay, are you really sure you want to do this? As with everything, I’ll just click yes, and you’ll notice this little checkmark appearing, which indicates that a plan is being forced. Now…
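The same force and unforce actions are also exposed as stored procedures, if you’d rather script them; the query and plan IDs below are hypothetical and come from the Query Store reports or DMVs:

```sql
-- Force plan 11 for query 5
EXEC sp_query_store_force_plan   @query_id = 5, @plan_id = 11;

-- Change your mind: unforce it again
EXEC sp_query_store_unforce_plan @query_id = 5, @plan_id = 11;
```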

Brent Ozar: I went the wrong way on mute. We had another question too: is there a DMV that tracks which plans are forced right now? Lee Markam asked that.

Enrico van de Laar: Well, yes, you can do that. If you look at the sys.query_store_plan DMV, you can get all of the forced execution plans, but you have to filter on a specific column there. Let’s see if I can try this in one go. In sys.query_store_plan there is a small column here called is_forced_plan, and you’ll see there’s one forced here. So this is how you need to keep track of them. In the latest release candidate of the new Management Studio, they’ve added an extra report called Forced Execution Plans, so you can just open that and get an overview of all the execution plans that are forced. And that’s the way you need to keep track of them, because, as you can imagine, usually you will filter the reports on a very specific time segment, like the last hour. If you forced a plan three hours ago and the query didn’t execute in those last three hours, you will never see it in the report. So that’s how you need to keep track of those queries. So yes you can, even though you’ll have to dive into the DMVs or get the latest version of Management Studio.
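A sketch of that check against sys.query_store_plan:

```sql
-- All currently forced plans in this database
SELECT p.query_id,
       p.plan_id,
       qt.query_sql_text
FROM sys.query_store_plan       AS p
JOIN sys.query_store_query      AS q  ON q.query_id       = p.query_id
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
WHERE p.is_forced_plan = 1;
```

Because this queries the catalog rather than a time window, it shows forced plans even when the query hasn’t run recently.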

QueryStoreTools.com: Free Tools and Reports

Let’s unforce this again, alright. There are, by the way, other ways you can get that information. On the last slide I’ll show you a link to a website, querystoretools.com, with various solutions I built myself that improve the built-in reports, but also give you a stored procedure that gets all kinds of performance information out of the Query Store. I don’t really have time to demo those right now, but you can download them from querystoretools.com.

Query Store Performance Overhead

So, on to the last topic: how much is this going to cost me? If I enable the Query Store, what’s it going to take? Well, on average, the performance impact is three to five percent – that’s a statement Microsoft made with the introduction of the Query Store. It’s a bit more complex than that, because it depends on a lot of factors. It depends on your data flush interval settings: the lower your data flush interval, the quicker query information gets flushed to disk, so your storage subsystem has to process more information, which can lead to a performance impact. The amount of unique queries matters too – remember that every time a query is executed for the first time, the execution plan and the query text are flushed to disk almost immediately. So if you have queries that constantly generate new execution plans, you’ll put a lot of pressure on the storage subsystem, because new plans keep being written to disk. Of course, general storage performance plays a huge role for the Query Store, as does the general performance of your server itself. If you’re already running at 80% CPU load, enabling the Query Store and making SQL Server work even harder can have a bigger impact. So, of course, the last line: your mileage may vary.

I’ve tested a lot with the Query Store. I’ve done the craziest things, from running TPC-H benchmarks to simulating scripts with tens to hundreds of thousands of connections with all kinds of unique queries, and I have to agree with the statement Microsoft made. From my investigation, if you leave everything at the defaults, the three to five percent on average is about right. I believe I hit four percent, so that’s right in the middle. When I started testing by setting the data flush interval and the performance metrics interval – the granularity of your runtime statistics – to a really low value, it went up to 16%. So those are the settings you have to be really careful with. With the default settings it will be around four percent, but it depends on many things, so you will have to test how it works for you. And thankfully, there are many ways we can do that.
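For reference, these are the knobs he is talking about, shown here with the SQL Server 2016 defaults (the database name is just an example; lowering the first two values increases the write pressure he describes):

```sql
-- The Query Store settings that drive most of the overhead
ALTER DATABASE AdventureWorks2016
SET QUERY_STORE = ON
    (DATA_FLUSH_INTERVAL_SECONDS = 900,  -- how often data is hardened to disk
     INTERVAL_LENGTH_MINUTES     = 60,   -- granularity of runtime statistics
     MAX_STORAGE_SIZE_MB         = 100);
```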

Monitoring Query Store Overhead with Perfmon and XE

There are basically three ways – three paths you can take when you want to look at Query Store performance. There are new Performance Monitor counters that show you information about the Query Store, there are of course new wait statistics, and there are a load of Extended Events you can use.

So let’s quickly look at those. For instance, in Perfmon you can go to the SQL Server category and a new category has been added, which should be Query Store – I skipped it there, here it is – and it shows you some things about the Query Store. This is basically the impact the Query Store has, for instance, on your CPU usage, so you can track, okay, how much CPU time does the Query Store take when it’s recording all my query performance. How many logical reads is it causing, how many logical writes, how many physical reads. There is no physical writes counter to track, because the writing is handled by the async writer: as soon as the Query Store hands the data over, it isn’t concerned with writing it to disk. The Query Store does not write the data to disk itself, so it doesn’t record physical writes. So you can use this to track these counters.

Another really helpful way – and like I said, my second favorite thing – is wait statistics. Basically all Query Store related wait types start with QDS, which stands for Query Data Store. Run this and you’ll get all the different wait types that are involved with the Query Store. In this case there are only 19 – I know in vNext more are added – and for some of those I have an idea what they’re doing, and for many of those I have no idea what they stand for. I can guess, but I’m not really sure. I’m guessing the async queue wait has something to do with handing over the information that needs to be flushed to disk by the async writer, just because of its name. The QDS persist task is another one that’s constantly running, so I’m guessing these two are probably just waits you can safely ignore, but I’m not really sure yet. I haven’t done enough testing with these to be absolutely sure how they work. But these can be very interesting if you want to see, okay, what’s going on, how is the Query Store working. I’m pretty sure those two, the async queue and the persist task, are rather safe and you can ignore them, because they are running constantly in the background.
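A sketch of the wait statistics query he runs:

```sql
-- Query Store wait types all start with QDS
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'QDS%'
ORDER BY wait_time_ms DESC;
```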

Maybe a better way to track what’s happening in the Query Store is through Extended Events, and there are a load of Extended Events for the Query Store – in this case 68 – which capture all kinds of things, with very helpful descriptions. So for instance, if you have a very specific problem like, okay, I’m not seeing new information in my Query Store, the Query Store read/write failure events can be interesting to track to see what’s happening. Or: why is the execution plan that I forced through the Query Store failing? Why isn’t it forcing the execution plan?

There can be numerous reasons why plan forcing in the Query Store fails – for instance, if the plan depends on an index and you remove that index, the plan forcing will fail – and this is one way to track when plan forcing is going wrong, to see what’s happening. There are many, many Extended Events here, and again, with service packs and new releases of SQL Server, new Extended Events are added for the Query Store. Alright, so those are three methods you can use to keep track of what’s going on inside the Query Store: what’s it doing, why am I running into certain issues.
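A sketch of an Extended Events session for the plan forcing failures he mentions; the event name here is from the qds package in SQL Server 2016, but event names can differ between builds, so check sys.dm_xe_objects on yours:

```sql
-- Capture Query Store plan forcing failures
CREATE EVENT SESSION [QueryStoreForcingFailures] ON SERVER
ADD EVENT qds.query_store_plan_forcing_failed
ADD TARGET package0.event_file (SET filename = N'qs_forcing_failures.xel');

ALTER EVENT SESSION [QueryStoreForcingFailures] ON SERVER STATE = START;
```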

Recap

So to sum things up at the end of the session: I personally believe the Query Store is an incredibly powerful addition to SQL Server, and I hope you agree with that after seeing this session. I hope some of you have already played with the Query Store. If you haven’t, I urgently advise you to just turn it on on a test system and try it out. It will absolutely save you loads of time. It will make analyzing query performance a lot easier, and it will save you time and money doing that. It presents you with easy-to-use reports; it’s just a great feature all over. Absolutely give it a try, see if you like it – and it’s here to stay. In vNext, Microsoft added more and more counters for the Query Store, so you can also track things like tempdb allocations for a query, for instance, so they’re constantly building it to be bigger and better. It’s absolutely a fantastic feature. It also allows you to easily force execution plans but, like I said numerous times during this session, please be careful. Don’t just click the force plan button, sit back, and think all your problems will be solved – it doesn’t work that way. It just makes forcing a bit easier, so keep track of your forced execution plans.

This is the final slide. I’ve added some additional reading material. Microsoft actually wrote a pretty good MSDN article on the Query Store with a load of information, including a best practices performance guide. I myself wrote a lot of articles for Simple Talk about the Query Store, varying from forcing execution plans to analyzing Query Store performance and all kinds of things. There’s a link to dbatools – well, dbatools is just generally awesome, and even if you don’t go and turn on the Query Store, you should at least give dbatools a try, because it will make your life so much easier. And if you want to expand the Query Store a bit, for instance with custom reporting or with the stored procedure that gets all kinds of information from the Query Store, you should check out querystoretools.com. All those tools are free, they’re open source, you can modify them, you can do whatever you want with them, and there are some really nice things in there.

Query Store Questions & Answers

Brent Ozar: Nice, well you got all kinds of questions in here.

Enrico van de Laar: Okay.

Brent Ozar: One Alaska Pascual, he says, “Does DBA tools also work with Azure SQL databases?”

Enrico van de Laar: I don’t know. I think they – yes, I think they do something with Azure. I’ve only used it on-premises, but I think they have some commands for Azure as well. I’m not really sure, but check it out on the website.

Brent Ozar: Hugo asks, “If query store shows two plans and one of them is way slower, is there an easy way to see whether this is because the plan is worse or just because it got sniffed for a different parameter with more matching rows?”

Enrico van de Laar: The only way to do that is to go to the properties of the execution plan. So yes you can, but you have to dig deeper inside the execution plan; the graphs don’t directly show you the reason why, they just show you there is a difference. If you want to find out the difference, you’ll have to look inside the plan, and you can do that in the properties of the plan: you can see which parameter value it was compiled for, because it stores the compiled parameter value. So you can see that this plan was compiled for a value with tens of thousands of rows and the other plan for a value that only has one row, like the one I showed in the example. But you’ll have to dig a little bit deeper for that.

Brent Ozar: Gene says, “DBA tools is indeed tested against Azure, he’ll find the reference.” I don’t know if he means Azure VMs or Azure SQL DB, we’ll see. Sassan says, “When you disable the query store, does its data get deleted?”

Enrico van de Laar: No. The data will stay in the query store, even if you disable it. If you enable the query store again, it will start collecting again, but your historic data will still be there. The only way you can clear the query store is by using the command or clicking the purge button. So it will still be there.
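For reference, the command he mentions; the database name is just an example:

```sql
-- Wipe all collected Query Store data for a database
ALTER DATABASE AdventureWorks2016 SET QUERY_STORE CLEAR;
```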

Brent Ozar: Pascuali asks, “Is it possible to force query store to write on a different file group instead of the primary?”

Enrico van de Laar: Sadly, no. It will always write to the primary. I think there is a connect item for that as well, with a request to change it. I’m silently praying with every release that there is an option in which you can configure it, but until now, it’s still the primary file group.

Brent Ozar: Parasam says in the Slack channel, “Great session, there’s another good session on it from PASS 2016, if you go to the PASS TV YouTube channel” – and for those of you who are in Slack, you can go see a link inside there – “where Grant Fritchey did a session on Query Store and query tuning as well.” Dale Buyer says, “That three to five percent performance hit, is that the same whether you turn on query store for one database or all databases? Does it just affect all queries or how much of a hit does it have?”

Enrico van de Laar: Basically, when I tested it, it was with the Query Store enabled for a single database, and it only captures the queries that are executed against that database. So for instance, if you have two active databases and ten that aren’t active, and the Query Store enabled on all of them, the overhead only counts for the two that are active, because if no queries are executed, there won’t be any overhead. So it basically depends on the volume of your queries. If you only have one query every minute or so, the impact won’t be that big. If you have tens of thousands or millions of queries every minute, the impact will be way bigger.

Brent Ozar: Wes Crocket says, “If your company does month end processing on a database, like at the end of every month, how much history should he keep in query store for performance tuning?”

Enrico van de Laar: I can finally use the quote: it depends. How far back do you want to look? If you’re interested in, okay, at every month end I want to know how it went last year, then you’ll want to store a year. If you only want to compare against the previous month and see if it’s slower or faster, then you only need to keep a month or two. It just depends on what you want to do. You can store it for ten years if you want – the Query Store allows it.

Brent Ozar: Alright, cool. I think that’s all the questions that we have, I want to thank Enrico so much for presenting this and thanks to everybody who voted Enrico in for today’s session. Chanus did say, “Great session Enrico” as well, so thanks a lot, sir and have a great week.

Enrico van de Laar: Thank you as well for having me, thanks, everyone who voted and thank Ben for this awesome event. I think it’s really great and I absolutely love it. Thanks.

Enrico van de Laar is a SQL Server geek from the Netherlands that loves to write, talk, present and breathe SQL Server. He is a Data Platform MVP and frequently speaks at different SQL Server events in Europe. He is the author of the book "Pro SQL Server Wait Statistics" and loves to blog about SQL Server at http://www.dotnine.net/blog.


Comments

Thanks for the abstract.

I think the material you want to cover is great, and I think you’re the right person to teach that material. (There aren’t a lot of people talking & writing about using query store, and you’re definitely one of the most prominent.)

I’d skip the first sentence because it doesn’t define the attendee or their pains. Whether someone knows what Query Store is or not, that first sentence doesn’t add value or tell people what to expect. Plus, the term “play with” doesn’t sound good to production folks. That sentence is a good example of why you want to ruthlessly edit abstracts: take out every single word that doesn’t add value or define the attendee. After you’ve ruthlessly edited those out, then you’ve got more space to tell a valuable story.

For example, I don’t think you’re doing a good enough job of pitching why you’re the right person to talk about query store here. You might want to tell a short story about how you solved a production problem with Query Store.


Hi Brent,
Thanks for the feedback and suggestions! I’m currently in the process of rewriting some abstracts and your tips are absolutely helpful, thanks!

