Improving SELECT * Query Performance

Target Audience:

Anyone who’s frustrated with supporting and performance tuning queries with really long SELECT lists.

Abstract:

You’re a DBA or Developer, and you have a gut feeling that these simple queries with an egregious number of columns in the SELECT list are dragging your server down.

You’re not quite sure why, or how to index for them. Worst of all, no one seems to be okay with you returning fewer columns.

In this session, you’ll learn why and when queries like this are a problem, your indexing options, and even query tuning methods to make them much faster.

Why I Want to Present This Session:

This used to be one of the most frustrating parts of my job, and it’s not something there are always easy answers for.

Session Transcript:

Brent Ozar: In this session at GroupBy, Erik Darling is going to be talking about how to improve SELECT * queries; so take it away, Erik.

Erik Darling: Alright, now I wrote this deck because you’re all horrible people and you all do this to your servers all the time; or you work with developers who are horrible people and do this stuff all the time.

So the first thing I want to mention here is that there is absolutely no Dynamic SQL involved in this. I’m not going to sit there and tell you … just let your users choose a list of columns, because that’s nonsense. Never give users choices. The more choices you give users, the more pain you’re going to feel, the worse things are going to get. Limit your users to nothing. They get nothing. You get what I tell you you can have, users; you’re awful.

Now, what we’re going to cover is some typical scenarios for why SELECT * queries tend to happen; when SELECT * doesn’t really make me that angry, when SELECT * does kind of make me that angry, how SELECT * makes indexing hard, and then finally how to make SELECT * sort of less bad, how to sort of fix SELECT * queries on your server.

So why would you do that? Why would you ever do SELECT *? Why would you ever write a query like that and put it in production? It doesn’t have to be SELECT *; it can just be every column listed out. It doesn’t have to be – it could just be a wide list of columns – but why would you do that? Like sometimes it’s really innocent. It’s like you don’t know if you need all these columns, you might need some later, you just want to get them to be safe. Like I’m just going to get them all now and then, later on, I’ll figure out what I actually need. I’ll like do some work later and we’ll narrow this down. Sometimes it’s just silly, where like you’ll have a project manager who just keeps asking you for more data and you just start making stuff up. I’ve totally done this, where a project manager is like, “But the report doesn’t have anything interesting in it.” I’m like, “Well these are all the people over 65 who worship Satan and have killed a cat. These are the people you want to target for this new bank credit card, right? These are the people you want.” Other times it’s just pure awful laziness.

Now, what I run into consulting, more than anything, is that someone has written their app and they’ve just written all the sort of cursor-y stuff to get data from tables, but they never picked columns from those tables; so every single one of their Entity Framework queries is just the equivalent of SELECT *. But it’s like select Extent1 every column in the table where left join something nonsense garbage. Don’t do this.

Richie wrote a great post about how to select specific columns in Entity Framework after like weeks of me bothering him to do it, because I always wanted to have something to point to when I run into these people who are like, “I don’t know how to pick columns…” Thank you, Richie; you finally contributed something. And of course, I will run into some people who will tell me they didn’t know it was bad to just select every single column in the table. Like, “Why does it matter? I have all this data, why does it matter how much of it I want?” Well, if you go to the gym and you put increasingly heavier things on a bar, that bar gets harder and harder to pick up. If you just want an Int column, that’s like an empty bar; that’s nothing. That’s four bytes of data; SQL doesn’t care. You start tacking some other columns on, all of a sudden you have more and more stuff, that bar gets heavier and heavier. All of a sudden you don’t know what to do. SQL Server has the same problem. The more you want to get from SQL Server, the more work it has to do.

Now, if I asked you to leave your house in the morning – if I selected you from house into car and there was just you by yourself, you didn’t have anything else with you, that would be real easy. If all of a sudden I was like, “Okay, let’s take you and your living room couch and put that in the car. Now let’s take the dining room table and the bed from upstairs and a couple of dressers…” All of a sudden, the more things that I want to get out of your house and into that car, the more work you have to do, SQL Server has to do the same thing.

And when you go to deal with it, you’re left with these awful, awful grim choices. You can make a really wide nonclustered index; just like include every column in the table, maybe have a couple of key columns just for good measure. You could get really paranoid and maybe rearrange your clustered index – like, “Maybe I chose poorly? Maybe I just used the wrong column? Maybe I need like six columns in my clustered index? Everyone’s just using these columns. I don’t know what to do because they’re pulling all this stuff out.” Or you could just create like a narrow nonclustered index with just some key columns. Maybe you’d get away with it to a certain point where the optimizer would use it. It would be like, “Oh I’ll use it,” but not like reliably. It would get some use but not all the use that you want. The optimizer would just be like, “Well it’s good for this one but not for this one. So I’m just going to use it sometimes.”

That doesn’t help either because when the sometimes not comes along, your query slows down to a crawl and people are calling you and you’re angry and you’re just trying to drink by yourself under your desk. It’s just not the moment of Zen that you wanted. I don’t like any of those, and that’s why I wrote this deck, because I want to give you a better option for dealing with these things.

So when is it not that bad? Well, if you’re only grabbing one row – like if you were literally doing a single row lookup on the primary key, it’s really not that bad. Now, I’m not saying it’s good because you’re still reading a lot of potentially unused columns. And if you have any long data or MAX types in there, you could be reading a lot of extra junk. So let’s go look at what happens when you try to read a whole mess of LOB data from a query.

Now this is the primary key of the post table, and there is nothing else going on in here. I’m selecting one row – I put the SELECT TOP 1 in there because I want you to be extra confident that only one row has come out of this, but the ID column in the post table is the primary key. So this is just going to be one row. How many reads do you think this is going to do to get one row from posts? Go ahead and pick. Four? – it’s an anonymous poll, man. You can just click whatever you want, you can make a weird D equals sign thing if you want. Never mind, let’s move on.

So if I go and I select one row from this table and I get all this stuff out – and apologies because I’m quite new to XSLT2, so don’t worry about that – we do four logical reads, but we also do 82 LOB logical reads. Isn’t that a bummer? 82 to get one row – we read 82 pages to get one row out; doesn’t that suck? Why would you do that?

Now, if we go and we run almost that same query – we’re getting one row again but we’re going to skip getting the body column. The body column is the only column that we’re not going to pull out of there. How many reads do you think we’re going to do here? Good news, it’s only four. So just to get one row, as soon as we stop getting that nvarchar(MAX) column from the post table, all of a sudden we’re down to just four reads. So that’s pretty nice.
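For reference, the pair of demo queries looked something like this – a sketch against the Stack Overflow database, where the Id value and the exact column list are stand-ins:

SET STATISTICS IO ON;

-- Every column: the nvarchar(MAX) body column comes along, and LOB reads with it
SELECT TOP 1 *
FROM dbo.Posts
WHERE Id = 4175774;   -- any single primary key value

-- Every column except body: back down to a handful of plain logical reads
SELECT TOP 1 Id, PostTypeId, CreationDate, Score, Title, Tags
FROM dbo.Posts
WHERE Id = 4175774;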

I’m not going to do a big demo about off row data because DBCC PAGE and DBCC IND are just boring as hell and I kind of respect your time a little bit more than that. You just have to trust me that when data is stored off row there’s a pointer and it points to all the other pages that the LOB data is on. We’re good on that. We’re going to stick to that.

Now, if I needed more rows – so how many reads is this going to do? If I’m selecting ten rows out of post and there are ten numbers here, it’s the primary key, so it’s guaranteed it’s only going to be ten rows. I can put a TOP 10 in there … Alright, we’re doing pretty good. So let’s run this… And now 762 logical reads and 22 LOB physical reads just to get ten rows of data out. This sucks. What no one ever thinks about when they design tables is how big a column should be. Thankfully the folks at Stack Overflow are pretty smart, and when they designed the post table, they didn’t make a horrible mess out of it. There’s one big column in there. It’s the body column, it’s an nvarchar(MAX), and it has the text from every question and answer. Now you’re lucky that I didn’t get the post history table involved because that has all the edits in it, and that’s just a nightmare.
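Roughly what the ten-row version looked like (the ID values here are stand-ins):

SELECT TOP 10 *
FROM dbo.Posts
WHERE Id IN (4, 6, 8, 9, 11, 13, 14, 16, 17, 19);  -- ten primary key values, so exactly ten rows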

But they did a pretty good job designing this table. Bringing back that one column makes our query stink. Now, I find that very distressing. I think the fact that bringing back one extra column can cause hundreds and hundreds of extra reads, I find that distressing; I don’t like the way that works, so we’re going to have to figure out a way to do this a little bit better. Now, let’s go back to Management Studio and scroll down a little bit. I did this on my big monitor, so the scrolling takes a little bit longer than I’d like.

Now, the minimum amount of memory that a query will ever ask for is 1MB, right. At least on most servers, the default is 1MB. It’s 1024KB, it’s one metric meg. If you want to get into an argument about 1000KB, go ahead; jump off a cliff with it, I don’t care. So this is the minimum that a query will ask for, ever. Every query that runs is going to say, “I want 1MB of memory.”

So now I have an interesting question that I’m going to post over in here. That is, how does SQL Server estimate the size of variable length columns? I’m going to give y’all a minute to marinate on that. Full column length – that’s a popular one. Average length in statistics; ooh we’ve got some statistics freaks, huh? We have some people who have looked at a histogram.

Daniel Hutmacher: Way too much trust in SQL Server.

Erik Darling: Yeah, right. Look at this, look at this, this is fun; alright. Let’s test this out a little bit. Let’s see what happens. Now, if I have a table that looks like this, I have an Int, identity, which is my primary key, which is the only index on the table. I have an order date which is a DATETIME and I have a varchar and an nvarchar column. Now, the sucky thing is – and I almost just said an awful word – is that SQL Server, no matter what data is in the column, no matter what the statistics say about the average length, it will always estimate that half of that column’s variable length is taken up by data.
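A minimal sketch of a table like the demo one – the names and declared lengths here are assumptions:

CREATE TABLE dbo.MemoryGrants
(
    Id INT IDENTITY PRIMARY KEY,         -- the only index on the table
    OrderDate DATETIME NOT NULL,         -- what the demo queries sort by
    VarcharCol varchar(100) NOT NULL,    -- estimated at 50 bytes per row...
    NvarcharCol nvarchar(100) NOT NULL   -- ...and at 100 bytes per row, regardless of the actual contents
);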

Now, the important thing to notice about the queries that I’m going to run is that I’m not sorting by the big columns. I’m not going to say order by nvarchar column – I’m going to order by the order date column, which just isn’t indexed. So let’s look at the memory grant – or let’s look at what happens when I run these.

Now, it’s going to be a little bit different – and I should turn on query plans because that’s half the point of the damn exercise. If I turn on query plans and I look at these, the SELECT TOP 49 from this table where I order by order date – that’s how many rows it took for SQL to ask for extra memory on top of that baseline. So we moved up from 1024KB to 1056KB. It took 49 rows to do that. It only takes 28 rows of SQL having to sort the nvarchar data to ask for the same memory grant. So they both ask for the same one; it just takes different row thresholds to get there.
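The pair of queries being compared, sketched out (the column names come from the table sketch above; the TOP numbers are the thresholds from the demo):

-- 49 rows of the varchar column before the grant moves off the 1024KB floor...
SELECT TOP 49 VarcharCol
FROM dbo.MemoryGrants
ORDER BY OrderDate;

-- ...but only 28 rows of the nvarchar column, which is estimated at twice the size
SELECT TOP 28 NvarcharCol
FROM dbo.MemoryGrants
ORDER BY OrderDate;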

Now, again, I’m not ordering by those columns, I’m just ordering by an unindexed column, so SQL has to do all the work to sort it. And what sucks about those memory grants is that the only data in that column is just the letter A. There’s no empty spaces, there’s no figments of your imagination, there’s no GUIDs, there’s no ipso florum expecto whatever nonsense; there is just the letter A. But SQL Server is estimating that half of those columns are occupied by data, and that’s just a nightmare.

Now, even if you don’t go and say ORDER BY at the end of your query, SQL Server might inject sorts into your plans for various reasons: either it thinks a merge join is a really good idea, it needs to order data for a stream aggregate, or you’re using some windowing function where you’re just like, “Oh I just need to partition by this thing and I’m good.” But guess what: SQL Server needs to order data to support different things. So even if you don’t explicitly say ORDER BY this thing, SQL Server may still need to order data inside a query. I didn’t need to do that…

Moving down the line a little bit, how many people have tables that look like this one? Let’s get this pasted in here for y’all. Alright, I should probably scroll to the table. So how many of you have tables that look like this, where some dingdong just went like, “Every column needs to be nvarchar because I don’t know what the data’s going to be – please god, just make the data fit. I’m sick of these string and binary truncation errors. I can’t take it anymore.”

So, if I order again just by order date, it only takes seven rows before SQL Server says, “I need more memory.” So think about it like this, SQL has to do this work to sort the data. It’s not sorting by the big data, it’s sorting all that data by another column. SQL Server needs to get that data, all of it needs to arrive at the sort operator, it needs to order by the order date, but it needs to rearrange all the other data you’re sucking along that query by that order date. So it’s not ordering the nvarchar(MAX) column, but it has to order the nvarchar(MAX) column by the order date column, and that’s what SQL Server needs all that extra memory for.

Now, it’s not going to ask for all the memory in the world. There are some algorithms and heuristics and other fancy computer science things inside SQL Server. It will try to strike a balance between asking for memory to do stuff, spilling some stuff to disk, trying to like be nice to the hardware that you put your SQL Server on.

Now, let’s do some experiments with different columns. So we’re going to order this stuff – and I had to write the numbers in there because I’m so bad at math that I can’t do the KB to MB math on my own. So if I select the TOP 1000 from memory grants – this is the nvarchar(10) column – if I scroll back up here, this is 10, 100, 1000, 4000 and MAX. So if I sort the nvarchar(10) column by order date 1000 rows, I get an 88MB memory grant. Now that doesn’t seem too bad, but it’s only an nvarchar(10). If I go on and I do the nvarchar(100) and I sort that, all of a sudden we’re up to, I think, about 200MB to do it. If I do the nvarchar(1000) and I order that, all of a sudden we’re up to, what was that, 1300MB; so almost 1.5GB of memory just to sort one column by another column. This is starting to get nasty.

If I go and I try to sort that nvarchar(4000), all of a sudden, it’s almost 5GB of memory just to sort this. This isn’t SQL doing anything interesting, this is just sorting the data because you asked for it to sort the data. SQL is a dummy; it just does what you tell it. It’s a very, very good dog, but it’s not a smart dog.

Now, if we select the TOP 1000 * for this, we get all these columns back. Keep in mind, these are all just the letter A and we’re not ordering by any of these, we’re just ordering by order date. We ask for 11GB of memory to sort just the letter A across those columns because SQL Server has no damn idea what’s in there. It’s not paying attention to anything. The storage engine knows. The storage engine is hip to it. It’s like, “It’s the letter A, I don’t have to do anything.” The optimizer is like, “I don’t know, man…” It freaks out; it loses its damn mind.
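A sketch of that escalation against the all-nvarchar flavor of the demo table – column names are stand-ins, and the grants are the approximate numbers from the demo:

SELECT TOP 1000 Nvarchar10Col   FROM dbo.MemoryGrants ORDER BY OrderDate; -- ~88MB grant
SELECT TOP 1000 Nvarchar100Col  FROM dbo.MemoryGrants ORDER BY OrderDate; -- ~200MB
SELECT TOP 1000 Nvarchar1000Col FROM dbo.MemoryGrants ORDER BY OrderDate; -- ~1.3GB
SELECT TOP 1000 Nvarchar4000Col FROM dbo.MemoryGrants ORDER BY OrderDate; -- ~5GB
SELECT TOP 1000 *               FROM dbo.MemoryGrants ORDER BY OrderDate; -- ~11GB, to sort the letter A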

Now this is an example of how SQL Server will ask for memory even though you didn’t ask for a sort – we don’t need a sort to do this. So let’s run this. It’s going to select the TOP 1000 rows, but I’m going to join memory grants to itself and I’m going to join on order date. Now, the important thing about joining on order date here is that SQL Server doesn’t have a join choice really, because order date isn’t indexed, which means it’s not ordered, which means that the likelihood that it’s going to do a merge join is very low. It also means the likelihood that a nested loops join is efficient is also very low. This is unindexed data and SQL doesn’t know what’s really going to happen with it.

So if I run this query joining on order date, I get a whole bunch of stuff back, and in my query plan, I get a hash join. And the hash join is also a memory consuming operator. So the two most common are going to be sort and hash join or hash match aggregate. Both of those are memory consuming operators, and both of those require that the entire result set arrive at them before anything happens. So sort, all the data has to arrive. It wouldn’t make sense for you to start sorting data, get new data and then say, “Wait, start over. Move the twos over there, move the sevens over there.” It wouldn’t make sense; the same thing with hashing. When all that data arrives, SQL Server has to hash it, make sure the hash is unique, watch out for collisions, this, that and the other thing. So it needs memory to do that, and the hash tables happen as much as possible in memory. They can spill, but as much as possible they happen in memory.

Now, the memory grant that I have here is huge. I forget what the number is, but it’s gigantic. It’s 10GB just to do a hash join on order date because I’m dragging all this useless data around. And again, it’s just the letter A. I’m not – I have nothing interesting. There’s nothing going on here. I was too lazy to write a good demo with more data. I just stuck the letter A in every single column.
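The self-join, sketched with the same stand-in names:

-- Order date is unindexed, so it's unordered: merge join is unlikely, nested loops
-- would be inefficient, and the optimizer lands on a memory-consuming hash join
-- sized for every column being dragged along.
SELECT TOP 1000 mg.*
FROM dbo.MemoryGrants AS mg
JOIN dbo.MemoryGrants AS mg2
  ON mg.OrderDate = mg2.OrderDate;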

Now, this is just getting us primed up for stuff. What happens here is it makes indexing very, very challenging. Now, how would you index for this query? Just selecting everything from the post table – we know the post table is not that wide but it’s pretty wide and it’s got that body column in there which is an nvarchar(MAX). It’s also got the tags column in there which is an nvarchar(150) and a title, which is an nvarchar(250). So it’s got some fairly sizable data, and using what we know now about memory grants, how do we index for that? Like how do we add an index for this that SQL Server will use and be happy with?

Well, we could make a key on owner user ID with no includes and we can just hope SQL uses it forever and ever. Just no includes – we could do some includes, we could do all includes. We just don’t know which one is going to make SQL Server happiest. But the optimizer will ask for this index because the optimizer is out of its damn mind. That is one key column, owner user ID, with every other column in the table in the includes. That includes body, that includes title and that includes tags, which is gigantic amounts of data. Can you imagine sticking the body in the includes here? Like what would happen when people start asking a lot of questions? It would be a nightmare.
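The shape of what the optimizer asks for, sketched out – the index name is made up, and please don’t actually create this:

CREATE NONCLUSTERED INDEX ix_optimizer_fever_dream
ON dbo.Posts (OwnerUserId)
INCLUDE (AcceptedAnswerId, Body, ClosedDate, CreationDate, LastActivityDate,
         PostTypeId, Score, Tags, Title /* ...and every other column in the table */);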

So you’re just staring at this and you’re like, “I just chose a crappy clustered index. I should have chosen owner user ID. It’s not unique, but I’ll bite the bullet on the not uniqueness. This will make everyone happy.” Now, the optimizer asks for that stuff because, like me with my demo data, it’s very, very lazy. There’s a tipping point internally in every single query where SQL will change its mind about how to do stuff.

Now, selecting everything from the post table with an index on just owner user ID – there’s a certain point where SQL says, “Oh, I’m going to do a clustered index scan.” And then one more row means, “Oh, well I’m going to do an index seek and a key lookup because I think this is going to be cheaper.” The optimizer doesn’t want to make those choices. It just wants to make one choice, give you your data and be done with it. That’s why that four letter word DTA will ask for the craziest indexes imaginable. That’s the most aggressive index. It’s probably like, “Just index everything, just do it, come on. Column store something, please. Indexed views, I need it. Just give me everything.”

Query optimization has a part of the process called index matching. Now, index matching is a bit like the dating game, where the query starts running and it says, “My perfect index looks like this… This is my Prince Charming.” And then the table is just like, “I don’t have anything that looks remotely like that. Good luck, query. I got this crap… Like, I have a case of Bud, do you want it?” The query, at this point, is like, “Okay, I’ll try my chances elsewhere.”

Now, that’s what a missing index request is; it’s a cry for help on a really bad date. You don’t want to listen to every single missing index request out there, but what you want to do is think of it as, “Okay, SQL is clearly telling me that something is wrong here. I could be doing something better. I could be doing something as a DBA that would make SQL Server happy. I don’t need to be dealing with this. I don’t need SQL Server telling me that I’m missing indexes because I’m a good DBA and I am in fact not missing indexes. You’re lying, SQL Server.”

This, of course, gets harder with joins. So if I select everything from the post table, I get a different index and I get a different join algorithm than if I select everything from the users table – they’re doing the same join and the same amount of data is coming back. But when I need no other information from the post table, it will use that narrow index on the post table, and that’s pretty cool. SQL is like, “Oh, I’m smart. I’ll use a merge join here, everything’s ordered. I can just spin everything together. I’ll do my data zipper and shove everything along.” It’s beautiful the way that works.

Now, this is an annoying set of questions here because they all will influence what SQL Server chooses for indexes and for join algorithms. So rather than sit there and make you ponder about it, we’re just going to say yes. All of this stuff will influence what SQL Server chooses for indexes and for joins. And I think this is really, really annoying. I just can’t get past the fact that we’re in 2017 and everything is in memory and on SSD and SQL Server is still costing stuff and thinking about things like we’re on hardware where the disk drive looks like a record player and the CPU can’t do two things at once. It’s like you can have Minesweeper up and you can play Minesweeper. Or you can have, like, Netscape up and maybe hope a webpage downloads this month. It’s like the worst combination of crap.

There are ways, thankfully, to defer the pain of the clustered index scan, to make SQL reliably choose the indexes that we want it to choose. So, this will never be pain-free. At some point, your users want that data. That application that you spent billions of dollars developing has put that data in the database and, gosh darn it, you want it back. How and when you get that data makes a big difference. So you still want to use narrow nonclustered indexes because you still have a fairly narrow set of predicates. You have a pretty common set of predicates that you use to join and filter on. And if you’re okay with changing code and experimenting with those index definitions, well, we can do some fun stuff.

So this is our query – and I’m going to head back over to Management Studio because – it happened again. It’s going to happen every single time. This is why I hate PowerPoint. I wish – you know what? If anyone from the Management Studio team is listening, put PowerPoint in Management Studio so I never have to switch windows again. I would love you forever for that – or even like a graphic component.

So, these are the indexes that I have on post and users. They have red squiggly lines, because IntelliSense is kind of a dummy right here, and I’ve already created them, so I didn’t make you sit through the index creation process, because I’m a top-notch presenter. So these are our indexes. There’s one on posts. It’s on owner user ID and I stuck the clustered index in there, just so we’re all aware that it’s there, and it includes post type ID. Over on the users table we have creation date and reputation and ID, and those are the three key columns in our nonclustered index. And this is our query.
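A sketch of those two definitions (the index names are made up; the key and include columns are the ones just described):

CREATE NONCLUSTERED INDEX ix_posts_owneruserid
ON dbo.Posts (OwnerUserId, Id)   -- Id is the clustered index key, made explicit here
INCLUDE (PostTypeId);

CREATE NONCLUSTERED INDEX ix_users_creationdate
ON dbo.Users (CreationDate, Reputation, Id);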

So I’m going to put this poll into chat and hope that people click on the thumbs, because that’s my only form of validation at this point. No one’s asking questions, so you clicking on the thumbs is all I’ve got going for me. If I don’t see thumb clicks, I might just jump out the window instead. There’d be a lot of dead air for poor Daniel to deal with.

Daniel Hutmacher: Don’t jump…

Erik Darling: I’m only on the second floor; it won’t hurt that bad. I’m fat; I’ll bounce. Alright, look at those thumbs. Thank god, all you nice people out there. So, I want you to be aware, I am using this hint. This is a new hint for SQL Server 2016. It’s not documented. Don’t use it. Don’t put it in production. Joe Sack says it causes stack dumps sometimes. Please don’t use it. I’m using it here because I just want this query to go twice as fast.

Daniel Hutmacher: Now everyone’s going to use it.

Erik Darling: Well, everyone would use it if they were on 2016. All these people are on 2008. I don’t want to hear about it.

So this is our query. We’re selecting everything from the users table; we’re selecting the ID from the post table. We join users to posts, because that’s what relational data does, it joins. We join on owner user ID in the post table equals ID in the users table; that’s the relationship. And we have these predicates on creation date, reputation and post ID. So we’re able to run this query – make sure I have that on, have that on; we are good to go. We’re rocking and rolling. I’ve been a top-notch presenter today. I got stats time and I/O on, and we can see that we did 25,000 or so reads from the post table, we did about 80,000 reads from users and we spent about four seconds of CPU time and two seconds of total time running this query. So we get two seconds to bring all this data back; what’s going on in there? Did we use our index? I don’t think we used our – we didn’t use our index. We scanned a clustered index on the post table. We have failed miserably, as database administrators, to get SQL Server to choose our lovingly crafted index. It has not used it. We have scanned the clustered index, like a loser. We’re losers at this point. No coffee…
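The query, roughly – the literal values in the predicates are stand-ins:

SELECT u.*, p.Id AS PostId
FROM dbo.Users AS u
JOIN dbo.Posts AS p
  ON p.OwnerUserId = u.Id          -- the relationship
WHERE u.CreationDate >= '20130101'
  AND u.Reputation >= 100
  AND p.Id >= 10000000;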

So again, these are our indexes. We’re not changing them, I just want the definition there for reference. So, what I want to know is will this query be faster or slower? We’re going to hint our index. In our from clause here we have hinted our index. Now Daniel, is my screen really small for you, or are you just blind?
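Same query with the table hint added, as a sketch (ix_users_creationdate is the made-up index name from above):

SELECT u.*, p.Id AS PostId
FROM dbo.Users AS u WITH (INDEX (ix_users_creationdate))   -- force the narrow index
JOIN dbo.Posts AS p
  ON p.OwnerUserId = u.Id
WHERE u.CreationDate >= '20130101'
  AND u.Reputation >= 100
  AND p.Id >= 10000000;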

Daniel Hutmacher: I have two monitors actually…

Erik Darling: If I need to zoom, let me know. For reference, I have no idea how big this looks for everyone else.

Daniel Hutmacher: I think you’re fine.

Erik Darling: Well hopefully everyone else does too. So let me scroll down a little bit and let’s see here; faster – a lot of votes for faster, a couple of votes for much slower; interesting. We got a couple of walking optimizers in this room. Alright, so let’s run this.

That was faster than two seconds, right? What happened? We did about 75,000 less reads on the users table. We did about the same amount of reads on the post table. Post stayed pretty static. We did so many less reads on users … that’s what happened. And that took half a second of CPU time and 125 milliseconds – what gives, SQL Server? Now, what I was talking about before with the hash joins, that’s here too. So there is our hash join, hash match inner join, and that is because in our indexes, owner user ID and ID, they are not the leading columns – or the ID here is not the leading column, so SQL Server kind of has no choice but to do a hash join here. Unordered data, not leading key column, SQL Server is just kind of like, “Hash join.” So keep that in mind, column order in indexes does matter for stuff like that.

So, what we found is hinting the index in the query gets us this query plan. We do a key lookup from users to get all our information out, and we can kind of see why SQL Server is not asking for a missing index here: if we look at the output list, that’s essentially every single column. It’s like, “Well good luck with that, pal. We’ll just do this key lookup and deal with it.” This key lookup costs 64% of the plan cost, and that’s about 171 query bucks. So why didn’t SQL Server choose this plan, which is demonstrably faster? 100 and something milliseconds versus two seconds – why would SQL Server forsake us like this, why? Let’s look at these things back to back. Now, I have about two seconds of grace time to breathe here, so I’ll do that, and then I’ll look at the query plans.

Now, we know that in the messages tab, it’s going to look about the same, 25,000 reads on posts, 80,000 on users for the first query, about two seconds total. Down here, about 5000 reads on the users table, still the same, 25,000 reads on the post table, but 105 milliseconds. Why on god’s green earth would you choose another plan? Well, SQL Server is a cheapskate. It is Scrooge Mc-damn-Duck. When you look at these two queries and you look at the costs, well this query costs 154 query bucks, and this query down here cost 266 query bucks. SQL Server, being the miser that it is, chose the wrong plan. It chose a slower plan because it thought the faster one would be more expensive. But by every single metric that we looked at, the faster plan was cheaper, right. It did less reads, far less CPU, ran in far less time – why would SQL Server cost things like that? Well, it goes back to what I was saying before. All of SQL Server’s costing algorithms are from like the late 90s. They’re not modern. They’re not adjusting them for SSDs or – the thing that costs all your queries still thinks that you’re topped off at like 3GB of RAM on a 32-bit 386 just doing nothing.

This is what the optimizer is costing things based on. It’s not based on modern hardware where that kind of stuff is fast, it’s based on old hardware where like the little arm moving around the disk is slow. That sucks. I mean, I understand why, it’s a good point of reference. It’s been working pretty well for Microsoft; they make like a billion dollars per year on SQL Server licensing. So I get why they’re not messing with it, but at the same time it’s like, “Man, do me a solid here.” We’re like moving into the age of quantum computing and we’re still costing queries like they’re on spinning rust.

Now, like we saw, the cost of the key lookup operator was the majority of the hinted plan’s cost; it was about 64% of it. So that key lookup – SQL Server is obviously costing the key lookup very expensively. Now, what we want to happen now: we want SQL Server to reliably choose our narrow index on users. We need to show all that data from users in the app. We don’t have a way around it. We can’t just go to the developers and say, “Look, we’re not going to show them this stuff.” We can’t do that; we need to show it. We don’t want that key lookup to drag on forever. We don’t want to add a really wide nonclustered index because then it’s going to play the devil with inserts. We don’t want to stick everything in there; it wouldn’t be good.

How do we make that narrow index a safe option? What can we do? What can we do to tell SQL Server it would be really cool if you did this here and then did something else later? No one’s picking table variables. Thank god, I feel like I’ve been talking to you. You’re like, “Temp table, okay that’s not bad.” Indexed view – jeez, you people are gluttons for punishment. Indexed views – don’t you read the internet?

Daniel Hutmacher: I think all the indexed views are BI people, me included.

Erik Darling: Yeah, right. Bunch of banana heads. Alright, let’s go, let’s see what I’d do. Now, I’m going to stop here – and I forget what I was supposed to do here. I’ll go back to the deck; we’ll see what happens. Nothing in the deck – alright, cool. So I like the CTE option; that’s what I’m going to do. I’m going to go with the CTE.

Now, what we can do with a common table expression is kind of tell SQL Server, “We want to do this stuff over here, and then we can do some other stuff later.” And the optimizer will kind of take this as a performance pathway. Now, you could totally do this with a view as well, and you could do this with an inline table-valued function, and we’ll talk about some other things you can do it with. But we’re going to do it with the CTE first, just because it’s really easy to show. Most folks are pretty comfortable with common table expression syntax.

So, what I have up here is a narrow select list. I’m only selecting the ID column, which is the primary key of the users table and the ID of the post table, which is just something I need to show; I don’t actually need that for anything. I’m still joining my tables together the exact same way, I still have the exact same filtering predicates – I still have the 100 there, I still have the post ID … I still have all the same stuff. Now, what I’m doing is – when I’m done filtering data out and just getting the rows that I’m interested in, I join my common table expression back to the users table on that primary key column. So I have a distinct set of user IDs that I can now join back to get all that display level information. And what happens here is things go pretty well.
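Sketched out, the rewrite looks something like this – precheck is the CTE name Erik uses later in the Q&A, and the literal values are still stand-ins:

WITH precheck AS
(
    SELECT u.Id, p.Id AS PostId        -- narrow select list: keys only
    FROM dbo.Users AS u
    JOIN dbo.Posts AS p
      ON p.OwnerUserId = u.Id          -- same join...
    WHERE u.CreationDate >= '20130101' -- ...same filtering predicates
      AND u.Reputation >= 100
      AND p.Id >= 10000000
)
SELECT u2.*, pc.PostId
FROM precheck AS pc
JOIN dbo.Users AS u2                   -- join back on the primary key
  ON u2.Id = pc.Id;                    -- to pick up the display-level columns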

A little bit better on the elapsed time, a little bit less CPU time. I paid a little bit of a price in reads, about 3000 more reads, but I’m okay with that. Like I’m genuinely okay with that because what I get from this is some guarantee that SQL Server can reliably and consistently use my narrower index efficiently, and then I can just rely on that clustered index for that display level stuff way at the very end.

Now, one thing that’s important that I want to bring up here – actually, I’ll wait until we get down to this one. So if I face all three off together – So I have the original, I have the hinted query and I have the rewritten query. I want to talk about a few important things with all of these. Now, first we’re going to skip over the messages, because we know what’s been happening with these all along, with the CPU and the reads. We’re going to go right into the execution plans – let me bump that up a little bit for y’all; getting back into top-notch presenter mode.

First thing I want to bring up is for the first query, that original query. There was nothing technically wrong with it, right. You see that index scan on the users table? All of those predicates were SARGable. SARGable means that the optimizer can take those and apply those efficiently to an index. It’s not doing anything crazy with functions, like, “I need the substring of the third letter of the month times four…” You’re not doing anything weird, right. All those predicates could get pushed down into the index access and rows could be filtered out at that level. The problem is that you still needed to read 5.2 some odd million rows and filter those out in place.

What happens when we use the narrow non clustered index? We only have to read about 283,000 rows. So really, the big, big cost in time savings is not having to read 5.2 million rows – actually, having to read about 5 million rows fewer, or less. Lesser? Fewer? Whatever – you need to do less work on that. The thing that I really like about the CTE method is when we do our joins – when we do our filtering, we do our filtering in place, just like before. Same filtering, we do our join here, we’re not dragging all those output columns along the whole query plan with us. We’re only getting those at the very end. So when we look at the output list of the index seek over here, that’s at the very end of the query. That’s where all the crap is. So we actually save ourselves a little bit of work because when we do the index hinted version, which is this middle one, we do that key lookup first.

This is like the first thing we do: we seek into the users table, we get all the rows we’re interested in, we do a key lookup to get all those rows’ output, and then we have to move all that crap across the wire until we get to the SELECT. And you can see that SQL Server, making a bad judgment on the number and the size of those columns, means we got an excessive memory grant. SQL Server asked for a whole ton of memory and barely used any of it. It asked for, what is that? 1.6GB of memory? It only used about 7MB of it because we were dragging – we got all those columns at the very beginning of the query, and then we had, say it with me, hash match – a memory consuming operator.

In order to get all that stuff hashed, we needed more memory for it because we needed all those rows to fit, or come as close as possible to fitting in memory when that happened. That sucks, it’s like the worst. It’s like, “Why don’t you just agree with me? Why can’t you just be agreeable for once?”

Now, there’s some other stuff that’s kind of interesting that happens here. Now, one thing that I really dig about the way this works is the nested loops operation – it’s an awesome optimization called the nested loops prefetch. That means that every time that thing runs, SQL Server goes and gets extra rows because it knows it’s going to need them; it’s like, “We’re just going to get these now because I’m going to need them later,” and it just starts shoveling coins in the bag; it’s beautiful. So we get that nice optimization from the nested loops as well. So that’s one thing that I really liked about it.

So, any questions on this stuff so far?

Daniel Hutmacher: No, I don’t really see that many questions, so…

Erik Darling: Sweet. [crosstalk] really good or really bad…

Daniel Hutmacher: I think you’re doing really good, judging by the Slack channel.

Erik Darling: Woo-woo, and it’s early in the morning so everyone’s still drunk from last night. I think that’s [crosstalk].

Alright, so there are other ways to take that syntax and make it work. So you could also use the magic of cross apply. Now, Adam Machanic would be extra proud of me for telling you about the cross apply version. You can do nearly the exact same thing. Now, with a cross apply, you just take it and flip it – the CTE is up top, the cross apply is just flipped down to the bottom. So we’re going to take the exact same syntax, put that in a cross apply, and then we’re just going to add one additional predicate to the WHERE clause where we’re joining the ID from the users table up here to the users table in there. And what’s really cool is we get the exact same plan, the exact same reads, almost the exact same CPU time – you’re going to have to forgive the CPU time, my server is doing other stuff, so it’s not just set aside for this demo. I’m compiling important things in the background, Brent; sp_BlitzCache is looking great. The execution plan is exactly the same.
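The cross apply flavor, sketched under the same assumptions:

SELECT u.*, ca.PostId
FROM dbo.Users AS u
CROSS APPLY
(
    SELECT p.Id AS PostId
    FROM dbo.Users AS u2
    JOIN dbo.Posts AS p
      ON p.OwnerUserId = u2.Id
    WHERE u2.CreationDate >= '20130101'
      AND u2.Reputation >= 100
      AND p.Id >= 10000000
      AND u2.Id = u.Id                 -- the one extra correlating predicate
) AS ca;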

So the optimizer is so smart, the optimizer is so wonderful, that it takes this cross apply plan and it gives you the exact same plan as it did the common table expression plan. The optimizer is really smart about this. It’s a product of like seven billion grad students at this point. So it’s really smart and it takes both those queries and it rewrites them logically to give you the same execution plan, because functionally they do the exact same thing. Of course, you could also, if you just needed to check the existence of stuff, take that and stick it in an exists. And what’s really nice is that exists doesn’t give you duplicates. So this is actually a little bit quicker, but you get functionally the same plan where you get the two index seeks. You get one extra hash match aggregate, where it takes that ID column and it’s like, “We’re going to hash this a little bit more for you.”
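And the exists flavor, which also deduplicates for free – again a sketch under the same assumptions:

SELECT u.*
FROM dbo.Users AS u
WHERE u.CreationDate >= '20130101'
  AND u.Reputation >= 100
  AND EXISTS
      (
          SELECT 1
          FROM dbo.Posts AS p
          WHERE p.OwnerUserId = u.Id
            AND p.Id >= 10000000
      );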

We’re going to take these ints, because what’s really nice is when you have an integer column that’s NOT NULL, the hash is considered perfect. For int, tinyint, smallint and bigint NOT NULL columns, the hash algorithm is considered perfect, meaning you will have no collisions, especially when it’s unique. So it’s a really nice thing that happens there.

So, I’m just about done because I didn’t have any crazy encryption or decryption routines. There we go, so how many of you learned something? You can be honest; I’m fine with honesty. Get your thumbs out.

Daniel Hutmacher: While you’re at it, you actually just got a question. “I see sometimes plans are showing memory grant and others are not.” My first guess was that there’s a difference in estimated and actual query plans, but maybe you have a better answer?

Erik Darling: I don’t think you see memory grants in the estimated plans. Let’s go and double check. So this is the actual plan, this one gets a memory grant of about 70MB. If we go and we get an estimated plan – I’m going to say no, but I might be wrong. Yeah, you don’t see memory grants in there. The other thing is, when you run a query, it doesn’t always get a memory grant. It might just use the baseline 1MB of memory. If it just asks for that 1024, SQL’s going to be like, “Whatever.” Also, memory grants – no, I don’t think memory grants are in cached plans. If they are, I’ll be surprised because I look for that stuff all the time.

Daniel Hutmacher: And my countryman [inaudible] says, “No time to ask questions. Just typing this made me not hear about 250,000 words from you, Erik.”

Erik Darling: I know, everyone tells me that I talk really fast, and I agree that I do. But it’s because I am so excited about SQL Server. It’s a measure of my excitement to be presenting this material to you. If you need me to repeat anything or slow down, just yell. I’m not psychic, I don’t know the rate you people hear at. Alright, I’ll keep going…

Daniel Hutmacher: You were wrapping up and everyone tells you to keep going.

Erik Darling: That’s fine – the story of my life. Alright, so I’m going to skip past all these hidden slides, so I don’t have to bore you with junk. Brent has this great piece of advice where he says, “Screen cap all your demos, then either decide if you want to go with the screen caps or go with the demo.” So today I was going through all the screen caps – Sometimes when I present that same thing to clients, I just go through the screens because I don’t want to ship demos around, or it gets all weird and wonky.

So, rewriting queries in that way, sort of like engineering your own optimizer routines, shouldn’t be your first choice. Like I really do think that when you see queries like that, your first stop should be asking the developers, or whoever’s in charge, if they actually need all those columns. Because then if they say yes, you can just start removing them until they complain, which is my go-to gag on that. It’s like when Brent changes the SA password.

Please don’t immediately jump to add very wide indexes. Wide indexes should be an exception to the rule. You shouldn’t just go and have, like, you know, a billion different copies, because you already chose a clustered index, hopefully. I hope you chose a clustered index on your tables. Then, like, if you go and add – if every single nonclustered index on the table is like a key column or two and including every single column, you’re essentially just remaking the clustered index, by every definition except name, over and over again. And you don’t need that many copies of your data. That data takes up space on disk, in memory; you start using all these different indexes, and all of a sudden, SQL Server’s got the same copy of the same data over and over again sitting up in memory. That’s no good; it’s a waste of memory. I always say disk is cheap and memory is cheap, but use it wisely; don’t murder it.

One thing that – I know I didn’t do it here, but please try indexing just some key columns first. See if that makes enough of a difference. Sometimes that key lookup plan is totally fine long term, because you’re not getting that many rows back. So like, if you just index the key columns that you need for queries, you could totally get away with those key lookup plans. I gripe a lot about key lookups and I warn you about them in BlitzCache, but really, they’re not terribly inefficient all the time. If you’re not getting like a bajillion rows back, a key lookup plan is totally legit. If you’re getting like 5000, 10,000, 15,000, 20,000, even like 100,000 rows, I can live with a key lookup plan. Where I start worrying about it is when that key lookup is, all of a sudden, 97% of the plan cost and my query runs for three minutes because I didn’t do a good job of indexing.

If all else fails, though, now you have a way to rewrite queries that will use your narrow nonclustered indexes and still give you all of those awful, awful display level columns back. Now, when you’re doing stuff like this – stuff like we talked about seeing in query plans, stuff we saw happening when we ran other queries and looked at the memory grants – the stuff you need to look for is the sorts and hashes, because those are memory consuming operations. Those are things that all of your rows have to arrive at.

So when you’re doing query and index tuning like this, please be mindful of those operators because they can cause problems. It really helps. In our case, we had the ID column, which was the unique primary key for the table. And so we had a unique column to join everything back up. It really helps if you have one of those. If you don’t, you may need to choose a unique set of columns, or do a DISTINCT or a GROUP BY to make sure that you’re only passing a unique set of IDs out of your common table expression. Because if you don’t, you can get a lot of weird duplicate data and you’ll come crying back to me like, “This query sucks and that data’s wrong and it’s your fault. You told us to do it this way.” Well, I’m trying to tell you to do it the right way.

But, you are all nice brave people, and I assume that you are fine with trying both at home and on work – at work – I don’t know, I guess it depends on what your job is. You could be on work, if you wanted. Anyway, that’s all I have. That was my presentation. I hope you enjoyed it. I hope you learned something.

I finished early, which is great news for all of you because now you can go to the bathroom and get lunch and stuff.

Daniel Hutmacher: So thank you so much. Brent says that I should say thank you for presenting at GroupBy, which goes without saying. Exactly, so I think he’s relying on us two to keep this thing afloat until the next presenter comes in.

Erik Darling: Does anyone want to see a picture that I drew a long time ago when I was really sober? My wife found this in a bag. I don’t know where it came from. It was in a giant zip-lock bag in the kitchen cabinet that no one ever looks in. She was like, “Remember when you drew this?” And I was like, “No.” She just left it in my office and just walked away, and I was like, “Alright, thanks;” sleeping on the couch tonight.

Were there any questions? Anything that came in, anyone want to bug me about stuff? I’m pretty good at SQL jeopardy, so if you just have weird questions about things, you can go and ask those too. Nice wall, Ron; I like your wall.

Daniel Hutmacher: Actually, I learned about – or I whet my appetite on nested loops prefetch, so that’s something I’m going to [crosstalk].

Erik Darling: It’s a pretty neat thing.

Daniel Hutmacher: I think I’ve seen the hint somewhere when looking at a query plan but never really understood what it was. George asks, “What is the best way to get involved with CTE stuff?”

Erik Darling: Start writing them.

Daniel Hutmacher: I think everyone who has a SQL blog has written a post on CTEs, just out on a limb here.

Erik Darling: Yeah, I can show you something funny about CTEs, if you’re interested…

Daniel Hutmacher: Joseph chimes in, “Can you explain the cross apply syntax versus CTE?”

Erik Darling: Yeah, sure. The CTE – let me see here. For the common table expression – you know what, I’ll do it, I’ll put them right next to each other so it’s a lot easier to see. You guys can still see my screen?

Ronald Dameron: No. This is the reason I wanted to join, because my webcam is using the laptop webcam instead of the Logitech, so I’ve got to figure that out.

Erik Darling: Weird [crosstalk] – yeah, that’s cool…

Ronald Dameron: What’s that? Oh, my propeller head? [crosstalk] other one working. I’m sitting in my den. Although I might keep it this way, because that looks pretty…

Erik Darling: It is, you look futuristic.

Ronald Dameron: Just a quick question, Erik: how many screens are you using?

Erik Darling: Two.

Ronald Dameron: Two, okay, I was just wondering how you were switching back between… [crosstalk]

Erik Darling: Something dangerous – I’m going to take the webcam and I’m going to show you. That’s not toilet paper, that’s athletic tape. So I have one monitor going that way and then the one big monitor going that way.

Ronald Dameron: Oh, you have the gigantic monitor. Okay, I don’t have a gigantic monitor.

Erik Darling: Well, you should get one.

Daniel Hutmacher: Is it curved, or is it just the camera?

Erik Darling: Oh, it’s curved, yeah. Curved and ribbed and all sorts of weird stuff. I don’t know. It came in a Weekend Prince package. I don’t know what that means. I don’t get it.

Ronald Dameron: I’m going to see if I can figure out – webcams, there we go. Alright, I’ll mute.

Erik Darling: We got asked to explain the CTE versus cross apply. So the CTE starts up here with WITH. Keep in mind if the query prior to this does not end in a semicolon, then you’ll run into trouble, because WITH requires – so if I wrote SELECT 1 as crap, and then I tried to write a common table expression afterwards, I would get an error because the CTE requires that the query prior to it be terminated. So I can write this and it will work. Let’s get rid of that.

Now, the way the CTE works, you start it with WITH and then you name your CTE. I named mine precheck because I’m a big fan of TSA precheck, because I get to walk by all those people standing sweaty at JFK for 45 minutes and I get to go start drinking early. So I name my table precheck, and then really the select list is just in here. There’s not a whole lot to say about this other than, like, you have to name – like if you have an expression in here – so like if I did one plus one, I would get an error message until I aliased it as obvious, because all of the columns inside a CTE need to be named. They also need to be distinct. So if I also did two plus two as obvious, SQL Server would start getting angry at me because obvious was specified twice.

So all the column names need to be distinct and all the columns need to be named. So if you have an expression in there, it has to be aliased as something. The select in here is pretty normal. It goes between a set of parentheses. And then the statement immediately following a common table expression needs to involve that common table expression. Whether it’s a select, insert, update or delete, the common table expression has to be referenced immediately following that. You can’t do other stuff and then come back to a CTE. It’s not persisted anywhere – it’s not like a temp table, it’s not a physical structure. Think of it just like an inline view.
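Those rules, in one tiny example:

SELECT 1 AS crap;            -- the statement before a CTE has to be terminated

WITH precheck AS
(
    SELECT 1 + 1 AS obvious  -- every expression needs an alias...
 -- , 2 + 2 AS obvious       -- ...and every column name must be distinct (this would error)
)
SELECT obvious               -- the very next statement has to reference the CTE
FROM precheck;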

Then cross apply is almost the same thing. You’ll get the same stuff around aliasing columns or having unique column names. But really, cross apply, it’s an open paren and a closed paren. You have to alias the cross apply here, so I need to call it something. I call all of mine CA for cross apply. And then what you typically want to do is, inside the cross apply, have a WHERE clause that correlates the join columns for the cross apply to the outer table. So here I’m selecting everything from the users table and the post ID from the cross apply, so CA.post ID. And inside the cross apply, the where clause here joins U2, which is the users table, to the table aliased as users up here. That’s my wife’s arm. She’s sick of hearing me talk.

Daniel Hutmacher: Another way to put it is that everything within the cross apply is executed once for every row above. So for every user, the cross apply runs that query with a nested loop join…

Erik Darling: It can be a hash join too now.

Daniel Hutmacher: Can it?

Erik Darling: Yeah, yeah. Let’s see [crosstalk] I can show you.

Daniel Hutmacher: My favorite parallelization trick otherwise…

Erik Darling: I’ve seen it mostly with column store, but it’s happened with other stuff. So if I, let’s see, SELECT TOP 1 * from users – and you have to forgive the syntax on this one where I don’t have SQL Prompt installed, so everything looks like Erland Sommarskog’s blog. Cross apply, SELECT – at least it’s not in Comic Sans – SELECT – let’s see here…

Daniel Hutmacher: I think he writes his entire blog in Times New Roman without formatting.

Erik Darling: He does, he’s nuts. Let’s see here, dbo.users SELECT * from post – what are you complaining about? Oh, that’s right, from post WHERE – I’m going to be slightly fascist about this [mumbles] and let’s see. No, that – actually, you know what, I bet that’s why. I bet the TOP is throwing it off. There we go; hash join. It took a little fiddling, but you can get a hash join out of it. It will happen with cross apply sometimes, not just outer apply. I noticed it when I started using stuff with column store, and it was funny because a long, long time ago I read what I think was one of the formative pieces of literature, Paul White’s articles on SQL Server Central about cross apply. And I remember reading those and he said, “It always optimizes a nested loops join.” Then I saw that and I was using column store for something, and I was like, “Paul White was wrong, ten years ago now.” [crosstalk]

Daniel Hutmacher: The parallelism operators…

Erik Darling: I’m sorry, what was it?

Daniel Hutmacher: Could you check the upper one of the two parallelization operators…

Erik Darling: This one?

Daniel Hutmacher: No, the right one.

Erik Darling: This one?

Daniel Hutmacher: The repartition. Yes – oh, it’s hash. So that’s why then, because it’s usually round robin, and that’s what creates the parallelism. So it creates – it’s like you’re one, you’re two, you’re three and you’re four, and then those 30 individual streams from the bottom…

Erik Darling: It’s funny when you start reading about parallelism, like the different modes that it can operate in. And you’re like, “Why don’t you do that for me?” It’s like round robin, hash, range, demand, some other ones. You’re just like, “Why can’t I just pick which one I want?” I can show you guys something funny that happens with common table expressions. Now, a lot of people will be under this weird impression that CTEs are persisted somewhere, right, but they’re not. Something everyone should be aware of when they start writing CTE syntax – SELECT TOP 1 ID – my Mavis Beacon CD is in the mail. Someday I will have the ability to type. Now, if I select – let me see – CTE.* from CTE, join CTE as CTE1 on the ID – if I run this, and hopefully it will work, I’m going to have scanned the users table twice.

If I just run this, it hits the users table once. If I join it to itself, it re-expands that syntax and does it again. If I join again, CTE as CTE2 on the ID equals CTE2.ID, I will come up with a third scan of the users table. So every time that syntax needs to be joined to something else, it has to re-expand and rerun that query. There’s no way – it’s not like Oracle. Oracle’s really cool. Oracle has a hint where you can materialize a common table expression, which I think is just dandy. You can’t do that, ever – I don’t know, maybe there’s a secret trace flag for it but I just don’t know it.
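The self-join demo, roughly:

WITH CTE AS
(
    SELECT TOP 1 Id
    FROM dbo.Users
)
SELECT CTE.*
FROM CTE
JOIN CTE AS CTE1
  ON CTE1.Id = CTE.Id        -- second scan of the users table: the CTE re-expands
JOIN CTE AS CTE2
  ON CTE2.Id = CTE.Id;       -- and a third scan here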


Erik Darling

Erik Darling started using SQL Server after a dispute with Excel over a vlookup. Since then he's been a developer at a market research company, and a DBA for an eDiscovery shop. He enjoys arguing with the optimizer and contributing to the First Responder Kit.


7 Comments

Terje Hermanseter
July 25, 2017 5:02 am

As a current DBA and a former developer I’ve always had a gut feeling that SELECT * isn’t good for performance. But I could never explain exactly why. I sure hope this session can! This abstract will definitely get a vote from me.

Sneha Mannar
July 25, 2017 8:10 am

As a BI developer, I manage reports with complex queries that span several hundred lines. Hope this session can help understand how to performance tune them.

Aaron N Cutshall
July 25, 2017 8:37 am

So many “rules” I follow are intuitive but hard to prove. Hopefully this is one of those proofs!


Hopefully this is not just Erik saying “don’t do it!”.


