Elegant T-SQL Solutions for Real World Problems

Target Audience:

DBAs and developers who are interested in effective ways to solve problems using T-SQL

Abstract:

T-SQL can help solve many problems you are faced with at your job.
Each problem can have many solutions, but not every solution is simple to write, understand and maintain, and not every solution is good in terms of performance.
In this session, we will look at real world problems and solve them using T-SQL. We won’t settle for just A solution, but show elegant, simple solutions that will yield optimal performance.


Why I Want to Present This Session:

I fell in love with T-SQL 15 years ago, and I haven’t stopped loving it since.
I love to come up with powerful, elegant and effective solutions to problems I encounter in the wild, and I love sharing those as well.

Additional Resources:

http://www.MadeiraData.com/blog

Session Transcript: Introductions

Brent Ozar: Good morning everybody. Now for the next session at GroupBy, Matan Yungman is going to be talking about elegant T-SQL examples, so take it away Matan.

Matan Yungman: Thanks guys. So good morning, good afternoon, good night maybe, everybody, if you are down under or in Asia, and thanks for tuning in. Just to verify, Pinal, can you see my screen well?

Pinal Dave: I can see your screen very well, big large font, so thank you.

Matan Yungman: Great, thank you. So again, thanks for tuning in and welcome to, hopefully, elegant T-SQL solutions for real-world problems. We will see a few problems that we encounter in the field, more common or less common, and we will see how we can use T-SQL to solve them, and again, hopefully, elegantly.

So a few words about me. My name is Matan Yungman, I live in Israel, I work for Madeira Data Solutions, and I’m a data platform MVP. I write on Twitter from time to time, sometimes good things and sometimes rants about the fact that I need to write a trigger or a cursor inside a trigger or something like that, and I co-host SQL Server Radio with my good friend Guy Glantser. It’s a podcast for SQL Server professionals, and we talk a lot about technical stuff and community stuff, and the future of the DBA and so on. We release two shows a month. If you know Hebrew, we also have the Hebrew version, so if you haven’t checked it out, please do.

Problem #1: implementing a queue in SQL Server

And let’s start with the first scenario. The first scenario is maybe one of the less common scenarios in our industry, and it’s about queues. So we have a queue of people in this case, but obviously we can use a queue in computer systems as well. Now, usually SQL Server won’t necessarily be the first choice or the best choice to implement a queue. Usually you may want to use external systems like some messaging system, RabbitMQ, or stuff like that. But sometimes we do want to implement a queue in SQL Server, for all kinds of reasons. So what exactly is a queue? A queue is something that you insert something into – some kind of message, or some kind of work that a worker needs to perform – and you enqueue the message or the work. Then, let’s say you have some kind of system for managing small tasks, and you have external parallel workers, each of which can perform a task: you enqueue the work, and a worker dequeues the work, performs the work, and then moves on.

So how can we implement that in SQL Server? We’ll go to Management Studio and use a database called test, and we’ll create a table called FifoQueue, which will have an ID with identity and a message with a type of varchar(max). And we’ll create a clustered index and a procedure – this is the queue push, or enqueue, as we call it in the slide – and we just insert the message into the queue. Let’s just verify we have the clustered index and the stored procedure, and we’ll truncate the table and insert ten thousand messages, ten thousand very important messages, into the queue.

And we’ll just let it finish. Okay, and now let’s start with the first, naïve version. The way to perform a queue pop, or a dequeue, is to delete the message so that no other worker can take this message and perform this work, because only one worker performs a given task at a time. In one atomic operation, we will delete the message and, using the output clause, get this message back to ourselves and work on the task. And since we have a FIFO queue – first in, first out – we will have to take the first ID, the oldest ID that is in the queue, in order, again, to behave as a FIFO queue. So this is the first version of the procedure.
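For reference, here is a minimal sketch of the setup and the naïve dequeue as described above. The object and column names follow the talk, but the exact definitions are assumptions, not Matan’s actual demo script:

-- Demo setup: the queue table with an identity ID and a clustered index on it.
CREATE TABLE dbo.FifoQueue
(
    Id      INT IDENTITY(1,1) NOT NULL,
    Message VARCHAR(MAX) NOT NULL
);

CREATE UNIQUE CLUSTERED INDEX IX_FifoQueue_Id ON dbo.FifoQueue (Id);
GO

-- Enqueue (queue push): just insert the message.
CREATE PROCEDURE dbo.QueuePush @Message VARCHAR(MAX)
AS
    INSERT INTO dbo.FifoQueue (Message) VALUES (@Message);
GO

-- Naive dequeue (version 1): delete the oldest row and return it
-- via the OUTPUT clause, all in one atomic statement.
CREATE PROCEDURE dbo.QueuePop
AS
    DELETE FROM dbo.FifoQueue
    OUTPUT deleted.Id, deleted.Message
    WHERE Id = (SELECT TOP (1) Id FROM dbo.FifoQueue ORDER BY Id);
GO

Note the shape of this DELETE: the subquery produces one access to the clustered index and the DELETE itself produces another, which is exactly the two-resource pattern that causes the deadlocks shown next.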

Load testing the queue, and…deadlocks.

And now we will go to SQLQueryStress and execute the procedure from 200 threads, and each thread will perform ten iterations of this procedure. So a total of 2000 executions of this procedure, and we will hit go.

And this procedure starts to run – let me give it a few more seconds – and we can see over here, in the total exceptions, that we have many, many, many exceptions. If we click here, we can see that we get a deadlock for pretty much every iteration of this procedure. And if we go to Management Studio and execute a query – this is a very long query, but the essence of it is to go to sys.dm_os_waiting_tasks and see which session is blocked and which session blocks it – we can see that we have tons and tons of sessions that are currently waiting.

So this is a problem. Obviously, we can see that there is a problem, and if we close this, we can see that we’ve been running for about a minute, we had 241 iterations that completed, and we had 936 iterations that failed. So something is terribly wrong. So let’s stop this and see what can possibly go wrong with this elegant solution.

So if we look at the estimated execution plan of this procedure and go down here, we can see that we have to go twice to the FifoQueue table. We have a clustered index scan over here and a clustered index seek over here. So essentially, these are two resources that can be locked – we go to the same index, but to two portions of this index – and since we have two sessions, each one waiting for a portion of the clustered index to be released by the other session, we’re in a deadlock.

Elegant solution #1: fast ordered deletes

So how can we solve that? For this, we have a trick that is called fast ordered delete, and let’s see it. We create a table called ToDelete – we might already have this table – and we’ll insert rows into this table using a cross join. Let’s just truncate the table first – truncate table ToDelete – and we inserted 100,000 rows there. And if we perform the delete, we see the same thing that we saw just now: we have a clustered index scan and a clustered index seek, and this is not good.

But if we create a view that will use select top 100 ID from ToDelete order by ID – and again, this is an ordered delete – so we create a view and then we delete from the view, we have only a clustered index scan, so we go to the table only once. Why are they different? Could Microsoft implement the same optimization for the case where we don’t have a view? I assume so. I don’t really know why there’s a difference, but there is a difference. So we can use this method, and we can also use a CTE to do the same thing: we create a CTE that performs the ordered select, and then we delete from the CTE.
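A minimal sketch of both variants of the fast ordered delete trick, assuming the ToDelete table described above has an Id column with a clustered index:

-- View variant: the TOP ... ORDER BY lives inside the view,
-- and deleting through the view touches the clustered index only once.
CREATE VIEW dbo.ToDeleteTop100
AS
    SELECT TOP (100) Id
    FROM dbo.ToDelete
    ORDER BY Id;
GO

DELETE FROM dbo.ToDeleteTop100;   -- single clustered index scan
GO

-- CTE variant: the same idea without a persistent object.
WITH OldestRows AS
(
    SELECT TOP (100) Id
    FROM dbo.ToDelete
    ORDER BY Id
)
DELETE FROM OldestRows;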

So we can use this trick in our queue. We’ll close this, comment this out, and implement the second version. Actually, that was the first version – I’m sorry about that, so this is a spoiler. The second version was to use the hints rowlock and readpast. What rowlock obviously means is that we will lock only the row that we read, that we delete. And what readpast means is that if we read from the table and see a resource that is locked, we will skip it and go to the next one that is not locked. But this hint transforms the solution from a purely FIFO queue into one that is almost FIFO: there is a chance that message one was inserted before message two, and message two will go out of the queue before message one. So if you need a queue that is purely FIFO, you drop the readpast and you will get it. So let’s alter the procedure and execute the procedure again.

Questions about fast ordered deletes

Pinal Dave: There is one question I think we can definitely try to answer while we are here. The question is that because there is an ID, it’s in the order by, and the ID is also in a select in a subquery, and we are doing top, the question is: why are we not using max or some kind of aggregate function, so it does not have to scan the entire table? If that is covered later on, then we don’t have to answer now, but yes, that’s the question.

Matan Yungman: Can you repeat it please just to verify I understood it?

Pinal Dave: Yes, because right now we are just taking top one ID, so why are we not using something like max or min, instead of ordering by that particular ID and just finding the top one?

Matan Yungman: That’s a good question. Maybe you can try that, but obviously, in a real system, you actually get the message, and not just the ID. So we don’t take only the ID. The ID is here in order to help us order the messages. We actually get the message in order to work on it – the message can be, if this is a messaging system, the message itself; if this is a task scheduling system, the task, something like that. So we don’t work only with the ID; if we worked with only the ID, things would be simpler, obviously.

Pinal Dave: [cross talk]

Load testing the fast ordered deletes

Matan Yungman: Great, and sorry about the spoiler by the way, but let’s execute SQLQueryStress again with the second version. And now we can see that we start to run and the iterations completed is much higher, and the total exceptions – we still get exceptions, but much fewer than we had earlier. So we’re in a better state, but things can be better, and again, this is because we still go to the table twice. Let’s see the execution plan. We still have a clustered index scan and a clustered index seek – again, because we don’t have the fast ordered delete trick, we go to the table twice, once for a scan and once for a seek, and things are not as good as they can be.

So now we will use the spoiler version. In this version, we have a CTE that performs the select top one message from the FifoQueue, using the rowlock and the readpast – and again, if you need a purely FIFO queue, you drop the readpast – order by ID, and then you delete from the CTE and use the output clause in order to get the message. We will execute the procedure again, and while it runs, let’s see the execution plan: we can see that we have only one clustered index scan and a clustered index delete. We go to SQLQueryStress, and we can see, well, we did have one exception – let’s see what it is. We did have one deadlock, not sure why, but obviously things are much better, and we can see that we finished taking things out of the queue very, very quickly. So this is a very nice trick: if you need to implement some kind of ordered delete, you can use the fast ordered delete trick, and if you need to implement a queue, definitely check it out. Any more questions up until now, Pinal?
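A sketch of what this final version looks like, combining the fast ordered delete with the locking hints (again, exact names are assumptions):

-- Final dequeue: a CTE with TOP (1), ROWLOCK and READPAST,
-- deleted through the CTE with OUTPUT so the table is touched only once.
-- Drop READPAST if you need strictly-FIFO behavior.
ALTER PROCEDURE dbo.QueuePop
AS
    WITH NextMessage AS
    (
        SELECT TOP (1) Id, Message
        FROM dbo.FifoQueue WITH (ROWLOCK, READPAST)
        ORDER BY Id
    )
    DELETE FROM NextMessage
    OUTPUT deleted.Id, deleted.Message;
GO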

Pinal Dave: Good right now – that was the one question and you answered it very well, so we can go to the next trick. Definitely a nifty trick. One thing which comes to my mind right now is: this is a fantastic trick, but I see this a lot of the time in industry – people just use a temporary table to achieve the thing which you have just done, and that’s way more complicated. It’s not consistent, and sometimes it does not even match what we wanted to delete, because they cannot come up with this kind of solution, they cannot write this kind of query, and they end up first going to a temp table where they write the IDs, and then they use the temp table with a select in an [inaudible 0:16:43.6], not in, and delete the data. I see people doing it all the time. Such a big thing, instead of just the nifty four lines which you did.

Matan Yungman: Cool. So hopefully elegant.

Pinal Dave: It is, it is.

Problem #2: ID generation

Matan Yungman: Yes, great. So this is number one, and now we will talk about ID generation. We have all kinds of situations where we need to generate an ID and then return it to the client – if you have a single-insert procedure that inserts a row, we return the ID to the client in order for the client or the calling procedure to insert this ID into other tables, and so on. And we have all kinds of scenarios of single inserts and multiple inserts, and we have all kinds of ways to do that in SQL Server, so let’s see those.

Okay, so the first way is obviously the best known, I guess, and it’s called identity. We will create a table called TableWithIdentity, with an ID int identity that starts with one, and the increment is one. And we have a procedure that inserts into the table: the procedure inserts the value of something into the SomeColumn column and returns the inserted value of the identity column using select @@IDENTITY. And we will execute the stored procedure, and we can see that we get one, two, three, four, five, six, seven, and so on.

Now actually, there’s a problem with this solution, and let’s see what the problem is. Let’s say we create another table with identity, and when we insert a row into the first table, a row is inserted into the second table. So if we execute the stored procedure, we actually get the values that were inserted into the second table. select @@IDENTITY doesn’t return the last identity value of the table you actually inserted into; it returns the last value that was inserted into any identity column inside this session. So this is kind of weird – I’m not sure why someone would want to get the value of the second table rather than the first table, which he actually inserted into, but maybe there’s some kind of scenario for that. In any case, we will fix the problem and use the SCOPE_IDENTITY function in order to get the ID that we’ve actually inserted. And when we execute the stored procedure again, we can see that we get the right values from the table we actually meant to insert into. So this is identity.
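A sketch of the pitfall being described. The talk doesn’t spell out how the second insert happens, so the trigger here is an assumption used to reproduce the classic cross-scope behavior; all object names are illustrative:

-- Two tables with identity columns.
CREATE TABLE dbo.TableWithIdentity (Id INT IDENTITY(1,1),    SomeColumn VARCHAR(50));
CREATE TABLE dbo.AuditOfInserts    (Id INT IDENTITY(1000,1), InsertedAt DATETIME DEFAULT GETDATE());
GO

-- Assumed mechanism: a trigger on the first table inserts into the second.
CREATE TRIGGER trg_TableWithIdentity ON dbo.TableWithIdentity AFTER INSERT
AS
    INSERT INTO dbo.AuditOfInserts DEFAULT VALUES;
GO

INSERT INTO dbo.TableWithIdentity (SomeColumn) VALUES ('something');

SELECT @@IDENTITY       AS LastIdentityAnyScope,   -- returns the audit table's value (1000...)
       SCOPE_IDENTITY() AS LastIdentityThisScope;  -- returns TableWithIdentity's value
GO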

Elegant solution #2: SEQUENCE

Now, for the relatively new kid – five years old, it was introduced in SQL Server 2012 – and it’s called sequence. Sequence has all kinds of advantages over identity, because it’s not tied to a specific table. It’s an object external to the table, and we can use a sequence in order to insert data into many tables if we want them to share the sequence; we can also change the data type of the sequence more easily than changing an identity column inside a table, and so on, and so on.

So we’ll see how we use this. We’ll drop the table and the sequence that we have, and creating a sequence is very easy. We create a sequence that has a name, a data type, and in this case, starts with one, increments by one, and has a cache of one thousand. So what does cache mean? Generated values for identity and sequence have to be written to disk, and this can take some time. So if you work with cache, SQL Server will actually allocate a range of one thousand IDs and then just hand those out from memory to sessions, and this can help for systems where you have many, many inserts into a table and you want to maximize performance; you can use the cache in order to avoid this performance problem. Again, this is a problem only in relatively high-end systems, but it can be a problem. The downside is for systems where you strictly have to have consecutive IDs and you can’t have gaps in the IDs: in this case, we can have a gap of up to one thousand. If you can’t have this, then you can’t use the cache.
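A minimal sketch of the sequence definition described here (the sequence name is an assumption):

CREATE SEQUENCE dbo.MySequence
    AS INT
    START WITH 1
    INCREMENT BY 1
    CACHE 1000;   -- pre-allocates 1000 values in memory;
                  -- use NO CACHE if you cannot tolerate gaps after a restart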

And by the way, there was a bug in SQL Server 2012 where people who used identity started to see gaps in identity columns, and this is because Microsoft implemented a cache with a default of one thousand for identity as well. Then they fixed it, and now in SQL Server 2017 you have cache for identity as well.

Concurrency problems with IDENT_CURRENT

Pinal Dave: So that’s very good, Matan. I’d say in 2008 also, the bug which you mentioned was there. It popped up, and I think a lot of people went crazy in 2008 itself because they never thought this would happen, and I think later on, in some service pack, it was fixed at the time. And while I was going through the GroupBy chat on Slack, I think another [inaudible 0:23:32.0], [inaudible 0:23:34.4], also brought up another very interesting function: ident_current. So…

Matan Yungman: Yes, so ident_current is the current identity of the table. What he means is we can use ident_current in order to return the ID that was inserted, and this can be a problem. Let’s say we have many, many parallel sessions that insert data into the table – say you have one hundred sessions inserting data all at once. If you use ident_current, you will not necessarily get the value that you’ve inserted, because from the time you insert until the time you execute the select, one hundred other IDs could have been inserted. So this can be problematic.

Pinal Dave: Definitely, that can be a problem, but the people who need the latest ID can use it. So it has its place, but scope identity does the task which @@IDENTITY definitely misses. And I think there is one last question before we continue. It’s in the question and answer: another option to get the identity is to use the output clause.

Why people want to use @@IDENTITY

Matan Yungman: Right. Perfectly correct, that’s another option to use. By the way Pinal, can you think of a reason why someone would want to use @@IDENTITY rather than output or scope identity?

Pinal Dave: There is really no reason for using it, but I think a lot of people use it, and when you ask them why, they are not familiar with the other functions. They come from different backgrounds – some come from a different relational database background – and they just come across it. @@IDENTITY is so popular, I think they just think it’s the same thing and use it. Personally, I stopped using it; I either use ident_current or another one, but I think it all just comes down to a little education and little nuggets. Not everybody knows this elegant and most important thing which you just showed. It’s just beautiful. I mean, that’s how we should be doing it.

Matan Yungman: Yes, great. For some reason, the first function that people learn about identity is @@IDENTITY rather than scope identity. I wonder why; it was the same case for me.

Pinal Dave: Yes, and I think I would put a little bit of blame on the early books which we read – in my career, it’s not a bad thing. It was available there, I read that book, I think I got it from there, and I never moved on. I remember the first thing I read, and I never learned the second, better alternate option, so I think that is the case. Anyway sir, over to you.

Back to using SEQUENCEs

Matan Yungman: Okay, so now we will work with the sequence, and what we will do is use a default value for the ID column: instead of using identity, the default value will be the next value for the sequence we’ve just created. So again, we will create a table and drop the procedure, and let’s do that again. I guess we haven’t created the sequence, so let’s create it. And we will drop and create the table again, and drop and create the procedure again.

And again, we insert data into the table: we declare the ID as next value for the sequence, then we insert data into the table, into both columns, the ID and the SomeColumn column, and then we just select the ID. Obviously, we didn’t have to insert data into the ID column, because this ID column has a default value. So we can use whichever we want.

We can either use a default value at the column level, or grab the next value into a variable and insert it into the table. And we execute the stored procedure, and we can see that we have the value of one, and two, and three, and so on. So this is a sequence.
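A sketch of both usage patterns just described, continuing with the assumed names from above:

-- The table's ID column gets its value from the sequence by default.
CREATE TABLE dbo.TableWithSequence
(
    Id         INT NOT NULL
        CONSTRAINT DF_TableWithSequence_Id DEFAULT (NEXT VALUE FOR dbo.MySequence),
    SomeColumn VARCHAR(50) NOT NULL
);
GO

CREATE PROCEDURE dbo.InsertWithSequence @SomeColumn VARCHAR(50)
AS
    -- Pattern 1: grab the next value into a variable, then insert it explicitly.
    DECLARE @Id INT = NEXT VALUE FOR dbo.MySequence;

    INSERT INTO dbo.TableWithSequence (Id, SomeColumn)
    VALUES (@Id, @SomeColumn);

    SELECT @Id AS InsertedId;

    -- Pattern 2 would simply omit the Id column and let the
    -- default constraint generate the value.
GO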

Implementing SEQUENCEs before SQL Server 2012

Now, just a little trick. Let’s say you work with a version prior to SQL Server 2012 and you want to implement this sequence behavior without a sequence, because you have a few tables that you want to share the sequence value, and you obviously can’t do that in those versions because you only have identity.

So what you can do is create a table – a sequence without a sequence – with an identity that starts with one and increments by one. And since identity values aren’t rolled back with the transaction, what you can do is insert values into the table inside a transaction, then roll back, and then select scope identity, and you get the value that you’ve just inserted. That way you get a sequence without a sequence, and you keep the table empty.
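A sketch of this trick (names assumed). It relies on the fact that identity value allocation survives a rollback, so the table stays empty while still handing out ever-increasing numbers:

CREATE TABLE dbo.SequenceWithoutSequence
(
    Id     INT IDENTITY(1,1) NOT NULL,
    Filler BIT NOT NULL
);
GO

CREATE PROCEDURE dbo.GetNextId @NextId INT OUTPUT
AS
BEGIN
    BEGIN TRANSACTION;
        INSERT INTO dbo.SequenceWithoutSequence (Filler) VALUES (0);
    ROLLBACK TRANSACTION;          -- the row goes away, the identity value does not

    SET @NextId = SCOPE_IDENTITY(); -- still returns the value generated above
END;
GO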

And this is only one of the cool tricks I’ve learned from Itzik Ben-Gan, so it’s really worth following Itzik. So this is a sequence without a sequence. I haven’t checked it in production – I can think of a few problems with this solution in a very highly concurrent system – but it’s definitely worth checking if you need this kind of solution.

Inserting and updating multiple rows

Now, up until now we’ve only inserted single rows; now let’s go to inserting multiple rows. I will create a procedure called InsertMultiple, and we will insert three rows into the table with the sequence using the values clause: the first row will have something, the second one something else, and another thing, and we will use the output clause in order to get the IDs that we’ve just inserted.

So we will drop and create the procedure, and execute it. We can see that we have IDs six, seven, and eight, and now we execute it again – something is wrong with ZoomIt – but when we execute it again, we can see that we have the next rows, and so on. So this is a way to insert multiple rows very quickly and get the IDs back to the client very quickly, in order for the client to do whatever it needs to do with them.
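A sketch of the multi-row insert (assuming the TableWithSequence definition from before, where the Id column has the sequence as its default):

-- OUTPUT returns all of the generated IDs in one round trip.
CREATE PROCEDURE dbo.InsertMultiple
AS
    INSERT INTO dbo.TableWithSequence (SomeColumn)
    OUTPUT inserted.Id, inserted.SomeColumn
    VALUES ('something'), ('something else'), ('another thing');
GO

EXEC dbo.InsertMultiple;   -- returns the three newly generated IDs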

Now, what about inserts and updates? We want to support both inserts and updates, and we have a few ways to do this. One way – let’s drop the procedure and create it – is to create a temp table that will have the IDs. Excuse me, I forgot to say that we have an external table called InsertedValues, and we will use this InsertedValues table in order to update or insert rows into the target table. So, in order to support both insert and update, we will have an update that gets the updated IDs into the temp table, and then we will have an insert that, using the output clause, gets the inserted IDs into the temp table – and we insert only values that aren’t already inside our target table, obviously. And then we select everything from our temporary table, which will hold all of the affected IDs.

So we create the stored procedure, and we create the table of inserted values and insert two rows into it. And now we execute this procedure, which supports both updates and inserts, and we can see that we got the IDs of one and one thousand. We execute again – this is the update stage – and again, we see that we get IDs one and one thousand, and this is because we inserted ID one here and ID one thousand here. Now, obviously, this is one way to do it, but there are other ways, and the natural way to do it is to use the merge statement, which was added in SQL Server 2008. So we will use the merge statement: the merge uses a select from the InsertedValues table as the source, and we have some kind of a join between the source and the target – the table with the sequence is the target – and when matched, update, and when not matched, insert. So in this case, in one statement, we get the option to have both inserts and updates, and we also have the $action keyword that will show us whether we updated or inserted each specific row.

So we create the stored procedure, we drop the temp table and insert some values, and execute the stored procedure, and we can see that for the first execution, the first row – one thousand one – was an insert, and the second and third ones were updates. And when we execute it again, we can see that everything is an update. So this procedure supports both inserts and updates using the merge statement.
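A sketch of the MERGE-based upsert being described. The source table dbo.InsertedValues and its columns are assumptions based on the narration:

-- $action reports, for each output row, whether it came from an INSERT or an UPDATE.
CREATE PROCEDURE dbo.UpsertWithMerge
AS
    MERGE dbo.TableWithSequence AS Target
    USING (SELECT Id, SomeColumn FROM dbo.InsertedValues) AS Source
        ON Target.Id = Source.Id
    WHEN MATCHED THEN
        UPDATE SET SomeColumn = Source.SomeColumn
    WHEN NOT MATCHED THEN
        INSERT (Id, SomeColumn) VALUES (Source.Id, Source.SomeColumn)
    OUTPUT $action AS RowAction, inserted.Id;
GO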

Now, another scenario: let’s say that we want to support both inserts and updates, but if an ID already exists in the table, we don’t want to update it. We want to return this ID – we don’t want to return any error – but we don’t want to perform the update. So we will declare a dummy variable and use the merge, but in the when matched clause, we will just update the dummy variable. We won’t update anything in the table; we just have the dummy variable be updated from zero to one, and obviously on the insert side, new values will be inserted into the table.
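A sketch of this “insert if missing, otherwise just return the row” variant (same assumed tables as above):

CREATE PROCEDURE dbo.InsertIfMissing
AS
    DECLARE @Dummy BIT = 0;

    MERGE dbo.TableWithSequence AS Target
    USING (SELECT Id, SomeColumn FROM dbo.InsertedValues) AS Source
        ON Target.Id = Source.Id
    WHEN MATCHED THEN
        UPDATE SET @Dummy = 1          -- only the variable changes; no table column is modified
    WHEN NOT MATCHED THEN
        INSERT (Id, SomeColumn) VALUES (Source.Id, Source.SomeColumn)
    OUTPUT $action AS RowAction, inserted.Id;
GO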

So again, we will generate some values and execute the procedure. And we can see that we had one insert for the first row and two updates, and now we have only updates. So this is a cool trick. For some time, it wasn’t so cool, because for some reason this pattern – and I blogged about it once, the post was called The Weird Merge that Killed the Cluster – generated a stack dump in SQL Server, and when you have a stack dump in an Always On availability group, you have a failover. So we got into a very weird situation because of that, but Microsoft fixed it, I believe, and you can feel free to use it. But again, check that you don’t have a stack dump after you use it. So those were the inserted IDs. Any questions, please?

Using Service Broker to solve it instead

Pinal Dave: I think right now it’s all fine, so it’s good. A lot of good conversation is going on in Slack right now, which we can talk about later on. But a lot of people are saying it’s interesting to see what you are demonstrating, and along with the queuing operation which you demonstrated, we can also use Service Broker and other options too. [crosstalk] And this is another option if you do not want to go and use those fancy tools – there are always individuals who like to write things themselves, and when [inaudible 0:38:05.7], I think everybody in their organization cares, so it totally makes sense.

Matan Yungman: And I should have mentioned, obviously, if you work with SQL Server, Service Broker is the natural way to do it. The problem with that – I think Microsoft did an amazing engineering job building Service Broker, and you can get to amazing places with it, but on the other hand, they created a new language, and when you start working with Service Broker, you have to learn all those new terms, and people don’t really like that. So [crosstalk].

Pinal Dave: I think it’s like, you learn one language, you don’t want to learn another thing. Teach me once, I want to use it multiple times, and I totally get what you are saying. Same with me – when I was learning Service Broker, I had the same feeling. So yes. One more question just from my side – sorry about that – yes, continue.

Matan Yungman: I just wanted to say that there’s another very interesting implementation I haven’t shown. There is a guy called Thomas Kejser – unfortunately, he doesn’t work with SQL Server anymore, but a really amazing guy who used to work with the top systems in the world – and he had an implementation of a queue that is only updated, instead of inserted into and deleted from, so it’s not shrinking and growing. It’s a queue with a constant size, and a row is only updated in order to mark that I’ve taken the message – the message, or the row, is marked as deleted – and you implement that using two sequences: a sequence for insert and a sequence for delete. I actually have this demo, but it doesn’t show any performance improvement on top of the one that I’ve shown, so I didn’t show it, but I can share the code and it’s very nice. So Pinal, go ahead and ask, and I will turn on the light in the meanwhile, because I see that it’s really dark over here.

Pinal Dave: Absolutely. I think one of the questions which I ask everybody when people ask me about the merge statement is: have you really used this in a production system? Except for the first time, when you’re learning it – have you ever used it a second or third time in a production system, or have you ever seen people using it more than once, beyond the learning curve?

Matan Yungman: So yes, the merge statement – I actually used to really hate it, because first of all, the syntax is a little weird. You get used to it, but the syntax is a little weird, and if you work with a highly concurrent system, it has all kinds of problems. It’s not really atomic, and you can get into situations with all kinds of deadlock and concurrency issues. But for systems that are not highly concurrent, and mainly for ETL scenarios or scenarios like this one – this scenario is based on a system I built for a client – the merge statement came in handy. So since then I really like it, but you have to know when to use it. For ETL systems it can be good; if you have a highly concurrent OLTP system, it can be tricky, it can be problematic, so you really have to check it.

Pinal Dave: Alright, fantastic. I think that was the only question so far. Slack is loving this conversation very much – they’re talking about RabbitMQ and there is a poll there. So everybody who is watching live, you should go and participate in the poll, and the rest of you, just join the Slack and talk to us. Continue with the presentation, sir.

Matan Yungman: Cool. What is the poll? What is the question in the poll?

Pinal Dave: The question is, if you have implemented a queue, which solution did you use? And options are Service Broker, RabbitMQ, SQL Server tables, yes with something else, and no. And I think no is leading.

Matan Yungman: Okay, interesting. So, actually, I use T-SQL, but maybe in my next talk I will implement it with something else, with a queuing system.

Pinal Dave: And Brent says that he left out the option of duct tape, so if he had put that one in, I think everybody would have selected it. Duct tape – like one of my bosses used to say, there is no more permanent solution than a temporary solution. So you put something in for a while, and it’s going to be there forever. Anyway, sir, over to you.

Problem #3: dates and times

Matan Yungman: Great. So now let’s talk about – let’s see what we have in this slide. Date and time: some issues and tricks with date and time, and mistakes most of us make when we start, and then we understand we made a mistake and try to fix it. For some reason, it always goes to the dynamic search, so maybe we’ll get to that. So we use a database called Sale4U, we drop clean buffers and set statistics I/O on, and let’s say we have a table called Sales.Orders and we want to get all of the orders from July 2016, and we include the actual execution plan. And we can see that we have a table scan, and over on the messages tab we can see that we have 92,000 logical reads. So let’s see what we’ve done wrong. The problem is that if you wrap a column with a function, even if this column has an index, SQL Server can’t use that index.

So let’s see what we can do. The solution is to use between, and we will say DateAndTime between the start of July 2016 and the start of August 2016. And when we do that, we can see that we have an index seek, and over on the messages tab, we can see that we have 3,000 logical reads – so much better. The only issue is that it’s not entirely correct, because the between clause includes both edges, both of the sides, and what we need to do is use, even better, just this: select from Sales.Orders where DateAndTime is greater than or equal to the start of July and less than the start of August. And obviously, when we do that, we have the index seek and we have 3,000 logical reads. And for some reason I already have this index here, so we will move on.
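A sketch of the two shapes being contrasted. The Sales.Orders table and DateAndTime column follow the demo; the exact schema and the particular function wrapped around the column are assumptions:

-- Non-SARGable: wrapping the indexed column in functions forces a scan.
SELECT *
FROM Sales.Orders
WHERE YEAR(DateAndTime) = 2016 AND MONTH(DateAndTime) = 7;

-- SARGable: a half-open range with no function on the column allows an index seek,
-- and unlike BETWEEN it doesn't accidentally include the upper edge.
SELECT *
FROM Sales.Orders
WHERE DateAndTime >= '20160701'
  AND DateAndTime <  '20160801';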

Now, mistake number two, as we said, is wrapping a column with a function in the where clause. Let’s say you want to get all of the sales from the last – in this case – 520 days. I think most of us, when we start writing T-SQL, reach for the datediff function, because as humans, this is more natural: we say the difference in days between today and this order is at most 520, and this gets us all of the orders from the last 520 days.

And when we do that, again, we can see that we have a table scan, and over in the messages tab, we can see those 92,000 logical reads again. So what can we do? Again, we have an index on the DateAndTime column, so why can’t SQL Server use it?

Elegant solution #3: indexing for datediff

So the way to solve it is to use some kind of opposite function: instead of using datediff, we will use the opposite function, dateadd in this case, and we will take the DateAndTime column out of the function. We will say where DateAndTime is greater than or equal to dateadd at the day level – meaning we will work in days – of today minus 520 days; we take today and subtract 520 days from it, and this gives us the solution. So we get the rows that we need, and in the execution plan we can see that we have an index seek on the Orders table, on the DateAndTime column of our index, instead of a table scan, and on the messages tab, we can see that we have only 2,000 page reads. So again, much, much better.
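A sketch of the rewrite: the arithmetic moves to the constant side of the comparison, so the column stays bare and the index stays usable (same assumed schema as above):

-- Non-SARGable: the column is inside the function.
SELECT *
FROM Sales.Orders
WHERE DATEDIFF(DAY, DateAndTime, GETDATE()) <= 520;

-- SARGable: the function is applied to GETDATE() instead, so the
-- predicate can seek on the DateAndTime index.
SELECT *
FROM Sales.Orders
WHERE DateAndTime >= DATEADD(DAY, -520, GETDATE());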

Moving on, how can we make it even better? We can add the CustomerID column to the index that we already have. Since we query for the CustomerID column and it doesn’t exist inside the index, we can include it in the index. And when we do that and execute the query again, we have only 16 page reads and only a single index seek. So again, we have a few stages. First of all, write your queries correctly – don’t wrap columns with functions. Have proper indexes, and for those indexes, have included columns as much as possible, in order to prevent what we call the key lookup – or, in the past, the bookmark lookup – the jump from the non-clustered index into the clustered index or the table in order to get the values of the rest of the columns in your query.

And now let’s drop this index. Now, the only problem is – as I ask here – is it really the same between datediff and dateadd? I think most of the more experienced guys in the crowd know this pattern and have used it in order to transform datediff into dateadd. This is a relatively common scenario, but the problem is that those functions don’t work exactly the same, and let me prove that.

Indexing for dateadd

So I will use the dateadd method in order to get the values, and using the except clause – which gives only the rows that exist in the first data set and don’t exist in the bottom data set – I will get the values returned by the first query and not by the second query. And we can see that there are 155 rows that we get from the dateadd solution that we don’t get from the datediff solution. And this is because they work differently: dateadd takes the exact time of now and goes 520 days back, while the datediff function just compares days. It treats a day as one unit, and in some situations, rows that are close to midnight or something like that will get left behind. Making a long story short, we can get values that we didn’t expect, or didn’t get, when we used the datediff solution.

In most cases, it won’t matter. But if you work in a situation where it does matter, you have all kinds of solutions. Guy – Guy is the CEO of Madeira – and I both happened to write a post about datediff and dateadd in the same week, so you can look at Guy’s solution; my solution is just to use both functions in the same select. The dateadd predicate gives us the index seek, and the datediff predicate gives us the “correct solution” business-wise. We can see that we have almost 4,000 page reads – but it’s not 92,000 – and in the execution plan we can see that we have our index seek. So this way you get both good performance and good results.
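One way the combined predicate can look. The one-day widening of the DATEADD range is an illustration (the exact details are spelled out in Matan’s and Guy’s blog posts): it guarantees the seekable range is a superset of what DATEDIFF accepts, and the DATEDIFF predicate then applies the exact day-granularity rule on that smaller set of rows:

SELECT *
FROM Sales.Orders
WHERE DateAndTime >= DATEADD(DAY, -521, GETDATE())      -- seekable superset (widened by one day)
  AND DATEDIFF(DAY, DateAndTime, GETDATE()) <= 520;     -- exact business filter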

Problem #4: querying for time ranges

Matan Yungman: Okay, so now let’s talk about another scenario, and this scenario is again from a client I worked for. The client used to buy and sell stocks, and in some situations he ran a query which needed to get only the stock buys from specific hours, say 10am or 2pm. Not really sure why, but this was the scenario, so let’s see how we do it. I insert all kinds of data into the table – not really interesting – and let’s just verify we have this index. So let’s just look at this table: we have a table called StockBuys, with the stock ID, buying price, buy date and time, and buy time, which we will try to use, and we will see why.

We will set statistics I/O on, and for a start, we will use the BuyDateAndTime column: we will convert the BuyDateAndTime column to date and say that this is between those dates, and then convert the column to the time data type and say give me only the stocks that were bought at 2pm. So we run this, and it takes some time, and we can see in the execution plan that we have a clustered index seek, which is theoretically good, but over at the logical reads we can see that we have 22,000 page reads. In a production system, this is not too much, but we can do better.

And why can we do better? Because over in the clustered index seek, we can see that we have a seek predicate and a predicate. At the bottom, we have the seek predicate, which is the value that is used to traverse the index, but we also have a predicate. And this predicate in many cases can be very, very problematic, because when SQL Server builds statistics, the most dominant part of the statistics is the first column of the index, which can be used as a seek predicate; but if you filter on other parts of the index or on included columns, SQL Server in many cases won’t have a good estimation of the number of rows, and this can lead to some cardinality issues and some problems.

So let’s see how we fix this. The second try is that we don’t really need to convert to date, but we still do need to convert the value to time in order to filter on 2pm, so let’s do that. We don’t see much difference in terms of logical reads, but we do see some difference in the execution plan: we can see that, again, we have a clustered index seek, but we get a parallel plan. SQL Server came to the conclusion that a parallel plan would be better, so it performs the query in parallel, and in many cases, runtime will be better.

But it’s still not enough, because we still have 22,000 page reads. So, third try: we will add the BuyTime column. This is a computed column, and the value is the cast of BuyDateAndTime to time – we essentially get the time value out of the date and time value – and, as we saw, I already have this column in the table, and I create an index on BuyDateAndTime and BuyTime. I already have this index, and when I do that, it doesn’t help too much. Again, this is because SQL Server can’t know the exact number of rows: inside each day we have many, many stocks and many, many rows that were bought, and since the time value is the second column in the index, SQL Server can’t estimate that well. So the plan is still not good enough, because we have to read many, many rows. We have an index seek – we get to a specific day – but within that day, we have to read a very big number of rows until we get only to the rows that are at 2pm.

Elegant solution #4: date & time tables

So what can we do? Let’s move on. We will use a date table, and a date table is just a table with all of the dates. In our case, from the start of, I believe, 2010, we have each and every hour. You know what, let’s look at this table – select from Dates, order by one, two – and we can see that this is from the start of 2010, and as I said, this is actually all of the minutes from the start of 2010 until 2020, something like that.

So a date table can help us here, because for each and every hour or minute, we will have only one row in the table. If we filter on this table, we will very quickly filter many, many rows out, and using this table and a join, we will get very fast into our actual table, our StockBuys table; since we already filtered many, many rows out, the query will be much faster and will read far fewer rows. And when we do that – what we do is join between the StockBuys table and the Dates table on the datetime column, and the filter itself, instead of being on the StockBuys table, is on the Dates table: again, from 2010 until 2011, and only the rows for stocks that were bought at 2pm. When we do that, we can see that we have only 4,000 page reads instead of the 22,000, and in the execution plan, we can see that we have an index seek on StockBuys and an index seek on Dates, and things are very, very fast. We also have, instead of a parallel plan, a serial plan, which in this case means that SQL Server estimated that the cost of the plan would be lower with a serial plan, and in this case this is a good plan.
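A sketch of the date-table approach. The column names on the Dates table (TheDateTime, TheTime) and on StockBuys are assumptions based on the narration – one row per minute in Dates, with a precomputed time-of-day column:

SELECT sb.StockId, sb.BuyingPrice, sb.BuyDateAndTime
FROM dbo.Dates AS d
JOIN dbo.StockBuys AS sb
    ON sb.BuyDateAndTime = d.TheDateTime      -- join on the full datetime value
WHERE d.TheDateTime >= '20100101'             -- heavy filtering happens on the small Dates table
  AND d.TheDateTime <  '20110101'
  AND d.TheTime = '14:00';                    -- 2pm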

There are all kinds of debates about parallelism being good or bad. In many cases, obviously, it’s very good, but in some cases, it just says that the query isn’t written well enough. So in this case, we made the query better – or actually used a more proper table structure – and we got the serial plan. In this case, this is a good thing.

Improving it with a function

Now, we can make it a little bit more elegant if we use an inline function, because this client has all kinds of analysts that perform this search, and if I just told them about this solution, they would have to change all of their queries to perform an inner join to the Dates table. So what I can do is use an inline function – and why inline? Because multi-statement table-valued functions and scalar functions are very, very bad, because they work row by row: if I execute such a function against a one-million-row table, we would have to execute the function one million times. On the other hand, if we write the function as inline, it’s essentially not a function. It’s just a set-based expression that SQL Server can take and fold into the query, so it still runs fast, and it gives us the benefits of encapsulation and reuse, which are good programming practices.

So what we will do is, as we say here, [inaudible 1:03:55.3] returns a table and just returns select one, in this case, where the date and time is equal to the date and time parameter, and the date and time is between date one and date two, and the time is equal to the time. And we run this function using cross apply, in order to get only rows that have a match inside the function. When we do that, we both use a function – we get encapsulation – and we get good performance of 3,000 page reads and a good execution plan. So this is dates, and inline functions.
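A sketch of what such an inline table-valued function can look like (names and parameters are assumptions). Because it is a single-statement inline TVF, SQL Server expands it into the outer query instead of executing it row by row:

CREATE FUNCTION dbo.DateTimeMatch
(
    @DateAndTime DATETIME,
    @FromDate    DATETIME,
    @ToDate      DATETIME,
    @Time        TIME
)
RETURNS TABLE
AS
RETURN
    SELECT 1 AS IsMatch
    FROM dbo.Dates AS d
    WHERE d.TheDateTime =  @DateAndTime
      AND d.TheDateTime >= @FromDate
      AND d.TheDateTime <  @ToDate
      AND d.TheTime     =  @Time;
GO

-- CROSS APPLY keeps only the StockBuys rows that have a match in the function.
SELECT sb.StockId, sb.BuyingPrice, sb.BuyDateAndTime
FROM dbo.StockBuys AS sb
CROSS APPLY dbo.DateTimeMatch(sb.BuyDateAndTime, '20100101', '20110101', '14:00') AS m;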

Problem #5: dynamic searches that can filter on anything

Matan Yungman: I will talk about dynamic search really, really quickly – I won’t get very deep into the details. But let’s say you have some developers or program managers who want a dynamic search screen: they want the option to filter on everything, or not filter on anything, and stuff like that. So how can you implement that inside SQL Server?

So let’s close this. We will write a stored procedure called FindProductsDynamically, and the first solution is to say something like this. The select part isn’t very interesting; the interesting part is that we pass parameter values only for the parameters we want to filter on. So I want to either filter on product ID, or name, or product number, or color, or any combination of the [inaudible 1:06:23.9]. What I do is select the values from the table where ProductID equals the value of the product ID parameter if it’s passed from the client; if it’s not passed, the other part of the predicate – or product ID is null – applies, which just says don’t take this value into account, don’t filter on the product ID. And it’s the same for name, product number, and color.
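A sketch of this catch-all pattern. The ProductID/Name/ProductNumber/Color columns suggest an AdventureWorks-style Production.Product table, but the exact table and parameter types are assumptions:

CREATE PROCEDURE dbo.FindProductsDynamically
    @ProductId     INT          = NULL,
    @Name          NVARCHAR(50) = NULL,
    @ProductNumber NVARCHAR(25) = NULL,
    @Color         NVARCHAR(15) = NULL
AS
    -- A NULL parameter disables its filter.
    SELECT ProductID, Name, ProductNumber, Color
    FROM Production.Product
    WHERE (@ProductId     IS NULL OR ProductID     = @ProductId)
      AND (@Name          IS NULL OR Name          = @Name)
      AND (@ProductNumber IS NULL OR ProductNumber = @ProductNumber)
      AND (@Color         IS NULL OR Color         = @Color);
GO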

So we execute the stored procedure with the parameters of product ID and color – sorry, I want to have the execution plan. And when I include the execution plan, I see that the plan is actually pretty good, but when I go here to the index scan, what we can see in the predicate column is that even though we didn’t filter on all of the parameters, SQL Server generated a plan that does take all of them into account. Obviously, this hurts performance, because SQL Server had to generate a plan that takes into account all of the permutations and variations of the parameters that could be passed, and the execution plan is more complicated because SQL Server had to take more stuff into account. We can see that we have 604 logical reads, and let’s see if we can make this better.

Elegant solution #5: dynamic SQL

So the second option is to use dynamic SQL. What we do is we have the select part, and then we start concatenating: let’s say the product ID is not null – that means we want to filter on it – so we concatenate that predicate; otherwise, we concatenate an empty value, and so on for the rest of the values, and we execute the command using dynamic SQL. And when we do that and go here, we can see that we have a better execution plan that took into account the values that we actually passed to the stored procedure.

Okay, so this is better – let’s see, and we have a plan of three logical reads. So this is much better, because when the plan is built specifically for your parameter values, you get a better plan than one generated for all possible values. The problem, in this case, is SQL injection. If we use dynamic SQL by just concatenating, a bad guy can, instead of passing a parameter value, pass something that ends the string and adds drop table, very important table. And if we do that, what we see is that SQL Server says cannot drop the table – but if we did have such a user table, this table would be dropped. Actually, what the smart hackers do is intentionally try to generate errors in SQL Server in order to reverse engineer your database, and there are all kinds of very scary videos of people who do that – a program called [inaudible 1:10:38.9] that was written by a few Iranian guys – and it’s pretty scary how they can map your database and understand what’s going on.

So SQL injection is still dangerous, and what you can do in order to solve that is to still use parameters: instead of just concatenating, inside the dynamic SQL we will say ProductID equals @ProductID, Name equals @Name, and so on. And instead of exec, we will use sp_executesql, which allows us to pass parameters and to reuse the execution plan. And when we do that and pass a value – first of all, we can see that we get a good result and a good execution plan, and when we try to drop the important table, nothing happens, because there is no product number called drop important table. So if you want to use dynamic SQL with a dynamic search screen, this is the way to go – don’t just concatenate values.
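A sketch of the safe variant (same assumed schema as before). The WHERE clause text is built only for the parameters that were passed, but the values themselves travel as real parameters through sp_executesql, so they cannot be injected into the statement text:

CREATE PROCEDURE dbo.FindProductsDynamicallySafe
    @ProductId     INT          = NULL,
    @Name          NVARCHAR(50) = NULL,
    @ProductNumber NVARCHAR(25) = NULL,
    @Color         NVARCHAR(15) = NULL
AS
BEGIN
    DECLARE @Sql NVARCHAR(MAX) = N'
        SELECT ProductID, Name, ProductNumber, Color
        FROM Production.Product
        WHERE 1 = 1';

    -- Concatenate only the predicates for parameters that were actually passed.
    IF @ProductId     IS NOT NULL SET @Sql += N' AND ProductID = @ProductId';
    IF @Name          IS NOT NULL SET @Sql += N' AND Name = @Name';
    IF @ProductNumber IS NOT NULL SET @Sql += N' AND ProductNumber = @ProductNumber';
    IF @Color         IS NOT NULL SET @Sql += N' AND Color = @Color';

    -- Values are passed as parameters, never spliced into the string.
    EXEC sp_executesql
        @Sql,
        N'@ProductId INT, @Name NVARCHAR(50), @ProductNumber NVARCHAR(25), @Color NVARCHAR(15)',
        @ProductId = @ProductId, @Name = @Name,
        @ProductNumber = @ProductNumber, @Color = @Color;
END;
GO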

And the fourth option, if you just want good performance and don’t want to start concatenating values using dynamic SQL – which can be a little hard to debug – is to return to the original solution and use option recompile, which is kind of magic. What option recompile does is tell SQL Server: generate an execution plan for this specific execution, for this specific permutation of parameters. And when we do that, we get a good plan of three logical reads, and a good execution plan that takes into account only the parameter values that we’ve just passed.
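For reference, this option is just the original catch-all query from before with one hint added inside the procedure body:

    SELECT ProductID, Name, ProductNumber, Color
    FROM Production.Product
    WHERE (@ProductId     IS NULL OR ProductID     = @ProductId)
      AND (@Name          IS NULL OR Name          = @Name)
      AND (@ProductNumber IS NULL OR ProductNumber = @ProductNumber)
      AND (@Color         IS NULL OR Color         = @Color)
    OPTION (RECOMPILE);   -- compile a fresh plan per execution, so NULL
                          -- parameters are folded away at compile time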

So option recompile is very good and very easy to work with. The only problem is, let’s say you work for booking.com or a very high-end website: if you have, say, one thousand such searches a second, option recompile will generate a plan for each of those executions, and this can start getting a little heavy on your CPU. In that case, I would work with dynamic SQL – obviously, the good, parameterized kind of dynamic SQL – and what it will do is generate a plan for each permutation; but the number of permutations is relatively capped, so at some point your execution plans will start to be reused instead of being generated again and again. So I think that’s it for this session, and I will be happy to take any questions.

Pinal Dave: So there are no questions, but it is so interesting to see that dynamic SQL comes to the rescue where everybody says don’t use this, don’t use this. Now the same feature comes and saves you when your boss is asking you to write a dynamic search and there is no other way to figure it out. And I still remember one case – before we end, I’ll just tell you – I saw a customer writing a dynamic search. They had only three fields, so they wrote a combination for each of the three fields, which becomes two to the power of three, eight different combinations. And then it starts building up, and you start talking about powers of two, and it just becomes so complicated, and eventually you end up writing either dynamic SQL or just going to some third-party tool. So I think it’s fantastic to see this one – definitely elegant solutions. And somewhere on Twitter, I read something that says, “Elegant solutions can be arrogant,” so I don’t know what that means, but this looks amazing. Yes, if a developer knows the trick, they can be a little bit arrogant and say, you know what, they know a little more than others. So thank you very much.

Brent Ozar: Perfect. Well thanks so much Matan for hanging out with us today, thanks Pinal for doing the questions during this session!
