SQL Server 2017 Community Driven Enhancements

Target Audience:

Targeted to DBAs and Developers who want to get familiar with new features in SQL 2017.

Abstract:

Will cover all new SQL 2017 features that the Data Platform Community has requested and Microsoft has included in the next major version of SQL Server.
200 level

Why I Want to Present This Session:

I am a Data Platform MVP and have been speaking about SQL Server in several PASS Saturdays and PASS Online events, in Spanish and English


Additional Resources:

https://sql-javier-villegas.blogspot.com.ar/

Full Session Transcript:

Javier Villegas: Well, hi, everyone. My name is Javier Villegas. I’m from Buenos Aires, Argentina. I have worked at Mediterranean Shipping Company for a number of years. I have been involved with SQL Server since its early days; I started on SQL Server 6.5. I specialize in performance tuning, server administration and high availability; that’s why I liked Derik’s presentation before. I am a Data Platform MVP, I have a technical blog, and I contribute to the MSDN forums. I am also a speaker at different events and conferences like SQL Saturdays, 24 Hours of PASS, now GroupBy and soon Data Platform Geeks. I’m also a member of the SQL Argentina [inaudible] groups from PASS.

So let’s move on then. Okay, this is the agenda, some of the topics that we will try to cover today. So we’re going to talk about these features. These are a particular set of features in SQL Server 2017. These features are kind of particular because, as the title of the presentation said, those are community-driven features or enhancements, right.

We all know that Microsoft is a very different company now. Right now, they listen to us. They listen to the vendors, they listen to the customers, they listen to the data platform community and the [CSVs]. So a number of years ago, they used to have this site, Microsoft Connect. Now it’s gone. They have something different, but they require feedback from the community to see which features we would like to see in the next SQL Server version. These are some of them.

So we will try to cover them all – smart differential backup, smart transaction log backup, SELECT INTO on a filegroup, tempdb improvements, transaction log monitoring and diagnostics, and we’re also going to see one of my favorite ones, which is resumable online index rebuild, and automatic plan correction.

So let’s move on. This is SQL Server 2017 today. This is a rich platform. It runs on almost every OS, almost every platform. It supports graph queries. It has advanced machine learning. And now, with the query processing, it has additional intelligence to optimize and provide us with better execution plans. This is really what we want at the end of the day, right. And it is the industry leader and one of the most secure data platforms available.

In terms of SQL Server itself, we know it’s very different than before. Now we can work with any data type. We can work with structured data, like we always have, and also unstructured, right. We can work with a number of programming languages. We have R, we have Python – a bunch of things that are new to this field. Also, as I said before, we have a choice of OS. We have Linux, we have Windows; we’re going to cover that in a minute. And also, we can deploy it anywhere: on-premises, as we always have, in the cloud and in a hybrid environment. That’s really good; plenty of choices. So this is what we can call the intelligent data platform.

So, in terms of platforms, as we said before: Windows, Windows Server, Windows 10. Linux is a huge thing. I remember, not too long ago, if someone had told me that SQL Server was about to run on Linux, I would have said, you’re joking. And now it’s real, and it’s really good, actually. In terms of Linux, it runs on the major distributions like RedHat, SUSE and Ubuntu, but it also runs on other distributions like CentOS or Fedora. And it runs in Docker; Windows, Linux, Docker and also on Mac. That gives us the possibility of running SQL Server on a Mac OS machine; something that would have sounded crazy not so long ago.

Also, SQL Server has been recognized as the best database platform for the last three years in a row. Gartner, who provide this beautiful Magic Quadrant, now put Microsoft, with SQL Server and the whole data platform, as a leader, as number one.

Okay, with SQL Server 2016, what Microsoft told us is that it was faster just out of the box. Just install it and it’s going to be faster. And with SQL Server 2017, they promised, and they have accomplished this, to be even faster. And besides being faster, it has more features. Let’s talk for a second about machine learning. In SQL Server 2016, they introduced the R platform. Now in 2017, we have the addition of Python. In terms of adaptive query processing, as I mentioned briefly before, they are trying to put additional intelligence into the query processing to get better execution plans. This is huge. And to me, my favorite feature is resumable online index rebuilds. This is my favorite feature as a DBA.

I know that maybe running SQL Server in Linux is the bright star, and this one, resumable online index rebuild, is something also huge. We’re going to talk about that later. Okay, let’s go straight to the first feature – smart differential backup.

What is that? Okay, let’s say that we, as DBAs, have our maintenance with a full backup on Sunday, and then during the week we perform differential backups until the following weekend, right, until the following Sunday. So now, in SQL Server 2017, Microsoft has introduced a new column in sys.dm_db_file_space_usage which gives us the number of extents that have been modified since the last full backup. So why is this so important?

Right now, every day, we can check the number of pages that were modified since the last full backup. So, having the total number of pages and the ones that were changed, we can easily calculate the percentage of the database that changed. And maybe it’s a good idea to take another full backup instead of doing a differential backup. But now, we have the choice. We can query this column on this DMV before doing the backup and decide – okay, if the change is over, I don’t know, 60, 70, 80%, we say, okay, let’s continue doing differential backups if it is below our threshold, or let’s do another full backup if it is over. That’s really, really good.

This is one of the items that was voted for by the community. And here is the example of how to query this percentage of change. We have modified_extent_page_count and allocated_extent_page_count, so we compute a percentage and easily get this value. We’re going to see some examples later.
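For reference, a minimal sketch of that check; the database name, backup paths and the 70% threshold are just illustrative:

USE AdventureWorks2014;

DECLARE @changed_pct decimal(5, 2);

-- Percentage of allocated extents modified since the last full backup (SQL Server 2017+)
SELECT @changed_pct = SUM(modified_extent_page_count) * 100.0
                      / SUM(allocated_extent_page_count)
FROM sys.dm_db_file_space_usage;

IF @changed_pct < 70
    BACKUP DATABASE AdventureWorks2014 TO DISK = N'D:\Backup\AdventureWorks2014_diff.bak' WITH DIFFERENTIAL;
ELSE
    BACKUP DATABASE AdventureWorks2014 TO DISK = N'D:\Backup\AdventureWorks2014_full.bak';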

In addition to the smart differential backup, we also have smart transaction log backups. There is a DMF called sys.dm_db_log_stats, and it exposes a column called log_since_last_log_backup_mb. What is this? Okay, as part of our maintenance plan, let’s say we do transaction log backups every ten minutes. So we do our full backup or differential backup and we continue doing the transaction log backups.

So, querying this new column from this DMF, what we get is the number of megabytes written to the log since the last transaction log backup. Why is that so important? Because now we can compare this number of megabytes against the total size of the transaction log and decide: okay, is it a good time to take a transaction log backup, or should I wait?

My own implementation of this was a bit different, actually. In my experience, instead of doing transaction log backups every ten minutes, I changed the schedule of my transaction log backups to run every minute. But instead of just running the log backup, I first check this column. And if this column shows a certain number of megabytes, or a certain percentage, I do the transaction log backup.

And what I can tell you, for example, is that over the weekend, when we don’t have a lot of activity – or at least not the same activity as during the week – the number of transaction log backup files has been reduced a lot. So again, with the smart backups – either full backups, differential backups or transaction log backups – now we have a choice. I mean, Microsoft has put in this intelligence and we have the choice to use it or not. In terms of solutions, you can use your imagination to see how you can use it, but it’s really powerful.
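A rough sketch of that per-minute check; the database name, path and 100MB threshold are placeholders:

-- Take a log backup only when enough log has accumulated since the last one
DECLARE @log_mb float;

SELECT @log_mb = log_since_last_log_backup_mb
FROM sys.dm_db_log_stats(DB_ID(N'AdventureWorks2014'));

IF @log_mb >= 100
    BACKUP LOG AdventureWorks2014 TO DISK = N'D:\Backup\AdventureWorks2014_log.trn';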

Okay, another highly voted feature: SELECT INTO on a filegroup. What is that? We all know that when writing T-SQL, maybe we have to create some intermediate tables, and before, we could only do a SELECT INTO the destination table and, by default, it was going to go to the default filegroup. These kinds of things we see very often in data warehouses, right. But what is this feature about?

Now we have the possibility of sending the output of the SELECT INTO to a particular filegroup. And why is that so good? Because in a warehouse environment, for example, we can set up a permanent or temporary filegroup with data files on another drive, on really fast storage, and we can improve the performance of this SELECT INTO. So we send the output of these intermediate tables into a table on a different filegroup whose data files sit on super fast storage. That is really, really good.

Here is the syntax. We’re going to cover it later, but basically what it’s doing is adding a new data file for that filegroup, and the last line is the SELECT INTO redirecting to the filegroup FG2. We’re going to see some live examples.
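Roughly, the pattern looks like this; the database, file path and table names are illustrative:

-- Add a filegroup backed by a data file on fast storage
ALTER DATABASE AdventureWorksDW ADD FILEGROUP FG2;

ALTER DATABASE AdventureWorksDW
ADD FILE (NAME = N'AdventureWorksDW_Data2',
          FILENAME = N'D:\FastStorage\AdventureWorksDW_Data2.ndf')
TO FILEGROUP FG2;

-- New in SQL Server 2017: point the SELECT INTO output at that filegroup
SELECT *
INTO dbo.FactSales_Staging ON FG2
FROM dbo.FactInternetSales;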

Alright, tempdb setup improvements – this is nice. Before SQL Server 2016 – let’s say we go to install any SQL Server, I’m going to say 2014 – we did the whole setup, et cetera, and we know that, when the setup was complete, we had to go to the tempdb configuration and change it, because by default it was creating one file for data and one file for log, with crazy auto-growth values in percentages, et cetera, and probably on a crazy path, right. So in SQL Server 2016, Microsoft added the possibility to configure tempdb from within the setup. We are now able to choose the number of files, the destination of the files, the initial size and the auto-growth value for the data files. That was a huge, huge advantage, because over the years I’ve seen production environments that were running with the default tempdb: one file, growing by 10%, on the system disk. That was not good in terms of best practices.

Okay, in 2016 there was a limitation on the initial size you could give each tempdb data file in the setup. Now, in 2017, they have raised this limitation and we can set up bigger files – up to 256GB per data file. And also, we get a warning if the files are quite big, reminding us that we should enable instant file initialization. So this is something we can now handle during the setup process, whereas before, it was on our post-installation to-do list. That’s also important.

In terms of tempdb monitoring and planning – one of the pain points for every DBA, especially when we have the situation where tempdb grows and grows and we just watch, and tempdb keeps growing, and we have to pick up the phone and call our storage administrator saying, we’re running into a problem, please expand the drive where tempdb lives, et cetera. Well, now we have a new DMV which is a starting point – and it’s something that’s definitely going to help us in certain situations – to see the amount of space used in tempdb, but only if we are using the version store.

So if the workload of your database is configured to use the version store, by querying this DMV we can now check, per database, the amount of space used in tempdb. As I said, this is good. This is the first step, and they promise to keep improving in this area, because tempdb monitoring and planning is one of the critical points in every production environment, I would say. Here’s a quick example. We’re going to see it later – we’re going to create a table, we’re going to enable snapshot isolation and we’re going to check this new DMV, sys.dm_tran_version_store_space_usage.
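A minimal sketch of that demo, with a made-up database name:

-- Enable snapshot isolation so the database starts using the version store in tempdb
ALTER DATABASE SalesDemo SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Version store space consumed in tempdb, broken down per database
SELECT DB_NAME(database_id) AS database_name,
       reserved_page_count,
       reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;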

Okay, transaction log…

Edwin Sarmiento: We have questions, like Gordon for instance, and I think he’s referring to the backups. He was asking, “Are the new backup columns exposed in the GUI?” I’m guessing SQL Server Management Studio, in some way, or will smart backups be T-SQL-based, I’m guessing, or DMVs.

Javier Villegas: For now, it’s T-SQL only. So you have to query the DMVs or DMFs, right. And these new columns are only available in SQL Server 2017; so a good reason to go and upgrade as soon as possible.

Edwin Sarmiento: And one thing, by the way, SQL Server Management Studio is decoupled from SQL Server’s installation, which means the updates and changes that are happening to SQL Server Management Studio are a lot faster now. So you can make some suggestions so that the next release, SSMS 17.5, will include it. So again, just because it’s decoupled, you can ask Microsoft, hey, can you include those columns in the GUI, in SQL Server Management Studio.

Javier Villegas: Definitely. On that topic, Microsoft promised that they’re going to release regular updates to Management Studio, once every month or two. So, alright, transaction log monitoring and diagnostics. Another pain point that we have as DBAs is the VLFs. So, for the transaction log, we want to check the log file – the number of virtual log files, the sizes, et cetera. We know that if we have a database and we set the auto-growth to a small value, the transaction log grows quickly and often, and then we just go and shrink it. Well, we are not doing the database any favors, because then when we have to recover the database, we may have to wait.

So keeping the number of VLFs under control has always been a good practice. We had a tool to check that, which was DBCC LOGINFO, in previous versions of SQL Server. But now, in 2017, we have sys.dm_db_log_info, a DMF where we can specify the database ID. With that, we get all of the information for the database. For every single database, we can see the number of VLFs plus other information. So we can start refactoring our maintenance scripts, right – instead of doing DBCC LOGINFO for each database, which is sometimes expensive and sometimes not very recommended, now, in 2017, we can just query this DMF. It’s quick and it’s very useful. We’re going to see something later.
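A minimal version of that VLF query could look like this:

-- VLF count per database, without running DBCC LOGINFO everywhere (SQL Server 2017+)
SELECT DB_NAME(d.database_id) AS database_name,
       COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY DB_NAME(d.database_id)
ORDER BY vlf_count DESC;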

Okay, this is something that is also important and useful. In some cases, we work on databases on our workstation or development server – a server that, in terms of hardware, is small, because we’re just developing and preparing our solution. So let’s say that initially we came up with a small database, but the day we decide to deploy to production, to our super-duper high-end million-core production server, we put that database over there and we notice that the backup times for the small database were taking so long; it was not reasonable. We have a database of a few megabytes on our high-performance server with super-fast storage, et cetera, and when I do a backup it takes crazy long; why is that?

Now, in SQL Server 2017, Microsoft has refactored that to improve the backup time for small to medium databases on high-end servers. Here we have some examples, right. The blue line is previous SQL versions and the orange line is 2017. So you see, 8MB, 256MB, 1GB – the time spent doing backups was long. And in SQL Server 2017, it’s immediate. When the database starts growing in size, the numbers get closer, but especially for small to medium sizes, this is an advantage. So it’s something good to consider.

Okay, processor information in SQL Server. We always talk about processors, CPUs, cores, sockets, et cetera. For a couple of versions now we have had sys.dm_os_sys_info, which is a good DMV for checking information about the processor. But some important counters were missing: socket count, core count and cores per socket. If we had to check our hardware, beyond going to SQL, we had to ask the sysadmin – hey, we need to know the number of sockets we have, the cores and the cores per socket – and the sysadmin had to go to the system information on the server, or contact the vendor, or whatever. Now, in SQL Server 2017, we have all this information in the DMV: socket count, core count and cores per socket. This is also important. And again, this was also voted for by all of us.

There is also a new DMV called sys.dm_os_host_info. And now that we have multiple platforms, we can easily check the host platform, the service pack level, the versions, et cetera. Just by doing a simple select from dm_os_host_info, we get information about our platform and the OS as well.
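Both views are plain selects, for example:

-- New hardware columns in SQL Server 2017
SELECT socket_count, cores_per_socket, cpu_count
FROM sys.dm_os_sys_info;

-- Host and OS details, useful now that SQL Server also runs on Linux
SELECT host_platform, host_distribution, host_release, host_service_pack_level
FROM sys.dm_os_host_info;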

Resumable online index rebuild – as I said briefly before, my favorite feature. I come from the SQL Server 6.5 times, so doing maintenance – an index rebuild on a big database – was painful. We tried to avoid it as much as possible. It was something we had to get everybody off the server for, for a number of hours or days, and it was really painful. Also, handling the transaction log size for this offline index rebuild was quite complicated, and at the time, it was all or nothing. You could not do partial index rebuilds; you had to do it all or not do it at all. So you had to schedule a reasonable amount of time to do the whole thing.

In previous versions, Microsoft introduced the online index rebuild. It was a huge step. It was something really good, because in certain cases we were able to do index rebuilds online without locking the table. Before, we just locked the whole table and nobody was able to touch it. With this online option, we had a new choice. It was good, but again, it was not ideal, because it’s not always online and it’s not always that fast. You know, there are some complications, but again, it’s really good. There are many, many blogs about the online index rebuild and the maintenance scripts by Ola. So that was really good.

But now, we have another situation. When we do the index rebuild, we have to run it until the index finishes for that particular table. This is the most complicated situation for me: if you have a huge database with huge tables, and that database is in an availability group with several replicas – of course in full recovery model – you have to allocate enough disk space to handle the transaction log size. That was a pain point – really, really complicated.

Well, now with this feature, resumable online index rebuild, we can start an index rebuild on a particular index and pause it. Yes, you heard well: pause it. Just hit the pause button, put everybody back on the server, keep working, and then when you have the following maintenance window – your next moment where you can work on this – you can resume from that point. Also, when you do this pause, you can take the opportunity to do a transaction log backup and, boom, reduce the percentage used of the transaction log. That is also huge.

So with all these possibilities, we can, again, rethink our maintenance. We have better and smarter choices to make; as we said before, for backups, and now for index maintenance. We can start our index maintenance today, do certain indexes for certain tables, pause it at one point, wait, you know, one week or one day or whatever, and resume it from there; huge. As I said, I love this.

This is the syntax. We have the ALTER INDEX with the option RESUMABLE equal ON, and we have a view called sys.index_resumable_operations. Pay attention to this one, because you can query it and check all the index rebuild operations, the percentage of completion and whether they are in a running state or a paused state. We’re going to see them later.
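In outline, with placeholder index and table names:

-- Start a resumable online rebuild, capped at a 10-minute window
ALTER INDEX IDX_2_2 ON dbo.Table2
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 10 MINUTES);

-- Pause it from another session, then pick it up again later (or abort it)
ALTER INDEX IDX_2_2 ON dbo.Table2 PAUSE;
ALTER INDEX IDX_2_2 ON dbo.Table2 RESUME;
-- ALTER INDEX IDX_2_2 ON dbo.Table2 ABORT;

-- Check progress and state of any resumable operations
SELECT name, percent_complete, state_desc, sql_text
FROM sys.index_resumable_operations;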

Edwin Sarmiento: I have a question Dylan, “Resumable index rebuilds – will it also allow resuming a rebuild that bombed after the wait at low priority timeout?” And he’s saying they’re using Ola’s index maintenance scripts. “And I have a few heavily hit indexes that sometimes get rebuilt but never get swapped in due to lock timeouts.” I’m guessing he’s asking about the impacts of this or the indexes that were set to wait at low priority forever.

Javier Villegas: My recommendation, in that case: test it. Try it. Not every [crosstalk] is the same. Not every table, not every scenario. We have the tool, which wasn’t there before, so let’s give it a try. And if you have any questions in particular – we’re going to see it in action later, but it’s good to at least give it a try.

Edwin Sarmiento: Very good suggestion.

Javier Villegas: Okay, automatic plan correction. In SQL Server 2016, Microsoft introduced Query Store, this black box – like in a plane – that records every single action. You had to enable it per database, and you were capturing execution plans, runtime information, et cetera. Huge – Query Store is a great feature and it has been there since SQL Server 2016. So, in a few lines: if you had, let’s say, a report that normally takes a few seconds to finish, and one day you arrive at the office and that report has been running for hours – before Query Store, you had to start doing some crazy things; troubleshooting, finding the problem, et cetera. With Query Store enabled, you can just go to the Query Store reports and see whether the execution plan changed along the way.

So graphically, with the report, or even with T-SQL, you can easily find that problem and also fix it. How? You have the ability to force a plan for a particular workload, a particular query. What SQL does is identify it and present it to you. And you, as a DBA, have to read the information, evaluate which is the best plan, and force it manually. Let’s say that you had the regression during the night and you arrived at 8AM in the morning: for the whole night the workload was running with a plan regression and, you know, it was really, really bad because it was an important report or whatever.

Now, in SQL Server 2017, on top of Query Store, we have automatic tuning. What is that? We can enable it at the database level to tell SQL that it can automatically set the last known good plan for every workload that shows some sort of regression. So, if something was running well with a normal query plan and for some reason it changed to another execution plan – because of statistics or whatever – SQL can automatically set the last known good plan. And in a matter of seconds, it keeps running with the best known plan. We’re going to see it later.

Here is the syntax with Query Store. We have to identify the query ID and the plan ID and force it, or unforce it, from T-SQL or also from Management Studio. Now, in SQL Server 2017, this is also a feature that is per database. We have ALTER DATABASE, SET AUTOMATIC_TUNING, FORCE_LAST_GOOD_PLAN equals ON. We do that, and you’re going to see it in a moment – the regression lasts a matter of seconds and then the workload is good again. So this adds a lot of opportunities and intelligence to the platform that we all use and love.
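Sketched out, the manual fix and the 2017 automatic option look roughly like this; the query and plan IDs are the ones from the demo later:

-- Manual fix through Query Store: force the known-good plan for the regressed query
EXEC sp_query_store_force_plan @query_id = 29, @plan_id = 7;
-- EXEC sp_query_store_unforce_plan @query_id = 29, @plan_id = 7;  -- to undo it

-- SQL Server 2017: let the engine force the last known good plan automatically, per database
ALTER DATABASE WideWorldImporters
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);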

String functions – wow, this is why I say that Microsoft is now a different company than the one we had. How many of you have been using LTRIM and RTRIM to trim a string for a number of years? Now, we have the TRIM function. I remember a couple of conferences where they presented this and people stood up and applauded. This is really good, and again, it’s simple. The same goes for functions like TRANSLATE, CONCAT_WS and STRING_AGG; you’re going to see them later. These are things that were possible before SQL Server 2017, but we had to use several lines of code; in some cases, ugly code. Say, to concatenate email addresses and put a semicolon in between, we had to use FOR XML and crazy things. Everything was doable, but with a lot of code. Now, with a simple function, we can do all of that in one sentence; one line.
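A few quick examples of those functions; the email table is the AdventureWorks one, just for illustration:

-- TRIM, TRANSLATE and CONCAT_WS in one statement
SELECT TRIM('   hello   ') AS trimmed,                                  -- 'hello'
       TRANSLATE('2*[3+4]/{7-2}', '[]{}', '()()') AS translated,        -- '2*(3+4)/(7-2)'
       CONCAT_WS(';', 'ana@contoso.com', 'bob@contoso.com') AS emails;  -- 'ana@contoso.com;bob@contoso.com'

-- STRING_AGG replaces the old FOR XML PATH trick for concatenating rows;
-- the cast keeps the result from being truncated at 8,000 bytes
SELECT STRING_AGG(CAST(EmailAddress AS nvarchar(MAX)), ';') AS all_emails
FROM Person.EmailAddress;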

Okay, demo time. Let’s try to cover all of this. Are we good on time? Yes. Alright, this is the smart differential backup. We have SQL Server 2017 installed on my machine and the AdventureWorks database. We’re going to check the percentage of change, in terms of extent pages, since the last full backup; in this case, 43%. So what I’m going to do now is take a new full backup. And the end result will be that right after the full backup, this differential percentage is zero. We can, you know, create some tables and add some workload, do multiple inserts. And what we can see at the end, after inserting 15 rows into a table, is that this value changed.

Okay, not too much, but now the difference is 1%. So with the code, what we can do, as I said a couple of times, is check this percentage of change, do an IF, and if it is less than 70%, we do a differential backup. If it is over 70%, we do a full backup. Okay, this is not what I wanted.

For the transaction log, we have the database in full recovery. We check the percentage of transaction log usage. For AdventureWorks 2014, it’s not too big. So we check the DMF and we see the total size of the log, the active VLFs, the active log size in megabytes, the log truncation reason – in this case, nothing. But we also have the time of the last transaction log backup, the number of megabytes since the last log backup and the percentage in use. So again, if we do a transaction log backup, this one goes to zero percent and zero megabytes used. It’s really useful to query this info and decide what to do. That’s the end result.
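That check against the DMF, roughly, using the demo database name:

SELECT total_log_size_mb,
       active_vlf_count,
       active_log_size_mb,
       log_truncation_holdup_reason,
       log_backup_time,
       log_since_last_log_backup_mb
FROM sys.dm_db_log_stats(DB_ID(N'AdventureWorks2014'));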

Sys.dm_os_sys_info, right; as I said before, at the top I expose the three new columns: socket count, core count and cores per socket. And down here, the host information: Windows, Windows 10 and the version, Service Pack, et cetera. This is also new. So now, let’s go to the index rebuild.

I have this query, which takes some time – that is why I ran it in advance – which basically tells us, per index, per table, the fragmentation level in percentage, the page count and the row count. So we have here, for example – we’re going to take IDX_2_2 from table two, which has 73% fragmentation. So we go to the other window and we say, let’s do it for index 2_2 of table two. So let’s do our ALTER INDEX REBUILD with RESUMABLE ON.

We get an error message: the RESUMABLE option cannot be set to ON when the ONLINE option is OFF. That’s why it’s an online resumable index rebuild; this has to be an online operation. So what we’re going to do here is the ALTER INDEX REBUILD with RESUMABLE ON and ONLINE ON. We have an additional parameter, MAX_DURATION. This is a number of minutes. We can specify for how long we want this online index rebuild operation to run. We can say, okay, run for 10 minutes; I have only ten minutes to do the index rebuild and then I have to move on to something else. So we have this possibility.

So let me bring this up – this is what I told you before, sys.index_resumable_operations. Right now, there are no operations. Now, I will run my ALTER INDEX REBUILD. Going back here – let’s see what we have. Percentage complete, 9%, the name of the index, the index ID, the statement that I ran, MAXDOP, the status and the time. So I can go here, or to another session, and just use the PAUSE option. Let’s just cancel – it’s going to take a little while to cancel, but when it cancels, we’re going to see the operation here as paused.

It’s 88% complete. So then, I can go and do a BACKUP LOG for my database, MainDB, reduce the transaction log usage, and then hit the RESUME option and continue with the index rebuild operation. Now, that’s huge. SELECT INTO on a filegroup: we have the AdventureWorks database – AdventureWorks DW, actually. What I will do is add a new filegroup called FG2. I will add a new data file on my D drive called AdventureWorksDW data two, and I will add it to this filegroup. When we check the database, now we have the primary filegroup as well as the FG2 filegroup.

So, this is the statement that does all the magic. When running, we send the output to FG2, and this filegroup is on super fast storage. Not in this case – it’s on a virtual machine – but you know, you can accomplish this very easily. Let’s jump to the automatic tuning demo. This is a demo that Bob Ward did at the last PASS Summit, and it was huge. This is about automatic tuning.

So we have PerfMon, and the counter we have added is batch requests per second. What I will do here is generate activity. And we see that we have a certain number of batch requests per second, so our workload is running pretty normally. But now, we’re going to introduce a plan regression. We introduce the regression and, boom, the batch requests per second go down a lot. And they can stay down forever.

So if we go to Management Studio and check the recommendations that the database has for us, it’s saying, oh, for this query ID the CPU changed from this value to this value, so it’s probably good if you force the plan using this statement – this is all coming from the sys.dm_db_tuning_recommendations DMV. So we get this. The only thing we have to do here is grab the script and paste it there. So, using Query Store, we’re going to force query ID 29 to use plan ID 7. When we go back to PerfMon, the batch requests per second go back to normal.
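A hedged sketch of reading that recommendation yourself; the JSON paths follow the documented shape of the details column:

-- Run in the database where the regression happened (WideWorldImporters in the demo)
SELECT reason,
       score,
       JSON_VALUE(details, '$.planForceDetails.queryId') AS query_id,
       JSON_VALUE(details, '$.implementationDetails.script') AS force_plan_script
FROM sys.dm_db_tuning_recommendations;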

But what happened? All this time the workload was running with the regression. So as I said before, we had to go there, check it manually, force the plan manually, et cetera. So let’s cancel this. Let’s clean up what we have; no more activity.

So now, we will go to automatic tuning. ALTER DATABASE WideWorldImporters SET AUTOMATIC_TUNING, FORCE_LAST_GOOD_PLAN equals ON. We are going to run this. Again, we’re going to generate workload and we have batch requests per second. Now, note what’s going to happen when I generate the regression. When I did, the batch requests per second went down, but immediately went back to normal.

So it was immediate – the engine automatically forced the last known good plan. That was huge, actually. So let’s go back to the presentation.

Edwin Sarmiento: I do have one question here. Pablo was asking, “Is there a cost in the server when you activate auto-tuning?”

Javier Villegas: I mean, what we see is what Microsoft said about Query Store: there is a minor performance cost that we may need to pay. But I believe it’s worth it, because we’re putting our database in auto-tune mode. What Microsoft has always said is that it’s not considerable; it’s not going to hurt our server. Of course, we have the freedom to enable and disable it as we wish. But everybody’s saying it’s not considerable. In my environment, I haven’t noticed it. I have all this enabled and I have not had any performance degradation from having either Query Store or automatic tuning enabled.

Other features that we may want to pay attention to are graph processing – I am leaving some links – showplan enhancements, where with Management Studio we get a better presentation of showplans, and adaptive query processing, something we have seen before.

Now, with SQL Server 2017, there is this modern release model, which basically means Microsoft is not releasing Service Packs anymore. They are just releasing Cumulative Updates, one per month, and besides fixing things, they also add features. So this is what they call the Modern Servicing Model. There is a link here to read about it.

And finally, SQL Operations Studio, which is in preview – it’s not a replacement for Management Studio, but it’s a tool that you can use on every OS: Linux, macOS, or Windows. And with this, you can monitor or connect to your SQL Server. Briefly, this is what it looks like. It’s multi-platform; you have your servers here, you have your queries – what I just did before, you can do it here. So this is useful and very nice, actually. It’s still in preview but it’s doing really, really well.

And with that, I’ll just finish, unless there are more questions. By the way, this is my contact information for Twitter, LinkedIn and my blog. So feel free to contact me with questions or concerns about this presentation or other things.

Edwin Sarmiento: So, Javier, here’s a question for you. Jeff asked this through the Slack channel. Jeff was asking, “So Query Store has overhead. The guess is, on top of that, auto-tuning will also have overhead?” Which you’ve already answered. But Query Store started off in SQL 2016 and it is part of 2017, so you’re simply adding auto-tuning on top of Query Store; is that correct?

Javier Villegas: Yes, and also Query Store now tracks the wait stats, which it did not in SQL Server 2016. So that is another feature of the new Query Store. But on top of that, you have automatic tuning, which is like a guy who is always watching the Query Store data, looking for any regression in any plan.

Edwin Sarmiento: Why do I get the feeling – I don’t know if it’s just me. Why do I get the feeling that all of these enhancements are actually geared towards eliminating the DBA’s job in the future? I don’t know if it’s just me or if everybody feels the same way.

Javier Villegas: It seems like it, right, but I really don’t believe that. I believe that with the cloud and all the new possibilities we have, maybe what the DBA should do – a friend of mine told me this a couple of days ago – is that DBAs are maybe becoming the new [cover] writers, right. So I kept thinking about that and I said, you know, maybe we have to expand and evolve like SQL Server. Right now, it’s a whole data platform umbrella, right: Power BI, Azure, all the services in Azure – well, I saw Brent was playing with managed instances – my new best friend, actually – and it’s huge. So we have to evolve, right. We cannot just write T-SQL and do performance tuning and take baselines. We have to evolve as DBAs too. It’s a good question. It’s a good suggestion. I think more than one person is thinking about that. I did, but I believe the answer is to keep evolving like the platform, and you will keep having your job.

Brent Ozar: Cool, well thank you very much for presenting today, Javier. Great session – lots of comments over in Slack if you want to look at them afterwards. Thanks for hanging out.



4 Comments

Hi, Javier, and thanks for the abstract.

I’d put more detail in here. Describe the exact things you’re going to cover, and for each one, mention whether it’ll include demos. Looking briefly at the YouTube video, it looks like you’ve got a lot of demos – be proud of that! Show those off in your abstract.


Hi Brent, thanks for your comment.
My presentation is related to all the features driven by the Data Platform community that are available in SQL Server 2017.
We will talk about the following topics:
1 – Multi Platform
2 – Smart Differential Backups
3 – Smart Transaction Log Backups
4 – Select Into Filegroup
5 – Enhancements related to Tempdb setup and monitoring
6 – Transaction Log monitoring
7 – String Functions
8 – Resumable Online Index Rebuild

The idea is to demo all the features giving more relevance to Resumable Online Index Rebuild.

I had the pleasure of presenting this topic during the last Speaker Idol at PASS Summit.
I really found this presentation very interesting since it also speaks to the new culture at Microsoft of actively listening to the community and customers.

Additionally, your pre-conf at the PASS Summit was excellent!! Thanks


    Javier – thanks, glad you liked it! I had a lot of fun with that one.

    For the list of topics – I’d edit that into your abstract. To see how to edit your abstract, click Future Sessions, Speaker FAQ at the top of the site. Thanks sir!

James Dandridge
May 8, 2018 3:18 am

Hi Javier, thank you for the informative presentation! Would it please be possible to provide the PowerPoint file?

