SQL Server Infernals – A Beginner’s Guide to SQL Server Worst Practices


Target Audience:

This session is for DBAs and developers who have little experience with SQL Server and want to improve their code and performance with small changes.

Abstract:

Are you faced with complaints from users, poor performing code from developers, and regular requests to build reports? Do you uncover installation and configuration issues on your SQL Server instances? Have you ever thought that in dire times avoiding Worst Practices could be a good starting point? If the answer is “yes”, then this session is for you: together we will discover how not to torture a SQL Server instance and we will see how to avoid making choices that turn out to be not so smart in the long run.

You are probably thinking: “Hey, wait, what about Best Practices?”. Sometimes Best Practices are not enough, especially for beginners, and it is not always clear what happens if we fail to follow them. Worst Practices can show the mistakes to avoid. I have made lots of mistakes throughout my career: come and learn from my mistakes!

Circles in SQL Server Hell:

Design sins:

  1. Undernormalizers
  2. Generalizers
  3. Shaky Typers
  4. Anarchic Designers
  5. Inconsistent Baptists

Development sins:

  1. Environment Polluters
  2. Overly Optimistic Testers
  3. Indolent Developers

Installation sins:

  1. Stingy Buyers
  2. Next-Next-Finish Installers

Maintenance sins:

  1. Careless Caretakers
  2. Performance Killers

Why I Want to Present This Session:

Beginners need beginner topics. Many attendees get excited about super advanced topics and wish that conferences offered only brain-melting sessions. On the other hand, we all have been beginners at something at some point – and there is nothing wrong about it. Let’s make room for the uber-geeks of tomorrow, starting from the simple things!

Additional Resources:

Demos:

https://github.com/spaghettidba/CodeSamples/tree/master/GroupBy201701

Session Transcript:

Brent Ozar: Alright, and with that, it’s time for our next session. And in this session, Gianluca Sartori is going to be talking about SQL infernals. Not internals but infernals. Take it away, Gianluca.

Gianluca Sartori: Thank you, Brent. So let’s go – can you see my screen alright? Cool, so let’s go. A few words about me before we go: my name is Gianluca Sartori, I’m an independent SQL Server consultant, data platform MVP, and I’ve been working with SQL Server since version seven, so a long time ago. At the moment I’m working as a DBA at Scuderia Ferrari, and if you want to read my blog, you will find it at SpaghettiDBA.com, and my handle on Twitter is @SpaghettiDBA.

So the agenda for this session, first of all we’ll have a look at worst practices compared to best practices, because we have lots of choices for best practices, but we will see they’re not always enough. Then we will see different areas where things can go wrong; design, development, installation, and administration.

Okay, before we start, a few words of warning. Not everything is black or white, and remember there are some edge cases where doing things that are listed here as worst practices may be the only possible solution. So if you see something you are doing listed here as a worst practice, don’t get mad. Probably it’s the only possible solution in that case.

Okay, let’s go. Why do we need worst practices? Because we also have best practices, so are they enough? Well sometimes best practices help to a point because, let’s face it, there’s too many. There’s no way you can know everything about the subject, so there’s always something you don’t know. And another thing is that there’s no time. During your work day you have to answer the phone with angry users because something is blocking and you have deadlocks and you have tuning. And you have lots of things to do and that report that the CEO wants in ten minutes – and you don’t have the time to take 200 pages white paper and read through it and learn everything inside.

So sometimes you just have to know what to avoid, especially if you don’t have experience. Best practices don’t tell you exactly what is really important and what can be overlooked. In this case, worst practices help us because they teach us the mistakes to avoid, and they give us the possibility to learn from somebody else’s mistakes. As I said, I’ve been working with SQL Server for something like 20 years, so I will show you a small fraction of my mistakes. It would take a whole-day session to show you all my mistakes, but some of them will be in this session.

Okay, if we organize worst practices in areas, we could have things that have to do with the design of the database; so the schema design itself, the choice of the data types and naming conventions. And then things that have to do with development; so the way you write your code, the way you test it, the way you set up your environment. And then things that have to do with more administration concepts, like installation. So the choice of hardware, the way you configure the operating system, the installation of SQL Server itself. And then things that have to do with administration, so recovery, security, capacity, performance, and monitoring.

So if you think of worst practices as sins, these are the sins against the SQL Server gods that will put you into SQL Server hell. And I will be your guide through the circles of SQL Server hell, as Virgil was the guide for Dante. Usually when I do this session in person, I put on my laurel at this point, and my cape, but it doesn’t really work online, so I will get rid of that. Another thing we will do: there’s the SQL Infernals bingo. You can go and download an Excel sheet from here; it contains all the SQL Server worst practices. You can check off the ones you are committing, and there’s a prize for the worst sinners – no, there’s no prize, it’s just for fun. But go ahead, download it and check the worst practices you have.

Okay, let’s go, straight to the first circle where we find the Undernormalizers, so those who fail to normalize their schemas properly. SO what do we mean with normalizing? The father of relational model, a guy named Edgar Codd, left us the rules to design a schema properly; and these rules go under the name of normal forms.

There are many normal forms, but a schema is said to be designed properly if it follows at least the first three. First normal form says that every relation has to have a primary key and only atomic attributes. Well, a primary key is pretty straightforward: if your table doesn’t have a primary key, it’s not the physical implementation of a relation but the physical implementation of something else, because relations have a primary key. So your table must have a primary key. Atomic attributes means that you cannot have compound attributes – things like comma-separated values, or even XML or JSON, in your columns; those are not atomic attributes, strictly speaking.

Second normal form says that every attribute depends on the whole key. So if you have a compound key, every attribute in the table needs to depend on the whole key; so not depending on one or two attributes from the key.

Third normal form says that every attribute depends only on the key. So if you have any attribute that depends on anything other than the key, you have a problem: you’re violating the third normal form. To sum it up in a single sentence: the key, the whole key, and nothing but the key, so help me Codd.

Clues you’re doing it wrong, data repeating in multiple places; this is what we call redundancies. The same I formation in multiple places but with different values; this is what we call anomalies. Data separated by commas, so imagine having a column like email and you have a value similar to this. John@gmail.com, John@business.com. In this case, you would need to have separate columns for this information, and this column is a violation of the first normal form. Structured data in notes column; this is what I said about structured data in big columns. Notes columns are usually very big, so it’s the perfect place to store XML or JSON. This is a clear violation of the first normal form. Columns with a numeric suffix, things like zone one, zone two, zone three, this is a good clue that you didn’t normalize your schema properly.

So going to the next circle, we find the generalizers, and you might think, “Isn’t generalizing a good thing?” At college, when I was studying object-oriented programming, they taught me that generalizing is a good thing: you have complex hierarchies of classes, classes inheriting from other classes, and so on; so generalizing is definitely good there. Well, in a database, this is not a good idea, and I will show you why with an example. Imagine having a schema like this. You have two main entities, customers and orders, and then you have some small tables where you look up some attributes. For example, you have state ID, which points to a small table with just state ID and description. Then you have country ID and description, and the same for order status – status ID and description – and again, ID and description.

So there’s a pattern here. You have small tables, one lookup table for each attribute. So wouldn’t it be cool if we could simplify this and have a single table to look up and get rid of all these stupid tables with just two columns? Yes, that thing exists, and it’s called One True Lookup Table. So how does it work? Instead of having multiple tables, you have just one lookup code and then a lookup description, and then a table name that describes which table you have to point to.

Isn’t it cool? Because this time, you have a much simpler schema. You have just one lookup table for all the attributes. It’s wrong on many levels because this way you cannot have foreign keys. There’s no way to set up a foreign key because you don’t have the table name to point to the key of the One True Lookup Table. Also, you have to use generic data types, which will be something like nvarchar, some high number. Something like nvarchar 4000 which leads to implicit conversions every time you join or filter the data inside these tables. And also, you might have incorrect data; imagine having a date stored here. There would be nothing preventing you from having a value like February 31st. Also, having a huge column will lead to huge memory grants; so it’s really bad even for performance.

Regarding incorrect data, you could use check constraints to help to a point, but really it’s messy to set up and not a real solution. Also, everybody will be hitting the same table, so you will have locking problems too. Taking this to the extreme, you get what we call EAV, the entity-attribute-value schema. In this case, you still have customers and orders – the normal entities – and then custom attributes that you don’t add directly to the main entities’ tables, but store in a particular table called attribute values. So you will have an entities table which stores one row for each of the entities: one row for customers and one row for orders, in this case. Then attribute names, which points to the entity ID, with an attribute ID identifying the attribute we want to add to one entity; for example, shoe size for customers. And the attribute values table has the attribute ID, coming from attribute names, plus an ID from the referencing table – the customer ID, in this case – and the value, which would be the shoe size. And again, we need to use a generic data type. This is really bad on so many levels that it will get you the instant damnation badge.
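
A minimal sketch of the EAV tables just described, with hypothetical names and the inevitable generic value type:

CREATE TABLE dbo.Entities (
    entity_id   int     NOT NULL PRIMARY KEY,
    entity_name sysname NOT NULL  -- e.g. 'Customers', 'Orders'
);

CREATE TABLE dbo.AttributeNames (
    attribute_id   int     NOT NULL PRIMARY KEY,
    entity_id      int     NOT NULL REFERENCES dbo.Entities (entity_id),
    attribute_name sysname NOT NULL  -- e.g. 'ShoeSize' for Customers
);

CREATE TABLE dbo.AttributeValues (
    attribute_id int            NOT NULL REFERENCES dbo.AttributeNames (attribute_id),
    row_id       int            NOT NULL,  -- the ID in the referencing table, e.g. customer_id
    value        nvarchar(4000) NULL,      -- generic data type again
    PRIMARY KEY (attribute_id, row_id)
);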

Okay, so let’s see why it’s bad. Again, as we said for One True Lookup Table, you will have to use generic data types. Again, you cannot use foreign keys, so really bad. No check constraints because the values are stored in rows and not in columns, and if you have a multi-column check constraint, you cannot create one because you cannot reference anything that is not on the same row.

Every time you read a value from the EAV schema, you have to access the values table – one access per attribute. There are some – I wouldn’t call them advantages, but I see where the thing is coming from. There are some situations where altering the schema is really dangerous; think of replication, or highly distributed environments where you don’t have direct control over the database schema. In these cases, changing the schema is a bit dangerous, so you tend to use dynamic attributes in this way. Another thing: reporting on top of EAV is really hard, because you have to turn things that are in rows into columns to report on those values. The same goes for writing to the schema, because you have to write rows. One way to work around the reporting side is to transform what you have in rows into columns with a pivot or crosstab query.
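
A crosstab over the hypothetical EAV tables sketched above might look like this:

SELECT av.row_id AS customer_id,
       MAX(CASE WHEN an.attribute_name = 'ShoeSize' THEN av.value END) AS shoe_size,
       MAX(CASE WHEN an.attribute_name = 'EyeColor' THEN av.value END) AS eye_color
FROM dbo.AttributeValues AS av
JOIN dbo.AttributeNames  AS an ON an.attribute_id = av.attribute_id
GROUP BY av.row_id;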

For writes, you can have a view on top of that query, with an INSTEAD OF trigger, to transform what you have in columns back into rows; but it really is complicated, so don’t do that. Alternatives: you could use SPARSE columns, which are an optimized way of storing NULLable values. If you don’t know upfront what the meaning of a column will be, just assign a generic name like dynamic_column_1, dynamic_column_2, and assign the meaning from the application; this is way better than entity-attribute-value. You could use XML or JSON, but that’s really horrible; don’t do that, I hate it. Or you could use something outside of SQL Server – a key-value store or a document-oriented database – because the answer doesn’t have to be relational. Nonrelational databases handle dynamic schemas in a better way.

So moving to the next circle, we’ll find the shaky typers; those who fail to choose the appropriate data types for their schema objects; examples of this, storing nonnumeric data inside numeric columns. Example, VAT number, even if the name is number, is not a number. It’s a string of 11 numeric characters; you don’t square VAT numbers, you don’t multiply VAT numbers. So they are not numbers, they’re just strings of numeric characters.

One thing that will get you the instant damnation badge is storing data in its human-readable representation. Think of dates, for example. If you store a date as its human-readable representation, first of all you have to choose a date format, which is really hard because there are hundreds of date formats; it will use more space; and it will not work across different locales. So what you have to do here is choose the appropriate data type and let the database do its job. When working on new projects, don’t use deprecated data types; they have been deprecated for a reason, and there are better alternatives nowadays. They can still be used in old projects, but there’s no reason to use them for new development.

The use of large data types, just in case – let’s make all the character columns nvarchar(max), because why not? No: the size of the column is one of the constraints on the data’s correctness. Think again of the VAT number: it’s exactly 11 characters, so making it 12 characters doesn’t make sense; it’s plain wrong. And along the same lines, if you know that a piece of information is always going to be the same size, use a fixed size; there’s no use for a variable-length data type. A VAT number is exactly 11 characters, so anything else would be a mistake. Another thing that I see all the time is storing durations in date/datetime columns; no, durations are scalars, they’re plain numbers. Getting Unicode wrong – this is twofold. You could be storing Unicode data in non-Unicode columns, which will not work; or the other way around, wasting storage space by putting things that will never be Unicode into Unicode columns. For example, URLs will never be Unicode, so there’s no point in using Unicode columns for URLs.
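
A few of these choices side by side, sketched on a hypothetical table:

CREATE TABLE dbo.Suppliers (
    supplier_id int           NOT NULL PRIMARY KEY,
    vat_number  char(11)      NOT NULL
        CHECK (vat_number NOT LIKE '%[^0-9]%'),  -- always exactly 11 digits: fixed length
    founded_on  date          NOT NULL,          -- a real date type, not a varchar '31/02/2020'
    url         varchar(2048) NULL               -- URLs are never Unicode: varchar, not nvarchar
);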

If you have the same information in multiple tables, please use the same data type; otherwise you will not be able to compare the columns without conversions. I’ve seen this violated many times. And with foreign keys, mismatched data types make everything harder.

Moving to the next circle, we find the anarchic designers: those who fail to comply with the rules. Rules, in a database, are called constraints. Primary key – we have already seen that having a primary key is required, so when you can’t find one, you just stick an identity column on the table and make it the primary key. No: surrogate keys help when you need to shorten your natural key, because that’s handy, but the natural key has to be there and it has to be unique. So if your table doesn’t have a natural key, you have to ask yourself a couple of questions.

No foreign keys, because they’re awkward: they force you to write to tables in a specific order. But foreign keys enforce the correctness of the data, so they have to be there. No check constraints, because “the application will guarantee consistency” – no, it will not. If I had one cent for every time I heard this, I would be a millionaire. Another thing to consider is that the database is a hub for multiple applications. If you implement a constraint in the application, you will have to implement it in every application that accesses the database; put it in the database, and you have it in a single point.

We have already seen that the data type is important, but it is also the first constraint on the correctness of the data. Another really important thing is the choice of NULLability for your columns. If a piece of information is required, you don’t have to mark it as NULLable in your schema. On the other hand, you don’t have to mark every column as NOT NULL either; otherwise, people will start putting dummy data into it – things like empty strings, or dots, or zeros. And at that point, there will be no way to tell the difference between a zero that is actually a zero and a zero that means NULL.
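
Pulled together, the declarative rules discussed above might look like this sketch, on a hypothetical table:

CREATE TABLE dbo.Orders (
    order_id    int  NOT NULL PRIMARY KEY,
    customer_id int  NOT NULL
        CONSTRAINT FK_Orders_Customers REFERENCES dbo.Customers (customer_id),
    order_date  date NOT NULL,  -- required information: NOT NULL, no dummy defaults
    ship_date   date NULL,      -- genuinely optional: NULL, not '1900-01-01'
    CONSTRAINT CK_Orders_ShipAfterOrder CHECK (ship_date IS NULL OR ship_date >= order_date)
);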

Moving on, let’s go to the fifth circle where we find the inconsistent Baptists; those who fail to choose appropriate names for their database objects. And it looks like many of them come from Hungary, because we have the Hungarian Notation. Hungarian Notation is something that we had in the early days of computer programming where we had programming languages that didn’t have proper data types. So in order to identify the data type of a variable, you had to put a prefix on the name. So we had STR_something for strings and I_something for integers, and so on. Translated into the database, it becomes TBL_something for tables, and VW_something for views. It’s also called Volkswagen naming. And this is really bad because we have the metadata for telling the difference between the types of objects in a database. There’s no need to say that in the object names. And also, if at some point you decide to refactor your schema by breaking a table in two and you put a view to join the two tables and call it TBL_something to maintain the contract between application and database schema, at this point you have a view called TBL_something, which doesn’t make things any clearer.

Insanely short names – I have worked with AS/400 a couple of times, and object names there have only eight characters, so you get tables named things like 14ZEAROOF, which gives no clue about the meaning of the table. So please use longer names that describe the contents of the table. On the other hand, don’t get too creative, because you will have to type those names in your queries; so not too long, please. And one thing I’m sure doesn’t happen in the US, but happens all the time in Italy, is mixing languages: half of the schema with English object names and the other half with Italian names. It gets really messy; don’t do that, please.

Another thing that is really popular is the use of the sp_ prefix for stored procedures. It’s wrong for at least two reasons. First, it’s Hungarian Notation – “stored procedure_something”. But it’s also wrong because sp_ doesn’t mean stored procedure, it means special. Objects that start with the sp_ prefix live in the master database, so they are looked up in the master database first, and only then in the user database. So there’s a small performance hit every time you execute one of these objects – small, but if you call the stored procedure often enough, it can add up quite quickly.

If you hate yourself, you can use reserved words or illegal characters in your object names; but at that point, you will have to use square brackets every time you type one of those names, which is not ideal. Another thing that you can do, if you hate yourself, is fail to name your constraints explicitly. If you don’t give a name explicitly, SQL Server will do that for you and will use a random name. So if you distribute a script that creates a constraint without specifying the name and you execute it on multiple instances, each instance will get a different random name. Then, when you try to drop or alter that constraint with another script, you won’t know which name to put in the script.
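
The difference, sketched on the hypothetical Orders table above:

-- Anonymous: SQL Server generates something like CK__Orders__order_da__3B75D760,
-- different on every instance the script runs against
ALTER TABLE dbo.Orders ADD CHECK (order_date >= '2000-01-01');

-- Explicit: the same name everywhere, so a later script can drop or alter it
ALTER TABLE dbo.Orders
    ADD CONSTRAINT CK_Orders_OrderDateRange CHECK (order_date >= '2000-01-01');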

Well, the worst thing is to have no naming convention or multiple naming conventions, which is pretty much the same. So choose one and stick with it.

Moving to the next circle, we have the environment polluters. It’s a popular topic nowadays, but it’s not the environment you’re thinking of; it’s development, test and production environments I have in mind. What will give you instant damnation here is development in production, which is pretty obvious, or using the test environment for development. Again, this is wrong: the test environment is for testing, not for developing. Another incarnation of this sin is the use of a shared database for development. Each developer should have their own database on their own machine; this way, the work of one developer doesn’t interfere with everybody else’s work.

No source control – it used to be insanely hard to put the database schema under source control in the old days; now it’s simple, because you have tools from Microsoft, tools from Redgate, tools from Apex – and I’m sure I’m forgetting somebody. It’s that simple nowadays, so do it. Developing with sysadmin privileges is a really bad thing to do. If you develop with sysadmin privileges, nothing will fail: you have privileges to do anything, so you never take note of what is really needed by your application. So when you go to the DBA to deploy the application, the DBA will ask, “What permissions does this application need?” and you won’t know, because you didn’t take note. The correct way is to start with no privileges, and every time you try to do something and it fails, you grant the permission to one or more roles inside the database. Then, when you go to production and the DBA asks, “What permissions does this application need?”, you can say, “It’s whatever is in the roles, so just add the users to the roles and you’re done.”

Another thing that is really bad is developing against an edition or version different from the one you will have in production. Regarding editions, this got less problematic after 2016 SP1, when the development features were unified across editions; but it used to be a big problem in the old days.

Going to the next circle, we find the overly optimistic testers – those who fail to test properly. In fact, they don’t test all the code, which is pretty much the same as not testing. When you test, use meaningful data volumes, so that you test not only the correctness but also the performance of your code. Testing in production will get you instant damnation immediately, because the code you are testing could be wrong – you haven’t tested it yet – and it could alter your production data in evil ways. But it also interferes with production users. Imagine testing a very difficult procedure, debugging one line at a time, and then, “Oh, it’s lunchtime.” So you step out for lunch with a transaction open, and when you come back after one hour, nobody is going to be very happy with you.

Okay, another thing that is not really good is testing in a development environment, because as we said, in the development environment, you just have a handful of fake rows you use for development. So it’s not enough for testing performance. It’s useful, at most, for unit tests.

Moving to the next circle, we find the indolent developers, because, you know, developers are lazy. It’s all about being lazy in the correct way – there’s nothing wrong with being lazy, but some things are really dumb. For example, having no transactions, because managing transactions is hard. I was once at a customer – I was hired by this big Italian shoe company to fix a warehousing system, and it was a huge VB6 application with SQL Server as a backend. So on the first day of that gig, I opened up the VB6 project, I searched for “begin transaction”, I got zero occurrences, and I knew I was in for a lot of trouble.

No error handling – errors must be handled correctly, and if you’re doing that in T-SQL, don’t use @@ERROR; it’s a thing of the past. Nowadays we have TRY…CATCH. Using the wrong isolation levels – because yes, we have read committed, and that works for everybody… No, it doesn’t. Sometimes you will need to go higher, even up to serializable, and sometimes you will need to go lower, even to read uncommitted. But remember that NOLOCK is not a turbo button for your queries; it just means what it says: no locks honored and no locks taken, and that includes your own session. And when you read your data with no shared locks, what happens is that you could end up reading the same rows multiple times, or not at all. That is no consistency at all.
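
A minimal sketch of the modern pattern – a transaction with TRY…CATCH instead of @@ERROR – using the hypothetical Orders table from earlier:

DECLARE @order_id int = 42;  -- hypothetical input

BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Orders SET ship_date = SYSDATETIME() WHERE order_id = @order_id;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;  -- re-raise the original error to the caller
END CATCH;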

Another thing is SELECT *. If you don’t need columns from a table, don’t select them. When you use SELECT *, what happens is that, most probably, SQL Server will give up using nonclustered indexes and will scan the clustered index or the heap directly, because lookups are expensive. And also remember that the columns you don’t use in your application, but still select in your queries, travel across the network anyway, and you don’t want that to happen.

Another thing that lazy developers do, because using parameters in queries is hard: they concatenate values into SQL statements. This creates code vulnerable to SQL injection, which is one of the worst threats to security.
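
A sketch of the difference, assuming a hypothetical dbo.Users table:

-- Vulnerable: the user's input is concatenated straight into the SQL text
-- SET @sql = N'SELECT user_id FROM dbo.Users WHERE name = ''' + @userInput + N'''';

-- Safe: the value travels as a parameter, never as part of the statement
DECLARE @userInput nvarchar(100) = N'O''Brien';
EXEC sp_executesql
    N'SELECT user_id FROM dbo.Users WHERE name = @name',
    N'@name nvarchar(100)',
    @name = @userInput;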

Another thing that I see all the time is not using an abstraction layer – querying the tables directly. Instead, we should use views, functions and stored procedures as a contract between the application and the database, so that when we have to change something inside the database, we can still maintain the contract by simply altering the views and functions.

Moving to the next circle, we find the stingy buyers – those who don’t size hardware for SQL Server properly. What we have here is the use of inadequate or imbalanced hardware. You might have noticed that SQL Server licenses are expensive – not as expensive as Oracle licenses, but still not on the cheap side. So having expensive licenses running on inadequate hardware is one of the worst things you could do to yourself. The thing that will get you instant damnation is the reuse of decommissioned servers for new installations. Your sysadmin, your Windows admin, will try in every way to make you reuse that old server that has been serving you so well for the last ten years; no. Never accept that, because it will have slow CPUs, and SQL Server licenses cost the same on slow CPUs. It will also address less RAM, and RAM is one of the cheapest ways to improve SQL Server performance. Taking this to storage: your storage admin will try to give you big storage because it’s cheap, but you don’t want big storage – you want fast storage for SQL Server, for the same reason. So choose the appropriate RAID levels; don’t create volumes on RAID 5 arrays, because it’s dead slow on writes.

Moving to the next circle, we find the next-next-finish installers – those who install using all the defaults. SQL Server defaults are really, really bad. We have things along the lines of data files on the system drive; you don’t want that to happen. You have MAXDOP 0, which means use all the CPUs in parallel queries, or all available threads; nowadays, with multi-core servers, this can get out of hand quite quickly. And max server memory set to infinity, which means use all the memory in the server and choke to death any other application, including Windows. Also, don’t install anything on a SQL Server box that you won’t use, because you will have to patch it and you will have a bigger surface area. Only install the components you will use.
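
A sketch of fixing two of those defaults after installation; the right values depend entirely on your hardware, so the numbers here are only placeholders:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;   -- not the default 0 = all cores
EXEC sp_configure 'max server memory (MB)', 57344;  -- leave headroom for the OS
RECONFIGURE;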

Taking this to the extreme: don’t install any service on the machine that you won’t use; use a separate machine for each service. There’s no point in using the same server for Analysis Services and the relational database. They will compete for the same resources, and they have resource usage patterns that really don’t get along; so separate those services as much as possible.

There’s also some easy wins on I/O that you really shouldn’t be giving up. They’re so easy that it’s a sin to do that. Things like aligning petitions – while this used to be a problem in the old days up to Windows 2003, now it’s something that really is worth a quick check, but it’s a non issue nowadays. But one thing that is still something to pay attention to is the default allocation unit in partitions in Windows. When you create a volume in Windows, the default allocation unit is for KB and SQL Server works best with 64KB allocation units.

Going to the next circle, we find the careless caretakers – those who fail to maintain their instances properly. This is a pyramid coming straight from Brent’s blog from some years ago, and it describes what a database needs. At the bottom of the pyramid we have the foundations, the most important things a database needs: backups and security. On top of those, you have capacity, performance, and future-proofing. Everybody obsesses about performance, but what really matters are the foundations.

So we will try to see the worst practices at each of these levels. Talking about backups, the worst thing you could do is have no backups at all: when disaster strikes, you have nothing to recover from. But it’s not only a threat to recoverability; it can become a threat to availability as well. When you have a database in full recovery model and no log backups, the log file will grow until the disk fills up, and the database will stop working; it’s a time bomb. Also, keep in mind that you have to decide your RPO and RTO figures with your stakeholders; it’s not your decision. RPO means recovery point objective: how much activity on a database you can afford to lose when disaster strikes. RTO means recovery time objective: how long it takes to put the database back in working order.

These two numbers are decided by the stakeholders, so don’t use a blanket policy for the whole company – fulls every night and logs every 15 minutes. Yes, that’s a good starting point, but you cannot use it as a blanket policy without talking to your stakeholders. What will get you instant damnation is not testing your restores. There is no point in having boxes full of tapes with backups on them if you can’t restore from those tapes; so test your restores. Along the same lines, not checking consistency on the database will get you instant damnation as well. If the database is corrupt, the backup will be corrupt as well; so test it. If you have a corrupt database, DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS does exactly what it says on the tin: it will repair the database and it will allow data loss – and you are paid to avoid data loss. The way to recover from failures is to restore the database, not to lose data. So remember: the responsibility of the DBA is to perform restores, not backups; nobody gives a damn about backups. Backups are your tool, and nobody cares about them but you.
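
A minimal restore test might look like this sketch; the paths, database name and logical file names are all hypothetical:

RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyDB.bak';

RESTORE DATABASE MyDB_Test
FROM DISK = N'D:\Backups\MyDB.bak'
WITH MOVE N'MyDB'     TO N'D:\Data\MyDB_Test.mdf',
     MOVE N'MyDB_log' TO N'D:\Log\MyDB_Test.ldf';

DBCC CHECKDB (MyDB_Test) WITH NO_INFOMSGS;  -- a backup of a corrupt database is just corruption on tape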

Okay, so regarding security, what are the worst practices? Too many sysadmins. I was once at a customer and they had DOMAIN\Authenticated Users in the sysadmin role. That means everybody in the company was a sysadmin for that instance, which is less than ideal. Everybody authenticating as SA – this will get you instant damnation. Everybody knows the password for SA, so this is bad on so many levels. SA is a well-known login for SQL Server, so it’s an attack vector that attackers will try quite quickly; disable that login and forget you ever had an SA login on your instance.

Using SQL authentication – it used to be less secure than Windows authentication; the password traveled in clear text up to SQL 2000. Now it travels securely across the network, but using SQL authentication is still a bad thing because of the passwords. You can bypass the password policies with SQL authentication and use weak passwords, such as 123, or password, or the same as the username. So really, try to avoid SQL authentication as much as possible. And also remember that people are really bad with passwords; they usually write them down on sticky notes and stick them to their keyboards; so, as few passwords as possible.

Also, if you are storing sensitive data, set up some kind of auditing; then, at least, when somebody sneaks into your instance and steals your data, you will know who did it. Regarding capacity management, the worst thing you can do is not checking your disk space: when the disk space runs out, the database stops working; so check your disk space. Keep in mind the full recovery model with no log backups here, too. Along the same lines, we also have relying 100% on auto growth. Auto growth is good for the unexpected growth of a database, but you should be planning the growth of your database files by looking at the growth trends, to avoid fragmentation.

What will get you instant damnation here is the use of auto shrink and auto close. Auto close will just give you funny errors, because the database gets closed when not in use. And auto shrink will fragment your database really badly, because it tries to shrink the database whenever possible. Shrinking a database file means that the pages at the end of the file are moved to the first available place in the file, which fragments the indexes really, really badly.

Not pre-sizing your files – capacity management is especially important for tempdb, because it’s a shared database used by all the databases in your instance; so it’s particularly important to use correct policies for tempdb. The recommendation is to use as many data files as you have cores in your instance, up to eight, and size them appropriately: when the files have the same size, you avoid the allocation latching penalty you can hit in tempdb. Having files of different sizes will give you latching and striping penalties.
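
For example, on a hypothetical four-core instance with a dedicated T: drive, equally sized tempdb files might be set up like this:

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = N'T:\tempdb3.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = N'T:\tempdb4.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);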

Regarding maintenance – not maintaining indexes and statistics is a really bad thing. You need to maintain your indexes, but you don’t need to obsess over it. People are often obsessed with having their indexes contiguous; that’s not the goal of index maintenance. It’s internal fragmentation that really hurts you: the empty space inside your data pages. Nowadays, with the kind of storage we have – SANs and flash – it’s not that important to have contiguous indexes. It really doesn’t matter, and your reads are never sequential anyway. The important thing is to avoid reading too many pages, which means avoiding too much empty space in your pages; that’s the real reason you need to maintain your indexes.

One of the worst things you could do is to use catch-all maintenance plans. You need to have some kind of maintenance, so let’s use maintenance plans, because it’s easy. So what you do is rebuild all your indexes, because why not? No: you need to rebuild only the indexes that need to be rebuilt, those with massive fragmentation. And then reorganize all the indexes, because why not? No: you just need to reorganize the indexes that need it, those with moderate fragmentation. And then, after that, update statistics, because why not? This gives you sampled statistics, even for the statistics you just got with a full scan from the rebuild task; so it actually makes your statistics worse, not better. This has got much better in the latest versions of SQL Server, so you have more control over what the maintenance plans are doing now.
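
A sketch of checking fragmentation first, so you only touch the indexes that need it; the thresholds here are placeholders:

SELECT OBJECT_NAME(ips.object_id)         AS table_name,
       i.name                             AS index_name,
       ips.avg_fragmentation_in_percent,  -- external fragmentation
       ips.avg_page_space_used_in_percent -- internal fragmentation: the one that really hurts
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count > 1000;  -- small indexes are rarely worth maintaining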

Okay, going to the next circle, we find the worst offenders in SQL Server hell: the performance killers. Talking about performance – if we think of performance tuning, take all the possible measures to tune performance, and again put them on a pyramid, at the bottom we have the things that will likely have a huge impact on performance but are really difficult to implement: schema changes. On top of that we have query optimization and indexing. And at the top of the pyramid, we have locking and server tuning. These last two are really easy to operate on, but don’t expect them to do miracles. So your best bet for improving performance is indexing and query optimization. Let’s try to see the worst practices at these levels; we have already talked about schema design, so we’ll focus on the rest. Regarding your code, the worst thing you could do is take an RBAR approach to your coding. RBAR means “row by agonizing row”: everything you do, you do one row at a time. Think of cursors, then WHILE loops, or application-side cursors. Cycling through your rows from C# doesn’t make it any better; just because it’s C#, it’s still RBAR.
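
A sketch of the same hypothetical update done row by agonizing row and done set-based:

-- RBAR: a cursor touching one row at a time
DECLARE @id int;
DECLARE c CURSOR LOCAL STATIC FOR
    SELECT order_id FROM dbo.Orders WHERE ship_date IS NULL;
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET ship_date = order_date WHERE order_id = @id;
    FETCH NEXT FROM c INTO @id;
END
CLOSE c;
DEALLOCATE c;

-- Set-based: one statement, one pass
UPDATE dbo.Orders SET ship_date = order_date WHERE ship_date IS NULL;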

Scalar and multi-statement functions are hidden RBAR. These functions can be applied for each row in the input set, so it’s RBAR again. Another thing that I see all the time that is really, really bad is views on views on views, and so on. It might look like a good idea, because it allows you to reuse your code, but it’s not, because you end up losing control. What does that mean? You could have unneeded accesses to tables you don’t really need in your queries. You might be accessing a table multiple times, even if you don’t need to. You could have unnecessary joins. So it’s a really bad thing – if you think of that movie with the dream inside a dream inside a dream, it didn’t turn out very well for the characters; the same applies to views on views on views. It’s not really a good idea.

Then we have the one query to rule them all. You have to extract some data, and you write a huge query that extracts everything in a single go. Keep in mind that the optimizer is good, but it’s not perfect. Feeding it a 2MB SQL query will make it think for a long time: it will have to parse the query, which takes a while, and then try to come up with a plan. At some point, it will simply give up and hand you the first plan it could come up with, which will likely not be very good. The right approach here is divide and conquer, which will give you much better performance in the long run.

Another thing that I see all the time is DISTINCT in all the queries, because who needs stinkin’ duplicates – so DISTINCT everywhere. Well, DISTINCT comes with a cost. Often, what happens is that you have duplicates because you failed to join properly, or you joined with a table just to filter rows – you didn’t really need the join, you could have used something like WHERE EXISTS – and you end up multiplying your cardinality while traversing that table. So DISTINCT is a Refugium Peccatorum, a refuge of sinners – and it comes with a price: it usually is implemented with a sort, which is one of the most expensive operations you can have in a plan. So if you have duplicates, check your joins.
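
A sketch of the pattern on hypothetical tables – a join used only for filtering multiplies rows, where EXISTS does not:

-- DISTINCT hiding the duplicates the join created
SELECT DISTINCT c.customer_id, c.customer_name
FROM dbo.Customers AS c
JOIN dbo.Orders    AS o ON o.customer_id = c.customer_id;

-- Filtering with EXISTS: no multiplied cardinality, no sort to remove duplicates
SELECT c.customer_id, c.customer_name
FROM dbo.Customers AS c
WHERE EXISTS (SELECT 1 FROM dbo.Orders AS o WHERE o.customer_id = c.customer_id);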

Query hints all over the place – because once upon a time a query hint saved your bacon, so let’s put hints everywhere. Well, it doesn’t really work that way. When you use hints, you tell the optimizer to give up doing its work – and the optimizer usually knows better, so let it do its work. Index hints are especially bad, because if you drop or change the index, your query will stop working. Regarding indexing worst practices – the worst of the worst is accepting all the suggestions from the tuning advisor. I have to admit that the tuning advisor has got better in the last few versions, but it used to be really bad. It came up with very aggressive suggestions – duplicate indexes, including all the columns from the table – so it was really, really bad. And whenever I see a DTA_ index, I know it comes from the tuning advisor, so I get a bit suspicious, and it’s probably the first index I will have to drop.

Duplicate indexes, often coming from the tuning advisor: there are scripts to identify them, so go ahead and drop them. One index for each column – indexes don’t come for free. There are some shops with a blanket policy of creating one index for each column. No: indexes are copies of the data, so they don’t come for free; you have to write them, and they use space on disk. So really, don’t do that. Suboptimal clustered index – when you choose a clustered index, choose wisely. A good clustered index is unique, small, unchanging, and ever-increasing (or decreasing). There are some situations where you really don’t want this, because you could create a hotspot, but it’s good for 90% of the cases.

Okay, regarding server tuning, what are the worst practices here? Throwing hardware at the problem. This is the approach that managers often have: “The system is slow, so what’s the quickest way to solve it? Let’s buy a bigger box.” But a twice-as-fast machine will, at best, make RBAR code run twice as fast. It will never give you the improvement that changing your code to proper set-based code will give you: with set-based code, you might make queries 500 or 5,000 times faster, which you will never get from a bigger box. Using advanced options without testing – in SSMS you have two handy, easy-to-reach options that you should never use: NT fibers (lightweight pooling) and priority boost. Never check those checkboxes in your server properties, because priority boost will probably choke your SQL Server or other processes, and NT fibers will make many features of SQL Server stop working; for example, Database Mail doesn’t work with lightweight pooling.

So these are some of the worst practices; there are many others. I have blog posts for some of the circles on my blog at SpaghettiDBA.com; go ahead and have a look at the category “SQL Server Infernals.” Then, before we go, there’s a tool for you. It’s called Best Practices Analyzer, and it highlights potential problems in your databases and offers recommendations to fix the things that are off. So go and run this tool against your databases.

And before we go, there is the bingo. If you checked off all the worst practices you have, you can go ahead and tweet your score with the GroupBy hashtag, or go to Slack and post your score – and you win absolutely nothing, but it might be fun. So that’s it; those are a couple of resources for you. If you want to shoot me an email and ask questions, go ahead and do it; I will try to reply to your emails.

Brent Ozar: I’m surprised you can keep it to just one hour, given how many bad things people do. Do you get tempted to grow this in longer and longer with the more things that you see out there?

Gianluca Sartori: I wish I could show you all the hidden slides I have in this deck, but it really doesn’t fit into 60 minutes.

Brent Ozar: Very nice. Well thank you, sir…

Andrea Allred: I do have a question for him, though. You mentioned auditing; what would you recommend for auditing? Let me give you more context. You mentioned that if you have sensitive data, you should be auditing who is accessing it. How would you audit that? How would you track that?

Gianluca Sartori: Well, you have audit in SQL Server Enterprise Edition. It’s a bit expensive, but if you have sensitive data, the lawyer might get even more expensive.

Andrea Allred: Awesome, thank you.

James Anderson: Gianluca, did you consider mentioning anything about ORMs in this at all?

Gianluca Sartori: I did mention that when I mentioned SELECT *, which is what ORMs do. So it’s implicitly a worst practice. Honestly, some ORMs are better than others, and they can do projections, but I really don’t like ORMs. I always feel that querying the database directly is so easy that there’s no point in using them. It can be handy to translate from the result set to objects, so things like micro-ORMs, like Dapper, are really good; but things that build queries for you, no thanks.

James Anderson: Yeah, it’s a bit scary isn’t it? I always get told by developers that one of the things they really like about ORMs is the type checking; when they compile their code. But the performance is quite a cost for that, I’d say.

Gianluca Sartori: Well, regarding type checking, you have strings treated as Unicode all the time, which can get really tricky and stop your queries from using indexes. So it’s a double-edged sword.

Andrea Allred: I liked how you talked about sizing your columns properly because I’ve always had the mental note in my brain, you can always make them bigger but it’s a lot harder to make them smaller once they’re big. And so, I thought that was fantastic.

Brent Ozar: Very cool – we do have questions coming in. Let’s see here. Danesh says, “I’m writing CLR code, is that a worst practice?”

Gianluca Sartori: I don’t think so. There’s a place for CLR. There are some things that really perform well when coupled strictly with a database, and CLR can be really good for that. Just don’t try to do what you would do with your client application in CLR, because that’s not what it was designed for.

Brent Ozar: That’s true. A lot of people have requested that you say the word focus again, because they really like hearing you say that. People love your accent, yes.

Gianluca Sartori: I never imagined hearing that, because – did I mention I’m Italian?

Brent Ozar: No, well you said you work for Scuderia Ferrari, which is amazing enough as it is. I don’t think enough people understand what that – so to explain to everyone, when you say Scuderia Ferrari, what exactly does that mean to American people who aren’t smart enough to follow Formula 1?

Gianluca Sartori: It’s the team name; the Formula 1 team.

Brent Ozar: Yeah, not just the people who sell gorgeous cars, the people who drive really, really fast cars.

Andrea Allred: It’s like a huge wow.

Gianluca Sartori: As we said last time I had this session, I don’t see the car very often. I see SQL Servers. I could also work for a sock factory and it would be the same. [crosstalk]

Brent Ozar: No one can see the car very often because it goes by so fast, that’s part of the problem.

Gianluca Sartori: Great point.

Erik Darling: It’s a beautiful blur.

Brent Ozar: Larry asks, “If the query is IF EXISTS (SELECT * FROM …), is that appropriate? Is SELECT * acceptable there?”

Gianluca Sartori: It’s completely acceptable because everything you put in the select list in a WHERE EXISTS query is ignored. There was a nice blog post a couple of days ago from Andy Mellon. It was really good. he demonstrated how – or was it Kenneth Fisher? I don’t remember. Well somebody blogged about it this week…

Erik Darling: Easy to get them confused.

Gianluca Sartori: Well somebody blogged about it; sorry if I got the wrong name. Anything you put in the select list of a WHERE EXISTS query is totally ignored.

Brent Ozar: Jeremiah Peschka used to use SELECT one divided by zero, so it was always funny to read his code. [crosstalk] Several people said – I was really surprised by this; several people, Dory is the latest one, say, “I second the request for more normalization advice.” They would love to see a presentation about what first normal form, second normal form, third normal form are. Folks, if you’re watching this: any time you go, “I would love to see a presentation on…”, you should think about writing it. You should totally think about how you would build an abstract for that. People absolutely love getting up and presenting. Once you get started with it, it’s absolutely amazing.

Gianluca Sartori: I will try to submit one for the next GroupBy.

Brent Ozar: There we go, submissions are open now, so yeah. Nice job, well thank you very much, Gianluca, nice job, very excellent there.

Gianluca Sartori: Thanks to you.

Gianluca Sartori is a SQL Server MVP, independent consultant and performance tuning specialist. He has been working in the software industry since 1999 and has been working with SQL Server ever since. He also works as a SQL Server trainer, and in his spare time he writes technical articles and participates in the SQL Server forums. Gianluca enjoys presenting SQL Server topics at conferences around Europe. He is currently working as lead DBA at a famous Formula 1 team.


9 Comments

Gianluca provides a fast-paced overview of at least 100 “infernals”, ranging from naming, to hardware, to RBAR, to security, and any other aspect of SQL you can think of.
Well worth an hour to listen, even if you experience some audio issues.

Adam Machanic
January 13, 2017 3:18 pm

Really fun session, Gianluca! You had me nodding in agreement the entire time. I just wish the internet connection would have cooperated a bit more.


I thought the content was worthwhile and important, but the demo provided regarding normal forms seemed to me like beating a dead horse – the point was already made. And I thought the presentation would have been so much more powerful if at least one quick example was listed on the slides for each infernal listing. All of us have seen these bad things (or done them) but I was thinking that for people who really needed to know these things, they would need a quick reason why rather than just a statement saying “don’t do this”. I look forward to more powerful presentations from this speaker in future.


    Thank you Anne for your feedback. I will try to work in that direction for future presentations, but I feel like there is no room for one demo for each worst practice.


      I did not mean to suggest any demos at all. Just one written sentence on the slide, or an example mentioned by you when presenting, to give the user an idea why something is good or bad beyond you just saying that it is. I don’t think this talk should have any demos.

Bradley Stiritz
January 13, 2017 5:21 pm

Extremely helpful session! It was a shame that Gianluca had so much difficulty with his internet connection.

Gianluca, thank you very much for providing the code samples. I will be studying these closely, in preparation for an upcoming EAV design project.


Gianluca – your session is absolutely stunning. Thank you very much for putting together great things in a very presentable and clear presentation.




Nice presentation. It’s good to see that Hell finally got upgraded after Dante’s visit. Nine Bolgias were definitely not enough.

I just wonder how I can be in more than one Circle at the same time?

