SQL Server and Continuous Integration

Target Audience:

This session is for SQL Server developers.

Abstract:

How do you know your database change won’t affect something you haven’t thought of?

Database objects can have many dependent objects in the database and in the application. Keeping track of these dependencies can be difficult, especially if the database is used by multiple applications or services.

The first step is to have a project for the database and get it into source control.

Source control is the single source of truth that all deployments should be kept in sync with. Source controlling the database also acts as the foundation for automation of:

  • Builds
  • Tests
  • Coverage Reports

This automation is key to the Continuous Integration methodology. After every commit, builds and tests will run in the background and only alert if there is a problem. Builds test the deployment of the change and tests check that everything affected still works. Coverage reports indicate any gaps in the testing suite.

The tools I use to make this possible with SQL Server are:

  • Redgate’s ReadyRoll plugin for Visual Studio to build migration scripts
  • GitLab for source control and project management
  • tSQLt for unit tests
  • SQL Cover for unit test coverage reports

By the end of this session I hope you will see how a CI approach to database development can remove the unknowns from deploying database changes.

Why I Want to Present This Session:

I’ve had database changes cause failures in the past and wondered how such a difficult problem could be solved. I’d like to share the tools and methods that got me through.

Additional Resources:

http://thedatabaseavenger.com/2016/07/sql-server-and-continuous-integration/

Session Transcript:

Brent Ozar: Alright, so in this session at GroupBy, James Anderson will be talking about SQL Server and Continuous Integration. So take it away James.

James Anderson: Thanks, Brent. So, we’re talking about Continuous Integration with a SQL Server slant today. If you do have any questions that are maybe going to take longer than we can answer during the session today, do get in contact with me via any of the details that are on the screen at the moment, or if you just want to know more, or how I did certain things, just get in touch. I’ve got a link there to a blog post, which is the first in a series of blog posts that I’ve done on this topic. You can also find my presentations if you go to TheDatabaseAvenger.com/presentations. They’re all up there so you can get the links and you can see all of the screenshots and whatever else.

So let’s get started. So what is Continuous Integration? Well, my brief description of Continuous Integration is to be continually integrating. Fairly obvious, but what I mean by this is when we’re developing code, quite often we have teams of developers. So when we’ve got all of these different developers working on the same code, they could be making changes that are affecting my workflow, I’m not expecting that to change, but someone’s changed it underneath me. So a common approach to getting around this problem is to branch our code. So in effect, that’s making a copy of the code, pasting it somewhere else, and we can work over there in isolation. And that’s fine, and once we finish doing our work, we then merge those changes back in.

The problem comes with the merge process. If we’ve been out here for a week developing in isolation, we’ve got our new code, our new feature, it may be that there’s a function that our new code relies on, and that’s been changed in the main branch. So we’re working okay in our branch, we merge back in, everything seems okay, but when we run our tests, we find that our code’s failing because the function that we’re depending on is no longer operating in the way we expected.

So that could be a problem. Now, a way to alleviate that problem is to merge our code more often. We’re not going to eliminate that problem, that thing is still going to happen. Whenever we merge code, there’s potential for there to be conflicts like that, but if I’m over here for a week and I’ve based this week’s worth of development on the false assumption of the way that function works, when I come to merge my code and it fails, I then have to unpick that week’s worth of work. I need to undo all of that, I need to figure out what I need to change and make sure I don’t alter my functionality, and then merge it back in and hope that everything’s okay. But if I was merging my code in regularly, I would catch that error much earlier and I wouldn’t have to build this whole stack of code on top of that assumption. So that’s kind of the idea in a nutshell. We want to catch those errors early, and it’s all about getting ideas and getting them into production quickly. So if we’re all doing that as a team, we’re no longer having these painful big merges and dreading the point where we have to merge our code. We’re actually a much more efficient team, we’re humming, we can get these ideas in, requirements for new features, and actually get them into production in a much quicker, more efficient manner.

So I would say at this point, big text to make a point, that it isn’t just a suite of tools. Now, Alex Yates did a talk last week and he really goes over the concepts of Continuous Integration, the working practices, the way we should be doing things. If we as developers aren’t integrating our code at least twice a day, then we’re not doing Continuous Integration. We need to have that amount of regular integration, and the problem is that every time we integrate, we need to merge our code, we need to run all of our tests, and we need to make sure everything’s okay. Now, if we’ve got any manual processes as part of that, then that’s going to slow down our cycle times. So what we really want to do is harness a load of tools to give us a level of automation that allows us to have very short cycle times. So we make a change, we merge it in, and then automatically our tools notice that, run all of our tests and only give us feedback if there’s a problem. That way, we can keep quickly merging our code and keep those efficient working practices.

Now with that said, I will be focusing on the tools today. It is quite easy very early on in Continuous Integration to think it is all just about the tools, but do go back and watch Alex’s talk, Getting CI Right for SQL Server. He did that last week and he’ll really explain more of the theory behind the whole process.

So you don’t need the Ferrari of build servers. A build server is one of the biggest tools that we’re going to use in the CI tool chain. That could be quite expensive. Not going to name any names, but there are some expensive build servers out there, there are also some free build servers out there, and those are the kind that I’m going to be looking at today. So I’m going to briefly cover one community edition free build server that I’m using, because I used to have hair like this guy in this car at one point, so I have an affinity with doing things on a budget there.

So this is a database conference, so I’m going to talk about Continuous Integration, or the tooling for Continuous Integration, with SQL Server. But deploying changes to databases is difficult. We briefly mentioned this in the talk before my session: applying new code changes to production is a fairly simple process, we just overwrite whatever’s there with our latest version of the code. It’s just a text file, kind of. But with databases, we’ve got a thing called state that makes it difficult. We’ve got a database, we write some code to change that database, but we’ve got to be very careful we don’t disrupt the data. The classic example is we’ve got a column called Name, and that has our first name and surname in it, and we decide actually we want to split that out into two columns. So we write code that creates a new column, renames our existing column to first name, calls the new column last name, and then we need some code that’s going to take the data and split it into those two columns. So maybe it says, well, take all the data up until the first space, put that in first name, and then put everything else after that into last name. But then you get people with middle names, double-barrelled names, two spaces in between, and the whole thing breaks down.
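As a rough sketch of the kind of migration being described here – run through PowerShell’s Invoke-Sqlcmd from the SqlServer module; the Customer table, column sizes and connection details are assumptions, and the naive split is deliberately simplistic to show where it breaks down:

    $sql = 'localhost'
    $db  = 'Sandbox'

    # 1. Add the new column and rename the old one.
    Invoke-Sqlcmd -ServerInstance $sql -Database $db -Query "ALTER TABLE dbo.Customer ADD LastName NVARCHAR(100) NULL;"
    Invoke-Sqlcmd -ServerInstance $sql -Database $db -Query "EXEC sp_rename 'dbo.Customer.Name', 'FirstName', 'COLUMN';"

    # 2. Naive split on the first space: middle names, double-barrelled names and
    #    double spaces all break this, which is exactly the difficulty described above.
    Invoke-Sqlcmd -ServerInstance $sql -Database $db -Query "
        UPDATE dbo.Customer
        SET    LastName  = LTRIM(SUBSTRING(FirstName, CHARINDEX(' ', FirstName) + 1, LEN(FirstName))),
               FirstName = LEFT(FirstName, CHARINDEX(' ', FirstName) - 1)
        WHERE  CHARINDEX(' ', FirstName) > 0;"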

So these are the difficulties – just some of the difficulties, there are many more. We can easily build up dependencies in our databases, so we’ve got to make sure we execute scripts in the correct order. Those are some of the difficulties, and if we’re going to automate this whole process so that we can deploy and test every time we integrate our code, and do that all automatically, we need to come up with a way to manage these difficult deployments.

So typically, there are two methods of doing this: state-based and migrations-based. The state-based approach is the kind of magical approach where we use a schema compare tool, we point it to our live production system and then we point it to our development Sandbox system where we’ve got our new changes, and we ask it to figure out what the differences are and make production look like our development one, and it will generate a script and go off and do that. A bit scary, because we may not see what that script is, or it may be a big, long, ugly script that’s difficult to read, and are we sure it’s going to be able to deal with the data exactly how we want it to? Is it going to be intelligent enough to do that? So sometimes we may have to step in and write a little migration script, augment the script that it generates, in order to handle the data correctly.

Now, that can be a little bit concerning, but it’s still possible to do, and I don’t want to start any flame wars about which is the best way to manage this. There are a lot of strong opinions on both sides, but I feel more comfortable with the migrations approach. This approach is where, as we’re developing our changes to our database, we’re generating these small, granular migration scripts, and our changes build up so that in our solution we have all these scripts listed in a specific order. A basic example would be if we created a table and then we created a stored procedure that read from that table – there’s a dependency there, so we’re going to want to create that table first. Now, state-based would probably figure that out quite easily, but in our migration tool we can really manage these difficulties, and we can also code in the logic to look after the data as well. Then when it comes to deployment, we know exactly the code that’s going to be deployed. It isn’t some magical script that’s been generated, so we can test not only our code, but we can test our deployment as well. That’s my kind of view on it, but the state-based approach is very useful. It’s sometimes useful for big teams where they’re all merging and branching and stuff like that – migration scripts can be a little bit more difficult there sometimes.

But I’m going to move on to a tool that I’ve been using for a while now. This is a Redgate tool called ReadyRoll, and it’s kind of a hybrid between the two methods. It uses the migration method for the difficult stuff, which is tables – they’re the only objects that really store state. Everything else – stored procedures, views, functions, which are referred to as programmable objects – they’re just scripts really, so we can just apply the latest version of those like we would with code. That way we don’t pollute our migration scripts with big, long stored procedure code when we’ve only changed one line in a hundred-line stored procedure. That won’t be in a migration script; the latest version will just be stored to one side, and I’ll demo that in a second.

So I’m going to use screenshots today because I think it’s going to be a little bit difficult to see Visual Studio up close on a webinar like this, but if I go through – here I’m opening a new project, so this is in Visual Studio. Now, this may put some DBAs off straight away – we’re Management Studio guys, that’s where we do all of our work, all of our development – and ReadyRoll is a plugin for Visual Studio, but we can still do all of our development in Management Studio. We just come back to Visual Studio to sync the changes, and I’ll show you what that means in a second. If Visual Studio really is something that’s going to put you off, there are other tools. There’s SQL Source Control, which is a plugin for Management Studio and kind of does the same thing – it’s a kind of hybrid too – but it just doesn’t handle migration scripts anywhere near as well as ReadyRoll does, in my opinion anyway.

So let’s move on. We’re creating a new project, and we can see – I’ve ringed it here – that that’s also going to create a Git source repository. So we’re straight into Source Control from the beginning. Once we’ve created that empty project, we can see that it’s asking us to deploy our project. I’ve got a local SQL Server instance, and I’m going to deploy my empty project to that instance, so I test the connection, it connects, then ReadyRoll tells me it wants to connect to that database that I’ve just created. So there’s this loop you have to complete to connect the project to the database in the local instance, and once we’ve done that it starts importing the objects. There aren’t really any objects in there – it’s just the defaults in an empty database, a couple of database users and roles, whatever.

So at this point, I’ve got my Sandbox development database, and the only thing in there at the moment is a _MigrationLog table. Now, this is a table that ReadyRoll uses to track which migration scripts it has already run against this particular database. I’ll explain some more about that in a second. We’ve also got this second database that it’s created, which has the same name with my username and “shadow” appended to it, and again, I’ll explain exactly what that does in a second.

So first things first, once we’ve got our ReadyRoll project set up, before we do any development, you really want to get these settings right, because if you start making changes and creating migration scripts and you haven’t got these settings how you want them, it’s quite a pain to figure out how to backtrack that. So when we start, we want to select the version of SQL Server we want to support, and we want to select the oldest version of SQL Server that we’re going to support. I’m going to support everything from 2012 up, so I’m going to select that, and for output types, I’m going to select SQLCMD Package. This just creates a big SQLCMD script, which can deploy my project entirely – my whole database from scratch to an instance – or it can actually upgrade an existing version of my database. I’ll go through a bit more of that in a second. Now, this section down here, the programmable objects section, this is where the hybrid nature of ReadyRoll comes in. By default, the setting is Import into migration scripts, and that means it’s going to import any changes to programmable objects into migration scripts. Now, as these are just code, I want to treat them in an easy way. I don’t want to pollute my migration scripts, I want to have them very granular and very easy to see what they do. So when we set it to this setting, Import into separate script files, all of our programmable objects have their own file, which is just a create script, and that’s good for a couple of reasons. One, as I said, the migration scripts stay nice and short, but two, when we have a separate file for each programmable object, that’s really nice from a Source Control point of view. As we make changes, that create script will just get updated, and then we can monitor how that object changes through the life cycle of the database in Source Control, so it’s really useful for that.

I also have a link at the bottom there to a blog post that I’ve done, which goes into much more detail about getting all of these settings right for ReadyRoll before we start developing changes. We also have an offline schema model, so this will take all of my table objects and script them out, and that’s great for the same reason I gave for the programmable objects: I’ve got one file for each object, and that allows me to track the changes to those during the life cycle within Source Control, so that’s nice. The last setting there is semantic versioning. The classic problem with migration tools is that as you develop more and more changes and your project gets more and more mature, you end up with this ever-growing list of migration scripts. It just gets longer and longer and it gets a little bit annoying when you’re developing. So we can apply semantic versioning to that, which basically means we group those scripts into folders named after the version. You do still get the problem of an ever-growing list of folders, but it’s a bit better, because you can collapse them all down and just show the ones you’re currently working on.

So I’m going to create a customer table. I’ve got my ReadyRoll project set up, it’s empty, there’s nothing in it. I’m going to create a customer table, pretty straightforward, I’m going to create a config table with some settings for my application, and I’m going to create a stored procedure that reads all of those settings and returns them. So if I look at my database, we can see I’ve got my table objects and I’ve got my stored procedure, and at this point, I’ve done my development, so now I’m going to go to ReadyRoll and hit refresh. That’s then going to compare my Sandbox database to my Shadow database. I pointed out the Shadow database earlier – I think on this next slide we’ll see it – here’s the Shadow database, and it’s empty at the moment. It’s got that migration log table, but that’s just something that ReadyRoll needs. What happens at this comparison point is that ReadyRoll is using schema compare under the hood to compare my Sandbox database to my Shadow database, and it’s the result of that comparison that ReadyRoll then surfaces on its main screen to show me it’s figured out what’s changed. So it can see that I’ve created a couple of tables and I’ve created a procedure.

At this point, I’ll click import and generate script, and that will script out those objects and create my migration script, so there isn’t the hand-rolled approach of handwriting all of these migration scripts – ReadyRoll will do a lot of that for us. So we can see here – I might zoom in a little bit to make this a bit easier – I’ve pressed import and generate script, and we can see that that has created my first migration script here, and you can see it’s in the migrations folder and it’s in version 1.0.0. We see the programmable object has been put into its own structure, so this migration script will not contain the code to create the stored procedure, because it’s not a state-based object, and I’ve got my schema model, so the tables are scripted out into individual create scripts.

So at this point, I would then click refresh and verify scripts. This is quite a nice feature: when we press verify scripts, it’s going to run this migration script and this programmable object script against my Shadow database. So I’m developing my changes, I’m syncing them into a project that’s then easy to Source Control, because it’s files, and then I’m deploying those changes – I’m deploying those files to my Shadow database – so I’m testing my deployment at development time. So there’s a nice little feedback loop even before I get to any sort of integration server where I’m going to deploy the changes. It’s really nice, and sometimes you’ll find that when it goes to deploy to the Shadow database you’ll hit a problem, and you can fix that here without holding everyone else up whilst you’re doing it, so it’s a really nice feature.

So now we can see that my Shadow database has all of the objects that my Sandbox database has, and ReadyRoll will just show that they’re currently in sync at this point. If we take a closer look at the actual migration script, you see I’ve highlighted at the top here that there’s this unique ID, and that unique ID is what ReadyRoll stores in that migration log table. That’s how ReadyRoll knows whether it’s actually executed this migration script already. So if I was to run the deployment again, it wouldn’t run this script again, so we’re safe there. But the really valuable thing about that is, if I’ve got a project and I’m up to version 10, I could take that SQLCMD script I talked about before, that I set in the settings – that’s our deployment package – and deploy it against an empty database, and it’s going to create my version 10 database from scratch. But if I had a version 5 database and deployed that package against that, it would also upgrade it to version 10, because it would know the scripts that have already run against version 5 and just run the ones that it needs to bring it up to 10, so it’s quite cool that way. Because it uses this ID, we can also change the name of this migration script to something a little bit more readable, which is going to be useful as we start building up a long list of those migration scripts.
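Purely to illustrate the idea of the migration log – this is not ReadyRoll’s actual implementation, and the column name and file path below are assumptions – the idempotence check amounts to something like this:

    # Only run a migration whose ID has not already been recorded in the log table.
    $migrationId = '0001'   # hypothetical ID stamped at the top of the migration script
    $alreadyRan = Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'RR_Test' -Query "
        SELECT COUNT(*) AS n FROM dbo._MigrationLog WHERE migration_id = '$migrationId';"
    if ($alreadyRan.n -eq 0) {
        Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'RR_Test' `
            -InputFile '.\Migrations\1.0.0\0001_create_customer_and_config.sql'
        Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'RR_Test' -Query "
            INSERT INTO dbo._MigrationLog (migration_id) VALUES ('$migrationId');"
    }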

So this is the migration log table – just some of the columns – and we can see that it’s got my migration script, and it’s also got my programmable object script, because it has to run both of those scripts to create all of the objects we need. But it’s only the migration script that carries a version, because that’s the only one that sits in those semantic version folders.

So this is the very top of that SQLCMD file. It’s much too long to show in one slide – it would have all of the code that creates my database from scratch, it will create those tables and the procedure, and if I have any pre-deployment or post-deployment scripts, they’ll all be in this one script. But at the top we can see that we’ve got some metadata, and we’ve also got these variables. So we can actually deploy our project with a different database name, so maybe when we deploy to test environments we want to append Test or QA or UAT to the end of the database name. I’ve seen that done to avoid accidental “I thought that was production” mistakes. We can also have variables for where the data files are stored, because different environments will probably store the data in different paths. Our build server – when I get to that in a second – is going to inject the correct settings into these variables. So this one script can be used in all the different environments, to upgrade existing databases and also to deploy new ones.
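As a rough sketch of what that injection could look like from a build server – the server name, path and variable names here are assumptions, not necessarily the ones ReadyRoll generates:

    # Run the deployment package with sqlcmd, overriding SQLCMD variables per environment.
    & sqlcmd -S 'test-sql-01' -i '.\Deploy\RR_Test_Package.sql' `
        -v DatabaseName='RR_Test_QA' DataFilePath='D:\SQLData' LogFilePath='E:\SQLLogs'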

So let’s make some changes. I’m going to alter the column in my config table called Value – bad name for a column, I know, but it’s only a demo. I’m going to change it to NVARCHAR(MAX), so if, say, we’re going to store some JSON in there, that will help us with that. I’m going to alter my config settings procedure so I have the option of just returning the value of one particular config setting, and once we’re done with development, we’re going to go to ReadyRoll, click refresh, and it’s going to notice that those two have changed. I’ve also got a context menu here, so I can view the object differences in the Visual Studio kind of Git diff screen – the Visual Studio version of that is really nice. I can include table data, so I can Source Control my reference data, which is good, but not a good idea if our reference data is more than a few hundred rows. Well, even at more than a hundred rows we really want to think about different ways of doing that, maybe MERGE statements in post-deployment scripts. We can look at revert scripts: in some shops we can’t get round it – for reasons out of our hands, we have to generate rollback scripts in case our deployment goes wrong. These revert scripts can take out some of the manual work of that and give us the basis of a rollback script, but we’re going to have to think carefully about how we’re going to manage the data in that situation. Another potential flame war: rollbacks versus roll-forwards – sometimes we just don’t have the option, the luxury, of saying roll forward.

We can also view the synchronization script, which is the script that’s going to deploy our changes. So I’ve created a new migration script here, and we see it’s only got the change to my column, it hasn’t got the change to my stored procedure, because that’s just overwritten the file in the programmable objects section, and you see I’ve renamed these migration scripts to something a little bit more readable.

So that’s ReadyRoll in a nutshell, and the whole use of this tool is, like I said, it allows us to manage that difficult problem of deploying changes to our databases, our stateful objects, and we can manage that now with that deployment package, we can automate the process of that deployment not only to new environments, but to upgrade existing environments and we may even want to test the process of upgrading existing environments. So this really lends itself to the automation that we need to be able to do that quickly.

So as we integrate our code, and we’re doing that regularly, we need a way of figuring out those broken dependencies I was talking about earlier. I can merge my code in and the merge works and I’m fine, but I don’t know that the functions I depend on are still working in the same way, so I need to write some tests that test my code to make sure that it’s still working – I need to ask it if it’s still returning the correct values when I pass it specific parameters. One element of that whole process is unit tests. With unit tests, we’re looking to test very small parts of our code and make sure that they’re behaving in the correct way. One of the frameworks for SQL Server is tSQLt, and this is quite an interesting one. When Alex did his talk last week, afterwards there was an interesting discussion between Alex and Adam Machanic, and Adam was saying he likes to test his database code from the application, because the tools there are much more mature and it’s much easier to write. I get that, and that makes a lot of sense. Alex was making the point that he likes to write his tests in the same language as he’s writing his code, and I like that approach too. tSQLt has a very powerful feature that Alex mentioned, and that’s the fact that whatever happens in your test, whether it passes or fails, it’s in a transaction, and that transaction is always rolled back, pass or fail. So you run a test, everything that you did is then rolled back, so you’re back to a known state for the next test to run, and it will run and that will be rolled back and you’ll keep going like that.

Alex asked Adam how he does that from the application layer, and Adam said that he uses database snapshots, and yes, I’ve considered doing that myself, because sometimes tSQLt can be difficult, especially if you have a procedure that actually does roll back a transaction for whatever reason. It ends up rolling back the wrapping transaction of the test, and that can be really confusing at first. But the thing is, I can have tests run in milliseconds, so I can run my test, and it’s rolled back and we’re ready to run the next one – sometimes sub-ten milliseconds, they can be really quick. I just don’t think you’re going to get that speed with database snapshots, because we’ve got to create the file on the operating system, we’ve got to run our test and then we’ve got to revert the database. But I think it could be a really good fit for integration testing, when we need to run loads of different stuff at the same time – database snapshots make a lot of sense to me there. But I’m not going to argue with Adam on this, I’m sure he knows a lot more than me, especially about application development. So I’d be interested in having that discussion at some point.

So I’m going to mention PowerShell at this point, because in the Windows world, PowerShell is the glue that allows us to stick together the blocks of whatever automated system we’re trying to build. With PowerShell, we can automate processes and run applications in a scripted way that we may currently be doing manually. PowerShell is very powerful there. That leads me on to Pester. Now, Pester is a unit testing framework for PowerShell, so we can use Pester not only to write tests for our PowerShell scripts but, because it is PowerShell, to write tests for other stuff too. So I could have a small PowerShell script that checks that a server is responding to a ping, and my Pester test will pass if it is and fail if it isn’t; I can check if files exist, and I can check if certain applications are running. Mr “DBA with a beard”, Rob Sewell, will be talking about Pester a little bit later, and he uses a really cool method – I used it today and I’m going to use it for my talks – where I have a suite of Pester tests that just check that my laptop is currently set up correctly for a talk: it checks that it’s plugged in, checks that the files I need for my presentation exist, checks that Twitter is not running. It’s a really cool idea. He’s going to demo that later, so I’m not going to steal his fun doing that one.
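A tiny sketch of the kind of environment checks being described here (Pester v4 syntax; the host name, path and process name are made up for illustration):

    Describe 'Presentation laptop checks' {
        It 'can reach the demo SQL Server host' {
            Test-Connection -ComputerName 'sql-01' -Count 1 -Quiet | Should -Be $true
        }
        It 'has the demo files in place' {
            Test-Path 'C:\Demos\GroupBy' | Should -Be $true
        }
        It 'does not have distracting apps running' {
            # 'TweetApp' is a stand-in process name for this illustration.
            Get-Process -Name 'TweetApp' -ErrorAction SilentlyContinue | Should -BeNullOrEmpty
        }
    }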

So let’s have a look at tSQLt. tSQLt is an open source project – you can go and get it from tSQLt.org – and really it’s just one main script that’s going to install a load of procedures and functions into your development database. When you do that, you probably want to tell ReadyRoll to ignore those, because you don’t want to deploy them to production, and there’s a setting in the ReadyRoll project file to do that. When we start with tSQLt, it’s a good idea to group our tests, because as our project matures and we get more and more tests, it’s going to take longer to run all of them, and when we’re working on a very specific part of the application, we’re probably just going to run a small group of tests that are relevant to that. Then once we’re done and we’re about to merge it in, maybe we’ll run all of them at that point, but whilst we’re developing, we’ll probably just run that small group.

So we can create a test class by executing this NewTestClass procedure and giving it a name, and that’s just going to create a schema in our database, and then every test that we create in that group will actually be in that database schema. So we don’t have to run all of our tests every time. Now, I’m going to deviate from test-driven development by creating my function first, but it is only a demo, so don’t shoot me.

So I’m going to create a function called ConvertCurrency. The user can pass in the rate and the amount, and I’m just going to divide the amount by the rate and return the value back to the user. And now I’m going to create a test. It’s in my test financial app test class, or schema, and the test is that ConvertCurrency does what I expect. This demo is from the tSQLt site, I’ve tweaked it slightly. So what I’m doing here is I’ve got an actual variable and I’ve got a rate variable – this is what the user would normally pass in – so I’m going to pass in 1.2 as the rate and 2 as the amount, and in my actual variable I’m going to store the result of my function using those two variables. Then I’m going to create a variable called expected, because I expect the result of my function to be 2.4. And then in this test we’ve got a thing called an assert – all unit testing frameworks have asserts – and we’re basically saying here that we want to make sure that our expected value equals our actual value. If it does, the test passes, and if it doesn’t, the test fails and we go and look at what’s wrong with our logic. There are many different asserts – we could have AssertEquals, AssertNotEquals, AssertNull, special asserts to make sure that tables have rows – all sorts of things we can do there in this powerful framework.

So we can now run the tests. We run tSQLt.Run, and if I just ran this without my test class as a parameter, it would run all of my tests, but I can run just the ones I want by passing my test class, and that gives me this output. So we can see that my test has run – I’ve only got one test – it ran in 13 milliseconds and it failed. So I need to find out why it failed. I’m going to alter my function and change my division to a multiplication – whether that’s right or not, let’s see. And when we run the test, I get a nice success, and that test is running in 3 milliseconds now.
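Put together, the worked example reads roughly like this – a sketch run from PowerShell with Invoke-Sqlcmd rather than from Management Studio, with the function already “fixed” to multiply; the names follow the tSQLt quick-start, but the wrapper details and connection are assumptions:

    # Create the test class, the function under test, and the test itself.
    $server = 'localhost'; $db = 'Sandbox'
    Invoke-Sqlcmd -ServerInstance $server -Database $db -Query "EXEC tSQLt.NewTestClass 'testFinancialApp';"
    Invoke-Sqlcmd -ServerInstance $server -Database $db -Query "
        CREATE FUNCTION dbo.ConvertCurrency (@rate DECIMAL(10,4), @amount DECIMAL(10,4))
        RETURNS DECIMAL(10,4)
        AS
        BEGIN
            RETURN @amount * @rate;  -- the corrected version; the first attempt divided
        END;"
    Invoke-Sqlcmd -ServerInstance $server -Database $db -Query "
        CREATE PROCEDURE testFinancialApp.[test that ConvertCurrency converts amount by rate]
        AS
        BEGIN
            DECLARE @actual   DECIMAL(10,4) = dbo.ConvertCurrency(1.2, 2);
            DECLARE @expected DECIMAL(10,4) = 2.4;
            EXEC tSQLt.AssertEquals @expected, @actual;
        END;"

    # Run just this test class; the results printed by tSQLt come back via -Verbose.
    Invoke-Sqlcmd -ServerInstance $server -Database $db -Query "EXEC tSQLt.Run 'testFinancialApp';" -Verbose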

Now, writing these tests can be a bit of a pain. They’re often more complicated than the one I wrote here – we want to fake tables and insert lots of test data, maybe we have a couple of different asserts, maybe an assert that checks if there are any errors – and it can be quite a bit of work to create a test. What I find is that once you get a small suite of tests, it becomes really easy to copy and paste those tests and make slight changes.

So here I’m saying I’m testing passing in 1.2 and 2. I can copy and paste this, create another test where I’m passing in different values, and then build up a nice suite of tests pretty quickly once I’ve got that first one working correctly. So that’s a little bit of tSQLt. I could do a whole session on this, but we haven’t got time for that today. I want to move on to Pester, because some things are difficult to test with tSQLt. We can test stored procedures, our functions, our views, whatever, but if we want to test things to do with agent jobs, or whether we’re able to create a file on the file system, then we really need something outside of the database. Pester, because it’s PowerShell, allows us to do some of those things. So let’s take a look at what that looks like.

So I’m creating a unit test in Pester, and we start with a Describe section, which wraps our test. I’m going to describe what I’m testing – I’m testing my SQL info procedure – and I’m going to say that the return should be true. I’m going to run the procedure and pipe the result into this function, which is my assert; it’s saying that the result should be true, and if the function does return true, then that test will pass and everyone’s happy.

Now, we can do test-driven development with Pester quite easily. If we run this New-Fixture command, give it a path and tell it the name of the function we’re going to be working on, it will create two files for us. It will create a function file, which starts off with just an empty declaration for a function – I’ve added in a little bit of code here – and it will also create this test file. This code at the top, we don’t have to worry about how it’s working, but it basically links the two files together, which means that in my test file, I can call my procedure, which is defined in the other file. So now I can start building up my tests in here, and I can actually write my tests first if I’m being really good, and then start developing my function until it passes all the tests.
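A minimal sketch of that workflow (Pester v4 syntax; the Get-SqlInfo name, its parameter and the assertion are assumptions for illustration):

    # Scaffold Get-SqlInfo.ps1 and Get-SqlInfo.Tests.ps1, already dot-sourced together.
    New-Fixture -Path .\tests -Name Get-SqlInfo

    # Get-SqlInfo.Tests.ps1 can then be filled in along these lines:
    Describe 'Get-SqlInfo' {
        It 'returns true when the instance responds' {
            Get-SqlInfo -ServerInstance 'localhost' | Should -Be $true
        }
    }

    # Run the whole folder and write results in a format a build server can collect.
    Invoke-Pester -Script .\tests -OutputFormat NUnitXml -OutputFile .\pester-results.xml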

So I’m not going to go too much more into that – I know Rob’s going to cover a lot more on Pester. I will show something that I don’t think he’s going to cover, and that’s mocking. Mocking is the process of faking the response of a piece of code that we’re not currently testing. When we’re unit testing, we want to test very small parts of code to make sure they’re logically working how we want. So if we have a procedure that calls multiple other procedures and we want to test how the parent procedure does that, when it calls all these other procedures we’re actually testing lots of code – we’re testing all the code in those other procedures too. To get round that, we can mock those child procedures and say: actually, replace that with this little bit of code that just logs the variables that I send in and returns a fixed value, because I want to write a test for how my code deals with a function that returns the number three. So I can have my test in a known state where that call is always going to return three, and we can see how my logic deals with that.

So that’s quite useful – I’ll just demonstrate that briefly here. tSQLt allows us to do this in the database world as well, but say you had a function that uses Get-Random, so it gives you a random value, and you had a hunch that it works most of the time, but it seems that whenever Get-Random returns the value three, something goes wrong. We could write a test, but we might have to run it a thousand times until Get-Random actually returns three and we can see what’s actually happening. Or, we could mock Get-Random and say: just return the value three. So whenever our function calls Get-Random, it doesn’t know that it’s been changed underneath it, it’s just going to get the number three back, and we can test how our logic deals with that value coming back from Get-Random.

We can also get a little bit more granular. Imagine I was calling Get-Random next to a loop, and I got my logic wrong and accidentally put that call inside the loop, so on every iteration of the loop Get-Random was called, but I only did anything with the very last value. Maybe the loop went around a hundred times, it got a hundred different random values, but only the last one was stored and then I went and did something with that. Now, that’s really inefficient, but I may not know that it’s happening – it can be quite easy to miss, because I don’t care, it’s a random value. But if I use this Assert-MockCalled with the function name that I’m interested in and say exactly one, then if my Get-SqlInfo function calls Get-Random more than one time, I’m going to know: this test will fail, and I can go, that call shouldn’t be in that loop, it should be just outside, and I’m going to make my code a bit more efficient.
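A sketch of that pattern (Pester v4 syntax; Get-SqlInfo and the way it uses Get-Random are assumptions for illustration):

    Describe 'Get-SqlInfo with a mocked Get-Random' {
        It 'handles Get-Random returning three, and only calls it once' {
            # Replace Get-Random inside this test so it always returns 3.
            Mock Get-Random { return 3 }

            Get-SqlInfo | Should -Be $true

            # Fail the test if Get-Random was called anything other than exactly once,
            # e.g. because it accidentally ended up inside a loop.
            Assert-MockCalled Get-Random -Times 1 -Exactly
        }
    }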

So that’s Pester. Now I’m going to talk about GitLab as a build server. There’s a community edition of this, and a build server really is what we need to automate all of these processes. I’ve got my ReadyRoll project in place to deal with the difficult problems of deploying database changes, and I’ve got my tests in place – a full suite of tests, so that if there are any broken dependencies or anything not working as it should, I should be alerted to that. But every time I integrate my code – and we’re going to be doing that a lot – I don’t want to manually run all those tests and check all the results, so a build server is going to do that for me. It’s going to be the remote repository: we’ve all got our source code locally on our machines, but we need a central remote repository where we all push and pull our changes. That’s where we integrate our code, that’s where everything is shared, and it’s that shared repository that GitLab will monitor, and when it sees changes happen there, it will kick off a build. We can build these pipelines, so we can say to GitLab: when you notice a change, I want you to deploy my ReadyRoll project to this database on this SQL instance and this one over here, I want you to run all of my tests, I want you to gather all the results of those tests, and if there’s a problem, let me know, and if there isn’t, start the build for the next person who’s just integrated their code. So, more automation to make sure we can actually integrate our code quicker and quicker.

There are some other features. There’s issue management, which is really nice: if someone is using your product, they can go to your repository in GitLab and raise an issue, and when you fix it, the code change can be linked to the issue, so people can see the work that’s been done and the issue can get closed off – that’s quite nice. And something that I really like is that the documentation is part of the repository. So we have our source code for our application, and our documentation sits right next to it, like a wiki, but it’s not a separate wiki in another application – it’s tightly coupled with our actual source code. So it’s not going to get lost on a file share and become outdated and stuff like that, it’s right up front. I think that’s a really powerful feature, and the fact that the documentation is Markdown means we can Git diff it – Git diffing binary Word files can be difficult without special plugins and stuff like that. It’s a really overlooked feature, I feel.

So I’m just going to run through a few screens of GitLab. Screenshots again – I am going to do a live demo in a second – but let’s just quickly look at what GitLab looks like. GitLab is a Linux-based application, so if you’re in the Windows world you’re probably going to run it in a VM, and you can still hook your Active Directory or LDAP into it – that’s what I tend to do – so all of our teams can still use our groups and whatever else whilst we’re developing. So we have this website, which is GitLab, we can sign in to it, and we’re going to create a new project here called RR_Test, for ReadyRoll test. I’m going to make that project public so anyone can see it – I’m not too fussed at the moment. Now, there’s a concept of GitLab runners. We’re telling GitLab what we want it to do – we want it to build our project, run our tests – and it needs somewhere to do that, and that’s where runners come in. A runner could be physical hardware, it could be a virtual machine, it could be a container. Once we’ve installed the runner agent and linked it to our GitLab server, GitLab can then use that compute to run all of the tests. It’s an environment that it can actually deploy the project to. We can tag runners as well, so we can say this environment, this runner, is a Windows one, so GitLab will only deploy projects with a Windows tag to it. That’s useful if you’ve got large teams working on multiple different projects.

So I created my ReadyRoll test project earlier, and now I’m going to use this little Git command to push it to the empty project that I’ve created in GitLab, and then I’m going to connect a runner to GitLab. We can see I’ve created a runner and I’ve given it a Windows, a Windows 2012 and a SQL 2012 tag. I may have another Windows 2012 server with SQL 2014 on it, so I’d give that the correct tag – that way we can tell GitLab exactly where it needs to test certain projects. Then we need to tell GitLab what we want it to do, and we do this with this gitlab-ci.yml file. This is how we give it instructions for what we want it to do automatically. A very basic one would look like this: I’m just telling it to run MSBuild, which will build my project, and I’ve given it a Windows tag so it does that on a Windows runner.

Now, that’s very basic. When I do that, it will trigger a build, and that will look something like this: when we go to the build section of GitLab we can see the build passes – it only has to compile the project, it doesn’t really have to do much – and we can see that the change was just that I added that yml build file. But we probably want to get a little bit more complicated than that, and this is where the PowerShell glue comes in, to allow us to glue some of these bits and pieces together and automate them. Now, more expensive, more Microsoft tools like TFS will probably do a lot of this automatically, and you probably won’t need to script certain bits and pieces like I’m going to do here, but some of the beauty of this scripting is that the world’s our oyster. We can get GitLab to do absolutely anything, and I want to demonstrate some of that a little bit later. This PowerShell script is basically going to call MSBuild – we’re telling it we want to build our project, we want to deploy it to localhost, so that’s the SQL instance on the runner, and we want it to deploy the database and create a Shadow database if we want. And then I’m going to call another PowerShell script afterwards, which is going to run my tests. So I’ve got this whole PowerShell script that’s going to run my tSQLt tests, run my Pester tests, and use SQL Cover. I recommend, if you’re using tSQLt, look into SQL Cover. It basically runs an extended event trace behind the scenes while your tSQLt tests are running, and when it finishes, it shows you all of your code and highlights the code that was tested. So you can actually see, say, that I’m not testing the code that’s in my CATCH blocks, or stuff like that. It’s really nice.
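The gitlab-ci.yml would just call a PowerShell glue script along these lines – a sketch only, where the project name, paths and MSBuild property names are assumptions rather than exact ReadyRoll switches:

    # Build the ReadyRoll project and deploy it to the runner's local SQL Server instance.
    & msbuild .\RR_Test.sqlproj /t:Build /p:DeployOnBuild=true /p:TargetServer=localhost /p:TargetDatabase=RR_Test
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }

    # Run the tSQLt tests inside the freshly deployed database.
    Invoke-Sqlcmd -ServerInstance localhost -Database RR_Test -Query "EXEC tSQLt.RunAll;" -Verbose

    # Run the Pester tests and fail the build if any of them fail.
    $result = Invoke-Pester -Script .\tests -OutputFormat NUnitXml -OutputFile .\pester-results.xml -PassThru
    if ($result.FailedCount -gt 0) { exit 1 }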

So that’s GitLab. We’re telling GitLab what we want to do, we’ve got this automation, we can now integrate our code pretty quickly. It’s a good place to be, but as our project matures and gets bigger, and we have more and more features and more and more tests, it’s going to take longer and longer to run all of those tests. There’s no getting around that: as we have more tests, it’s going to take longer, and the answer isn’t to stop writing tests or to delete some of our older tests – we still need to run those, because any change anywhere in the application could have ripple effects, and we need to know that everything is still working correctly. So other than tuning our tests – which is something we probably want to do anyway, to make sure our tests are efficient – the only way we’re going to get around this is to split our tests up and run them in parallel. That way, we can still run all of the tests but we cut our run time down, depending on how parallel we go.

Now, that can have some issues: if we’re running tests in parallel, are they going to interfere with each other? Well, we can get around that by having separate environments. Something else that comes with that: if I’m developing a project and my customers are going to run it on SQL Server 2012, 2014 and 2016, I’m going to want to test my project against those versions of SQL Server as well. So suddenly I’m starting to build up more test environments. I could have a runner for 2012, a runner for 2014, a runner for 2016, and in my GitLab file I could tell GitLab to deploy the same project to all of those in parallel, run all my tests and get all of those results back. That’s good, but that deployment package with ReadyRoll can also upgrade existing databases.

Now, I’m going to want to test that it upgrades those correctly. So if I’m on version 10, I want to test that I can deploy version 10 to an empty instance, so it creates a database from scratch. I want to test that it can deploy against a version 5 database and upgrade it to version 10, and then every version after that as well. So I’m not only testing my code, I’m testing the deployments of my code. Suddenly I’ve got multiple databases that I need to manage in the test environment, and then I’m going to want to do that for every version of SQL Server, so suddenly I’ve got lots of environments I need to test. Do all of these static machines or VMs sit around not doing anything until tests hit them? And then maybe I want to support availability groups, so I’m going to have to have an environment where I’ve got availability groups on 2012, 2014, 2016, all my versions, so suddenly this matrix of test environments is going to grow and grow and grow.

So it’s probably unrealistic – or at least a big management overhead – to keep all of those environments in sync with production. If we change something in production, suddenly we have to update all of these different environments so we don’t get drift, because if we’re testing against environments that aren’t the same as production, then our tests aren’t really valid. So it’s a difficult problem. The next thing may not be the whole solution, but it’s going to take us a long, long way down the path towards it, and that’s Docker.

Now, Andrew Pruski is going to do a whole talk on SQL Server and Docker after this session, I believe, so I’m not going to go into how you actually use it and what it does, but what I will say is that if you haven’t heard of Docker, look at it. It’s important, it’s making a big change. Docker is really only a brand for a technology called containers. They’ve been around in Linux for quite a while, but it took someone like Docker to come along, write a nice API that wrapped around them, add some fancy marketing and nice logos and stickers for your laptop, to actually get people to start using them – and some people are using them in production. What it is, roughly: you create containers from an image, so you take a basic Windows install and say I want you to install this application, this application, this version of .NET and then SQL Server, and that becomes an image, and from that image we can spin up multiple containers.

Now, they look like VMs, but I like to think of them as closer to a Windows service – they’re more like a process. They’ve got what appears to be their own file system and IP address, but they’re not emulating all of the hardware and they’re not running their own kernel; they’re sharing the kernel with the host operating system. So they’re very thin and they boot up really quickly, which means we can deploy our test environments on demand. We don’t have to have all of this hardware sat around – we can have a small amount of hardware on which we can deploy multiple environments on demand, really quickly, and all that automation gets our cycle times down so we can integrate more regularly.
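To make the “test environments on demand” idea concrete, here is a sketch of a throw-away SQL Server container used for one test run – the image name, environment variables and paths are assumptions, so check the documentation of whichever image you actually use:

    # Spin up a disposable SQL Server container, find its IP, test against it, destroy it.
    $id = docker run -d -e ACCEPT_EULA=Y -e sa_password='P@ssw0rd!Demo' microsoft/mssql-server-windows-developer
    $ip = docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $id

    # Crude wait loop until SQL Server inside the container accepts connections.
    do { Start-Sleep -Seconds 5 } until (Test-NetConnection -ComputerName $ip -Port 1433 -InformationLevel Quiet)

    # Deploy the project / run the tests against $ip here, then throw the container away.
    Invoke-Pester -Script .\tests -OutputFormat NUnitXml -OutputFile ".\results-$ip.xml"
    docker rm -f $id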

So I’m going to show you now something that I’ve been working on. I’m hoping this is not going to blow my laptop up, let’s take a look at this.

Daniel Hutmacher: You’re trying to blow your laptop up. Andreo says, “This is so cool, I need to show this to my QA and engineering group” and Eugene says, “Really nice tie into Docker in the next talk. This is the first valid use case I’ve seen for databases and containers.”

James Anderson: Yes, I’ve got to say I’ve kind of been in a cycle with Docker. The first time I saw it I was like, this is cool, this is cool technology; then I was like, what’s it really for, we’ve got VMs, what’s the point? I went around in this cycle for a little while – yes it’s useful, no it’s not – and then I read horror stories about people using it in production. But I’m at a point now where, once you understand that you can create a text file – it’s called a Dockerfile – lay out exactly how you want your infrastructure to be, and then almost instantly spin up multiple instances of that, containers of that, it becomes really powerful, and that’s what I wanted to demonstrate here.

So I’ve created this module called PesterMax – I’m definitely hoping to change the name at some point, but that’s where I am with it at the moment – and the whole idea of this was… Andrew’s going to go into much more detail in the next talk, but there’s a whole command line interface for Docker. You can pull new images, you can generate images from live containers, you can do all sorts of fancy things with the containers, but I wanted to abstract some of that complication away so that, potentially, a tester who’s got a project can, with one line of PowerShell, spin up multiple containers of multiple versions and then have all the results of their tests collated into one area. That’s what I’ve tried to achieve with this, so let’s see how this goes.

So here’s my module, PesterMax. I’m going to pass into it where my project is – this is my ReadyRoll project. I’ve got a path that contains some Pester scripts, so let’s take a look at that path. If I go to tests, I’ve got this suite of Pester scripts, and they’re doing general SQL Server type tests: they’re testing to see if I’ve got agent jobs that clear up MSDB history, making sure I haven’t got auto close set on any of my databases, all standard stuff like that. Those are the tests that I want PesterMax to run. I can pass in which versions of SQL Server I want to test against, and I can give it a degree of parallelism. So what I’m saying here is: I want you to run these tests for this project against this image of SQL Server, so this version of SQL Server, and I want you to spin up two containers of that version, to get that parallelism so that we can run all of our tests more quickly – I want you to split those five test scripts across the two containers. So let’s run it and pray. We can see that we’re spinning up two containers for one version of SQL Server, and if we were to look at my CPU – I’ve only got a little two-core laptop here, so we’re not really going to get the parallelism that I want, but we’ll show you that this does work.

We see the memory starts coming up a little bit here as the containers are booting up. So what my PesterMax script is doing at the moment is looking at how many scripts I need to execute and how many containers it needs to boot up; it’s then running the Docker commands to instantiate those containers. It then monitors the SQL Server log file until it sees that SQL Server is up and running, then it runs the tests against the containers, writes all of the results of those tests out to one folder, and then it destroys the containers. So containers are throw-away things: we’ve got an image, we should be able to spin them up, do some work, and then they’re gone – kind of like what we were doing with the unit tests and those transactions. So while we wait – I’m happy to answer any questions at this point, because it may take a little while on my poor little laptop, if there are any. I’ll take that as a no, that’s fine. Let’s see where we get to here.

So like I was saying, with containers it does take a little while to get your head around the use cases. I was talking earlier about data making things difficult – data makes things difficult for deployments, and we’ve seen that we’ve used a tool to help there. Data can make things difficult when we’re branching our source control – data can make that quite tricky – and when it comes to containers, which are supposed to be stateless things, data can be difficult there too, especially if we want to run containers in production and we don’t want them to just destroy our data when we take the container down. We have to mount our data into containers, and you start feeling, what’s the point of this, it seems awkward, it seems difficult. But one of the biggest problems in IT is the provisioning of test environments and maintaining test environments in the same state as production. Containers really offer a very credible solution to this, where in development we’ve got our image of our SQL Server with our database on it, then we can make some changes to that, create a new image, and that image gets deployed to production, and that same image is what generates the containers that we run our tests against in our test environment. So we really get into infrastructure-as-code type territory.

So those tests have run, and if I go to where they should have been output, here, you can see I’ve got a file for each of my tests, and we can see it’s got the IP address in there – I hope you can see that. So it’s got the IP address in there; I did just a round robin on the tests so they get run against different containers, so these two ran against a different container. And if we were to look in one of those, these are just Pester tests, so this is the JSON output, and we can see that my ReadyRoll test database was on the instances of SQL Server running in these containers, and my tests passed. I think Rob’s going to show a cool product that surfaces these test results in a much nicer way, but I’ve got the raw results there.

So I’m going to set this off in a second – I’m just going to show you that I can then say I want you to run the latest image as well. So I want you to run the CTP 2.0 image, and I want you to run the latest image, which is 2.1, and I want to give that a degree of parallelism of – let’s be crazy and go three – so that’s going to spin up three containers for each version of SQL Server. That’s going to be six containers in total, and it’s going to spread the tests amongst the two groups of three. So actually, let me just save that a second. No, there’s a bug in my code that requires me to restart that. I’m going to put this project up on GitHub at some point – I need to get it to a slightly less raw state so people don’t laugh at me and point fingers – but if anyone’s interested in contributing and working on this and thinks it might be useful, do get in touch. Let me know and we’ll see what we can make of this. I’d like to get it to a point where it isn’t just a tool for testing ReadyRoll: it could be SSDT projects, it could even be just web application projects. It’s just abstracting the complexities – it’s not that complex if you’re au fait with it, but a tester may not want to get into the depths of Docker. I just want to be able to deploy a project, run a folder of tests and do that in a very parallel way, and hopefully this is going to let us do that.

So you can see six containers, two different versions of SQL Server, and that is going to take some time, so we may revisit that later. At this point I think I'm just going to stop – it says ultimate parallelism, but it's only got two there, so it's not great. So I'm going to end there, happy to answer any questions that come in now. I know I've gone through a lot of stuff really quickly there, so I'm sure there's probably a backlog of questions, and we'll see if these results come in as we're doing that.

Bert Wagner: I saw a question earlier about SSIS and SSRS – do you know if any of these tools can be of use for those, for helping with continuous integration there?

James Anderson: I’ve seen a lot of BI people commenting on how little of these tools are around for BI products. It seems that if us in the database are lagging behind developers when it comes to these things, the BI world seems to be even a little bit further behind us. Now, I’m no BI guy so I don’t know the inside out, I’m just going on things I’ve seen on Twitter that people have been saying. I think that is changing, I’ve seen – I can’t remember who it is now, but there’s been a couple of sessions given and some blog posts about bringing these sort of methods to the BI world. I think products like Biml probably help with that for SSIS, but not an expert there.

Bert Wagner: Yes, I do know with things like SSIS and SSRS, at least you get one advantage, and that's that you're not using Management Studio to store all your code. You're probably using Visual Studio already and hopefully you can get everything into source control. At least step one is already a little easier, maybe, on the BI side.

James Anderson: Source control is the root of continuous integration. If you've got any part of your application that isn't in source control, you're just never going to be continually integrating. That really is the beginning.

Daniel Hutmacher: Shane asks, "Your PesterMax script says degree of parallelism two – does it default to one CPU?"

James Anderson: Does it default to one CPU? If I don’t pass that parameter?

Daniel Hutmacher: Yes, presumably.

James Anderson: That’s the idea, yes. Currently, because my PowerShell code isn’t great, currently if I put that to one, the whole thing blows up, so I need to figure out that problem, but that will be the default, yes, that would make sense to me.

Daniel Hutmacher: And Eugene also asks, “Do you see a purely production value for containers in SQL or just as a Dev QA test methodology? I’m still wrapping my head around the appeal in production but the QA value makes a lot of sense.”

James Anderson: Yes, so this is another thing that's been going round – you know, this technology is fairly new, especially in the Windows world, and I am using Windows containers here. Container technology does work better in the Linux world; containers there are a lot thinner, much closer to being processes than they are in the Windows world. We're using Hyper-V under the covers here, although Windows containers still aren't full Hyper-V VMs. But in the Linux world at least, I think containers can be incredible even in production, purely because when we've developed our code, we're not only deploying our code, we're deploying our environment as well. That's going lock, stock and barrel into production, and then we're using that same image to spin up new test environments. As I said, one of the biggest problems in IT is provisioning test environments that are accurate to production, and I think this is the most credible approach to solving that problem.
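
The "deploying our environment as well" idea boils down to building one image and promoting that exact artefact through test and production, rather than rebuilding it per environment. A hypothetical flow, with placeholder registry name and tags:

```powershell
# Hypothetical promotion flow: the image built in development is the same artefact
# that test containers and production run. Registry name and tags are placeholders.
docker build -t registry.example.com/loyalty-db:1.4.0 .
docker push  registry.example.com/loyalty-db:1.4.0

# Test and production pull and run the identical image; nothing is rebuilt per environment.
docker run -d registry.example.com/loyalty-db:1.4.0
```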

So yes, but there’s a lot of learning curve, especially when it comes to bessisting state, so databases and containers. We need to have clustered file systems that we mount into containers, we need to make sure that we don’t spin up multiple containers that are both trying to write to the same data and that’s where orchestration tools come into use. So Docker has its own one called Docker Swarm, so with Docker Swarm we can manage multiple hosts running lots of containers, we can write Docker compose files which says when you run this compose file, spin up two SQL Servers and automatically put them into an AG and also spin up an app server and a web server. You can spin them all up as one thing. There’s an open source project that came out of Google called Kubernetes, and that is an orchestration tool like Docker Swarm, but it uses what it calls pods, so you have pods of containers and you can manage them and do like rolling updates or canary deploys, so when you’ve got your new version you just deploy one container for new version, see how that goes and if it’s good you slowly replace your live web servers with the new version. So yes, I think for existing projects they can be quite difficult to containerize, but for new green filled projects, I think containers are quite an exciting prospect.

Bert Wagner: For people starting out with these automated deployments, in a one-DBA kind of shop I imagine it's a little easier, because you can establish a process – here's the process I'm going to follow, I know I'm going to do it right. But if you're working with multiple individuals who maybe have production access, what happens with ReadyRoll if something changes in production that wasn't checked in as a migration script or something? Is it pretty good at handling that?

James Anderson: Yes – I think whenever there's a fire in production, if your development cycles aren't quick enough – so if you can't branch a hotfix branch in your source control, fix the problem, integrate that back in and get it deployed to production quickly enough – then you're going to have to make that hotfix in production directly. So then it's not in source control, and you need to back that change into your source control. I would be very careful and make the exact same change in my development database and check that in. Now, if that's a change to a table, it becomes difficult, because that becomes a migration script and ReadyRoll doesn't believe it has deployed that change, even though you've already deployed it in production. ReadyRoll does have drift detection functionality, so it will notice if there is new or altered stuff in the database that you're deploying to, and you can use that to halt the deploy. It may be the case that to actually deploy this properly from source control, out of hours you roll that change back and then let the migration script do it. Yes, that can be a little difficult to get around, but it is possible.

Bert Wagner: Great.

Brent Ozar: Alright, well thanks so much for giving your session today, James, really appreciate it. Everyone should give a warm round of virtual applause. Throw your contact slide back up there if you want so folks can see that and how to get a hold of you – there we go. I know we had a few people asking where they can go to learn more about Docker, or really just containers in general on SQL Server. I'll throw a question out there: what resources do you find the most useful as you go to learn more stuff?

James Anderson: So yes, I mean, I read a lot online. You've got news aggregation things like Hacker News and stuff like that that are useful, but I would say the Docker documentation is pretty good. The orchestration tool that I mentioned, Kubernetes – the documentation for that is incredible, it's really worth looking at. But to learn stuff, I read a lot of blog posts. I should actually be writing more blog posts – I've been a bit lax on that recently – but I read a lot of blog posts, I get involved in communities, and I've been going to not just SQL Server user groups; I'll go to developer user groups, I'm talking to developers and finding out what they're doing. On some of the projects I'm getting involved in with some of my clients, we're doing interesting stuff with new technology, so I'm having to dive into all sorts of different sources of documentation, and publishers like O'Reilly – some of the books they put together are really useful as well.

Brent Ozar: Alright, nice. Several people are clapping over in the Slack room, alright, wonderful. Thank you, sir.

James Anderson is a SQL Server production DBA based in Southampton, working on multi-terabyte databases that provide the backbone for many large-scale loyalty reward schemes in the UK and Europe. He has been working with databases since 2008. The first and worst RDBMS he worked with was Paradox. He quickly moved to using T-SQL in SQL Server 2005 and has worked on every version of the product since.


3 Comments

I like this session.
You mention tools in your abstract; specifying whether they're free would help get people into your session. Maybe add a few words per tool to explain what it does? For example, I know tSQLt, but the other ones are unknown to me.

Subir Sankar Das
October 8, 2018 8:35 am

Now that ReadyRoll has changed to SQL Change Automation, on the GitLab end do we need to install a SQL Change Automation plugin, or will the .yml file take care of it?
