The Agile DBA: A Guide To Database DevOps With SSDT

Additional Resources:

Slide deck and example PowerShell scripts available on GitHub

Target Audience:

If you work with a development team to plan and implement database changes by hand or with third party tooling, you’ll love what Microsoft’s newest version of SQL Server Data Tools (SSDT) has brought to the table. This session is an introduction to SSDT and how you can effectively manage and deploy changes in your development environment and beyond, using tools you’re already familiar with.

I will spend some time reviewing the basics of DevOps and how it applies to database development; however, I will encourage viewers to find previous sessions that have covered DevOps in more depth.


As the DBA for an Agile, Sprint-based team, you may feel like the bottleneck at every turn. You’ve stuck with the existing processes and tools because it’s what you know. After all, “If it ain’t broke, don’t fix it!”  Right?

I have good news!  With only a few new skills and free tools from Microsoft, you can join the Agile & DevOps revolution to help accelerate your team, product, and data infrastructure.

In this session, I’ll use the newest version of SQL Server Data Tools (SSDT) to model an existing database and then demonstrate each step from modification to a working, automated deployment.  I’ll highlight the configuration options that matter most, provide solutions for overcoming common problems across environments, and show you what to do once the release is ready for production.

After seeing the possibilities, you’ll be ready to change those old processes. And in no time at all, you’ll be the talk of the team as they try to keep pace with you!


Why I want to present this session:

While Microsoft and others have been teaching that we can use SSDT to achieve DevOps nirvana, most demos skip over the nuts and bolts of the process, preventing users from achieving success once they leave the presentation. Over the last few months, I’ve had to dig through presentations and blog posts to learn how all of the pieces come together. I want to save users from this headache and give them the first steps for success.

I’d like to show viewers that the barrier to implementing a new workflow with SSDT isn’t very high, and how the change and excitement it produces can be contagious.

Session Transcript:

Brent Ozar: In this session at GroupBy, Ryan Booz is going to be talking about database DevOps with SSDT. So take it away, Ryan.

Ryan Booz: Welcome everybody. It’s a pleasure to be here doing GroupBy, talking about a topic that honestly is pretty new for me. This is one of those things I’ve wanted for a long time, and even though I knew there was tooling around, knowing exactly how to go about it within our process – there was just never really enough time. The time finally came around, I learned some things over the last year, saw some presentations at PASS, and got started on this. So I’m really excited to share this with you, particularly if you’re fairly new to SSDT, and even more so to committing and deploying changes.

So who am I? My name is Ryan Booz, obviously. I’m currently the VP of Business Intelligence and CTO at EnergyCAP Incorporated. We’re in central Pennsylvania, near Penn State University. I tweet @RyanBooz and rarely blog at SoftwareAndBooz, but someday, someday I’m going to change that. We’ll see. So who am I? I’m a husband and father of five, soon to be six children. We’re due with our sixth right around Thanksgiving, so hopefully I’m still going to be able to make it out to PASS. We’ll see.

I own a small farm with 33 acres. I happen to have 12 chickens, two goats, and we just got our first beehive. I’ve been here for 14 years and I’ve had about 18 years of relational database experience, starting right around the turn of the century, as it were. I really started to get into a lot of web development – PHP, MySQL, those things – and even had a little stint of DB2 when I was at Penn State. I had to do an application and that was the only thing they would give me access to, and it was potentially one of the worst experiences I had. I cut my teeth on a TRS-80 – raise your hand if you ever played Donkey Kong off a cassette tape. That was my first memory of the TRS-80, and I actually hacked an ending to a video game on a TRS-80 when I was like, nine. We got a bootleg of this text adventure game and it turns out the bootleg had no ending. So we had to make one.

I learned how to program at the end of high school on a DEC 5000 – we had a program with the Pittsburgh Supercomputing Center. So I’ve really been interested in this for a long time. The one problem: as I went to college, I thought I was going to do the Comp Sci thing, realized I didn’t like all the science part of it, so I switched over to Music Ed. I actually went back to Comp Sci later and then back to Music Ed. So my actual degree is Music Ed. I’ve been doing this computer stuff through the whole thing, so it’s been quite a ride, and a fun one at that.

So what are we going to talk about today? We’re going to talk about – there you go, there’s all the fun stuff. This will be up later. So the problem we’re trying to solve – I want to talk about that. At least from our experience, what is the thing that has been bugging me for years and years when it comes to database deployments and development? How does SSDT impact that, what is the way it tries to solve that problem? And then we’re going to really quickly try and tear through something. The time limit for me – this is always my problem with something new that I’m learning: I usually think I have this much and it turns out it’s this much. So we’re going to give it a rip and see how we go. We’re going to create a quick project just to show you what that looks like. We’re going to develop it locally, mostly, because it’s quick and easy and actually something you can get used to doing yourself. And then we’ll talk about the actual integration and deployment. We’re going to use VSTS to quickly see what it looks like to merge something into branches and see it automatically deployed. We’ll briefly talk about production – what are some schemes you can use to get set up with production and so forth – and a couple of gotchas we’ve run across in other talks, things that we’ve seen happen to others, and then some resources.

So what’s the problem we’re trying to solve here? So I don’t know about anyone else, this is what database development has felt like for us sometimes. You know, it’s going through many steps. We know the thing we want to develop, but it tends to happen to the side of development and so we go through a lot of iterations and a lot of conversations, and then when we’re ready for the real script, we have to run all these other things to get everything merged together and then we get the script and realize we forgot something, we got to kind of go back and do some of that again.

And so, this is what it looks like for us. We have a great idea, or what it has looked like. We have our gem of an idea. We might sit around the table at our development meeting and we’ll kind of come up with some of the tables. We’ll look at what the schema changes have to be, and we might even make note of them in Jira or Confluence. You know, we try and do that. We try and document what we said we were going to do. Usually though, what happens is as it starts to get implemented, we – some conversation happens. Someone else has an opinion from within the company and they send some emails about this thing or we start to have some conversations through chat. You know, oh, remember that table over here, and so these things start to happen and the idea is the same but there are some changes that occurred through other conversations and means, and since no one but one or two people really had the ability to make those changes, you tend to get lost, and maybe they don’t get back into Jira. But they end up in some copy of a dev database that we’re going to use as kind of our master golden image to start to create these change sets, and as we do that, we finally create the change set in a number of ways. We have some old scripts or some other ways to do that.

And so now we start to apply it to the dev databases. And inevitably, one of them fails or something’s not quite right, so we have some more conversations. We go ahead and we have some more chat, and usually that changes one or two things, and again, sometimes that makes it back to Jira but not all the time, and that’s just life, that’s what happens. Eventually we’re happy because we get it and it looks like it works on the dev databases, we’re happy to go, and I think what a lot of people have done until now is what we do: we usually end up using, quite honestly, something like SQL Compare from Redgate and we’ll say this is our golden image. We know what our previous master was, we have this new master, so we create a big diff file, and then we do all of our work inside of it. We put this stuff up top to safeguard things and we do all of our data work at the bottom. But it usually ends up being a bigger chunk, and all those little changes along the way aren’t necessarily documented – or at least the conversation about them didn’t happen like it would have if this were code, right?

But we have this script, we’re pretty sure it’s going to work. So then we schedule the big downtime. And downtime for us is not that big of a deal. Our app is not one that has to be available 24/7 for most people, so going offline once every month or two for a couple hours at night is usually not a big deal. But we’ll schedule that time and try to apply it to the databases, and we cross our fingers and hope that most of it works. Now, quite honestly, most of the schema changes work. You know, we have hundreds of client databases we host, we provide the scripts for our own clients and most of the time it works. There’s a couple of times it doesn’t, and it’s often because of data, not necessarily because of schema. And then maybe we’re up all night, who knows?

But DevOps, the DevOps thing is what a lot of people are trying to – you know, it’s the buzzword. We all know that, and so as I was preparing this and unpacking that word from a database perspective, I started to say, hey, what does that mean to people? So I went looking, and I thought – someone spent time creating this Venn diagram, the merging of dev and ops into traditional DevOps, and it was worth the time. It’s interesting that Venn diagrams seem to be what everyone wants to choose, right? We want to see where the overlap of all these disciplines is and try and identify what this DevOps thing really means. I found this one, I thought it was funny. It’s the exact same as that first Venn diagram but it just looks a lot cooler. I feel much cooler as a developer if I’m this guy. It just makes me happy.

But what we really want to see is the idea of getting more instant feedback, and at least for us, that has not been possible with the database layer very often. So there’s our little magical idea; now we have some collaboration. We’re all in there working on this database, doing the magic, and those changes come out in the form of commits into the repository, and in that repository, we can now see those incremental things over time. We’re getting that feedback pretty continuously. The rest of the team can see it happening. We talked about this – is that how it ended up in the change the first time around? Often what that will do is kick off some kind of process to build and test the database, which is great, and if there are tests – something we simply don’t have time to talk about today – we can even run the tests there, whether it’s actual unit tests of the database or integration tests, something like that. Again, assuming everything passes, we deploy the database changes, we send them over to the databases, and again, SSDT makes that very easy to do.

And then the key of DevOps is the workflow of this whole thing, but it really comes down to this last piece, which – again, we’ve already had a talk about it today; it’s a subject a lot of people have been discussing – is monitoring. So knowing your baselines, knowing where you’re at. Knowing that we need to deploy these changes somewhere, even in development – what’s the status? Is it impacting things the way you expected? And the best part: because this loop can now happen in a much more succinct and sustained way, we can make those quick changes. There are no longer the chat sessions and the emails that get lost in someone’s folders. It’s, hey, we had this conversation – we might even be around the table – let’s make that change and commit it and see what happens, and now we have some documentation for it. So that’s what we’re after, and that is an exciting thing.

In order to talk about DevOps in this realm – this continuous deployment and integration – we need to quickly understand source control. And again, from a database perspective, there are a lot of people that just – we haven’t had to deal with it for a long time. It might be a repository for just, like, the bulk files, but not for the incremental stuff.

So I just want to quickly talk about what we’re going to do today as a starting place for others. It may or may not be something you use within your environment – you might have some other way to deal with it – but it’s something to start with. This is a really simple representation of something called Gitflow, developed by somebody a couple years ago. Anyway, the idea here is that in our repository we have a couple of branches. We always have a master branch – that’s kind of like our gold standard. When we release something, it always gets put back into master, and that’s the thing we push out to production somewhere. Otherwise, no one develops on master. We don’t keep that going; it only receives the final changes. It’s kind of our global record. Develop is branched off of master, and that allows us to do the iterative work on these features, and when we feel like the features are at a place where we can do something with them, we go ahead and create a release branch, do some final work, push that into master and back into develop, and go along. And then these feature branches come off of develop. That’s where we as developers do our individual work. So, hey, I have a new item, like this great idea: we branch, we iterate, and when we think we’re ready, we merge it back into develop. That’s the integration part. We’re continuously bringing it back into that branch so that everyone’s work is coming into one place and we can see how it works together.
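The branch layout described here can be sketched with plain git commands. This is a minimal illustration, not from the session, and the feature branch name is made up:

```shell
# Minimal Gitflow-style branch setup (illustrative names; requires git).
set -e
repo=$(mktemp -d)                 # throwaway repo for the sketch
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git branch -M master              # master: the gold standard, release-only
git checkout -q -b develop master                   # integration branch off master
git checkout -q -b feature/trigram-search develop   # feature branch off develop
git branch --list                 # master, develop, feature/trigram-search
```

Releases then flow the other way: the feature merges into develop, develop into a release branch, and the release into master.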

So this is where we’re going to focus today. This little section in the next, hopefully, 15 minutes, we’re mostly going to focus here so that I can demonstrate to you how both SSDT works and how you can start to see those changes happen in a repeatable way within this little section. And then you can go and decide within your environment how you’re going to do that with the release branches and what kind of workflow settings do you need to do, depending on the tooling you use. So I just want to keep that clear. This is kind of where we’re focusing for now.

Some other resources – I’m just going to have these in the slide deck. There were two previous talks specifically about DevOps as the mentality, the way that you change a culture to be more DevOps-like, more responsive to change, which is kind of the agile side of it. These two books are kind of the definition – they really brought DevOps to the surface, so I recommend them. The Phoenix Project is actually a fictional IT book, which is really funny. It’s really funny to listen to because it seems very over the top, but it’s still pretty good. And The DevOps Handbook is a good one.

Alright, so with that in mind, any quick questions on DevOps? I haven’t been paying attention, and if not, I’m going to – alright, so there we go. You have died of dysentery, that’s awesome. I’m looking at the chat. Yes, there’s – my children. We just recently saw someone do a skit about Oregon Trail and they just totally didn’t get the dysentery thing, but yes, Oregon Trail, awesome.

Alright, so SSDT overview. Here’s what we’re going to do. First, why SSDT? For those of you who don’t really know much about what it is, or maybe haven’t used it all that much, it is a plug-in to Visual Studio. So we’re going to be using a new toolset, but it’s a way of declaring what your database should be. You’re declaring the state of the model. In that sense it’s really analogous to other data modeling tools – something like Erwin or ER/Studio, some of those things. It’s not a migration model. Redgate ReadyRoll – which honestly I’ve never used, but I know people that do and use it very successfully – is a very different way of looking at DevOps and keeping the flow going. That says, hey, for every change, we’re simply going to replay those changes in order and that’s how we’ll get to the new state, rather than comparing the two states. So with a declarative model, when you build this project, what you get out the back end from MSBuild is this file called the DACPAC. Now, really quickly, here’s my one beef. It’s DACPAC, and it stands for data-tier application, which is like, DTA, right? How they got DACPAC out of data-tier application, I still don’t know; it bugs me. But you get this DACPAC file out, and what it is is an XML representation of your database, plus some other stuff. It’s really just a big zip file with a bunch of XML that can clearly and repeatably state all the objects and the state of those objects in your database.

So SSDT within Visual Studio then also gives you these things. It gives you rich exploration of your database as code. I don’t know about you, but for years – SSMS is great, there are a lot of good things in it for managing your queries and doing your work, but there’s no relation between two files. And so it’s really hard to navigate, again, without external tools. Now I can actually navigate my code as an entire database. It validates things, and it can run code analysis for me. This one I love: refactoring. How many times have you started your kind of proof of concept and you just name stuff A, B, C, D, whatever, and then you realize two months later you never changed it back? And now you have these silly names all throughout the database. Refactoring is really easy to do.

You can compare schemas, in a similar vein to SQL Compare. You can take two databases and see the differences between them. And then the other thing is, again, from a DevOps perspective, we are getting a consistent, fine-grained way of deploying. I know that every time I hit publish, it’s going to take these settings and use those settings to do this action. And so that’s a great way to move ahead.

So moving forward, I’m going to demonstrate. There are a couple of prerequisites. Visual Studio – I’m using 2017; Community edition would work fine. You need a build environment. Today I’m going to use VSTS, the Team Services in Visual Studio, because it’s free for what I need to do. A local SQL Server – that’s so I can demonstrate the local iterative nature of this. And then source control – again, VSTS here, but it doesn’t matter where it’s at; as long as you have source control you can make this work.

So you then do need the plug-in. Visual Studio is the tool, but you actually need to install SSDT, the tooling, as a plug-in to Visual Studio. Now, this download actually includes other data tools – it includes Analysis Services and Reporting Services and so forth. You do not need to install those; you can just install SSDT. And once you do that, what you’ll get is a new project type within Visual Studio. So what I’d like to do for the next couple of minutes, before we get into the larger project, is to just quickly show you around SSDT, the tooling, to make sure we’re on the same page – and this is really for people who have never seen SSDT, just trying to get that going for them.

So we’re going to use two databases today. I’m going to first show you WideWorldImporters, the Microsoft sample database for SQL Server 2016 and above. The reason I chose it is that it has a lot of variety in the kinds of things it does within the database. I want you to see that if you import something that’s existing, almost no matter how complex, this tooling knows how to represent it. That’s why I chose it, just so you can see how that works. Within Visual Studio, if you’ve never used it and you’re used to SSMS, you start a new project, and we get a new project type called SQL Server Database Project. I’m simply going to call it WideWorldImporters. And for the sake of this, just for a second, I’m actually going to create a repository locally. I’m just going to have the tooling do that for me, and you’ll see why in a second. So I’m going to do it that quickly. Create a new project – and I know this is taking a little time to draw. What you’ll see is a really simple project interface here. There’s very little going on. This is simply a SQL project, nothing else in it right now.

And so, for those of you who might be interested and want to see what your database looks like, how you can start to use this: simply right-click on that project, select Import, and then Database. That will allow us to import the database. From here we can tell it what database we want to import, so I’m simply going to point it to WideWorldImporters, and it gives us a number of options. We can tell it how we want the import done – probably the biggest thing here is what you want the folder structure to look like. But once you have those options set, you can simply say start. And it does exactly what we said earlier: the DACPAC is really the magic here. It goes to that database and actually makes a DACPAC of the database I pointed it at, then uses that DACPAC file, that XML structure inside, to create a representation within our project. And it didn’t take long. Now, this isn’t a huge database, but it’s not trivial, and we got everything we’d expect. So we now have a folder structure that has all the stuff we would expect to see – the schemas and the tables and so forth.
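What the import wizard does under the hood can also be sketched from the command line with SqlPackage’s Extract action. This is an illustrative command fragment, not runnable as-is: the server and file names are made up, and it assumes SqlPackage is installed and a SQL Server instance is reachable.

```shell
# Command sketch only (illustrative names; needs SqlPackage and a live server).
# Extract scans an existing database and writes a .dacpac model of it.
SqlPackage /Action:Extract \
    /SourceServerName:localhost \
    /SourceDatabaseName:WideWorldImporters \
    /TargetFile:WideWorldImporters.dacpac

# The .dacpac itself is just a zip archive of XML describing the objects.
unzip -l WideWorldImporters.dacpac
```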

So that’s what happens on this side. We see a structure for it, we see that we have the tables put in and the scripts that create them. You see something called Archive – those are temporal tables, and this is the representation of those temporal tables. There’s a lot of other stuff going on. So that’s one side. The second thing that’s great about this tooling is that you get the SQL Server Object Explorer. This is kind of SSMS Lite – it’s not a full SSMS implementation, but it gives you what you expect to see. You see the same icons, you see the same table structure, and if you open up each of those tables, you can see the columns. What’s really nice about that, particularly when you’re writing scripts: in the pane on the right, I can’t expand a table and see columns – if I click on the table’s SQL file, I’ll just get the SQL. But if I’m in the midst of writing a stored procedure, being able to expand that on the left and see the columns is just really useful. It’s great to be able to use it that way.

And then the other thing I wanted to highlight here, just from a usability perspective, is how this allows you to navigate your code as a natural project now. When I’m in SSDT, a table has a unique view – you kind of get this three-part view for a table. Most other things only give you the script view. But all of these things are now objects, as it were. It knows, for instance, if I want to know everywhere that the salesperson ID column is used, I can right-click on that column – there’s a little option here that says Find All References – and I can click that and it instantly shows me where else within this project that column is used. That’s awesome. I haven’t been able to do that in a long time. And so that just makes me excited, to be able to jump around my code and see that happen. And you’ll see that it’s right down here; it jumped right to it.

So those are the high-level things I wanted to quickly show about SSDT, the tooling, just to make sure that everyone had an idea what it was. We’re going to go through a little bit more as we go, and I’m going to try and work with the slight delay – that was one concern of mine, doing more of a live demo than slides. So we’ll see what happens. So that’s the basics. Anyone can do this; you can import any SQL Server database. The last thing I will show is, because I told it to create a repository, it actually sees all of those file changes. It’s made all those files and I can now commit them to the repository and start this whole process. So yeah, if anyone has any questions let me know; otherwise, we’ll start on the project.

Brent Ozar: Yeah, there’s one from Pete Wang. Pete says, “Is Ryan sharing the SSDT demo on the other screen? Because I don’t see one.”

Ryan Booz: SSDT demo.

Brent Ozar: Alright, so that’s a no.

Ryan Booz: We might get into what he’s asking about in a little bit.

Brent Ozar: Okay, cool. Alright, go ahead.

Ryan Booz: Okay, great. So here’s the deal. Here’s what we’re going to do – the mission for the rest of this time. As I said at the beginning, I am expecting child number six, and as you can imagine, it’s easy to come up with names for one or two, and then the more we keep having, the more names we have to come up with. As I debated – my other thought, thinking about Brent, was maybe I’d do a car database, because I need a new car; I’ve outgrown my van. So we need to figure that out. But I came across – she calls it the babby names database, not baby names. Like little Bobby Tables: babby names. Anyway. And I thought, hey, I’ll do this. I happened to have demoed something in the company a few months ago on trigram searches. I was really interested in how to make our searches faster in our application in a way that full-text search just doesn’t help – the fragment search, as it were. So I’d done a proof of concept using Paul White’s excellent article. I had done some manipulation to it and made it work for us. But I realized that Paul’s implementation here is all I needed, so I’m literally ripping off his code to make it work for this demo.

So the idea here is we’re going to use this simple database, create some trigrams so that we can search for names that are similar to the ones we’ve typed in based on prevalence of the fragments of the name. So if I type in Ryan, it’s split up into fragments, what other names do I get? I would expect to get Brian, things like that. So that’s the goal. So to do that, we’re going to come over here and start our project.
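As a concrete illustration of the fragment idea (a small bash sketch, not Paul White’s T-SQL implementation): the trigrams of a name are simply all of its three-character substrings.

```shell
# Split a name into trigrams: every 3-character substring (bash sketch).
name="brian"
len=${#name}
for ((i = 0; i <= len - 3; i++)); do
  echo "${name:$i:3}"
done
# prints bri, ria, ian on separate lines
```

Names whose trigram sets overlap heavily with the search term’s trigrams rank as the closest matches.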

So I’ve already brought this in. So this is that database. It’s a very simple database; there’s not much going on here. It just has a few tables. We’re going to add one table, a bunch of functions, and a stored procedure. That’s about all we’re going to do. But we’re going to do this in a way that hopefully gets us somewhere. What you’ll see is that within this application, I’ve already done that Gitflow setup. I have a master branch, I have a develop branch, and I’ve branched off develop with my trigram branch. This is my feature. I’m now developing this new feature, which I’ll merge back into develop later.

So first things first, I want to quickly talk about the project file. With a database project, you get a project file, and this is where you have all your settings – project settings, build settings, and so forth. I’m going to really quickly talk about two of them, and I’ll try not to zoom in and out too much. We’re going to get to code in a minute, so I won’t have to do so much of the in and out. The couple of things that are important here: you have project settings, and the biggest one is the target platform. When you create this project, you need to tell it what platform you’re targeting. It has every version in here, up to the newest SQL Server, and this actually dictates the features that are available for you to use. If you select 2012, for instance, when you click on database settings, you won’t see options for Query Store. But if you’re on 2016 and above, you get Query Store.

The other thing that’s important in here – and I’ll just say this because it has actually caught me as we’ve been playing with Azure a little bit – is that if you’re actually intending this to go to Azure, you really need to select the Azure SQL Database platform. If you try to deploy another version to an Azure database without some manipulation, you’ll get an error, because even though Azure is bleeding edge, the SQL Database side of it has some limitations that don’t exist elsewhere. So, something to know. There’s other stuff in here; really quickly, we can talk about build settings. Build events is one that I use for this purpose. When I’m developing locally – and we’re going to talk about that here – sometimes I’ll just iterate locally. I’ll make a change and I want it to happen very quickly while I’m going. So I set this post-build event that says when the build is successful, just go ahead and apply it for me. I don’t want to go through the step every time of right-clicking and selecting deploy and doing all that. If the build is successful, just go ahead and publish it to my local copy so I can quickly go over and check and see if it worked. Right now, it’s commented out, and we’ll talk about that in just a second.
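The post-build event described here boils down to one SqlPackage call. This is a hedged sketch of what that command might look like, not the exact event from the session: the server and database names are made up, $(TargetDir) and $(TargetName) are Visual Studio/MSBuild macros rather than shell syntax, and SqlPackage is assumed to be on the PATH.

```shell
# Post-build event sketch: publish the freshly built DACPAC to a local
# database after every successful build. Names are illustrative.
# $(TargetDir) and $(TargetName) are Visual Studio/MSBuild macros,
# expanded by the build, not by the shell.
SqlPackage /Action:Publish \
    /SourceFile:"$(TargetDir)$(TargetName).dacpac" \
    /TargetServerName:localhost \
    /TargetDatabaseName:BabbyNames_Local
```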

And then last, there’s the code analysis side of things, and you can turn this on. A lot of people tell you to do so, so that every time you build the project, it will check these things. It will either raise them as warnings, or you can actually have it throw an error and stop the build. I mean, you can see what they are. It’s up to you – does this matter to you or not? Have fun with it, enjoy, play with it, see if you can get it to break.

Alright, so you set those settings, and they are specific to this project. Things like the database settings are actually used, for instance, if you use the create script to start a new database: it will use the collation settings, it will use the Query Store settings. It also uses all that to compare against the database that you’re trying to update. If the Query Store settings aren’t the same, it will try and update them to whatever you said they should be. So those are really important, but they are for the project. If you ever change them, you’ve changed them for the project.

So with that in mind, we’re going to go ahead and quickly try and get started on the project. The very first thing I have to do is create a function to actually create my trigrams. So I’m going to come over here – I’ve already started some folders, and you can do it in a number of ways. I meant to say this earlier: the folder structure is not actually important. SSDT is smart enough to understand what the SQL is, and so I could call these whatever I want; it’s not going to impact SSDT. This just made sense to me, so that’s why we do it. So I’m going to right-click and say Add, Table-Valued Function is the first thing I’m doing – I’m lying, I am adding a table. I need to store these things somewhere. So I add a table, it brings up this dialog, and I’m going to say search terms. As expected, I get that same kind of window.

Now, I could go through and do all of that here, but I’d rather move much faster, and all three of these panes in the table designer work together. So if I paste code in down below, the UI reacts to it. So any change I make in any one of these quadrants – I guess it’s not quadrants because that would be four. These tri– anyway, it will impact the others. And I realized that I had actually forgotten to add something here, so I’m going to add FirstNameId, and you’ll see as soon as I do that, it actually updated over here as well. So first change: I have a table, I have an index.
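The table and index that come out of this step might be sketched like this (column and index names are assumptions):

```sql
CREATE TABLE dbo.SearchTerms
(
    FirstNameId INT     NOT NULL,
    Trigram     CHAR(3) NOT NULL
);
GO
-- Index to make trigram lookups fast
CREATE CLUSTERED INDEX IX_SearchTerms_Trigram
    ON dbo.SearchTerms (Trigram, FirstNameId);
```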

Now, I’d like to actually deploy it to my local database. So on my local SQL Server, I have two databases that I want to show you. I have a BabbyNames local and a BabbyNames develop. So the theory here is develop would actually be on my development servers somewhere; I wanted to keep everything contained. So we’re going to develop inside of that local copy – it’s my copy I’m doing stuff on. And then at the end, we’re going to see if it all actually applied to develop through continuous deployment. So this has nothing except the other tables in there; there’s no SearchTerms table in here. I can’t do the voices like you, Brent, sorry man. Actually, I’ve been talking so much I have almost no voice left.

Alright, so we have that. Here’s what we do. We right-click on the database and there’s this option that says Publish, and that’s kind of the whole key to this. Publish is actually just a UI front end to an application called SQLPackage, and all these links are later in the slides that I’ll put up. SQLPackage is actually cross-platform. You can use it now on Linux and Windows, so if you have a DACPAC file, you could actually run SQLPackage in whatever environment you have, so that’s cool. When I click Publish, the first thing it’s going to do is actually build the project, because it needs to build it to produce a DACPAC file. I get a dialog where I can say where I want to publish this thing to. So really what this is, is a way for me to get my settings together up front so that it’s repeatable. I can just keep reusing this deployment. So I’m going to first connect to that local database, BabbyNames local. So now it knows that’s where I want it to apply the information. But before I click Publish, I want to make sure that the settings are the same. Now, because of time – there are probably 150 settings in there. There are a lot of settings in here. Every one of these checkboxes you can actually pass in as a parameter to SQLPackage. So you could literally string along 150-plus arguments to SQLPackage if you want. So that’s what this is doing. This is giving you a way to set all those parameters up front.

So a couple of things that I typically recommend – again, these are in the slides to look at: whether or not you want to block changes if data loss might occur, whether you want the database to be backed up, some of those things. Now, this is a local copy. I don’t care if I lose data, so I’m going to uncheck that one. I am going to check back up the database. And then there are a few others; I’m going to really quickly highlight two. One is a new feature they’ve added, which is generate smart defaults, and it’s this one right here. So if there’s a non-nullable column that has a default value set, it will try to insert the correct default value, or at least give you a heads up. So we’ll go ahead and check that. And last but not least, the important one that has gotten most people messed up in the past is this. There’s an option called ignore column order. The big problem is you’re comparing two schemas that are literally linear, right? A database table has a linear order of columns. And if you happen to reorder those columns in your model, SSDT sees them as differences. And so this has been a big problem in the past. That will cause SSDT to create a new temporary version of your table, copy all the data over, drop the old one, and rename the new one. It’s called data motion. And so the team, in one of the newer updates in the last year, put this in there so that a column order change wouldn’t trigger this. It’s certainly helped a lot of people. So I’ll go ahead and check that too. There’s lots of other stuff you could check; we just don’t have time in an hour to go through all these things. So we’ll add some of that into the slide set so you can examine them later.
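The checkboxes called out here map directly to SqlPackage publish properties. A command-line equivalent of those choices would look roughly like this (file paths and connection details are placeholders):

```
SqlPackage.exe /Action:Publish ^
  /SourceFile:"bin\Debug\BabbyNames.dacpac" ^
  /TargetServerName:localhost ^
  /TargetDatabaseName:BabbyNames_Local ^
  /p:BlockOnPossibleDataLoss=False ^
  /p:BackupDatabaseBeforeChanges=True ^
  /p:GenerateSmartDefaults=True ^
  /p:IgnoreColumnOrder=True
```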

And then the last thing I’m going to do here is save this. Before I hit Publish, I’m going to save this profile so that I can reuse it again later. For me, this is my local version, and this is just how I tend to name things: I just call it local BabbyNames publish. Now save that, and you’ll see that it actually shows up over here in my list of files on my file system. Now that I have all that, I’m going to hit Publish. And here’s the magic. It’s going to go out to that database – that’s exactly what it’s doing right now. It is creating a DACPAC file of that database so that it can do the compare against the DACPAC file it created during the build. So it compared the two, and now it says it’s successful. Yay. Wild applause.
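A saved publish profile is just a small MSBuild-style XML file; one capturing the settings above might look roughly like this (values are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Local.BabbyNames.publish.xml (sketch) -->
<Project ToolsVersion="Current" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>BabbyNames_Local</TargetDatabaseName>
    <TargetConnectionString>Data Source=localhost;Integrated Security=True</TargetConnectionString>
    <BlockOnPossibleDataLoss>False</BlockOnPossibleDataLoss>
    <BackupDatabaseBeforeChanges>True</BackupDatabaseBeforeChanges>
    <GenerateSmartDefaults>True</GenerateSmartDefaults>
    <IgnoreColumnOrder>True</IgnoreColumnOrder>
  </PropertyGroup>
</Project>
```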

You always get three things with a publish, when you do it through the UI at least. You get a preview file, and that is basically a textual description of what it said it’s going to do. So this makes sense: it created a new table and it created an index. You also get the actual script that was used. This is a SQLCMD script, so you can do variables and so forth, and let’s see if we can quickly scroll down without making everyone seasick. There are my columns; this is my table and my index. And then last but not least, you also get the results. So this is what you would expect to see if I had run this in SSMS, for instance. I get the same kind of results view to see what happened. This is also where, when there’s an error, we’ll often see the error, and you can use that later.

Alright, so that’s great. Because of time, and I want to get to VSTS, I’m going to really quickly do the rest of this in a different way to give you an idea of some other opportunities for doing this development. So I just created this one by going into Visual Studio and creating it all there. What I’m going to do to speed this up: I have the rest of the functions and the things I need. So I’m just going to create these things – I’m going to create a stored procedure to do some of the stuff, there’s an indexed view that helps do the searching – and I’m just going to run this. You know, that’s how a lot of us do our work, right? We work in SSMS, and so that’s all I’m going to do. I’m simply in SSMS, on the local database, the one I’ve been developing on, and I’m just going to run this. Of course – invalid.

So let’s see – I should have refreshed this; that was silly on my part. If I look at my functions – so it actually did all go in. I’m getting those errors because the first run had created those, and hopefully everything is good. I guess we’ll see. So I’m going to go back to SSDT, and this is the other way you can do it. Some people develop within their database and then want to see that reflected, and so we can right-click and say schema compare. I mentioned this earlier in the slides. You can compare between two databases, two DACPACs, whatever. So this project is a DACPAC; it can either be the source or the destination. I’m going to make it the destination and use the database that I just made the changes in as the source. I’m going to go ahead and connect to it, and I’m simply going to say compare. There’s this little button up here that says compare, and you can set other settings in here, very similar to a publish profile, to make it do certain things. I’m going to let it do its default deal. It takes a little bit of time, and sure enough, I believe this is everything I need. So I’m feeling somewhat confident we got what we needed.

You can click these things, you can see what they look like down below, make sure it’s what you want. I’m going to trust that it is and hit update. This is either going to totally blow up on me or it’s going to be amazing. We’ll see. Alright, and as it updates – this is exactly when Visual Studio would decide to stop responding for me, I can just tell. So now I’ve got everything in here; life is good. So we’re going to attempt this, see what happens. So remember, I set up my local machine in my build profile – I just want to make sure I did do that. I’m going to uncomment my little handy dandy post-build event, so now this is kind of like continuous integration, right? Locally, every time I build, I’m going to deploy those changes if the build is successful. So now we hope it is.

Alright, so it was successful. It built the DACPAC, and it is now deploying those changes to the local database. Now, again, I’m going to assume there’s not much that’s changed, because I had just brought those changes in from that database. I actually do spit out that file; because of time, I’m not going to go there and look at it. But again, with SQLPackage you can tell it to do whatever you want. So for my local development, I always have it keep a backup copy of the script that was spit out. So if this all worked, we should be able to go over and see some of this stuff start to happen.

So I’m going to use my local database – zoom in here a little bit. So for instance, that first function that generates our trigrams: I should be able to see some trigrams, and I do. That’s exciting. Now, the one thing that isn’t in here, you might notice, is that I still don’t actually have search terms, and this lets me quickly talk about one thing. Remember, the DACPAC – this whole SQLPackage thing – only deals with the middle portion of that script. It’s not data; it’s all schema-based stuff. If I need something to happen at the data level, I need to do it outside of there somewhere. So you have the opportunity to do a pre-deployment script or a post-deployment script. In this case, we’re going to do post-deployment. So I’m going to check, every time I deploy, whether there’s information in that table, and if not, I’m going to have it generate the trigrams. And so, again very quickly for the sake of time: you can add a script to the project; there are two kinds of scripts, post-deployment and pre-deployment. I’m going to choose a post-deployment script, and I’m simply going to copy in this little statement that says if there is no data in there, go ahead and generate the trigrams. That’s all it’s doing. So we save that, and again, I’m local. I haven’t committed anything yet; I’m just local. I’m going to go ahead and build, and what we should find is that now that I’ve made that change, it’s going to deploy those changes to that database, and hopefully we end up with data in that table. And it’s actually backing up the database – remember, that’s one of the things I’d chosen.
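The post-deployment script described here is a one-time seeding guard; a sketch (the procedure name is hypothetical):

```sql
-- Script.PostDeployment.sql (sketch)
-- Only populate the trigrams if the table is empty, so repeated deploys stay fast.
IF NOT EXISTS (SELECT 1 FROM dbo.SearchTerms)
BEGIN
    EXEC dbo.PopulateSearchTerms;  -- hypothetical proc that generates the trigrams
END
```

Because pre- and post-deployment scripts run on every publish, this empty-table check is what keeps the data work idempotent.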

So here we go. Let’s see what happens. I’m going to now look at SearchTerms, and sure enough, I have data in SearchTerms, so that’s great. And so, with all the things in place, I should be able to start querying some names. This is the whole point of this feature that the team is working on. So I search for a first name – I’m using my children’s names, by the way; it’s a little shout out to them because they might actually be watching. So Lia is my second oldest, and when I search for Lia – now, she is a trigram unto herself. She’s very special in that way, right? She only has the three letters. She gets a lot – she gets 1130 results. It makes sense. It’s all the names that have Lia in them. It’s fun; that’s just the reference to it.

But now that we have all of those, I should be able to start to search for names that are similar. So what I’m going to do is create trigrams out of a name, search for the matches in the trigram search table, and then link that to – Kendra made a table that gives you the number of times each name was used every year – and so we’re going to bring the ones to the top that have been used the most. That’s how we’re going to try and find names that are similar. So if I look for Malachi – he’s my third in line – it splits that up. This is by gender, and so you can see that there have actually been some females, in basically all of time in the US, that have been named Malachi. We could have named him Jamalachi; there’s a few people in all of time that have been named that. We could have done. Malachi is my favorite because a lot of people actually call him Malachi.
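The similarity search demonstrated here could be sketched like this, assuming a hypothetical trigram-splitting TVF and BabbyNames-style name tables (all object names below are assumptions):

```sql
DECLARE @Search VARCHAR(50) = 'Malachi';

-- Match the search term's trigrams against the precomputed ones,
-- then rank candidate names by how often they were actually used.
SELECT TOP (25)
       fn.FirstName,
       SUM(fnby.NameCount) AS TotalUses
FROM dbo.GenerateTrigrams(@Search) AS g      -- hypothetical TVF
JOIN dbo.SearchTerms     AS st   ON st.Trigram = g.Trigram
JOIN ref.FirstName       AS fn   ON fn.FirstNameId = st.FirstNameId
JOIN agg.FirstNameByYear AS fnby ON fnby.FirstNameId = fn.FirstNameId
GROUP BY fn.FirstName
ORDER BY TotalUses DESC;
```

Restricting the year range in the join is how the "just the 80s" variant in the next step would work.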

So there, we got it. I could even be a little more specific. I could say, oh, I want to do something just in the 80s, because I’m an 80s child. So I’m going to search for Audrey, number four in line, and see what kind of names within the 80s correlate to Audrey. It’s actually not that many. When you break up her trigrams, Audrey obviously is at the top, and again, there are a couple of men called Audrey – good. But I really like Audrey Rose. Maybe we would have chosen Audrey Rose, had we known.

Alright, project is done.

Brent Ozar: So are you – as you’re looking for new kid names, are you looking for ones that sound familiar or different?

Ryan Booz: We’re not a rhyming family. So if I look up Ryan for instance, Brian’s going to come to the top. I would assume that, but other things are going to come up. Sometimes you get – for David, you get – actually there’s a King David. Someone has been named King David. There you go.

Alright, so here’s what we need to do now. 15 minutes left; we’re done, we’re ready to put this into development. We want to get this moving forward. So I’m going to come back here, hopefully very quickly. There are the profile settings – again, this will be in the slides you can look at later. Again, don’t select always re-create database. Bad, bad option. So once you have this – I’m using VSTS. They make it very easy to stay within the Azure ecosystem, right? So if all my databases are there, it’s actually pretty straightforward. And that’s honestly where we had a problem. We don’t use VSTS for our stuff; we happen to use TeamCity. So figuring out how to get this stuff installed on all our build servers, and what needed to be there, was the trick, and that’s what I’m hoping to show in 10 minutes: for those of you who do not use VSTS and use something else, whether it’s Jenkins or TeamCity or something, what actually needs to be installed, and one way you could set up your project to make this work.

So this is what it would look like if you’re in Azure, to publish this in VSTS. They have an MSBuild task you can just run, you copy the files, they have a task to deploy it to Azure SQL DB, which runs SQLPackage for you, and everyone’s happy. And then I always publish my artifacts. For us, locally – again, it doesn’t matter what the build tool is. I don’t want to install Visual Studio and SSDT on every single build machine. That’s hard to maintain; that’s just silly. And so they’ve created a way to do this. What you need to do is make sure that you have the local agent – in this case, I installed a local VSTS agent that will allow me to run my builds. Within your project, you install this NuGet package. Once that NuGet package is accessible, you can then reference it with scripts within your package, so now you’re in control. So it’s the same four steps, but now I’m using a PowerShell script to run MSBuild locally without having to install all the tooling. And then I’m running SQLPackage locally with my own parameters to deploy it where I want to.

And so that’s what we’re going to do here. This is the CI project I’ve built – the same four steps – and I’m going to show you those files in just one second. So we have a build PowerShell script; that’s what I’m going to call here for the first step, so it builds the project. Inside that script, it references the project it’s going to build. I’m going to copy the output of that build to a temporary directory so I can use it later, and then I’m going to take the files that were output – the DACPAC and the deployment script – and use those to deploy this to a server. And then I always publish my artifacts back up to the build server so I can see them later: the script and the DACPAC, what have you.

So that’s what this is. Again, I have a lot of this in the slides that I’ll put up. I want to really quickly show you what this PowerShell looks like. I am not by any stretch a PowerShell wizard. Not even – like, I’m not even Harry Potter; I haven’t even got my wand; I’m not even in the realm. But I could quickly write a script to say, within the context of this build, this project, here’s where the tools are that I need so that I can run the build. And then the same thing – again, this will be included afterwards – once I have that, here’s how to publish it using SQLPackage. I’m passing in a few parameters to SQLPackage. I know there’s a more efficient way to do this, by the way. But there are literally hundreds of these you could pass in. So I am simply saying: publish these changes to this database, save the output file for me so I can see the script that was created, and then publish that stuff back up.
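In the same spirit as the two scripts shown here (this is not the actual demo code; package paths, project names, and profile names are assumptions), the build and publish steps might look like:

```powershell
# build.ps1 (sketch): build the .sqlproj using the SSDT targets from the
# Microsoft.Data.Tools.Msbuild NuGet package instead of a full Visual Studio install.
$tools = (Resolve-Path ".\build\Microsoft.Data.Tools.Msbuild*\lib\net46").Path
& msbuild ".\BabbyNames\BabbyNames.sqlproj" /t:Build `
    /p:SQLDBExtensionsRefPath="$tools" `
    /p:SqlServerRedistPath="$tools"

# publish.ps1 (sketch): deploy the DACPAC with the SqlPackage that ships in the
# same package, saving the generated script as a build artifact.
$sqlPackage = (Resolve-Path ".\build\Microsoft.Data.Tools.Msbuild*\lib\net46\SqlPackage.exe").Path
& $sqlPackage /Action:Publish `
    /SourceFile:".\BabbyNames\bin\Debug\BabbyNames.dacpac" `
    /Profile:".\BabbyNames\Develop.BabbyNames.publish.xml" `
    /DeployScriptPath:".\output\BabbyNames.deploy.sql"
```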

So with those two files, here’s how we do it. Hopefully we can do this in just a couple of minutes. I’m going to go to my project directory because it’s easier to show these things there. Here’s my project directory. I’m going to copy these two files over here – the build scripts I just showed you. And then what we do is create a build folder. This is outside of the project itself; it’s actually just in the solution. Within that folder, you just need to run one thing. I’m going to open PowerShell to that folder, and with NuGet – I’m just going to copy the command from something else I have – I’m going to install that NuGet package for the build tools, and that’s all I need to do. So now that’s within the context of this whole solution, and that build file references those tools. So we’re good to go.
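The one command run in that build folder is just a NuGet install; it would be something like this (Microsoft.Data.Tools.Msbuild is the package commonly used for headless SSDT builds; version pinning omitted):

```powershell
# From the solution's build folder
nuget install Microsoft.Data.Tools.Msbuild
```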

So if it all works out, you’ll see that I now have all these changes in my source. There’s a lot of stuff in here. I’m going to go ahead and commit all of this – I’m still on the feature branch, though. I’m going to push that up, go into VSTS, and if I come to VSTS, you’ll see that my commit actually made it up here, and there’s my branch. So we have all the stuff we need: we have the build tooling, and the project’s ready to go. The build setup that I made, though, is for the develop branch. So this is where we would say we’re now ready to merge this into the develop branch, and so in source control, what we do is move over to that branch. Really quickly, because I’ve switched back to develop, you’ll notice that all of my changes are gone again, right? Because in source control, I actually moved backwards in time for a moment. So this is that green feature branch, and I’m about ready to bring it back into develop. So I come back to where I’m at, I say, hey, I’d like to merge this thing into develop, and I’m going to go ahead and do that. Yay, it was successful, and I’m going to push these changes. So I’m going to go to sync and push those changes up. They were successful.

So hopefully, if I come back here, we’ll now see these changes. I’ll go over to my develop branch, and sure enough, it has the same thing. It has that build tooling. So good, the merge worked, and I can do something with this. And the build can find that we have this tooling. So here’s my build process, using those tools. I can simply say, queue this up. We’re going to talk about continuous integration in a second, but I’m going to queue this off the develop branch, so now my changes are there, and it says it’s been queued. And if everything is working, my local machine is where these changes are being sent, because I have a local agent running as if this were my build environment. And you can see it’s actually spitting everything out.

This is a great thing to show, actually. So this is one thing that happens, and I said this earlier and I probably missed it: I forgot to create a copy of this publish profile for my dev environment. I usually have one for each environment, in case I need to set things differently, and I reference that deployment profile so that I get a consistent deployment. And one thing that’s interesting here: for some reason – I think it’s the .gitignore file that comes by default – these profiles are ignored by source control, so they are not included by default. So there are two things that need to happen with these deployment profiles. First, I need to include them in source control. And then the other little gem is, again, when a build happens, anything I need to reference later, I need to make sure is part of the build output. So I click on that profile, and down here where it says copy to output, for that file I always want to say copy always. That way, it’s accessible to my build. So once the build has happened, it’s accessible as an artifact to the next step and I can use it.
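In the .sqlproj, the "Copy always" setting picked here shows up as item metadata, roughly like this (the profile file name is illustrative):

```xml
<!-- Sketch of the .sqlproj entry after setting Copy to Output Directory = Copy always -->
<ItemGroup>
  <None Include="Develop.BabbyNames.publish.xml">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
```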

So good, I had a change; let’s go ahead and commit that. Alright, so I’m going to actually just commit and push a second time. Now let’s see what happens if I queue another build. Queue it up – I planned that, by the way. I’ve done it like five times during this demo; I always forget to include this silly thing. Alright, so for some reason, it’s not liking me at the moment. So for the last thing I’ll show, I will actually just really quickly do this locally. I’m not 100% sure what develop – here’s one thing you can do, actually, and this is kind of nice. When I go to this build, I should be able to look at the artifacts, assuming that piece happened. I could look at it locally, but I won’t. Just to show that it goes forward – we can look at this; I’ll have to figure it out some other time. If I use that deployment profile – so now I see that it says local, but actually in my script I overrode that, so I’m going to do this to develop. Publish it. Overall, that’s the key. Actually, this is probably the smallest portion of it. The biggest thing for us was getting that build to happen. I think it’s been hard to find that documentation. Installing that NuGet package and having the build scripts reference that stuff on each build machine is really what allows you to do this.

Now, the only thing I’ll say is – so it should work. I apologize; out of the 13 times I’ve done this in the last two days, why now? But we all know why, right? This database is now updated, and I should be able to switch over here and run the same queries. And so, let’s see if we actually can – and sure enough, we can’t. So, hey, the automatic deployment didn’t happen. To actually do the automatic deployment, the only other thing that has to happen is you set up a trigger. That was going to be the last thing; I was going to make one change. You can just say, hey, enable continuous integration. So as I make changes and they get committed to develop – any time something happens to this branch – it goes ahead and does that, and then you don’t have to go queue this anymore. As things get committed within your development environment, it just happens. You can do that to multiple databases if you need to. For us, we usually have five or six or seven example databases, and we could deploy those changes to each of them.
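In the VSTS UI this is a checkbox on the build definition's Triggers tab; in today's YAML pipelines, the equivalent would be roughly:

```yaml
# Sketch: run the build automatically on every commit to develop
trigger:
  branches:
    include:
      - develop
```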

So at this point, if I were to try and commit another change, it would do it. Hey, for some reason that last little step isn’t working. But back to the PowerPoint. So for me, that was the key to getting this thing to work, and hopefully that’s valuable to somebody. Let me just go ahead and finish this up. What about production? This is something that comes up a lot, and I know they’ve been dealing with this. Honestly, I sat in on a couple of presentations by the team themselves, and they would say: at the end, generate the script. You know, it depends – how much risk tolerance and time do you have, right? If you have a lot of DBs, doing these things without some kind of scale might not be possible. There are some neat things about production, though. I would love to talk about deployment contributors sometime. That’s a way for you to actually write C# code to get in the middle, and when something happens that you don’t like, you can rewrite the script as it goes. So you have a lot of control, which is cool. But they basically said, hey, generate the script – that gives you a lot of uniformity with what you’re used to doing already. It’s a way to get things going.

The other last couple of things: further reading. Here are some good quick links where you can get more information. Again, the SQLPackage docs – there are a billion switches you can play with. That one is cross-platform, while MSBuild for SQL projects is not yet cross-platform. I was going to ask about this yesterday, and then out of nowhere, some other gentleman asked the SQL tools team the same thing at like three in the afternoon, and they said it is on their radar and there’s a small chance that maybe before this year is out they will have cross-platform MSBuild support.

So hopefully this gives someone some excitement to approach their database deployments and database development in a new way. Know that there’s a ton of stuff there; this was really meant to spark some interest, and I hope it did. Thank you. Any initial questions? I know we’re right at time.

Brent Ozar: Nice job, thank you, Ryan. Got several people asking questions. Steve Malcolm asks, he says, “We use Gitlab, is it the same process?”

Ryan Booz: It is, yeah. I mean, where the source control is doesn’t matter. As long as you have those tools within your build server – you have to have that NuGet package and MSBuild – as long as you have access to those two things, it will build and you should be able to do this.

Brent Ozar: Let’s see here; Bill says, “In an environment where you’re hosting multiple databases with the same schema but with different customer specific data, do you have problems getting all the subscribers to having their deployment updated at the same time when you want to do updates, or do some subscribers insist on delaying their updates?”

Ryan Booz: We generally don’t give them the option. That’s just for us. If we’re hosting you, you’re going to get the latest and greatest and we give you plenty of time. We do have a couple databases that we hold the side for various reasons. They’re obviously – they pay us money for that, so that’s a good thing.

Brent Ozar: Scott says, “How are synonyms to other databases on the same instance handled?”

Ryan Booz: So you can make references. That’s actually something that I’ve had to work out a little bit within our own projects. In Visual Studio, there’s this concept of database references, and there’s some documentation online – it would take me a minute to find it; I’ll try to make a note and make sure it’s in the stuff I post up. You can reference other databases, so if there’s a reference to another database within the project, it will know how to accept that and you don’t get build errors. I’m assuming that’s what they’re asking about.
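For reference, a DACPAC-based database reference in a .sqlproj looks roughly like this (paths and the database name are illustrative):

```xml
<!-- Sketch: reference another database's DACPAC so cross-database names resolve at build time -->
<ItemGroup>
  <ArtifactReference Include="..\OtherDb\bin\Debug\OtherDb.dacpac">
    <HintPath>..\OtherDb\bin\Debug\OtherDb.dacpac</HintPath>
    <SuppressMissingDependenciesErrors>False</SuppressMissingDependenciesErrors>
    <DatabaseVariableLiteralValue>OtherDb</DatabaseVariableLiteralValue>
  </ArtifactReference>
</ItemGroup>
```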

Brent Ozar: And then last, Sassan asks, “Given the size of your family, how do you find enough time and energy to stay on top of things?”

Ryan Booz: Dude, you know, I love learning. It’s also the death of me sometimes. You know, this has been a grueling week trying to get ready for this, knowing I had like this much material and only this much time, and trying really hard. So it’s always sad when the last thing doesn’t work, but I hope it gets you guys going. They’re an awesome family; we homeschool, so we just have a blast learning stuff together. They are natural learners, and I just try to follow along. That’s the deal.

Brent Ozar: Nice. Well, thanks a lot for teaching today, Ryan. Really appreciate it and thanks to everybody who came in and hosted. Thanks to Stephanie for co-hosting today and making sure everything moves smoothly. And that’s it for us, we will see you all next time at GroupBy.


Ryan Booz

VP of Business Intelligence and CTO at EnergyCAP, Inc.
Ryan is the VP of Business Intelligence & CTO at EnergyCAP, Inc. where he has led both IT and development over the last 14 years. Most recently, Ryan has been spearheading a team tasked with bringing the power of BI to staff and clients through industry-leading DevOps practices. His mission is to provide the confidence and freedom to explore data in new and tangible ways.

Ryan is a long-time DBA, starting with MySQL and Postgres in the late 90s. He has been working with SQL Server since 2004, supporting an international client base and a thriving SaaS application that hosts thousands of users. He’s at the top of his game when he's learning something new about the data platform or teaching others about the technology he loves.


