SQL Server & Containers


Target Audience:

This session is designed for DBAs, Developers and Sysadmins who want to learn about running SQL Server in containers on Windows Server 2016. This session assumes that attendees have a working knowledge of SQL Server and a basic knowledge of Windows Server administration, but all concepts will be explained as the session progresses.

Attendees do not need to have any prior knowledge of containers as this session aims to give an introduction to the technology so that attendees have a base of knowledge on which they can build.

Abstract:

Attendees will be taken through the following:

  • Defining what containers are (benefits and limitations)
  • Configuring Windows Server 2016 to run containers
  • Installing the Docker engine
  • Pulling/Pushing SQL images from/to the Docker repository
  • Running SQL Server containers
  • Committing new SQL Server images
  • Exploring 3rd party options to run containers on previous versions of Windows Server (real world example)

The session will explain concepts via PowerPoint slides, which will be backed up by demos.

Attendees do not need to bring anything to the session, but if they want to follow along they could (emphasis on could, as they really don’t have to) have a VM running Windows Server 2016 with the containers feature enabled.

Why I Want to Present This Session:

I’ve been working with containers for most of this year and have seen the benefits that they can bring to an organization. I’ve also seen the downside and am well equipped to talk about the potential pitfalls that are lurking out there.

This is a new technology available to SQL Server Professionals, one that I think hasn’t realized its full potential. I’m excited about where this could lead in the future and how working with SQL Server could change because of it.

Additional Resources:

SQL Server & Containers – Part 1

Session Transcript:

Brent Ozar: In this session at GroupBy, Andrew Pruski is going to be talking about SQL Server and Containers, so take it away Andrew.

Andrew Pruski: Thank you, Brent, and welcome everyone to this session on SQL Server and Containers. Just want to introduce myself first: my name’s Andrew Pruski, I’m a SQL Server DBA, originally from Swansea, Wales, but I’ve been living in Dublin, Ireland, for around three years now. I’ve been working with relational databases, in one form or another, for around ten years, six of those as a SQL Server DBA, and I’ve been working with Containers themselves for about two years. My blog is up there, dbafromthecold.com; I’ve posted multiple articles about Containers, some of which I’m basing this session on, and some of which go into a little bit further detail. My Twitter handle and email are up there as well, so if you have any questions after today, feel free to ping me; I’m always happy to talk about this stuff.

So onto the session. The aim of this session is to give you the background knowledge and commands to be able to go away and evaluate this technology for yourselves. There’s a lot of buzz about Containers in the technology world, but not so much in the SQL Server world, which I find odd, as this technology has benefits that, for us as SQL Server people, are really worth exploring. It has some drawbacks too, which we’ll cover, but I’m hoping that after today, you can go away, have a look at this technology, and see if it can make a difference in your day-to-day work.

Okay, so there are other technology companies out there that will do you a SQL instance in a Container (I think Spoon.net will do SQL Server 2014 Express edition in a Container), but as Docker announced a partnership with Microsoft way back in 2014, and official support came out at the end of last year, the technology company we’ll be focusing on is, of course, Docker.

So here’s what we’ll cover. We’ll start off with a bit of Container theory, so we’ll start off with the definition of Containers, what they are. Then we’ll go into the different types available to us, how they compare to virtual machines, a bit of Container networking, and then the pros and cons of the technology. After that, we’ll go into configuring Windows Server 2016 to run the Docker engine, building our first Container, what a Docker file is and how it’s implemented, building custom images, and then pulling and pushing from the Docker hub. And then finally, to round off the session, I just want to talk about how I’ve implemented Containers at my company: the issues that we were having that we thought Containers could resolve, the architecture that we built, the issues that we ran into along the way, and then the benefits that we saw.

But first things first, what are Containers? Now, the definition on the slide there is from the Docker website, and it says “A Container is an environment that contains all the necessary binaries and libraries, for a piece of software to run in the same manner regardless of its environment.” Basically, a Container is an isolated place where an application can run without it affecting the rest of the system, or the rest of the system affecting it. In practice, they are lightweight objects specifically tuned for, say, one function, such as running SQL Server. They are quick and easy to deploy, easy to customize and easy to share.

So very briefly, some fundamentals, a bit of terminology. So we have the Container host, and that’s the machine that we’re going to run our Containers on, and for us, the SQL Server people, that’s Windows Server 2016, or, if you’re working locally, Windows 10 Pro or Enterprise Edition with the Anniversary Update. Then we have the Container engine; this is the service that we interact with to deploy, run and build Containers. We can send it a variety of commands, such as how many Containers do I have, and what state are they in. Then we have the Container registry, simply a repository on our host that we store Container images in. Now, Container images were explained to me as the installed state of an application in a reusable format. I like to think of them as baselines that we can use to build Containers from, and every Container built from an image will be a carbon copy of every other Container built from that image.

Okay, so when working with Windows Containers, we actually have two types available to us: Windows Server Containers and Hyper-V Containers. Windows Server Containers perform application isolation through namespace and process isolation technologies. However, they share a kernel with the Container host, and therefore with all other Windows Server Containers running on that host. Hyper-V Containers take isolation a little bit further by running each Container in a highly optimized virtual machine. They don’t share a kernel with the Container host or any other Container running on that host.

Now, in practice, I’ve only ever used Windows Server Containers, because they give me exactly what I need. What I need is the ability to spin up instances of SQL Server in a very short period of time. I don’t really care about the level of isolation that I’m getting yet, but I appreciate that your needs may be different – you might want to have a higher level of isolation – and that’s why I dropped this slide in, so we’re aware of the different types available. And it’s really simple to build Hyper-V Containers: you just need to enable the Hyper-V role on the server, which you don’t need for Windows Server Containers, and then at run time we specify an isolation switch, that isolation equals Hyper-V at the bottom there. But for the purposes of this session, we’re going to stick with Windows Server Containers.
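
For reference, the only difference at run time is that isolation switch on the docker run command. A minimal sketch (the container name is a placeholder, and the SQL image’s environment variables, covered later in the session, are omitted here):

    # run a container in a highly optimized VM rather than sharing the host kernel
    docker run -d --isolation=hyperv --name myhypervcontainer microsoft/mssql-server-windows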

So those are the two types available to us, but how do they compare to virtual machines? Now, on the left-hand side here we have the virtual machine stack that we’ve all come to know and love over the years. We have our infrastructure at the bottom, our host, and then our operating system, and our hypervisor. Now, I know with certain hypervisors, such as ESXi, which is a bare-metal type-1 hypervisor, the operating system and the hypervisor are one and the same thing. But if you think about Windows Server 2016, you have the operating system and the Hyper-V services running on top of that, so that’s kind of why I split them out there.

But anyway, on top of all of that, we then have our virtual machines; the full-blown Guest OS, any applications that we’ve installed, and the binaries and libraries supporting them. Moving over to the right side, the Container side of things, things are a little bit different. We have our infrastructure, our host, our operating system, Windows Server 2016, and then our Container engine, the Docker service, running on top of that. On top of all of that, we then have our Containers. Each has the application and just the necessary binaries and libraries required to run that application, and that’s the key difference. Containers do not have a full-blown Guest OS; they just have what they need to perform their function, and that means their footprint is significantly less than virtual machines. You can run a lot more Containers on a host than you can virtual machines, without oversubscribing.

Now, one thing I will say is that I am not advocating that Containers are going to replace virtual machines; far from it. What I am saying is that in certain situations, Containers could be better suited. The way it was explained to me, and this is arguable, is that if you have an environment where you need to do one thing lots of times, Containers could be your way forward. However, if in that environment you need to do lots of different things, virtual machines are probably still your best bet.

Okay, so I do want to briefly touch on some networking, because we need to know how to connect to these things once they’re up and running. So the black box there is our host, and it has network connectivity provided by a NIC, and it needs to extend that connectivity to our Containers. It does this via a Hyper-V virtual switch, which all the Containers connect to via a virtual NIC on the host. When the Containers feature is enabled and the Docker engine spins up, by default a NAT network is created, and the default gateway of the NAT network is assigned to the virtual NIC.

Now, that’s important, because that allows us to specify port mappings, so that end points within the Containers are accessible to external clients such as SQL Server Management Studio. In practice, what we’ll see is that when we spin a Container up, it will be assigned a private IP address, which we can use to connect locally on our host; or, if we’re connecting remotely, we use the host IP or name and the port number that we specify at Container run time.

Okay, I think that’s enough theory, let’s go into the pros and cons of this technology. Starting off with the pros, why do we want to use this stuff? Well, one of the first things there, simple and fast to set up. It is very, very easy to get up and running with the Docker engine on a Windows Server host; it literally is two PowerShell scripts and you are good to go. The Containers themselves are very, very quick to get up and running; you can have a new instance of SQL Server built in seconds. As I said earlier, they have a relatively low footprint compared to VMs. At my company at the moment, I’m running 25 to 30 Containers on a host, using about 30 to 35GB of RAM and 200GB of disk space; far less than if I had 25 to 30 VMs. Next one there, the ability to customize images. Now, this is a huge plus point with Containers. Not only can we use the vanilla images from the Docker hub, but we can pull images down and tweak them to our needs, so that when we spin a Container up, it is completely customized to how we want it. So not only does it have a SQL instance that’s configured how we want it, but we can have our development databases there ready to go; really, really powerful when it comes to building development environments.

Access to the Docker repository: okay, there’s hundreds, thousands, if not tens of thousands of images up there, and even though we’re SQL Server people, so we’re probably only interested in four or five of those images, the SQL Server images, the fact that it’s there means not only can we pull images down, but we can push images up. It’s really handy for sharing images, and we’ll discuss that at the end of the presentation.

Okay, I’m starting to sound like a little bit of a Docker fanboy now, and it’s probably because I am, but what are the cons of this technology? And there are a few. The first one: only the database engine is available in a Container. The agent service isn’t there, and I’ve tried my best to get it up and running. If you can get the agent service running in a Container, please let me know, I really want to work out how to do it. But at the moment, there’s just the engine, so no SSIS, SSAS, SSRS, I’m afraid. This stuff is only supported on Windows Server 2016 or Windows 10, so if you’re running anything else, this could be a blocker for you. It’s a blocker for me; I’m using Windows Server 2012 R2 in my production environment, and there’s no point having a dev environment higher up than my prod.

Next one down, the official SQL Server images from Microsoft are for 2016 and vNext (2017) only. There was a 2014 image there, but it was deprecated and now, I believe, it’s gone. So again, if you’re not running one of those versions – and I’d love to speak to someone who’s running 2017 in production, just to see how mad they are – but if you’re running anything lower, that could also be a blocker. And as I said, it’s a blocker for me, mainly because I’m running SQL Server 2012 SP3 in production. But for both of those points, there are workarounds, and we’ll discuss those later.

Next one down, the SQL Server images aren’t the smallest. These things, they started off around 14.6GB and I think the newer images are coming down a little bit. I think they’re like 13 something. But for Docker images, those are quite large; usually, Docker images are in the MBs.

So, if you’re sitting in an office with a lightning-fast internet connection, that’s probably not going to be an issue for you. However, if you’re me, sitting at home in, for some reason sunny, Dublin on my, oh if I’m lucky, seven megabit connection, that can take some time to download. The last point there is more of a discussion point, just to get you thinking about it: is this stuff suitable for production? Think about the pros I mentioned; simpler and faster setup, easier to deploy. I don’t know about you, but when I’m building a 24/7 critical instance of SQL Server, I kind of want to spend some time building that and making sure everything’s right. Maybe in a DR scenario where I need to spin something up quickly, but I really am on the fence with whether or not this stuff is production worthy.

Okay, enough theory, enough of me chatting about pros and cons; let’s actually get into building containers now. And the first thing we need to do is configure our Windows Server instance to run the Docker engine. So we have four PowerShell scripts there, but we only really need two. I’m going to discount this one because it’s just restarting the computer, and this one we don’t actually need to run at all; I’ve just dropped it in there for completeness’ sake. What we need to start off with is this script, and that is a simple line to install the PowerShell module called DockerMsftProvider, which is a module for discovering, installing and updating the Docker engine. And when we run that, the engine will come back and ask, do you want to install the NuGet package provider, which is the third script there. So just that first script and we are good to go and install the Docker engine, which is this one here. Nice and simple: install package, name Docker, with the provider we just installed. Let that run, give our server a bounce, and we are good to go. We have installed the Docker engine.
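
Those scripts, as best they can be reconstructed from the slide, look like this (run from an elevated PowerShell session):

    # install the PowerShell module for discovering, installing and updating the Docker engine
    # (this is the step that prompts to install the NuGet package provider)
    Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

    # install the Docker engine itself, using the provider we just installed
    Install-Package -Name docker -ProviderName DockerMsftProvider

    # give the server a bounce
    Restart-Computer -Force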

So once our server comes back up, we do need to do a couple of checks just to make sure everything’s gone as expected. And the first thing to do is just double check that the containers feature has been enabled. Once we’ve got that, we can then start interacting with the Docker service. So we’ll just double check that the service is running: so Get-Service docker. When everything’s up and running, we can then send our first Docker command, and I always use something simple. So I say, hey Docker, what version are you running at? That’s docker version, and if everything’s good we’ve got the client and server versions returned to us, and we know the Docker engine is installed, running and responding to commands as expected. Oh, I do want to quickly mention a little bit about Azure. If you are working in Azure, you don’t have to do any of this, because Microsoft has very kindly put an image up there that has the containers feature already enabled. There’s also a couple of images there, I think, for Windows Server Nano and Windows Server Core. So you’re still going to need to get a SQL Server image, but every little bit helps.
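
So the post-reboot checks come down to three commands; a quick sketch:

    # confirm the containers feature has been enabled
    Get-WindowsFeature -Name Containers

    # confirm the Docker service is running
    Get-Service docker

    # confirm the engine responds to commands
    docker version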

Another thing to mention about Azure as well is, you’ll notice that I’m working on a server with a GUI, but everything I do to interact with the Docker engine is command line. So why bother with a GUI? That kind of makes sense; this is a host where we want to give all the resources to our containers, so why – I know it’s only a little bit, but why bother spending resources generating a GUI? Well, the simple fact is, and it might be up there, but I couldn’t find it: my lab’s in Azure and I could not, for love nor money, find a Windows Server 2016 Core image. And that’s the only reason. You can, of course, run all this stuff on Windows Server Core.

So, first demo, just going to go through setting this stuff up, just to actually show you how simple it is. So first thing, we’re going to install the DockerMsftProvider module. So hit that, hit execute, and the engine should come back and ask us to install the NuGet package provider. So I don’t need to run that first script at all; I need to take it out of the slide really. So we’re going to hit yes, wait for that to install… Cool. Now we’re good to go and install the Docker engine. Hit execute… Excellent. Okay, so we give our server a bounce and then we can do a couple of checks. So we let that go down, and when it comes back up, the first thing, again, we need to do is check that the containers feature has been enabled. So bring the server back up, jump onto the server manager, next, next, next, next, next… Excellent. So now the containers feature is enabled, let’s open another command shell prompt and double check that the service is there – status will be running, hopefully – and then we can send our first command. So we’re going to say, Docker, what version are you running at? Excellent stuff, so we’ve installed the Docker engine and it’s responded to requests as expected.

So we can now go ahead and start building containers. And the first thing we need to do is identify the image on the Docker hub that we want to build containers from. And we do that – my mistake, we’ve got a few Docker commands here that I’ve dropped in just for reference. So if you download the slides from my blog, each one of these is hyperlinked to the actual Docker documentation that will give you the different flags and switches and things like that. But let’s carry on and go with searching the Docker hub for an image that we want. And we do that with the Docker search command. So we’re going to say, Docker, search for an image, and we’re SQL Server people, so we’re going to say microsoft/mssql. And that’s going to return all the images available to us that we can pull down to our local repository. So we’re going to pick one there, and I think the one we’re going to go for is the vNext edition, which is this one here; but there are other ones. There’s the Express edition, the Linux edition and then the Developer edition, which is 2016 SP1.
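
Here’s that search as you’d type it:

    # list the SQL Server images available on the Docker hub
    docker search microsoft/mssql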

So now that we’ve identified an image that we want, we can pull it down to our local repository, and we do that via the Docker pull command. So we say, Docker, pull microsoft/mssql-server-windows. Hit execute, and that’s going to pull down the image into our repository. Once that’s completed, we need to make sure it’s there, so we search our local repository via the Docker images command. And this just gives us a list of all the images that are in our local repository that we can use to build containers from. The size of the thing there, 14.6GB, so that did take some time to pull down, but that doesn’t matter, because now that we’ve got it here we can build as many containers as we want from it. So let’s run our first container, and we do that via the Docker run command.
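
The pull and the check, written out:

    # pull the vNext image down to the local repository
    docker pull microsoft/mssql-server-windows

    # list the images in the local repository
    docker images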

Just going to go through what we’re doing here; we’ve got docker run -d, run the container in detached mode, so it’s going to run in the background and we can continue using our shell. Then we’re going to say -p, and this is where we specify our port mappings. Do you remember that NAT network? So what we do is we say, Docker, map port 15789 on the host to port 1433 in the container. And what that’s going to do is, any connection hitting the host on that port will automatically be mapped to 1433, the default port of SQL Server, in the container. Then we’re going to set some environment variables; we’re going to say accept the end user license agreement and specify an SA password. We’re then going to give the container a name, and then specify – I can’t say that for love nor money – and then specify the image name that we’ve just pulled down. Hit execute, the engine is going to come back with a big long ID to identify our container, and then we wait for our container to spin up. When it exits out, we then need to do a couple of checks, but I just want to quickly mention something about that ID.
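
The full run command looks something like this (the password and container name here are just placeholders):

    # run detached, mapping host port 15789 to 1433 in the container
    docker run -d -p 15789:1433 --env ACCEPT_EULA=Y --env sa_password=Testing11@@ --name myfirstcontainer microsoft/mssql-server-windows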

When we refer to our containers, when we’re running commands to, say, inspect our container or start it up or shut it down, we can either use the name that we set or the first three digits of that ID. When I first started working with containers, I was actually cutting and pasting the whole ID, and then I watched a demo with some chap who was working with it and he was just using the three digits, and boy did I feel silly. So when we’re referring to a container, either use the name or the first three digits.

So that says it’s out, so let’s just double check that it’s up and running, and we do that via the Docker PS command. Cool, and what the Docker PS command does, by default, is show us all running containers on our server. If something’s gone wrong, we won’t have anything returned here and we’ll have to use the -a flag, and that will show us all containers on our host, no matter what state they’re in. And if we’ve just run this and nothing comes back, and then we run the docker ps -a command and we’ve got a status of exited, something has gone wrong. So we can run something along the lines of Docker, logs, my first container. And that will show the container logs, and it will probably show us what’s gone wrong. And if you’re me, it will say something along the lines of, you need to accept the end user license agreement. So we’ll go back here, drop the ACCEPT_EULA in, and then run again. And if this time it all goes right, we’ll have a container with an ID there and a status of up.
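
Those checks in command form:

    # show running containers only
    docker ps

    # show all containers on the host, whatever state they're in
    docker ps -a

    # show the container's logs if something has gone wrong
    docker logs myfirstcontainer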

So, good stuff. We’ve got a container, we’ve got it up and running, how are we going to connect to it? It has a private IP address assigned to it when it spins up, and we’re going to connect locally, so we need to find that private IP. And we do that with the Docker inspect command. So we say, Docker, inspect our container name. And that’s going to output a whole lot of information in JSON format, and what we’re looking for, right at the end, is a private IP in the 172 range. And we can use that, if we have Management Studio on our server: drop that in with our SA username and password and connect into our container.
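
And if you don’t fancy scrolling through all of that JSON, you can filter for just the IP address; a sketch using Docker’s format flag:

    # dump everything about the container
    docker inspect myfirstcontainer

    # or pull out just the container's private IP address
    docker inspect --format "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" myfirstcontainer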

If we’re connecting remotely, we use the host IP and the port number that we specified. So 15789 is the port number that we specified; hit execute and we’ve connected to our container. So let’s go through that process now. So we’re on our host, let’s search for the images that we want. So Docker, search microsoft/mssql, and there’s the image. So that’s the vNext image, so we’re going to go with that one. So we’re going to pull that down to our local repository. So we’re going to say, Docker, pull that image down, microsoft/mssql-server-windows. Hit execute… That’s going to take a bit of time depending on our internet connection, so we’re going to go away and make a cup of tea. When we come back, hopefully, we’ll see it’s successfully downloaded. Excellent, so we’ve got the image there. So we’ll just clear that out and double check our repository by running the Docker images command. Cool, so we’ve got that in there. 14.6GB is not the smallest image in the world, but it doesn’t matter. We’ve now got it in our local repository and we can build containers from it. So let’s build our first container. So Docker, run in detached mode and map port 15789 to port 1433 within the container. We’re going to accept the end user license agreement and then we’re going to specify an SA password. We’re going to give it a name and then we’re just going to build it from the image that we just downloaded. So microsoft/mssql-server-windows, hit execute. It’s going to come back with an ID pretty much immediately, and now we’re waiting for our container to spin up.

Okay, so let’s check that’s up and running and everything’s gone well. So we’re going to say Docker PS – no, yes Andrew. Cool, so our container is there, it’s got a status of up. Everything looks to be good. So I’ve got Management Studio installed locally, because this is my lab. So we’re going to find the private IP by running the Docker inspect command. So, Docker, inspect my first container. And there’s our private IP; so we’re going to spin up Management Studio, drop that in with our SA username and password and see if we can connect.

Excellent, okay, so just to recap what we’ve done: we’ve configured our Windows Server host to run the Docker engine, we’ve searched the Docker hub for an image that we want, we’ve identified that image, pulled it down to our host and then started building containers from it. And we’re spinning up containers, but they’re empty instances of SQL Server. Now, I know there are PowerShell gurus out there who will just say, well, I’ve got a PowerShell script that will restore our databases, no problem. And that’s great, but it’s kind of missing the point of containers. What we can do is take that image we’ve just downloaded and customize it to exactly how we want it, so that when we spin a container up, it has an instance of SQL Server in there with all of our development databases ready to go. And the way we do that is via a Docker file. Now, a Docker file is simply a file on the container host that contains a bunch of commands that the Docker engine will work its way through, executing them one by one, to build us our custom image.

So here’s an example Docker file, starting with our base image. So we’re going to build a new image from the existing image that we just downloaded. Then we’re going to run a PowerShell command to make a directory within our container; so we’re going to make a C:\SQLServer location, and then we’re going to copy two database files into that location. Notice that the database files don’t have a file path there; so when we execute this, those two files need to be in the same location as our Docker file. Finally, because I’m forgetful, I’m going to set the environment variables, the SA password and accept the end user license agreement, there, and then we’re going to attach the database that we just copied in into our container. You can, obviously, attach more than one database, and that database can have more than one data file. I’ve just, sort of, kept it this way because it looks nice on the slide.
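
Put together, the Docker file looks something along these lines; a sketch, assuming hypothetical database file names and password, with the attach done via the attach_dbs environment variable that the Microsoft Windows image supports:

    FROM microsoft/mssql-server-windows
    # create a directory within the container for the database files
    RUN powershell -Command (mkdir C:\\SQLServer)
    # copy the database files in; no file path, so they must sit alongside this Docker file
    COPY MyDatabase.mdf C:\\SQLServer
    COPY MyDatabase_log.ldf C:\\SQLServer
    # set the environment variables, because I'm forgetful
    ENV sa_password=Testing11@@
    ENV ACCEPT_EULA=Y
    # attach the copied database when a container spins up
    ENV attach_dbs="[{'dbName':'MyDatabase','dbFiles':['C:\\SQLServer\\MyDatabase.mdf','C:\\SQLServer\\MyDatabase_log.ldf']}]"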

Okay, so then we’ve got our Docker file and we need to build our custom image. It’s very simple: we use the Docker build command. So we say, Docker, build me an image; -t, tag it with a name, so myfirstimage; and then, for example, there I’ve just done dot. And dot says, look for a Docker file in the current location that I’m in, in my shell. You can specify a file path there, but it’s just a nice example, dot. So, docker build -t myfirstimage, and we can see it stepping through each one of our commands, building intermediate containers, and then finally removing the last one and building our custom image.
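
So the whole build is one command, run from the directory holding the Docker file:

    # build a custom image, tagged myfirstimage, from the Docker file in the current location
    docker build -t myfirstimage .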

So we run our Docker images command to query our repository, and there it is, our new custom image. So we’ve called it myfirstimage, 14.8GB, slightly bigger because I’ve copied some files into it. And now we can build containers from that custom image. So we use the run command again. We say, Docker, run in detached mode, map in a port, 15777 – oh, and by the way, I’m picking these port numbers at complete random, there’s no reason why they should be those; they’re just the first thing that popped into my head. So map port 15777 to port 1433 within our container, give it a name, and then specify our custom image. Notice no environment variables; they’re already baked into our image. Hit execute, it comes back with an ID, we wait for the shell to return, and then we can check that everything’s gone well and it’s up and running, just by running the Docker PS command. If everything’s gone fine, we’ll see our new container there with a status of up, and now we can connect to it. And this time, I’m going to connect remotely using the container host IP, the port number that we specified, and if everything’s okay, we go in and our database is already there ready to go.
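
Running from the custom image then looks like this (the container name is again a placeholder):

    # no environment variables needed; they're baked into the custom image
    docker run -d -p 15777:1433 --name mycustomcontainer myfirstimage

    # check it has a status of up
    docker ps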

So let’s run through that again. We’re on our host, we’ve got our Docker file in a location with our two database files. Here’s our Docker file, just with all the commands in; I’m not going to run through them now. So let’s build our image. Navigate to the Docker file location, and now we can double check everything is there. Now, run our Docker, build me an image, tag it with a name, and the dot, because I’m in the location. Hit execute, and now it’s stepping through each of the commands. So it’s making our directory, copying our database files in – so there’s the data file, then copying the LDF in; excellent. Setting our environment variables then, and now we’re going to attach that database to the SQL instance within the container.

… image ID there. So let’s query our repository, making sure it’s there. So Docker images, and there’s our custom image. Cool, so let’s clear out of that and let’s run the new container. So, Docker, run in detached mode, map port 15777 to 1433 within our container, no environment variables, give it a name, and then the image name, myfirstimage. Hit execute, it will come back with our ID and now we’re waiting for it to spin our container up. Cool, okay, let’s double check that that’s all gone okay.

So we’ll use the Docker PS command, boom, and there is our new container. Okay, so I’m in my lab, so I’m going to connect locally, so I’m going to run the Docker inspect command to get the private IP. So, Docker, inspect my custom container, there’s our private IP address; so clear out the old one, drop our new IP address in with the SA username and password, then we’ll see if our database is there … PowerShell, we can spin up an instance of SQL Server in a very short period of time that has everything you need ready to go. So say we’ve deployed this in our company and our dev department is working away; they spin up containers, [inaudible] blow them away, happy as Larry.

There’s another dev department down the corridor and they want to get in on this; so they’ve configured a host to run the Docker engine, but they work with the same production environment we do. So we want to make sure that they’re testing against exactly the same configured SQL instances that we are. So we need to share an image. Because they’re local, in the same building as us, we can export our custom image out and push it over to their host. And we do that via the Docker save command. So we say, docker save -o myexportedimage.tar myfirstimage. Actually, there are a couple of things I should have mentioned with these names: container names can be in any case you want, but image names have to be lower case. And this exported file name – I tried my best to get it working with .zip on Windows, but every single time I used .zip, I managed to corrupt the exported file. But for some reason, when I use .tar, everything’s fine. Maybe a legacy from the Linux days, I don’t know.

But we say, docker save -o, the file name, the image name, hit execute and we wait for it to return. Now, that will save it to our local file system, and once it’s saved, we can push it over to the other host, and then they can load it into their container repository via the Docker load command. So they say, docker load -i myexportedimage.tar. Hit execute, and they can wait for it to load into their repository, and once it has, they can start building containers that are exactly the same as the containers that we’re testing against.
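
So the share boils down to one command on each host:

    # on our host: export the custom image out to a .tar file
    docker save -o myexportedimage.tar myfirstimage

    # on their host: load the .tar file into the local repository
    docker load -i myexportedimage.tar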

So that’s great, we’ve got two departments now, in the same company, all testing against instances that are configured exactly how we want them. But our company has a remote team working in a foreign country somewhere, and they want to get in on this as well. Okay, let’s fire up those FTP sites – but no, we’re using Docker, so we have access to the Docker Hub. What this is, is an online repository that has a load of images available to us, and we’ve already interacted with it: we’ve pulled images down. But not only can we pull images down, we can push images up to it.

So we go to hub.docker.com, choose a Docker ID, sign up and then we can create a repository. So there’s my username, DBAfromthecold; I’ve given it my repository name, testSQLrepository; and then we’ve got options for visibility, either private or public. Private means only I can see any images that I upload to my repository. Public means, and we’ve got to be careful with this, anyone with a Docker ID can download an image you upload. Now, if you have a corporate account, I think you have options to specify individual accounts that you can share with, but that costs money and I’m cheap. So I have the free account, and those are the only two options available.

So, once we’ve created our repository, we can then load our image up, and the first thing to do is tag our image with our repository name. So we say, Docker, tag myfirstimage, my username/my repository name, and then a tag. You’ll see most of them up there are usually latest, [inaudible] version one, version two, version three, just so I can keep track of what I’m doing when I’m loading stuff up to the hub. Hit execute, and there’s a little quirk with this command in that it will exit immediately, and it doesn’t rename the existing image; it actually creates a new image with the new name. And there’s definitely a use for that, I just haven’t thought of one yet, but it’s nice and quick. So we hit execute, run Docker images to verify our repository, and there it is. So, now that we’ve tagged it with our repository name, we log into the Docker hub on the command line with our username and password, and then we can push the image up to the hub with the Docker push command. So we say, Docker, push username/repository name:tag, hit execute. And this, depending on your internet connection, can take some time.
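
The tag, login and push sequence (remember that image names have to be lower case):

    # create a new image name pointing at myfirstimage: username/repository:tag
    docker tag myfirstimage dbafromthecold/testsqlrepository:v1

    # log into the Docker hub from the command line
    docker login

    # push the tagged image up to the hub
    docker push dbafromthecold/testsqlrepository:v1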

So we go away, we make a cup of tea, we come back and hopefully, when that’s completed, we can jump into the hub, refresh the image, and there is our tag, the V1 tag, and we’ve successfully pushed an image up to the hub. So our remote team can download that image using the Docker pull command, and now we have, not only local teams working on containers that are configured exactly how we want them, we also have our remote teams as well. So let’s run through that one more time, just to show it in action.

So we’re on our host, double checking our images, so we’ve got myfirstimage there. So docker save -o myfirstimage.tar myfirstimage, hit execute. Now, there’s no progress bar for this, we just have to wait until it exits out, and when it has exited out, we can double check the file system to make sure that it’s there. So there it is; take that, push it across to the other host, and then those guys can load it in via the Docker load command. So there it is on the new host. So docker load -i myfirstimage.tar. Now, this does have a progress bar, so you can actually sit there and watch it, but they’ll probably go away, make a cup of tea, and then when it’s finished it will look something along the lines of this. And then they can query their local repository to make sure it’s loaded; excellent.

Okay, so let’s have a look at pushing up to the hub. So load up Internet Explorer or Edge. Here’s my repository; I always make my repositories public. There’s nothing there at the moment, so I’ll just have a look at the tags; absolutely empty. Okay, so we’ll jump on the server and we’re going to push that image up to the hub. There it is, close that, okay, Docker, push – oh, my mistake, we’ve got to tag it first, sorry. So Docker, tag myfirstimage, DBAfromthecold, so my username/repository name, and then we’re going to give it a tag of V1. Hit execute, and that’s completely instant. So if we have a look at our images, there we go, it hasn’t renamed the image, it’s actually created us a new image with our repository name. Clear out of that, log into the hub with our username and password… Cool, and now we can push the image up: username/repository name, and finally our tag, hit execute. It can take a bit of time, depending on your internet connection, so we’re going to go away and make a cup of tea, come all the way back, and double check everything’s gone okay by searching for the image. So we say, Docker, search DBAfromthecold, and there’s our image.

So let’s come out of here and actually check in the web browser. Refresh this, and there’s our image. Okay, so that image is up and available; we’ve made it available to our remote team. They can pull it down with the pull command, and now we have all of our dev departments in our company using our specially customized image, building containers from it that are tailored to exactly how we want them to be. So let’s just recap on what we’ve gone through. We’ve configured our Windows Server 2016 host to run the Docker engine, identified a vanilla image up in the hub that we want to use, pulled it down, and started building containers from it – but they’re empty SQL instances. So we decided not to go that way; we decided to customize our image with our development databases, so that when we spin our instances up, they have all of our dev databases there ready to go. And we were happy as Larry testing away, but there were other development teams in our company that wanted to get in on that action as well, so we shared an image locally and we shared an image via the hub. And that is one of the really cool points with Docker: not only the customization, but the ability to share.

Okay, so that’s pretty much all the points I wanted to talk about with Docker. What I’d like to talk about now is how we implemented containers at my company. So, a little bit of background: I have a QA department and they have VMs in which they test our full production stack. So each one of those test VMs needs an instance of SQL Server, and they’re constantly blowing away these VMs and rebuilding them, reinstalling SQL each time. The method of installing SQL is via Chocolatey, which is an application package manager for Windows. And then once they’ve installed SQL, they restore all the databases via a series of PowerShell scripts. But this wasn’t the most reliable process in the world; it was prone to failure, and when it did fail, the dev was left trawling through either Chocolatey error logs or SQL Server error logs, working out what went wrong, then uninstalling bits and bobs and then having to start again. It wasn’t the quickest process in the world either; the hosts that the VMs were running on were using five 2K disks, and CPUs that are probably – oh god, I think they’re 2007. So we were seeing installs of SQL taking anywhere from ten minutes to 40 minutes just to install the engine. And then they’ve got to restore all the databases as well, and they are not the biggest databases in the world, but they were taking, I saw, up to an hour to get all the dev databases up and ready to go.

So there had to be a better way, and that way, of course, was containers. We could implement containers running SQL Server on a remote host, with a custom image that has all of our dev databases in it. So we’re taking SQL Server away from the test and dev VMs, moving it into containers on a remote host, and the guys would spin a container up and point all their apps at the container host and their containers. So no need to install SQL, no need to restore databases from PowerShell, and resources would be freed up on the VMs as well, because each one would no longer have SQL running on it taking up however many GB of RAM. However, when I first started working on this solution, I ran into a problem straight away, and I mentioned this right at the start: containers are only supported on Windows Server 2016, running either SQL Server 2016 or vNext. We’re running Windows Server 2012 R2 with SQL Server 2012 SP3, soon to be SP4 – so, still, a bit of a problem. That was until I found a company running out of Seattle called Windocks, and what Windocks have done is build a custom port of the open source code that Docker made available to the community, one that allows earlier versions of SQL Server to run in containers on earlier versions of Windows Server – 2008 and upwards in both cases; fantastic. It’s exactly what we needed. And even better, and one of the reasons I’m mentioning it, is that they do a free community edition as well. So if you are working with earlier versions of SQL Server, I highly recommend you check out the Windocks free community edition; it’s very, very good stuff.

So, got the software, got the free community edition, went to my sysadmins and said, hey guys, do you have a spare server lying around that I can use to test this stuff on? As luck would have it, they had an old box that I could repurpose. So I wiped it, reinstalled Windows and then followed the Windocks installation instructions, and here’s what we built. Very, very similar to a Docker installation, with a couple of differences, the main one being this default instance of SQL Server. So this is the Docker engine, the Windocks daemon, and the reason the default instance of SQL Server is there – and it doesn’t run, it’s disabled – is that when the Windocks daemon spins up for the first time, it looks for SQL Server binaries. So if you’ve installed SQL Server 2008, it builds you a SQL Server 2008 image; if you’ve installed SQL Server 2012, it builds you a SQL Server 2012 image. So you don’t have to download an image from the Docker hub; nice and quick, really, really handy. And then we have our VMs here, all connecting on a custom port that’s specified when we build our containers, exactly as we’ve seen earlier.

Another thing to mention about this SQL Server instance is that Windocks is slightly different from Docker in that it will use that default instance’s binaries when it builds containers as well. Now, that’s interesting, because it means that if we make any changes to those binaries, they’ll be replicated in the SQL instances within the containers. So if we patch that default instance, all new containers will be running at the higher patch level. So when SQL Server 2012 SP4 comes out, I no longer have to go and patch, say, 40 VMs. I can patch one host, get the guys to blow their old containers away, spin new containers back up, and they’ll all be running at SQL Server 2012 SP4. A nice little unexpected bonus; I kind of like that.

Okay, so what benefits did we see? Well, we’re no longer installing SQL Server from Chocolatey and we’re no longer restoring databases from PowerShell. So that means we can spin up new VMs in a fraction of the previous time. We timed spinning up a new container with, I think, 32 dev databases, and from hitting execute to the container being spun up with all our databases ready to go, the longest it has taken is two minutes. Two minutes, down from up to 40 minutes to install SQL and up to an hour to restore the databases. Okay, they’re not the biggest databases in the world, but they are exactly the same databases that were being used in the previous process. So no matter how you look at it, that is a huge saving.

The base image can be used to keep containers at the production SQL instance’s patch level. What I mean by that is, when we release to production, we also release to a container. So when we release to production, we release the same scripts to a container, and then we commit that container as our new image. So when our dev guys spin up a container, they’re guaranteed that all the dev databases in there are at production level. The last point there: more VMs can be provisioned on the host due to each VM requiring fewer resources, because SQL Server is not installed on them. This hasn’t happened, mainly because, with SQL no longer on the VMs, resources have been freed up and the VMs are that much snappier to work with, making for a nicer working environment. So the guys said, no, we don’t want to build any more VMs on the host, we want to keep it exactly as it is, because we’re kind of chuffed working with these really snappy VMs.
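
That commit step is a single command; a sketch with placeholder container and repository names:

    # capture the updated container as the new version of our image
    docker commit mycontainer dbafromthecold/testsqlrepository:v2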

Okay, so it wasn’t all easy going; there were a couple of problems. So let’s talk about some issues that we had. The first one really, really scuppered us. So, in production, all our applications use DNS entries to reference our production SQL Server, and that’s replicated on these test and dev VMs by hosts file entries. So say we have ProductionDB pointed at 127.0.0.1. Now, this is a problem, because the apps are no longer looking at a local default instance of SQL Server; they’re looking for a remote instance of SQL Server that’s listening on a custom port. And that custom port is the problem: you can’t specify port numbers in hosts file entries or DNS entries.

So what do we do? Change the application connection string, right? You just drop a port number into the connection string, the apps can connect, good to go. Well, not really. One of the prerequisites of this project was that we could not make any changes to the applications, and that’s fair enough really. I mean, you don’t change an application that you’re going to deploy to production purely because there’s some half-crazed DBA sitting in a corner banging on about random ports and containers. So what do we do? We use SQL client aliases. And what SQL client aliases allow you to do is not only mimic DNS entries, but also specify port numbers in them. So when the guys spin up their containers, they do it through a PowerShell script, and that PowerShell script has a step to grab the custom port the container is listening on and drop it into a client alias. The apps don’t even know; they think they’re connecting to a local instance, but they’re actually pointing at a SQL instance in a container. Really, really nice solution, really got us out of a bind. I really wish I’d thought of it, but it was actually the other DBA who came up with it; so nice one, John… He won’t be watching.
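
For anyone curious, a client alias is just a registry entry, so that step of the PowerShell script looks roughly like this (a sketch; the alias name, host name and port here are hypothetical):

    # SQL client aliases live under this key; "DBMSSOCN" means the TCP/IP network library
    $path = "HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo"
    if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }

    # point the alias "ProductionDB" at the container host and the container's custom port
    New-ItemProperty -Path $path -Name "ProductionDB" -PropertyType String -Value "DBMSSOCN,containerhost,15789" -Force | Out-Null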

The next point there, updates to existing test applications. So we have a bunch of custom in-house applications that have, buried in the code somewhere, things like a SQL instance of “.” or “localhost”. So it was just a matter of going through and updating them to use the container hosts; we could actually change those connection strings. Then, trial and error to integrate with Octopus Deploy. We use Octopus to deploy our applications, and each application deploys slightly differently, so it was just a matter of going through each one and either changing things on the container side or the deployment side, just to make sure that everything works.

And the final point there, new ways of thinking. People are generally used to thinking of SQL as this, sort of, behemoth that, once installed, just sits there and runs queries; you might install a database, you might drop a database, but you don’t really do anything to the SQL instance itself. Containers change that way of thinking by making SQL, within the container, a throwaway object. You spin the container up, you test against it, you blow it away. You spin another one up, you test against it, you blow it away. It’s about getting people to think of SQL like that: you don’t keep it up and running, you use it when you want it and then you throw it away.

Okay, so finally, I just want to go through a final couple of slides, just to go through some other resources. I mentioned right at the start that there are workarounds, or other options, for running earlier versions of SQL Server in a container. So you can use the Windocks stuff. Or, if you want to, you can use a couple of images that I’ve uploaded and made public on the Docker hub, which have SQL Server 2012 SP3 developer edition or SQL Server 2014 SP2 developer edition in them. So those are publicly available if you have earlier versions of SQL Server that you want to use. But if you don’t want to use those images and you want to build your own, I’ve actually put the code on GitHub to be able to do that, and that’s DBAfromthecold/Docker MSSQL. You can customize that for whatever instance of SQL Server you want to install.

Okay, Portainer – so another question I usually get with stuff like this is, you’re interacting with a Docker engine always through the command line, but I kind of like GUIs. Is there a GUI I can use to interact with and manage my containers? And Portainer allows you to do that. It’s completely free, it’s available on the Docker hub to run in a container. It is a little bit tricky to get working with Windows containers, but I’ve got complete instructions on how to get set up and using it on my blog. So please check that out if you want to use that stuff. It is actually really good.

And finally, just to round off, I’ve posted a load of stuff about containers on my blog, so there’s a summary of everything I’ve posted there if you want some further information. I have the case study that I’ve talked about with my company posted on SQL Server Central; it has a little bit further information about how we used the client aliases and what we did. I’ve talked about this stuff with Carlos and Paul over at SQL Data Partners, so if you want to listen to that, the link is there. And then, right at the start of the presentation, I mentioned that you can run this stuff on Windows Server Core. If you want to, I’ve published an article on SQL Shack about how to configure Windows Server Core to run the Docker engine.

Are there any questions? Let’s have a look…

Rob Sewell: So there’s a question about the minimum required versions for running Docker.

Andrew Pruski: Minimum required? As in…

Rob Sewell: Windows requirements, yes.

Andrew Pruski: As in resources or versions?

Rob Sewell: Versions.

Andrew Pruski: It’s only supported on Windows Server 2016. So that’s the earliest…

Rob Sewell: What about Windows 10?

Andrew Pruski: Oh, Windows 10 Pro or Enterprise with last year’s anniversary update.

Rob Sewell: And then there was another question about Windocks, but you answered that.

Brent Ozar: I’ll pop in and ask, I threw one into Slack – is there anything that you wish Docker would change? Or like any features or things that you wish they would add?

Andrew Pruski: That’s a very good question.

Brent Ozar: Every now and then I get lucky.

Andrew Pruski: Oh yeah, definitely. At the moment, there’s no AD authentication, so it’s only SQL Server authentication, which can be a bit of an issue. So if they could get that sorted, it would be absolutely brilliant.

Brent Ozar: Alright, is there anything that’s new coming in next versions of Windows that you know about in terms of containers?

Andrew Pruski: Not off the top of my head. The Windocks stuff is – they’ve got some cool stuff coming up. Well, they’ve just released their version two, which supports cloning. So if you want to work with large data sets, that will allow you to, say, have a whole bunch of really big databases deployed into a container in a very short period of time. I’m yet to start playing around with that, but that looks pretty cool.

Brent Ozar: Okay cool – yeah, go ahead, Rob.

Rob Sewell: How are the resources handled when you’ve got several containers running and you’re hammering each of the instances heavily?

Andrew Pruski: Okay, so you can specify memory usage and CPU at run time with containers. I don’t know about I/O, but I’ve generally managed memory usage through SQL; as in, in the container I’ve set the max memory limit, so it doesn’t go absolutely nuts and max out the RAM on the box.

Rob Sewell: And a question about Docker Toolbox. The question is, “did you discuss Docker Toolbox?”

Andrew Pruski: So the shocked look on my face says no, I haven’t used the Docker Toolbox.

Brent Ozar: I saw there was a bunch of stuff in the news around that; they did a huge news release. What was all the controversy about? Does anybody know?

Andrew Pruski: I’m not too sure, I’m afraid.

James Anderson: Is that controversy about Docker?

Andrew Pruski: Okay, if you said controversy, was there anything about going towards the Enterprise Edition and the Community Edition? Because they’ve separated out into Project Moby now.

Brent Ozar: Yeah, yeah.

Andrew Pruski: Yeah, so there’s now an Enterprise Edition, but they’ve moved all their free stuff into a separate project called Moby, and I think there was a bit of hubbub about that. But I don’t really see why, because all the stuff is still free and you can get it, so I’m not too sure about that one.

James Anderson: There’s a concern because container technology existed before Docker, and Docker wrapped it with an API and became the dominant company for that technology. And I think there’s a concern among some of the people in the open source community that they’ve pushed certain changes that they want to happen to Docker, and Docker have actually rejected those changes because they didn’t fit in with their model – they want to be cross-platform for Windows and Linux. So they actually have rejected some pull requests, and people are getting concerned that it’s driven more by what Docker believes than by a fully open source process. So for that reason, Google are pushing an open standard for containers, which could be bad news for Docker in the long run. But for us, we don’t really care; we’ll use whichever container technology is out there.

Andrew Pruski: Yeah, it’s really immaterial which technology company we use, I just want to be able to spin up an instance of SQL in seconds. If it’s Docker, if it’s Windocks, I don’t really care.

James Anderson: Like the orchestration tools – I mentioned earlier there’s a tool called Kubernetes, which lets you manage groups of containers. And that is not Docker, it’s not reliant on Docker; it can run any container engine, and it’s quite happy doing that.

Andrew Pruski: That’s on my list of things I need to start playing with, definitely Kubernetes.

James Anderson: Their documentation is amazing. They’ve got this online command line that you can use within their documentation that lets you practice all of the commands. It’s really good.

Andrew Pruski: That’s awesome, I’ll definitely be checking that out; cool.

Brent Ozar: I would think it would be easy to flip regardless, because everything boils down to a series of text files that define what your environment is. And look how fast Docker caught on, and look how fast Kubernetes caught on. If something else came in that was awesome, we would just switch over to that, and that would kind of be the end of it.

Rob Sewell: So, James and Andrew, any input on using SQL Server in Ansible? I thought Ansible was sort of Linux only, but maybe you guys know better?

James Anderson: Yeah, I’ve not seen it used for Windows, I don’t know if you have, Andrew?

Andrew Pruski: No, I haven't, I'm afraid.

James Anderson: So Ansible, in the Linux world, is kind of like DSC in PowerShell, as far as I'm aware – and like Puppet and Chef, I think. It's a way of letting you script your infrastructure, and then it goes and deploys it. With SQL Server on Linux, you could use it. Whether you could plug that into Windows? Probably. It's not something I've seen, though.

Rob Sewell: So, a question in Slack, Andrew. Isn't Docker much better for stateless applications compared to state-heavy applications like SQL Server?

Andrew Pruski: It was originally designed for that, but you can persist data – I was really hoping someone would ask this question – you can persist data in Docker containers. I've actually just finished writing a blog post about it. So say you're working with SQL in a container and you blow away that container – this is why I don't type in demos… – but you want to retain the changes that you've made to your database. [crosstalk] This is my dev instance, and the SQL Server Windows image is up there. If I spin up a container, I can use the -v flag to map a volume on my host into my container. So there we go – I'm having a connection problem – there we go. I've got D:\SQLServer and a couple of database files there. So let's run a container…

So we're going to say docker run in detached mode, map in my ports, and here's the flag: -v, mapping D:\SQLServer on my host into SQLServer in my container. So let's hit that, hit execute…
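The command being run looks roughly like this (the ports, password, container name, and the container-side path are illustrative assumptions):

```
# -v maps D:\SQLServer on the host into the container
docker run -d -p 1433:1433 `
    -v D:\SQLServer:C:\SQLServer `
    -e sa_password=Testing11@@ -e ACCEPT_EULA=Y `
    --name testcontainer `
    microsoft/mssql-server-windows-developer
```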

Brent Ozar: So you persist the data and log files?

Andrew Pruski: Yeah, so it's mapping the data and log files into the container, but they're going to persist. So if I go into docker inspect – what did I call that container? I think it was testcontainer. There's my IP address… Now, the database isn't attached yet, but the files are in the container. So I just need to attach it – give it the location, okay, and it's attached. This is an empty database, so let's do something like create a test table and put a record in there. So just creating a dummy table, one value, hit execute, there it is. Okay, so let's disconnect and throw that container away.
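A sketch of the attach-and-test steps in the demo (the database and file names are illustrative; 'nat' is the default container network on Windows):

```
# grab the container's IP address from docker inspect
$ip = docker inspect --format '{{.NetworkSettings.Networks.nat.IPAddress}}' testcontainer

# attach the database files that live on the mapped volume
sqlcmd -S $ip -U sa -P 'Testing11@@' -Q "CREATE DATABASE [TestDB] ON (FILENAME = 'C:\SQLServer\TestDB.mdf'), (FILENAME = 'C:\SQLServer\TestDB_log.ldf') FOR ATTACH;"

# create a dummy table with one value so we can verify persistence later
sqlcmd -S $ip -U sa -P 'Testing11@@' -d TestDB -Q "CREATE TABLE dbo.TestTable (Col1 INT); INSERT INTO dbo.TestTable VALUES (1);"
```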

Brent Ozar: So to visualize, for those of you who aren't following: there's a Docker container, this thing's temporary and not persisted, but it's pointing to data and log files that are stored outside of the container, not inside the container's local file system. So it's a lot like running SQL Server off a UNC path, pointing to your data and log files on a UNC share, something that's on a permanent file server – if the SQL Server blows chunks, it doesn't matter. I mean, it does matter in the sense that you could corrupt the database; you still need some kind of backup and recovery for that kind of thing…

Rob Sewell: That was going to be my question, Brent. Andrew, are you risking corrupting your database if you blow away your container?

Andrew Pruski: Good point, I tried [crosstalk] – let’s see if that’s happened, because I guess if you blow away your container while it’s accepting connections, probably.

Brent Ozar: And it’s a dice roll. I mean, sometimes SQL Server’s really good at that, sometimes not so much.

Rob Sewell: And I guess it depends, when you blow away your container, does it actually gracefully kill those connections or does it just go bang?

Andrew Pruski: I think if you – that’s a very good point, how does Docker interact with SQL Server services? Does it actually gracefully shut it down or does it just go bang, you’re dead?

James Anderson: I think if you use Docker stop, then that gracefully shuts down the container, whether it actually waits for SQL Server to shut down, I’m not sure. But I think if you do Docker stop with –F, that will just kill it instantly.

Andrew Pruski: Okay, that’s forcing the stop, yes? What did I just call that container?
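For reference, docker stop doesn't actually take a -F flag; the graceful and forced variants look roughly like this (the container name is illustrative):

```
# ask the container to shut down, waiting up to 30 seconds before killing it
docker stop -t 30 testcontainer

# kill the container immediately, with no graceful shutdown
docker kill testcontainer

# or force-remove a running container in one go
docker rm -f testcontainer
```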

Brent Ozar: Eugene asks, "what happens if you spin up two containers pointing to the same file storage?" Locking issues, unfortunately – two processes can't access the same Windows file at the same time [inaudible]. So it would not work – either Docker would stop you or Windows would stop you, one of the two. But if you do things like file copies first, then you can pull that kind of thing off.

James Anderson: See, the cool thing about Docker on Linux is that the containers have what's called a union file system. So on the host, if you've got some files, you map or mount those files into the container, like Andrew's doing now. That container has read-only access to the files on the host, but you can still make changes to those files – those changes, that delta, get stored within the container. So it's a really nice feature. I don't think Windows containers are capable of doing that, because the file system, NTFS or whatever it is, isn't quite that fancy yet…

Andrew Pruski: So I just wanted to say – I've just attached that database back into a new container, double-checked, and our changes are there. So when we blew away the old container, it kept the changes; it persisted the data on disk and all the changes that we made to the SQL instance within that container. So Docker volumes, if you use the -v flag, are persisted even if you drop a container.

Brent Ozar: Nice… So folks, if you've got other questions, feel free to dump them in either Slack or in the GoToWebinar. Otherwise, thank you very much, Andrew, excellent session, nice job. Round of virtual applause, especially for throwing in live demos at the end. People are saying in Slack, nice job. It's terrifying to show something live, especially something you weren't planning on showing… Surprise.

Andrew Pruski: Definitely. I’ve got one more if people want to see, but…

Brent Ozar: Yeah, sure, hold on, let me – I'm going to make you the presenter again then, which is good because it gives me time to finish the sandwich that I didn't eat. So you should be the presenter again – you're not sharing your desktop yet though, so you'll have to share that. Alright, I'll drop off for a minute, I'll be back.

Andrew Pruski: Awesome. So one question, another one that comes up, is: "Can we control where the containers live on the host?" By default, they live on the C drive, in C:\ProgramData\Docker, which isn't great because, hey, I don't like having stuff on the C drive at all. So can we move images and containers off? And we can. There is a switch that we can use when we spin up the Docker daemon: the -g switch. It allows us to specify a custom location on our host where we can store containers. So let's create a directory on our D drive called containers, and let's run through a couple of the commands. Here's the code – I'll open this up in Notepad. So let's stop the Docker service first, and then we can set it to disabled.

What I'm going to do is not alter the existing service; I'm going to set up a new service pointing at the new location. I'll just double check that. So: new service, name Docker2, binary path name pointing to dockerd.exe with -g and the new location, startup type automatic. So hit execute. Cool, and then we can start our service up. And let's have a look at the new location… Please work… Excellent. So now when we spin up new images and new containers, they will be stored in the custom location that we've specified – we've taken it off the C drive, which I think is kind of cool. One thing though: because I've moved it from there, I don't have any containers and I don't have any images. We could probably use the docker save and docker load commands between the two services to load our existing images into the new location. I actually tried to copy all the files under C:\ProgramData\Docker to the new location, and I kept getting access denied errors against the windowsfilter folder. I haven't spent much time trying to work out what's going on there – I just sort of gave up and thought, I'll just point everything at the new location and start again.
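A rough sketch of those commands (the service name and paths are illustrative; -g was the dockerd flag at the time, and later versions renamed it to --data-root):

```
# create the new location for images and containers
New-Item -Path D:\containers -ItemType Directory

# stop and disable the existing Docker service
Stop-Service docker
Set-Service docker -StartupType Disabled

# register a second service pointing dockerd at the new location
New-Service -Name Docker2 `
    -BinaryPathName '"C:\Program Files\docker\dockerd.exe" --run-service -g D:\containers' `
    -StartupType Automatic

# start it up; new images and containers now land under D:\containers
Start-Service Docker2
```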

There we are, that’s how to change the default location of Docker images and containers. I’m so glad that worked.

Rob Sewell: Your Docker files, Andrew, they’re in the Docker cloud store, whose name I’ve forgotten?

Andrew Pruski: Yeah, they're in the Docker Hub. So if you want to pull my custom images down, just run docker search dbafromthecold and it will show them – all my repositories are public, so you're more than welcome to pull those down. And the code to build a custom container is also up on GitHub as well.
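For reference (the repository name in the pull command is a hypothetical placeholder; the search shows the real ones):

```
# list the public repositories under that account
docker search dbafromthecold

# pull one down (hypothetical repository name)
docker pull dbafromthecold/sqlserverdev
```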

Rob Sewell: One question that I hear with SQL Server on containers is licensing and how that works.

Andrew Pruski: So all the images that Microsoft have released are Developer Edition, so that's how they get around that one. Windocks gets around it by licensing the containers as named instances of SQL Server, so you can have up to a maximum of 50 containers on a host.

Rob Sewell: Does that mean you can’t use it in production then, not licensed to use it in production if it’s developer edition only?

Andrew Pruski: At the moment, I don't know where Microsoft are going with it. That's why I'm wondering: are they going to push this, and are people going to say, hey, we want to use this for production? I'm really not so sure. But you'll see, if you have a look at the code on GitHub, that I'm installing SQL Server from an ISO file, and I've used a Developer Edition. You could probably use an Enterprise Edition if you were mad enough to go that way, but then you really are on murky ground with licensing, I think. I'm not too sure how it would work.

Rob Sewell: Especially if you used multiple versions of it as well, multiple containers.

Andrew Pruski: Oh yeah, spinning up on a host with, say, 18 cores.

Brent Ozar: I have this personal belief: when Microsoft started announcing support for containers and for Linux, I put my tin foil hat on and thought, you know what, I bet the Azure SQLDB team is driving this, because I bet the Azure SQLDB team is tired of supporting six gajillion Windows VMs. It would be so much easier for them to run databases on Docker containers, run it on Linux – it's easier to automate Linux at scale, and I'm probably going to hell just for saying that – but for automating OS deployment and testing at scale, in text files. So I went to the Microsoft team with this at a conference, publicly, with other people around so there was no NDA or anything. I asked them: is this what drove your push to containers and SQL Server support on Linux? And every time I asked someone from Microsoft or Azure that, I got the same look, like, where are you coming from? What is your idea? And they all said no, that's not what we're doing at all. But I still have this conspiracy belief that one day we're going to hear Microsoft announce that they've been running Azure SQLDB on Linux for a year now and it's totally production quality. Because they have this thing where they seem to want to say that you're already running it in production live – this is how you know that it's supported. The same way that they say Azure SQLDB is the next version of SQL Server, so they're already testing the new version for you. So, ah well.

So thanks again Andrew, excellent job, nice job, sir…

Andrew Pruski: Thank you, guys.


