Data + Docker = Discombobulating?

Target Audience:

Anyone wondering what containers are and how we can safely store data in them.

Abstract:

If you’re like me, you like your data to stick around for a long time and, most importantly, you want to know it’s safe.

In the Docker world, there’s a maxim of “never patch”: always build a new container with the latest version of the application. If we’re sticking our database in a container, like Microsoft are now doing with SQL Server, what happens when we need to apply the latest patches? Will we lose our data?

In this talk, we’ll look at the sorts of data we need to think about (it’s not just databases!) and how Docker containers work. We’ll then look at how we can save our data from disappearing when we load a new version of our Docker container. Once we know how to safeguard ourselves, we’ll look at some of the architectural options you have when working with data and Docker.
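As a taster of the usual fix (a minimal sketch, not part of the abstract itself): mount a named volume over the container’s data directory so the data lives outside the container’s writable layer. The image name and data path below are the standard ones for SQL Server on Linux; the password is a placeholder.

```
# Create a named volume that outlives any single container
docker volume create mssql-data

# Run SQL Server with its data directory on that volume
docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  -v mssql-data:/var/opt/mssql \
  microsoft/mssql-server-linux

# "Patching" Docker-style: throw the container away and start a
# fresh one against the same volume -- the databases survive
docker stop sql1 && docker rm sql1
docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  -v mssql-data:/var/opt/mssql \
  microsoft/mssql-server-linux
```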

Why I Want to Present This Session:

No one should lose data!

Additional Resources:

http://stephlocke.info/Rtraining/datadockerdisconbobulating.html#/

2 Comments

Ooo, interesting – I’ve wondered about this myself.

Defining the target attendee is a little tricky here: the target audience says “Anyone wondering what containers are”, but the second sentence assumes that they know that Docker containers should be constantly killed and reborn. The people who don’t know what containers are will immediately have questions – “Why should I kill my containers? I don’t kill my VMs or servers.”

I went through a quick thought exercise about guessing what the session would include:

  • Defining what containers are
  • Explaining why you might want to use them over VMs
  • Explaining the container options out there, and why you’re going to focus on Docker
  • Defining why containers should be killed & recreated rather than patched in place
  • Connecting the dots that the local file system is going to disappear each time (quick sketch below)
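
For instance, a minimal demonstration of that last point, using a stock alpine image purely as an illustration:

```
# Write a file into a container's writable layer...
docker run --name demo alpine sh -c 'echo important > /tmp/data.txt'

# ...then replace the container: the new one starts fresh from the
# image, and the file is gone
docker rm demo
docker run --rm alpine cat /tmp/data.txt   # fails: no such file
```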

And then the work in the abstract starts. Is it doable in 90 minutes? Yeah, but it just feels like the abstract is really light relative to the heavy dot-connecting that has to be done in the session. (Then I clicked on the more-info link and yep, sure enough, you’re going to that level of detail – but I would just tweak the abstract to explain the level of detail.)

This way, people won’t go in expecting a 300-400 level session on data persistence with Docker, and then sit through 15-30 minutes of “Meet Docker” before they get to the stuff they want. Just setting the right expectations. (And I do think the deck as-is makes sense – with up to 90 minutes, you can afford to lay the groundwork to introduce Docker, and the majority of the community needs that.)

Good topic! Maybe consider fleshing out the “Why I want to present this session” and clarifying whether there are any prerequisites for attending.
