An in-depth interview with Dotmesh founder Luke Marsden about how Dotmesh works.
This blog post is a transcription of An Introduction and Deep-dive into Dotmesh, a video interview of Dotmesh founder Luke Marsden by Michael Hausenblas of OpenShift, recorded on Friday, 9 Feb 2018.
[MH] Well, hello and good morning at least here in the European zone. Today I have a super interesting guest with me and that is Luke. Hi Luke, good morning. How you doing?
[LM] Hey Michael, how’s it going? Thank you for having me on.
[MH] Hey, I’m super excited. We’re going to talk about your new thing, Dotmesh, but before we get to that, can you give the audience a little bit of background? So this is your new venture, but there are many other things you have done before that I can remember might be of interest…
[LM] Yeah, so I guess I’ll start by talking a little bit about ClusterHQ. So, ClusterHQ is the company I founded previously, and we were working on making stateful containers a reality for people - basically making it possible to run databases (and other workloads that include data) in a microservices, containerized environment. So I was involved in developing Flocker, which was the open-source project at ClusterHQ. With Flocker we did a lot of work… I mean the industry was super early at that point - that was the year that Docker just sort of started exploding, 2013-14, which I think was when we really got involved. We pivoted from something we were doing previously - we actually had a ZFS-based distributed web hosting platform on FreeBSD - and we sort of started serving the Docker market!
But the interesting thing about Flocker was that it was so early that we had to do a lot of hard work to just connect containers to storage at all, and so we integrated with EBS on Amazon and Google persistent disks, and we integrated with Cinder on OpenStack. We integrated with about a dozen different storage vendors for their SAN products, and we managed to get Flocker to a 1.0 and into production with a big customer, Swisscom, so that was great - apart from the fact that then Kubernetes came along and did the same thing. And so we then found ourselves in the position where we were sort of being commoditized by Kubernetes, and we were at this sort of thin layer between container orchestration frameworks and the storage underneath. So we realized that we had to pivot again. Unfortunately, at that point we had already scaled too large, really, to be able to move quickly enough, and that premature scaling - sort of believing our own hype and believing that we had achieved product-market fit before we really had - I think that’s the reason why ClusterHQ wasn’t successful, ultimately.
But I then took a year out to work with Weaveworks and I had a fantastic time working at Weave. They’re a really great team there, really great products. I recommend you go try Weave Cloud as well. And I was working on developer experience at Weave. That was super fun because I got involved in teaching and talking about everything from container networking to Prometheus monitoring to visualization to continuous delivery with Kubernetes and got to meet a lot of people and had a lot of fun doing that. But I guess I just still had the itch to come back to the container storage world. So I launched this project Dotmesh, which we just launched on Wednesday (7th Feb 2018) this week. And it’s not actually really container storage anymore; it’s more about data management for cloud native applications.
[MH] So I think that now the time is probably right… like, stuff has settled - the container orchestration wars are over. Kubernetes is the container orchestrator, and it’s really now kind of like the time where you can actually build stuff on top of that, and it should also be possible to be successful in terms of a business applying a certain business model - at least that’s how I see it.
[LM] I completely agree with you - that was another one of the challenges. What was frustrating was we didn’t know which orchestrator was gonna win, so we had to support both Swarm and Kubernetes. So yeah, it is definitely easier now that things have settled, and we’re even getting our stuff working on Kubernetes on Docker for Mac - it’s really nice to see that Docker have embraced running a local Kubernetes cluster for development. So that’s pretty interesting.
I’ll talk a little bit about Dotmesh. I’ve got a couple of slides actually that I can use to just paint a picture, and I’ve really just done it in the form of three memes!

We’re both software engineers and we know what it looks like when it’s a bad day at work! And so these are three stories that we learned from talking to dozens and dozens of users and potential customers late last year.

The first way that it can be a bad day at work is that you have a change to an application that you’re developing. It passes all the tests in CI and you deploy it in production and it blows up! This happens surprisingly often. It’s happened to me, it’s happened at companies I’ve worked with and it’s really painful because it means that you’re exposing your users and your customers to failures, to errors and so on.
You have to ask the question - why does this happen? And the reason that things pass the tests and then blow up in production is that production is just fundamentally different from all the other environments that you have. It has different data, it has different scale, and it has different inputs as well, so even just the requests that are hitting production are often going to be different and more varied than the inputs that you used when you were testing your software in CI.
So that’s the first problem, and that’s a big problem to solve, and we’re not going to solve all of that problem in one go. But it’s interesting to just set the scene.

The second one - and I’m sure you know xkcd - I’ve actually modified this xkcd slightly. It used to say that the number one programmer excuse for legitimately slacking off is that your code was compiling but actually in 2018 compilers are quite fast now and the number one programmer excuse for legitimately slacking off - “2018 edition” - is that the integration tests are running.
Oftentimes I’ve seen problems at companies where you just can’t seem to ship stuff. The dev team is just slow for some reason, and stuff is late, and everyone gets stressed, and it can be a bit of a mess. And if what I’m saying sounds familiar, the cause of this is often a slow CI system with slow and flaky tests at its heart. So how can we make our CI systems faster? How can we make our tests faster and more reliable?
Another interesting fact about this is that the more realistic your testing gets the slower and flakier it tends to get. So, end-to-end tests that test maybe 50 different microservices together using real databases are often pretty much guaranteed to be slower and flakier than unit tests that can run quickly using prepackaged data that’s shipped as part of the test.
[MH] Let me stop you there a moment. Something I heard a couple of times - and when I said it myself I got quite a lot of heat - is that in the context of containers and microservices and whatnot, we don’t really have QA any more. You don’t really do testing - you go directly to prod and you just expose this new version to a very small… kind of like a “post smoke test”… just expose it to a small part of the audience, and then if you’re experiencing problems there you don’t affect everyone, you just roll back for those 4.1%. What do you think about that?
[LM] Yes, so that’s really great. That approach is fantastic if you’re at sufficiently large scale that you can get statistically significant data about the new change by rolling it out to a tiny percent and I’m not saying that people shouldn’t do that. If you’ve got sufficient scale to do that sort of canary or blue-green testing, then absolutely go for it.
There’s an interesting case there, though, where the change that you’re deploying depends on data… If the change that you’re deploying, for example, updates a schema in a database, then you can’t fork your database between the old version and the new one. The canary approach to testing new code changes only really works for stateless applications, so there’s an aspect there of needing something a little bit stronger for stateful ones.
[MH] In a nutshell if you are not Facebook or Google or Twitter, or whatever, keep on listening and keep on doing what you’re suggesting?

[LM] Yes, I’d like to talk to people at Facebook and Twitter and Google as well about what they do and so on.
Anyway, I’ll proceed with the third category that we have here. This one is that “one does not simply capture the state of four microservices at once”, and what I mean here is that as we see a progression towards microservices, there’s this thing called “polyglot persistence”, where the “right” way to do stateful microservices is for each database to be talked to by only one microservice. And basically the upshot of this is that instead of having one large database at the center of your application, you’ll end up with many. You’ll end up with a database for your orders service, a database for your users service, a database for whatever other domain-specific data there is for whatever your application does.
What this means is that when you’re testing an application, when you’re doing development on an application, you might be spinning up maybe four or five different databases on your laptop. That is, if you even can spin up your microservices all on one laptop. That means that there’s lots of state in lots of different places. And it’s just so hard to capture all of that state in one go in order to, for example, share a problem state that you’ve developed in development with a colleague to help you debug it. It’s so hard that people just don’t bother. You’d have to like exec into all your containers and dump their state and zip up those states and then email it to them or something and it would just take so long it’s not worth it.
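To make the pain concrete, here is a minimal sketch of the manual approach Luke is describing, using standard docker, pg_dump and redis-cli commands; the container names, database names and paths are made up for illustration:

```bash
# For every stateful service in the stack, exec in and dump its state...
docker exec orders-db pg_dump -U postgres orders > orders.sql
docker exec users-db pg_dump -U postgres users > users.sql
docker exec basket-redis redis-cli --rdb /tmp/basket.rdb
docker cp basket-redis:/tmp/basket.rdb basket.rdb

# ...then bundle everything up and get it to a colleague somehow.
tar czf problem-state.tar.gz orders.sql users.sql basket.rdb
```

Multiply that by four or five datastores, and by every time a colleague needs the state, and it is easy to see why nobody bothers.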
[MH] Sorry, just one quick note. People probably know “polyglot programming” but might not really be that familiar with “polyglot persistence”. Just to be clear, it’s not a top-down thing. It’s not that “we have to use five different kinds of datastores, so here we use some SQL and here’s some Elasticsearch”, etc. It’s really that it depends on your workload. E.g. for your shopping basket you might choose to use Redis for some transactions. And that’s a bottom-up adoption - that’s why you end up with different kinds of datastores that treat data differently. You arrive at it that way, essentially. I just wanted to make sure that everyone is on the same page that this is really bottom-up, same as polyglot programming. It’s not that “oh, you have to use five different programming languages” - it essentially grows bottom-up.
[LM] Absolutely. I think polyglot persistence is an effect that you see - an emergent behavior that you see when you do microservices - rather than the CIO saying “we must have five different databases”. It’s a consequence of doing microservices properly. And absolutely, it’s not like “oh, we have to use five different microservices or five different databases, just because”. Each team is given autonomy over developing the microservice that they’re working on in the language that’s most appropriate for it, and also with the datastore that’s most appropriate for it.
So if you’re working on the search service, it probably makes sense to use Elasticsearch as your data store. If you’re working on data that tends to be sort of ephemeral, then Redis might be a good choice, whereas if it’s something like a users service, in which the transactional aspect is more important, a traditional SQL database like Postgres or MySQL would probably make more sense.
So, anyway, you end up with this problem where it’s so hard to capture the state of multiple microservices at once in development that people just don’t do it. So what you see instead is “Oh, can you ssh into my machine to look at this problem?”, or if you’re in the same building, “Can you come over here and look at this problem over my shoulder and help me fix it?” - and of course that doesn’t really work very well if you’re in a team. For one thing it means that you have to interrupt people, and you have to synchronize human focus between these environments where the data is. But it also goes badly when you’re in different timezones or when there are lots of different teams, so I think there’s a better way of dealing with this problem, basically.

So if you take a step back, you can see that the problems I’ve described touch all the different stages of the software development lifecycle. In production, an unexpected outage can often happen because tests aren’t realistic enough with respect to data. In CI, you often get these end-to-end tests that manipulate real databases, and they’re slow and flaky. In development, microservices and polyglot persistence make capturing and sharing development states hard enough that no one does it.
So, these are the things that we found when we started talking to people about microservices and data. You can take a step back and think about what is the common theme between all of those problems that I just described. In all of those cases they happen because you weren’t in control of data. So then the interesting question that follows from that is well what are you in control of? And what does control mean in modern software development? Modern software is all about control.

Firstly, what is modern software made out of? I think that modern software being made out of code, infrastructure and data is a reasonable way of dividing up the world, and we’ve been in control of code for the longest time - probably for two decades we’ve had version control, and more recently we’ve seen the emergence of continuous integration, meaning that code is controlled by the fact that it is continuously tested.
More recently, we’ve seen the development of control over your infrastructure - in particular the movement towards immutable infrastructure with things like Docker and Kubernetes, but also declarative config being applied to cloud resources with things like Terraform, and to the state of your servers with things like Ansible.
We actually have quite good control over our infrastructure now, being able to recover from machines failing by having something like Kubernetes automatically spin up new pods on different machines based on a declarative config. This is really powerful, but we’re still in this state where data is outside of the circle of control.
The way that we’ve learned that people deal with data is very often using scripts that they’ve written, or manual processes. A surprising number of companies - or maybe it’s not surprising - still have DBAs, and you send the DBA an email or you make a phone call if you want a snapshot of your production data, and so on. So broadly, there’s space for data to be brought into the circle of control, and that’s our mission with Dotmesh; that’s what we’re trying to do.

The obvious next question then is “how do you bring data into the circle of control?”, and what I’d like to propose is that you do it with a mesh - and the mesh looks something like this.

We’re proposing that you include this service called Dothub in the center of your mesh, and then around the edges of the mesh we’ve got these different environments. We’ve got a development environment for one developer. We’ve got another development environment - another developer. The first development environment might be a laptop and the second might be a VM. Then we’ve got the CI system, of course, which is running tests against code as it flows from dev into, ultimately, staging and prod.
Then you have staging - and like you say, staging maybe is going away, or maybe there are more advanced versions of staging happening, like a Kubernetes namespace per branch, for example. But in all of those cases there are often still environments in between the CI system and production. Then, of course, we have production itself, where your workload is running and serving production traffic.
Once you have a mesh you can do some interesting things.

The first use case that we’re proposing with Dotmesh is developer collaboration. It’s this idea that if developer one has a problem state or some interesting state - maybe they found a security vulnerability in an app that you can only reproduce by getting the databases into a certain special, interesting state - then they can capture that state in a Dot (we call them Datadots), commit that Dot just like a git repo, and push that interesting state up to Dothub. Another developer can then come and pull down that state, debug the problem and develop some code changes that address it.
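For a flavor of what that workflow looks like, here is a minimal sketch using the `dm` CLI, with command names as they appear in the Dotmesh README at the time of writing; the dot name, username, and the assumption that the Dothub is configured as a remote called `hub` are all illustrative:

```bash
# Developer one captures the interesting state:
dm switch app-databases   # select the Datadot the app's volumes live in
dm commit -m "repro: security vulnerability in checkout flow"
dm push hub               # push the commit up to the Dothub

# Developer two pulls the same state down into their own environment:
dm clone hub developer-one/app-databases
```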

There’s also an interesting use case for capturing failed CI runs. We’ve talked to customers who have a CI system that involves a lot of microservices being pulled together in one pipeline and tested end-to-end. Whenever that pipeline goes red, the really interesting thing is that they have to stop the entire office committing changes. They have to SSH into the test runners and go and poke around with the databases to find out what went wrong. Wouldn’t it be so much better if, instead of having to SSH into the test runners, you could just capture the state of a failed CI run - in a way that is reproducible both in terms of code and data - and pull that code and state down to a developer who wanted to reproduce it at a later time, or in a different time zone, and certainly in a different environment? A test-runner step along these lines is sketched below.
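As a hedged sketch of that idea, a test-runner script could commit and push the database state automatically whenever the suite goes red; the test script, dot name, and `CI_BUILD_ID` variable are hypothetical stand-ins for whatever your CI system provides:

```bash
# Run the end-to-end suite; on failure, capture the database state
# so it isn't stranded on the test runner.
if ! ./run-end-to-end-tests.sh; then
  dm switch app-databases
  dm commit -m "failed CI run ${CI_BUILD_ID}"
  dm push hub
  exit 1
fi
```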
So that’s the second use case.

The third use case is pulling realistic data from production, and this is going more into the territory of things that we are going to be able to do in the future. I wouldn’t recommend anyone uses Dotmesh 0.1 to try and do this yet, but it’s the direction we’re going in: the ability to take production data and capture it in a Datadot, to pull that down maybe every hour or every night, and to scrub that data, of course, because you want to remove personally identifiable information from it. Then to be able to pull that down into a staging environment or a CI environment to run tests against - or even into development.
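Purely to illustrate the direction being described - explicitly not a supported Dotmesh 0.1 workflow - a nightly job on production might look something like this; the dot name and scrubbing script are hypothetical:

```bash
dm switch prod-databases
dm commit -m "nightly snapshot $(date +%F)"
./scrub-pii.sh   # hypothetical step that removes personally identifiable data
dm commit -m "scrubbed nightly snapshot $(date +%F)"
dm push hub      # staging, CI, or dev environments can then pull this dot
```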
So, what you see here is that everything we’re doing is about moving these Datadots around between the different stages of the software development lifecycle, and that unlocks a new set of DevOps workflows.
[MH] That’s super exciting - and myself, I have a background in data engineering, so I really appreciate and understand all these issues. Let me quickly try to reformulate that in my own words to see if I really got it, because I honestly didn’t get it so far (before the call). I’ve played around with it a little, but I did not really get it. Is it fair to say that Dotmesh is kind of like the Istio for data? So, essentially, rather than having ad hoc solutions - “capture this in CI with a shell script”, or whatever, “put it there on S3”, or whatever, or even making it part of the application - you outsource that, and the mesh actually takes care of it: “here is the snapshot”, you’re moving it from that environment to that environment. In the same way that a service mesh essentially says “don’t do that in the application, we’re doing it in the data plane” - I just say “these two services can communicate”, or inject some failure, or troubleshoot it - Dotmesh does that for your data…
[LM] Yes, I think that’s a very good way of describing it. I might use that, thank you!
I think that the really important aspect of what you just said is that it’s a generic solution (“generic” might not be the right word) but it’s a generalized solution for dealing with data snapshots across all different stages of the software development lifecycle. It’s independent of which kind of databases you’re using, independent of which infrastructure you’re running on whether it’s cloud or on prem, on laptops or whatever. It’s about providing a set of tools that work in a consistent way, in a generalized way across all of those different environments. To give developers the power to get the data that they need in the place that they need it, when they need it.
[MH] That yields a very interesting question, a technically challenging question… I don’t doubt that you and your team are able to tackle it, but there are many, many different data stores and databases on the market, right? Relational databases, any kind of NoSQL and NewSQL and whatever-SQL. Let’s say you’re using Elasticsearch, Cassandra, Postgres, MySQL and Redis - you may have four or five now. So for each of those datastores (using “datastore” as a kind of umbrella term for any kind of database) do you essentially have a kind of plug-in driver that actually understands the snapshotting operations of that particular datastore, and you’re then providing a generalized, generic interface to just say “snapshot that” over here, and so on…?
[LM] So that would be one way to do it - it’s not the way that we’ve started out doing it, though. The way that we have started is instead to provide a layer that sits underneath. Every database that you just mentioned writes files to a file system, and so we provide a snapshotting engine that sits underneath, which allows you to take consistent, atomic snapshots of the file system state of those data stores. Now, that does rely on the data stores being crash-consistent. It does mean that, for example, we rely on the fact that Postgres has a write-ahead log, and we rely on the fact that MySQL InnoDB can recover from a power outage - but all of these data stores can recover from having the power ripped out. So we can recover these atomic snapshots in different environments, and we can take the snapshots without stopping the database.
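To make that concrete, here is a minimal sketch, assuming the Dotmesh Docker volume plugin (named `dm`) is installed as described in the Dotmesh docs; the volume and container names are illustrative:

```bash
# Postgres just writes files to its data directory, so put that
# directory on a Datadot via the Dotmesh volume driver:
docker run -d --name db \
  --volume-driver dm -v myapp:/var/lib/postgresql/data \
  postgres:10

# Take an atomic, crash-consistent snapshot without stopping the database.
# On restore, Postgres replays its write-ahead log, exactly as it would
# after a power cut.
dm switch myapp
dm commit -m "consistent snapshot while postgres is live"
```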
[MH] That is awesome and, I think, needed - otherwise these use cases might not be possible. The only thing I’m having trouble understanding is, if you’re working at the file system layer… there are some databases that actually bypass the file system - raw block devices that they deal with directly - how does that work in this case?
[LM] So, I haven’t seen a database that does that in quite a long time. We would have an answer for it - we could provide a snapshottable partition rather than a snapshot of a POSIX file system - but we haven’t seen that that’s needed yet. I think probably 80%+ of the databases that exist in the world (probably more than 80%) just write files to a file system. It’s definitely an interesting question, though, and another interesting aspect to it - playing devil’s advocate with myself here - is how do you actually deal with sharded databases, because being able to…
[MH] … yeah, how DO you actually deal with it if the data is sharded across different nodes?
[LM] Well, I’ll be very honest with you - at the moment we don’t. At the moment we’re focusing on these dev and CI use cases. We’re assuming that you can run your stack on a single machine, basically, but at the same time Dotmesh does already support running in clustered mode. It just has the restriction that each Datadot is only on one machine at a time, but you can have many of them spread across many machines. There are definitely some interesting questions around how to deal with capturing the state of a sharded or distributed database. I’ll just say that’s a research topic we’re investigating at the moment, and we hope to have a solution for it in due course.
[MH] I personally prefer being up front - saying “look, this is what you can do and here are the limitations” - over “we’re gonna solve all the problems of the world” and then it boiling down to “actually, by the way, in your case it doesn’t work.” I will have to follow up and research that - from my experience many years ago, Oracle was one of those that really did that with relational databases - but you’re probably spot-on with most of them, Elasticsearch and whatnot. They actually, at the end of the day, just have some local file system, ext4 or whatever, and do all the heavy lifting on top of the file system.
The sharded thing - that is something where maybe, you know, you don’t build the solution yourself; you might be able to plug into the solutions of others, as you actually did with Flocker.
[LM] There are a few things - there’s a project called Kanister from a company called Kasten. I was speaking to them yesterday and there’s an interesting opportunity to potentially collaborate around that. Kasten are taking the approach that you first described, which is that you integrate with each data store using the data store’s native backup tools. That means it can work with sharded databases, because the sharded database knows how to back itself up, right? So it’s definitely really cool that Kasten and Dotmesh are exploring these two possible solutions to the space in parallel. The other thing that is really cool is that Kanister - the project that Kasten have for actually integrating with these individual data stores - is open source, so it may well make sense that in the future we can leverage some of that and use that project.
[MH] …and shifting gears a little bit, there is an initiative within the Kubernetes community to standardize how container systems talk to storage - CSI, the Container Storage Interface. Can you, for the audience, put what you’re offering with Dotmesh in relation to that, or explain the overlap? Could one have influenced the other, or…
[LM] Yeah, so my understanding of CSI (the Container Storage Interface) is that it’s exactly that - it’s an interface, an API that you implement if you’re a storage provider. That’s really nice, because Kubernetes is not saying “we’re going to create storage”; they’re saying “we’re going to let everyone plug their storage into Kubernetes”. And for us, Dotmesh already plugs into Kubernetes. We have a FlexVolume driver, which is sort of the precursor to CSI, and we also have a dynamic provisioner, which is the way that Kubernetes allows you to do lifecycle operations on volumes, like create and destroy. The FlexVolume driver is then the thing that does attach and detach. I need to look more into CSI, but I believe CSI covers the FlexVolume side - it’s like a new version of the FlexVolume driver, just a nicer interface than the Flex API - and we’ll definitely support that when it’s ready.
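As a rough sketch of how the dynamic-provisioner side is consumed, a claim like the one below would ask Dotmesh to create a volume - assuming a StorageClass named `dotmesh` has been set up as in the Dotmesh Kubernetes install; the claim name and size are made up. Attach and detach would then be handled by the FlexVolume driver when a pod uses the claim:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  storageClassName: dotmesh
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```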
The other interesting thing that Kubernetes - SIG Storage - is working on is snapshots, and it would totally make sense to expose Dotmesh commits as snapshots through the Kubernetes snapshot API once that exists. So we’re going to be getting more involved in SIG Storage, and we’re going to be helping to actually write code and offer our engineering effort to move all of those efforts forward a little bit, and to make sure that we can provide implementations of those APIs as they develop.
[MH] Totally makes sense. My hope actually goes a bit further, really - that the emerging spec, or whatever extends or replaces it, is actually informed by, influenced by, shaped by what you guys have to put forward there, because that’s the way that makes a lot of sense, right?
I mean, there are not that many generic… actually, I only know of your work there! Your offering is the only one I know that actually does that - the snapshotting part in the context of containers. So actually, it should be the other way around - Dotmesh should be the reference implementation…
[LM] Well, we’ll see! I don’t want to seem like I’m charging in and telling everyone what to do. I’ll go in and listen and get involved.
[DEMO]
[MH] I did the Katacoda one that you have online there, which is sort of smooth - you just go there, clickety click, you don’t need to set up anything locally. Awesome. That reminds me a bit of - again, coming back to the analogy with the service mesh - in Istio there’s this thing where it can inject failures, right? You can say “every third one is a 404” or whatever. Is there something similar in Dotmesh, where you can actually inject a broken schema or whatever it is?
[LM] You could use Dotmesh to capture a schema that you’d broken in a certain way, or some data that didn’t upgrade properly when you tried to apply a schema modification to it, and I believe that’s valuable when you’re trying to collaborate with other developers around states. You basically build up this library of states in the Dothub (which is our SaaS) that are interesting to the entire team, and which anyone can take off the shelf at any point and just reinflate. You’ve got your code and your data in the same place, rather than just having code.
[MH] Is the data in the Dothub encrypted?
[LM] It’s something that we’ve heard a requirement for. It’s currently not encrypted at rest - we encrypt data in flight, to and from the Dothub, but not at rest yet. That’s clearly something we’re gonna come up against.
[MH] So we’ve got a few more minutes, if you want to talk a bit more about roadmap items or whatever - what’s coming up in the next release, what do you plan to do in the next couple of weeks, besides getting a little sleep!
[LM] So just before we touch on the roadmap - I showed you the tutorial, but I didn’t actually go through the demo. Shall I go through the demo now? For everyone who’s watching, I also encourage you to try it yourselves: you can go to dotmesh.com/trydotmesh, which is also linked from the homepage, and you get this little tutorial here.
[DEMO]
[MH] A question: the Dothub, which is essentially a central hub - like GitHub is for code - currently that’s your SaaS offering? I suppose if I want to have that in the enterprise, behind the firewall, then I’m gonna reach out to your big sales team that will tell me how much money I have to pay?
[LM] Yes, that’s very well said. I’ll just bring up our pricing page quickly, because if you go to dotmesh.com there is a section on pricing. Before I talk about pricing, it’s very important to point out that Dotmesh itself is open source - it’s available on GitHub at https://github.com/dotmesh-io/dotmesh - and I’m a strong believer in the necessity that the open source is feature-complete and powerful. That’s why Dotmesh (the open source) supports clustering. It supports everything that you can do with the Dothub today apart from the web interface, so if you want to run your own version of Dotmesh on premise, you can do that just by picking up the open source, and you can run it and operate it.
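Getting the open source running on a single machine was roughly this, per the install instructions in the Dotmesh README at the time of writing (check the repo in case the URLs or commands have changed):

```bash
# Install the dm client, then boot a local Dotmesh server in Docker:
sudo curl -sSL -o /usr/local/bin/dm https://get.dotmesh.io/$(uname -s)/dm
sudo chmod +x /usr/local/bin/dm
dm cluster init
```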
With that said, however, we are a business and we do need to make money, so we’re offering a hosted version of Dotmesh at dothub.com. You can see that it’s a SaaS service - for example, I’m logged in here as LM and I can see different branches. It looks and feels a tiny bit like GitHub, and this interface is the start of the thing that we’re going to turn into an Enterprise version.
So on the pricing page you can see that there’s a free tier, so you can come and try it for as long as you like with 1GB of storage for free. As soon as you bump over that 1GB limit, it’s a very simple ten dollars per user per month for our developer account - which is the price of the second-cheapest DigitalOcean droplet, so we priced it with respect to what developers are used to paying for things. Then as you start adding team and business features and functionality, the price goes up accordingly.
Then there’s the Enterprise version: as we develop more features in the Dothub that are specific to the SaaS, those are going to be the things that eventually turn into the more scalable business model, where you can run a version of the Dothub on premise and we can help and support you.
[MH] That’s super exciting, and I feel like I can really say “all right, I was excited yesterday, but now I get it” - and I will totally sign up for the free plan and perhaps upgrade to the developer plan!
It’s really awesome, I love it! I love it! I hope people out there can appreciate it as much as I do because you do need a little bit of a background in data to really get it but this is really kick-ass, this is really the future.
And thanks a lot for your time - look, I will see you soon in person to continue the discussion over a pint or whatever! But congratulations again, this is really awesome, and whoever has a question will be able to reach out via your support channels, Slack…
[LM] Yes! It’s worth mentioning - please do come and join us. On our homepage it’s actually a little bit hidden at the moment (I want to make it more obvious), but there’s a little Slack link down here, and that takes you straight to an invite page. So come join our Slack, chat to us, give us feedback, or reach out to us on Twitter at twitter.com/getdotmesh (because Dotmesh was taken). So yeah, I really look forward to continuing the conversation, and thank you Michael for taking the time.
Get involved.
- Sign up for Dothub for free.
- Try it via Katacoda, or the hello dotmesh tutorial.
- Check it out on GitHub.
- Give us feedback on Slack or get in touch via email.
- Learn more about what a Datadot is.
- Browse the tutorials here.