Dotmesh 0.5 is released!
It is a fine tradition to announce Kubernetes Operators at KubeCon (hat tip to CoreOS).
So from here at KubeCon EU’18 in Copenhagen, I’m proud to announce the immediate availability of the first version of the dotmesh operator. The dotmesh operator opens the door to layering dotmesh on top of cloud volumes, moving dotmesh a step closer to being a much more powerful Kubernetes storage system.
Motivation for developing a Kubernetes operator.
We’ve been dog-fooding dotmesh by using it as the multi-node storage layer in the dothub: the central repository through which dotmesh users share their datadots.
While running the dothub on GKE, we realised that upgrading a cluster wipes out all the nodes, and since the dotmesh data was stored on local storage, we lost that too. Good job we had backups!
To ensure that the data in a dotmesh cluster can survive all the nodes being recycled, we’re working on a project internally called “Cloud Volumes v2” (or CVv2). The GitHub issue for this epic is here.
Today’s release is the first step towards “Cloud Volumes v2”: rather than consuming local storage on each node and providing PVs from those disks to applications, dotmesh will consume PVs from the underlying cloud provider and expose (more lightweight, more powerful) PVs upward to applications.
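To make that concrete, here’s a minimal sketch (in Go, using client-go) of how an application asks for a dotmesh-backed PV: a perfectly ordinary PersistentVolumeClaim against a dotmesh StorageClass. The class name, namespace and claim name are illustrative assumptions rather than exact API details, and the calls use the pre-1.18 client-go signatures (newer versions also take a context); the point is that nothing changes for the application, only what sits beneath the claim.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Connect to the cluster using the local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// "dotmesh" as the StorageClass name is an assumption for this sketch;
	// check the dotmesh docs for the class your cluster actually exposes.
	storageClass := "dotmesh"

	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "myapp-data",
			Namespace: "default",
		},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}

	// Newer client-go versions also take a context.Context and CreateOptions here.
	created, err := client.CoreV1().PersistentVolumeClaims("default").Create(pvc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("created PVC %s; it will be bound to a dotmesh-backed PV once provisioned\n", created.Name)
}
```

Under CVv2, fulfilling a claim like this no longer means carving space out of a local disk on the node: dotmesh satisfies it from cloud PVs it has already claimed for itself.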
Benefits.
As well as solving the GKE upgrade problem, cloud volumes support will soon bring a number of other benefits:
- Makes provisioning PVs faster, as API calls don’t need to be made to the slow underlying infrastructure. You don’t have to make an API call to your cloud when you spin up a container, so why should you have to when you spin up a volume for a PV?
- As a result of limiting API requests to the cloud provider, the problems of API calls being throttled and hitting limits go away. Now you can spin up and down as many stateful applications as you like.
- Take advantage of the fact that cloud provider volumes are synchronously replicated to provide failover: being able to fail over is an important production capability, and when we’re consuming reliable cloud volumes from e.g. EBS, PD, Rook or Portworx, we’ll be able to provide all the benefits of dotmesh such as fast commits & branches, and cross-cloud push/pull, without losing the ability to recover data when a node fails.
- Because many application PVs map onto a small number of cloud volumes, each host can support a much higher density of stateful pods (hundreds or thousands), compared with using EBS volumes (for example) directly, where you can’t attach more than 40 to a node.
- Provide multi-writer capabilities, so that multiple pods can read and write to the same dot from multiple nodes.
- Cross-AZ portability of volumes (datadots).
- Use dotmesh subdots to take atomic commits across multiple stateful services, even when those services are scheduled on different nodes.
The dotmesh operator.
Dotmesh 0.5 is the first step towards this vision: it replaces the daemonset that deploys dotmesh-server to all the nodes with an operator, which gives us more fine-grained control over the underlying resources.
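For a feel of what “operator” means here, below is a deliberately simplified sketch of the pattern in Go with client-go: periodically list the nodes, compare them against the dotmesh-server pods that exist, and create a pod for any node that lacks one. This is not the actual dotmesh-operator code, and the namespace, labels and image reference are assumptions for illustration; the real operator does more, but the shape (a reconcile loop that can make per-node decisions, such as which PV to attach) is what makes it more flexible than a daemonset.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// reconcile compares the nodes in the cluster against the dotmesh-server pods
// that actually exist, and creates a pod for any node that lacks one. A
// daemonset does this too, but an operator can make per-node decisions
// (e.g. which PV to bind to each pod), which is the point of the change.
func reconcile(client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		return err
	}
	pods, err := client.CoreV1().Pods("dotmesh").List(metav1.ListOptions{
		LabelSelector: "app=dotmesh-server", // label is an assumption for this sketch
	})
	if err != nil {
		return err
	}

	// Record which nodes already have a dotmesh-server pod.
	covered := map[string]bool{}
	for _, p := range pods.Items {
		covered[p.Spec.NodeName] = true
	}

	for _, n := range nodes.Items {
		if covered[n.Name] {
			continue
		}
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				GenerateName: "dotmesh-server-",
				Namespace:    "dotmesh",
				Labels:       map[string]string{"app": "dotmesh-server"},
			},
			Spec: corev1.PodSpec{
				NodeName: n.Name, // pin the pod to this node
				Containers: []corev1.Container{{
					Name:  "dotmesh-server",
					Image: "quay.io/dotmesh/dotmesh-server:latest", // illustrative image reference
				}},
			},
		}
		if _, err := client.CoreV1().Pods("dotmesh").Create(pod); err != nil {
			return err
		}
		fmt.Printf("created dotmesh-server pod for node %s\n", n.Name)
	}
	return nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// A simple polling loop; real operators typically watch for changes instead.
	for {
		if err := reconcile(client); err != nil {
			fmt.Println("reconcile error:", err)
		}
		time.Sleep(30 * time.Second)
	}
}
```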
The next step (probably in dotmesh 0.6) will be to bind PVs to each node so that dotmesh clusters can start to survive e.g. GKE upgrades :-)
For more information about the new configuration options available, see the documentation on the ConfigMap consumed by the dotmesh operator.
We are excited about this release because it takes dotmesh a step closer to production-ready volumes on Kubernetes, with the git-like benefits dotmesh brings: fast commits, branches & portability between different stages of the software development lifecycle, and even portability between different clouds!
Upgrading & feedback.
See the dotmesh 0.5 release on GitHub for release notes.
Note the specific instructions for upgrading dotmesh 0.4 to 0.5 on Kubernetes.
Please give us feedback on our plans and on this release on our Slack channel!
Get involved.
- Sign up for Dothub for free.
- Try it via Katacoda, or the hello dotmesh tutorial.
- Check it out on GitHub.
- Give us feedback on Slack or get in touch via email.
- Learn more about what a datadot is.
- Browse the tutorials here.