How dotmesh improves and simplifies your Continuous Integration pipeline.
Now that we have seen some of the great things dotmesh enables when collaborating, such as capturing state from one or even multiple data stores and sharing it via dothub, let’s take a closer look at how dotmesh can improve your Continuous Integration (CI) pipeline.
One of the key pillars of CI, adopted by engineering organizations the world over, is automated testing. The more complex your stack of services, the easier it is to break… and the more important reliable automated testing becomes.
Indeed, the Wikipedia article on CI states that one of the key pillars is:

> … the test environment, or a separate pre-production environment (“staging”) should be built to be a scalable version of the actual production environment to both alleviate costs while maintaining technology stack composition and nuances.
As the CI pipeline becomes ever more complex, engineers and CI managers ask themselves several key questions:
- How do I build a CI environment that closely correlates with - or is a “scalable version of” - the actual production environment?
- How do I set up CI tests as quickly as possible, when each run can require initializing multiple data stores and bringing them to precisely the correct state for the tests at hand?
- Most importantly, how do I capture the complete state of CI after a test run?
In today’s article, we will explore how dotmesh simplifies capturing the complete state of CI after a test run. In a future article, we will look at how dotmesh greatly eases and speeds up setting up CI environments in the first place.
What happens when a CI run is complete?
Every reasonable CI implementation captures the logs as the “reliable record” of what happened. Usually these captures include the specific versions of the software tested: sometimes the specific git commit hashes, other times the specific Docker images used, preferably pinned to the exact image digest rather than a mutable tag. In the best of cases, it is both.
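As a minimal sketch, a CI step could record both identifiers in a build manifest. The file name and image name below are illustrative, not from any specific CI system:

```shell
# Record exactly what was tested (names here are placeholders).

# The exact git commit under test:
git rev-parse HEAD >> build-manifest.txt

# The exact, immutable digest of the Docker image used,
# rather than its mutable tag:
docker inspect --format '{{index .RepoDigests 0}}' \
    myorg/myservice:latest >> build-manifest.txt
```

Storing the digest rather than the tag means the manifest still identifies the tested image even if the tag is later repointed.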
However, while your test runs from a specific version of software, it runs against a specific set of data, which mutates throughout the test. What happens to that changed data? How do you know what changed, how it changed, what the final state was?
What organizations do nowadays depends, to a large extent, on whether the test run succeeded or failed.
In the case of a successful run, most organizations historically ignored the changed state. After all, the test passed, we have the logs… who cares?
Increasingly, many are rejecting this approach.
Some are required by compliance to keep track of everything that is tested and then deployed into a sensitive system. If your system processes payments and is subject to PCI-DSS, not only do you need good testing, but you need to prove that the release deployed really did pass tests correctly, and show every single parameter of that test. Logs and code simply won’t cut it. If you are regulated by a governmental authority, you want to be able to show the auditors that you examined everything reasonable. Without the ability to show them state, you are missing half your defence.
Others, however, simply realize that what compliance forces others to do might just be good practice in general. When your deployed system fails due to a bug, and it costs you $1MM in lost revenue and customer settlement, it is a pretty good idea to be able to go back to your entire test suite and figure out what you missed. Without your data, you are missing at least half of the critical information.
In the case of a failed test run, organizations face an unenviable choice: stop all CI in order to access the CI system and inspect the state of everything, including the data; or lose the state of the data and hope that the logs are sufficient.
The overwhelming majority of organizations choose the latter. Most SaaS CI providers don’t provide much choice in the matter.
The few who really care suffer the sudden and complete stop of all CI, and therefore all deployment and coding, until sufficient information is gathered from the CI system. It may be necessary, but it is a terrible price to pay!
The Dotmesh Way.
Dotmesh makes the entire problem go away.
With dotmesh, you capture the state of your CI databases at the end of the test, as part of your post stage, right before you discard the test environment.

In your pre stage, rather than simply launching data stores, start each of them on a subdot of a single dot. The dot can be a brand new one on your local cluster, if you plan on initializing each data store as part of your test run. Or, if you follow the CI rocket-speed seeding process coming in the next article, you continue to use that dot.

When your tests are done, during the post stage, commit the dot, capturing the entire state of all your data stores. Then send them off to dothub.
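As a rough sketch of the two stages, assuming the `dm` CLI and the dotmesh Docker volume driver, with dot, subdot, and remote names that are purely illustrative:

```shell
# --- pre stage: start each data store on a subdot of one dot ---
# (the dot name "ci-run" and the subdot names are placeholders)
dm init ci-run
docker run -d --volume-driver dm -v ci-run.mysql:/var/lib/mysql mysql:5.7
docker run -d --volume-driver dm -v ci-run.redis:/data redis:4

# ... run your test suite against the containers ...

# --- post stage: capture complete state before tearing down ---
dm switch ci-run
dm commit -m "CI build $BUILD_NUMBER: $TEST_RESULT"
dm push hub   # assumes a dothub remote named "hub" was added via `dm remote add`
```

Because every data store lives on a subdot of the same dot, a single commit captures a consistent snapshot across all of them.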
You now have the option to:
- Report on CI state including actual data change
- Recreate precise tests and compare data state at the end
- Clone the dot to see what future changes would bring
- Clone the dot and debug failures with complete state
- Anything at all
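For example, to debug a failure with complete state, a teammate could clone the committed dot from dothub onto their own cluster. The remote and user names below are hypothetical:

```shell
# One-time setup: add a dothub remote ("alice" is a placeholder username)
dm remote add hub alice

# Clone the dot captured by the failed CI run and switch to it
dm clone hub ci-run
dm switch ci-run

# Start the same data store against the captured state and inspect it
docker run -d --volume-driver dm -v ci-run.mysql:/var/lib/mysql mysql:5.7
```

The CI cluster itself is never paused; the investigation happens on a clone of the committed state.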
Most importantly, your state is not lost, but rather available to you, without stopping your CI builds, or slowing down the pace of your development and deployment.
Tomorrow, we will look at the other side of the equation: how to seed a complex CI pipeline sanely and quickly. In the days after that, we will walk through practical examples of using dotmesh to capture CI state changes. One day we will explore Jenkins, the most popular on-premises CI system; another day we will look at Travis, a highly popular CI-as-a-service offering for GitHub.