Our goal was to simplify the deployment process and allow our various engineering teams to deploy at any time, while increasing the cadence of deployments. To achieve this, we had to account for running our environment in both on-premises data centers and public cloud environments, and we wanted a common, seamless way to deploy regardless of the environment.
As we looked at our existing environment—one that had grown organically to support the large subscription billing economy—we saw two options: automate the existing approach, or change things up. We chose the latter because we needed to scale faster, remove some of the complexity that had crept in over time, and allow for more flexibility in our deployments. Accomplishing those goals would have been tough with our existing setup.
For our new deployment approach, we decided to move our application stack to Mesosphere Enterprise DC/OS for its cluster management and container orchestration capabilities. We also moved to versioned artifacts for the application code and adopted versioned configuration. This strategy gives us a common method across both on-premises and public cloud environments. For example, for our main core application, we use a JSON file to describe everything necessary to run the application: memory, CPU, application version, configuration version, and cluster size. Not only does this make deployment easy, it also lets us scale the application from 10 to 50 instances within a few seconds.
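As a sketch, such a file follows the standard Marathon app-definition JSON schema that DC/OS uses; the app id, image name, and version strings below are illustrative, not our actual values:

```json
{
  "id": "/billing-core",
  "cpus": 2,
  "mem": 4096,
  "instances": 10,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com/billing-core:1.4.2"
    }
  },
  "env": {
    "CONFIG_VERSION": "2017.06.1"
  }
}
```

Scaling from 10 to 50 instances is then a one-field change to `instances` (or, equivalently, `dcos marathon app update /billing-core instances=50` with the DC/OS CLI), and Marathon launches the extra tasks across the cluster.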
Another advantage of Mesosphere Enterprise DC/OS is that we are now able to do blue-green deployments. This means we run two identical versions of each application type: one version that is live and taking traffic (green), and another version that is offline (blue). When it comes time to perform a deployment, our build server pushes out a Mesosphere Enterprise DC/OS JSON file with the updated versions of the application and configuration artifacts.
With the new version of the application and configuration running—but not live—we can run validation tests against the pending environment. Once the tests pass, we use our internal tool Mink to enable the new application in the load balancer and start draining the previously live version. Once draining completes, the new deployment is live and the older version is turned off.
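The deploy-validate-flip-drain flow above can be sketched as a small state machine. Everything here is a hypothetical stand-in—`LoadBalancer`, `deploy`, and `validate` are illustrative names, not Mink's or DC/OS's real APIs:

```python
class LoadBalancer:
    """Toy load balancer that routes traffic to exactly one color at a time."""

    def __init__(self, live="green"):
        self.live = live

    def flip(self):
        # Enable the pending color; the previously live color starts draining.
        self.live = "blue" if self.live == "green" else "green"


def deploy(color, app_version, config_version):
    # Stand-in for the build server pushing an updated DC/OS JSON file
    # to the offline side of the pair.
    return {"color": color, "app": app_version, "config": config_version}


def validate(deployment):
    # Stand-in for the validation tests run against the pending,
    # not-yet-live environment.
    return deployment["app"] is not None and deployment["config"] is not None


lb = LoadBalancer(live="green")
pending = "blue" if lb.live == "green" else "green"

deployment = deploy(pending, app_version="1.4.2", config_version="2017.06.1")
if validate(deployment):
    lb.flip()  # the new version starts taking traffic

# The previously live color is now drained and can be turned off.
print(lb.live)  # blue
```

The key property the sketch captures is that traffic only moves after validation passes; if the tests fail, the live color is never flipped and the pending environment can be torn down with no customer impact.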
Over time, one issue we have found with any configuration management solution (CMS) is that it is hard to reproduce an older version of the application, because CMS solutions typically don't allow rollback of configuration files. With both the application and configuration versioned, we can now easily replicate any deployed version—to production, or back to development for testing.
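Because both artifacts are pinned by version, reproducing an older release amounts to resubmitting the app definition with the older pins—for example, as a partial update to the two version fields (again with illustrative names and versions):

```json
{
  "container": {
    "type": "DOCKER",
    "docker": { "image": "registry.example.com/billing-core:1.3.9" }
  },
  "env": { "CONFIG_VERSION": "2017.05.3" }
}
```

The same two pins can be pointed at a development cluster to recreate a past production state for debugging.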
With this more flexible setup, we will be able to increase the pace at which we perform deployments, which in turn lets us deliver new features to our customers sooner. We are also experimenting with canary deployments, which will enable more granular rollouts to specific parts of the application stack. Finally, we are going to use our new setup to automate horizontal application scaling.
Zuora Build Services Team:
Virender Khatri, Rajesh Dharmalingam, Huiying Hu and Gary Forrest