In Greek mythology, the phoenix is a long-lived bird that is periodically regenerated or reborn. This perfectly describes the basic idea of a paradigm that has emerged in recent years for managing application deployments and their runtime environments. If you map a phoenix's life onto the productive lifetime of a software revision, you will see why we chose the name Phoenix Principle for it. We abolished software and system updates in the traditional sense and introduced immutable deployments for ourselves and for most of our customer projects.
Chad Fowler wrote an excellent blog post half a year ago about the benefits they have seen and realized at 6Wunderkinder with this pattern. From our decades of experience administrating server infrastructure, one of the most remarkable advantages is getting rid of the unpredictable configuration and state of long-running server systems. The Phoenix turns infrastructure into code and thereby into a reliable, structured release lifecycle with version control, quality assurance and so on, as we know well from application development. No more leap in the dark when you hit return with full root permissions.
OK, so how do you adapt?
Paradigms to follow:
- Automate everything
- Replace instead of change or update
Automate and replace wherever possible
Everything means really everything. Throw away manually prepared virtual machine images! Do not log in to a server to change a config file or restart a process! Do not write administration guides! Please, automate. We routinely build a complete bootstrap process that takes you from a git repo to a running server with the custom business application deployed and ready for use. Three technology stacks were established in 2013:
| | Development Stack | Cloud Stack | AntiCloud Stack |
|---|---|---|---|
| Host controller and VM | Vagrant with VirtualBox | OpsWorks or cmdline on AWS EC2 | Foreman with VMware and bare metal |
| Provisioning | Chef or Puppet | Chef | Chef or Puppet |
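In practice, "automate everything" means provisioning steps that describe a desired state and are safe to re-run — the convergence model behind tools like Chef and Puppet. A minimal shell sketch of one such idempotent step (the file path and values are made up for illustration):

```shell
#!/bin/sh
set -e                            # abort the whole bootstrap on the first failure
CONF=/tmp/phoenix-conf/app.conf   # hypothetical config target
mkdir -p "$(dirname "$CONF")"     # "directory exists" is the desired state
# Write the desired state unconditionally; running this twice
# gives exactly the same result as running it once.
cat > "$CONF" <<'EOF'
port=8080
workers=4
EOF
```

Chaining many such steps — from base packages up to the deployed application — is what turns a bare git checkout into a running server without a single manual login.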
In 2014 we are going to use container-based deployments (e.g. with Docker) for our customers, and we are following the CoreOS development with great interest. This is a fast-changing field, and we may see some winners in the PaaS race this year.
Docker provides some basic features out of the box for building a "replace-only infrastructure" yourself: automatic network configuration plus service registration and discovery to orchestrate different services. It is possible to build that on AWS too, but for the "AntiCloud Stack" it used to be a really time-consuming goal. To realize the replacement, you build a new instance (container or VM) of your application, test it and then switch it on. The switch from old to new happens on the network layer. It's like driving your car to the dealer and, instead of waiting for a repair, instantly getting exactly the same car with your custom configuration. This would be quite expensive in the analog world, but it is possible for bits and bytes.
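The old-to-new switch can be sketched locally with the same pattern at the filesystem level: two complete releases exist side by side, and a single atomic pointer flip decides which one is live. A minimal sketch (paths and version strings are invented; in production the pointer would be a load-balancer or DNS entry instead of a symlink):

```shell
#!/bin/sh
set -e
ROOT=/tmp/phoenix-bluegreen; rm -rf "$ROOT"
mkdir -p "$ROOT/releases/blue" "$ROOT/releases/green"
echo "v1" > "$ROOT/releases/blue/VERSION"     # the running release
echo "v2" > "$ROOT/releases/green/VERSION"    # the freshly built replacement
ln -sfn "$ROOT/releases/blue" "$ROOT/current"
# ... here the new release would be smoke-tested in isolation ...
ln -sfn "$ROOT/releases/green" "$ROOT/current"   # the switch: one atomic flip
cat "$ROOT/current/VERSION"                      # prints "v2"
```

Note that the old release stays around untouched, so rolling back is just flipping the pointer again.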
When building the replacement instances, you can see how both paradigms play together: the build should be fast, tested and possibly unattended, e.g. triggered by an upscaling request from a load monitor. Automation makes that possible.
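Such an unattended trigger boils down to a tiny decision rule. The thresholds below are arbitrary examples, and the echoed actions stand in for the real build-test-switch pipeline:

```shell
#!/bin/sh
# Decide what the autoscaler should do for a given load percentage
# (thresholds of 80/20 are illustrative assumptions).
decide() {
  load=$1
  if [ "$load" -gt 80 ]; then
    echo "scale-up: bootstrap, test and enable a new instance"
  elif [ "$load" -lt 20 ]; then
    echo "scale-down: drain and destroy one instance"
  else
    echo "steady: do nothing"
  fi
}
decide 95   # -> scale-up: bootstrap, test and enable a new instance
decide 50   # -> steady: do nothing
```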
To sum up, here is a short list of gains an implemented Phoenix Principle earns you:
- zero downtime deployments
- traceability of changes (provisioning scripts in git)
- duplicate complete infrastructures for testing
- horizontal up-/down-/autoscaling
- easier migration (vertical scale, change of provider, geographical move/duplication)
No pros come without some challenges:
- initial setup effort
- application architecture must support things like fault-tolerance, asynchronous communication (retries, queues)
- persistent data (live migrations of data structures)
- write-intensive databases (clustering, partitioning)
Automate server setup, including provisioning and application deployments, and use that full bootstrap wherever possible to let your services rise like a phoenix.