Almost two weeks ago I went to the Serverless Conference in London - as far as I know the first and currently only conference series dedicated to serverless technologies, architectures, the surrounding vendor ecosystem and the respective user communities. You can find recordings of all talks on YouTube.
State of 'Serverlessness' and other key take-aways
What is serverless or serverlessness anyhow? Initially both terms were often used as synonyms for Backend-as-a-Service (BaaS) and Function-as-a-Service (FaaS) style offerings. This unduly narrows the notion down, on the one hand, to the domain of mobile application development and, on the other, to being simply another abstraction layer on top of 'containerisation' (e.g. Docker and friends) that enables deployment and execution of code in the cloud down to the granularity of single functions. By now it also seems apparent that none of the proponents of serverlessness propose to get rid of server hardware or processes entirely - as is sometimes ridiculed. But judging from the attempts at nailing down the notion at the conference, there still is no concise, canonical definition of serverless. At least a consensus seems to be emerging that serverlessness should not be conflated with BaaS or FaaS but should rather be broadened to encompass styles of system design and implementation (including corresponding architectural styles) that try to minimise infrastructure management and maintenance overhead and costs by outsourcing as much generic functionality as possible to vendors that provide it in the form of managed services.
Understood primarily as an outsourcing strategy, serverless obviously is nothing entirely new under the sun and as such has always amounted to more or less conscious decisions about the corresponding trade-offs: trade-offs between increased vendor dependency and lock-in effects on the one hand and the potential for speeding up time-to-market, improved scalability from prototype to production and cost reduction on the other. Or as Patrick Debois succinctly put it in his keynote:
whatever you’re doing with serverless - you could have done it before
people jumped to docker for the same reasons
that were promised by vendors: that you’re gonna be faster and more agile.
Considering the current hype around serverless it was - somewhat surprisingly - not all pixie dust and fairies at the conference: quite a lot of speakers took a critical, if not outright skeptical, stance towards it.
Although AWS Lambda - Amazon's poster-child serverless offering - has been around for almost two years now, it seems that we're still at quite an early stage in the adoption and maturity cycle. This is also reflected by the repeated criticism at the conference that the product offerings of the big vendors are still lacking in fundamental ways regarding provisioning, deployment, config management, instrumentation (with respect to logging, monitoring and analytics), tracing, debugging and testing (anything up the testing pyramid from unit testing) as well as general tooling and IDE integration. This critical take on things was somewhat counterbalanced by vendors demonstrating that - at least regarding developer tool integration and debugging - things will improve in the near future.
A recurring theme - and somewhat of an eye opener for me - was that adopting serverless patterns does not mean that you can get rid of operations. On the contrary, some speakers argued that ops, config and infrastructure management automation will gain weight within your overall activities in delivering cloud based software. This is because serverless can be taken as getting rid of running and maintaining your own infrastructure by abstracting away from hardware, VMs, and even containers and OSes, running functions as a service and utilising managed services wherever possible. This usually means the overall balance of code vs. configuration and provisioning shifts towards the latter. Also, since serverless tends to lead to architectural patterns that are event driven and distributed in nature - necessitating automated deployment and configuration down to the level of individual functions - you not only need developers with an ops mindset and ops skills but also ones that are keenly aware of the technical and business trade-offs incurred by the adoption of serverless in the guise of increased vendor dependencies and shared responsibilities. Charity Majors even went so far as to argue that you had better understand the strengths and weaknesses of the technology stack and the competencies of the vendor that you buy into - including its business model - since a third-party service will always and foremost protect itself.
Some speaker-highlights and practical tips
Arriving late at the venue, I jumped right into the middle of a very entertaining talk about 'Serverlessness, NoOps and the Tooth Fairy' - featuring lots of rainbows and unicorns - by Charity Majors of ex-Parse fame. The talk nonetheless had a pretty serious topic in debunking the myth that serverless will allow us to get rid of operations. Charity proposes a very broad definition of ops that even encompasses non-developer functions within an organisation:
Operations is the constellation of your org’s technical skills, practices, and cultural values around designing, building and maintaining systems, shipping software, and solving problems with technology.
Or on another take:
Ops is the way you get things done.
In the end she seems to understand operations to involve anything an organisation needs to deliver a product or service of the highest possible quality that still makes sense and is viable from a business point of view.
Some other very memorable quotes by Charity:
On the role of ops:
Operations skills are not optional for software engineers in 2016. They are not 'nice-to-have', they are table stakes.
On the relative cost of software development vs maintenance:
The cost and pain of developing software is approximately zero compared to the operational cost of maintaining it over time.
On ops and service ownership:
The center of gravity for application operations is shifting, from dedicated in-house ops teams to software engineers who own their services end to end (with a little help from their SaaS friends).
She also argued that exposing software engineers to customer support activities will improve quality of service. This would also free expert ops staff to do more important work than being 'cannon fodder' for your software engineering team, such as catering for instrumentation, resiliency etc.
The shadow side of DevOps: Software engineers need to level up at operations. Outsource as many ops problems as possible! But own operational excellence for your core differentiators.
Because in the end, "you're gonna be made responsible" for your product or service by your customers.
Paul Johnston argued in his talk about 'The future of serverless' that serverless is all about reducing maintenance costs, but that this should not be conflated with ops skills becoming optional. On the contrary, he maintains that config related code will become even more important. He suggests going with Hashicorp's Terraform as far as possible (even for the overly verbose AWS API Gateway setup). He sees an issue with frameworks focusing too much on AWS Lambda and API Gateway, thereby tending to omit other services used in the context of more complex architectures. He also maintains that a real FaaS-supporting queueing service is badly missing in the serverless landscape; he currently finds DynamoDB to be the simplest fall-back as a primitive queueing system. Generally he thinks that it will make more and more sense to keep related config and code together - maybe even going so far as embedding code in config.
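To make the 'DynamoDB as a primitive queue' idea concrete, here is a minimal sketch of the underlying pattern. It deliberately avoids real boto3 calls (which would need AWS credentials) and instead models in plain Python the one primitive DynamoDB contributes: a conditional write, so that exactly one consumer can claim a queued item. All names here are invented for illustration, not taken from any talk.

```python
import time
import uuid

class PrimitiveQueue:
    """In-memory stand-in for a DynamoDB table (ab)used as a queue."""

    def __init__(self):
        # item_id -> {"payload": ..., "claimed_by": None, "ts": insert time}
        self.items = {}

    def enqueue(self, payload):
        """Put an unclaimed item into the 'table'."""
        item_id = str(uuid.uuid4())
        self.items[item_id] = {"payload": payload, "claimed_by": None, "ts": time.time()}
        return item_id

    def claim_one(self, consumer_id):
        """Claim the oldest unclaimed item.

        In DynamoDB this would be an UpdateItem with a ConditionExpression
        like attribute_not_exists(claimed_by), so that only one consumer's
        write succeeds; here the conditional check is a simple None test.
        """
        for item_id, item in sorted(self.items.items(), key=lambda kv: kv[1]["ts"]):
            if item["claimed_by"] is None:
                item["claimed_by"] = consumer_id  # the 'conditional write'
                return item_id, item["payload"]
        return None  # nothing left to claim

q = PrimitiveQueue()
q.enqueue("resize-image-42")
claimed = q.claim_one("worker-a")          # worker-a wins the item
nothing_left = q.claim_one("worker-b")     # worker-b finds the queue drained
```

The sketch also hints at the limitations Johnston alluded to: visibility timeouts, retries and dead-lettering all have to be hand-rolled on top, which is why he considers this only a fall-back until a real FaaS-friendly queueing service exists.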
Another highlight was Danilo Poccia presenting a range of serverless architecture patterns and showcasing them in an example architecture for a serverless media sharing application (which can also be found in his latest book, which I can highly recommend). Some key take-aways:
- strive for an event driven design (learn from the front-end notion of data binding and from reactive programming models). A useful pattern in this context is the CQRS pattern. Take important guiding principles from the Reactive Manifesto.
- he discussed distributed vs centralized dataflows and their respective relation to the notion of service choreography vs orchestration, arguing that the former in each case is generally the superior pattern for event driven designs.
- data should drive boundaries of services (this idea seems somewhat related to the concept of ‚bounded context‘ from the Domain Driven Design approach)
- the main issues to think about are the time dimension (AWS takes care of the space dimension) and being 'message driven'
- generally, he sees the main advantage of serverless architectures in being able to scale all the way from prototype to production, if well designed.
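The CQRS pattern mentioned above can be sketched in a few lines of plain Python. This is my own minimal illustration, not code from the talk: the command side only appends events, and the read side is a denormalised projection built from those events. In a serverless setting the two sides would typically be separate Lambda functions wired together by an event stream; all names below are invented.

```python
from collections import defaultdict

events = []                    # append-only event log (the write side)
read_model = defaultdict(int)  # denormalised view for queries (the read side)

def handle_command(command):
    """Command handler: validates input and emits an event.

    It never touches the read model directly - that separation is
    the whole point of CQRS.
    """
    if command["type"] == "upload_photo":
        event = {"type": "photo_uploaded", "user": command["user"]}
        events.append(event)
        project(event)  # in a distributed setup this would be async

def project(event):
    """Projection: folds events into the read model.

    Queries hit this precomputed view instead of replaying the log.
    """
    if event["type"] == "photo_uploaded":
        read_model[event["user"]] += 1

handle_command({"type": "upload_photo", "user": "alice"})
handle_command({"type": "upload_photo", "user": "alice"})
```

Because writes and reads are decoupled through events, each side can scale (and fail) independently - which is exactly why the pattern fits the event driven, distributed designs discussed at the conference.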
What astounded me about Chris Anderson's talk on 'The making of Azure Functions' was the degree of openness under which development is taking place and that a lot of the platform is published under a liberal open source license. Microsoft's Azure Functions team is - similar to IBM's OpenWhisk and Google Cloud Functions teams - very keen on community participation and feedback. Microsoft, but also IBM and especially Google, each demonstrated impressive debugging capabilities. In this regard Amazon certainly seems to be behind the curve. It will be interesting to see how the competition amongst the big players for developer mindshare and community building evolves.
Some practical tips regarding AWS Lambda that came up repeatedly:
- it's always a good idea to embed version config into the events that call a Lambda function, otherwise adverse side effects might hit you because of residual state in the underlying containers.
- don’t necessarily follow the principle of one AWS Lambda function per API endpoint. This is a sure way to land in deployment hell - amplified by the fact that deployment automation still is a largely unsolved problem in Lambda-land.
- don't count on AWS Lambda functions being stateless, since a function deceivingly 'remembers' data from previous invocations. This residual state can lead to bizarre problems that are hard to debug. Don't design for statelessness but rather for a share-nothing architecture.
- use a deployment automation tool and test your deployments, e.g. Gojko's very own Claudia.js. For a list of alternatives see the TNS Guide to Serverless Technologies.
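The 'remembered data' pitfall from the tips above comes from container reuse: Lambda may serve several invocations from the same execution environment, so anything at module scope survives between calls. The following self-contained sketch (invented names, no AWS dependencies) simulates two invocations landing in the same container - `cache` plays the role of module-level state in a real handler file.

```python
# Module-level state: in a real Lambda this is initialised once per
# container, NOT once per invocation - so it lingers between calls.
cache = {}

def handler(event, context=None):
    """Toy Lambda-style handler that accidentally accumulates state."""
    user = event["user"]
    if user not in cache:
        cache[user] = {"invocations": 0}
    # Each call in the same container sees (and mutates) the residue
    # left behind by previous calls - the source of 'bizarre problems'.
    cache[user]["invocations"] += 1
    return cache[user]["invocations"]

# Two invocations handled by the *same* container observe shared state:
first = handler({"user": "alice"})
second = handler({"user": "alice"})
```

The counter climbing across invocations is exactly the behaviour you cannot rely on (a fresh container would reset it) and must not be surprised by (a warm container will keep it) - hence the advice to design share-nothing rather than to assume statelessness.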
All in all the conference was very well organised and featured a lot of interesting talks. Nonetheless, I would have liked to see fewer vendors presenting their platforms, products and services in favour of more talks about concrete solutions and workarounds to current issues, proposals for architectural patterns for non-trivial use cases, and more demonstrations of creative uses of serverless tech such as Jeroen Resoort's: he showed how easy it is to use services such as AWS IoT - which are rather geared towards large-scale IoT implementations - for quick prototypes or even small hobby projects, showcasing a robot (a Mars Rover 'clone') built from mBot, Raspberry Pi, LEGO and some other components that could be remote controlled via the browser and commanded to take pictures of its surroundings and display them in the user's browser. Very neat!
To sum up, serverless still remains a somewhat vague notion that in the end seems to imply basically an outsourcing strategy building on a new generation of managed, cloud hosted IT services. Taking it as such, you should not assume that you can eventually dispense with ops related know-how and resources entirely (because you thought you had outsourced just that). To the contrary, you shouldn't be surprised if the pendulum swings towards the need for even deeper ops know-how within your organisation and especially within your development teams, since it seems that config and code will become ever more entwined when utilising serverless patterns. Additionally, since you also can't get rid of the responsibility for the functioning of your product or service in the eyes of your customers, you had better build up knowledge about the internals of your favourite vendor's technology stack, system design and architecture, and you should also strive to maintain good relations with your vendor's tech team so that you can devise and implement strategies to minimise the potential impact of vendor service degradation or outages on your own quality of service.