I love how Dr. Werner Vogels manages to put technology advancements into a broader context. In this year’s re:Invent keynote, it was about hope: the world keeps turning, and we should make progress under all circumstances. The topic of “the world” also tied into the keynote’s main technical theme: asynchrony and, by extension, event-driven architectures. The real world is asynchronous, and it works; synchronous systems, by contrast, are doomed to fail. The keynote covered important milestones in Amazon’s engineering history: the Distributed Computing Manifesto from 1998, the original design principles of S3, and the evolution of Amazon’s architecture from monolith to shared services. It also included new product announcements that help developers build event-driven architectures faster and better. The keynote ended with an outlook on a future in which simulations and quantum computing play an even bigger role.
Overall, the keynote was full of interesting insights and is well worth watching. Amazon engineers were already thinking about these subjects in the nineties, and now those ideas get worldwide attention in the cloud engineering space whenever Amazon makes its annual major product announcements.
In this article, I will give an overview of the products that Dr. Werner Vogels announced in his keynote and how they might relate to our work.
But before we dive into that, I’d like to share my favorite keynote quotes:
- The world is asynchronous. […] Synchronous is simplification. Synchronous is a convenience. Synchronous is an illusion.
- All complex systems that work evolved from a simpler system that worked.
- Learn from the universe itself. It is extremely agile, extremely fault-tolerant, and resilient and robust.
- Visualize everything. Systems don’t need it. It’s all for us. People need visualizations.
- And the one that especially warmed my heart: You must all know Martin Fowler. He is one of the most famous architects in our world. If you don’t know him, look him up.
And now to the announcements:
> A unified software development and delivery service, Amazon CodeCatalyst enables software development teams to quickly and easily plan, develop, collaborate on, build, and deliver applications on AWS, reducing friction throughout the development lifecycle.
Initially, I got insanely excited about this. It seems that AWS finally provides a better, out-of-the-box way to support a modern software development workflow for large teams.
What additionally excited me were the integration capabilities with standard tools such as JIRA, Slack and GitHub. And last but not least, the possibility to define application blueprints. I believe that we as an industry need to standardize much, much more when it comes to software development and software architecture. Having an integrated development environment that encourages you to use standard tools and ways of working is a great step. Especially since, as promised in the keynote, the platform takes care of all the “heavy lifting” (sometimes also called plumbing) of setting up everything you need to build, deploy and run an application, work that doesn’t add any business value by itself.
After looking into it, I’m not sure my initial excitement was justified. AWS has never had a strong focus on developer experience, and this new product is no exception. It looks a little as if they plugged CodeCommit, CodePipeline and Proton together behind a slightly nicer UI and user experience. There seem to be some extra features, like being able to set up dev environments with one click and connect them directly to your IDE. I still believe it is promising, and we will evaluate it further. For clients that are all-in on AWS and don’t want to maintain too many platforms and tools, it definitely looks like a much better alternative than the previous AWS offerings.
Following the theme of getting more developers onto AWS and keeping them there, the AWS Application Composer was announced. It is meant to make building serverless apps easier and faster.
> Today, AWS is launching a preview of AWS Application Composer, a visual designer that you can use to build your serverless applications from multiple AWS services.
>
> In distributed systems, empowering teams is a cultural shift needed for enabling developers to help translate business capabilities into code.
I really like the phrasing here because I believe this is the true power of all the cloud native (note: cloud native, not cloud-native) serverless services. A lot of applications contain huge amounts of redundant glue code, and way too much time is spent writing code that adds no value. This relates to the point brought up during the announcement of the previously mentioned CodeCatalyst: a lot of time is spent on the heavy lifting, and it shouldn’t be.
To be honest, my initial gut reaction to this announcement was huge skepticism. Using a visual editor to create code immediately makes me think of the impacts on long-term maintainability. I think a key point in the announcement is:
> This helps new builders when designing their first serverless applications and provides an initial configuration, which more advanced builders can amend. This allows you to include good operational practices when designing a serverless application.
It is clear that the tool’s aim is to get new developers started quickly. For “advanced builders” - which I translate to anyone who builds applications that are supposed to run in production, i.e. usually us - it does not seem like a good fit.
However, being naturally curious, I would still be interested in trying it out - at least for proof of concepts - and hearing opinions and experience reports about this.
I think this is the announcement I should actually have been the most excited about, because it really sounds like the removal of glue code. Basically, this new EventBridge feature translates the Unix pipe command to the serverless world, making it possible to pipe the output of one command (aka service) directly into another command (aka service) without the need to write code in between.
> Today, I’m excited to announce Amazon EventBridge Pipes, a new feature of Amazon EventBridge that makes it easier for you to build event-driven applications by providing a simple, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers, removing the need to write undifferentiated glue code.
I have to admit that I haven’t yet looked into it more deeply. Still, it’s something we will definitely explore much further when we build new architectures from scratch.
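To make the Unix-pipe analogy concrete, here is a minimal local sketch in Python of what such a pipe conceptually does: take events from a source, optionally filter and enrich them, and deliver them to a target. All names here are illustrative; this is not the actual EventBridge Pipes API, just the shape of the idea.

```python
# Conceptual sketch of a point-to-point "pipe": source -> (filter) ->
# (enrichment) -> target. In EventBridge Pipes these stages are
# configured, not hand-written -- which is exactly the glue code saved.
from typing import Callable, Iterable, Optional


def run_pipe(
    source: Iterable[dict],
    target: Callable[[dict], None],
    event_filter: Optional[Callable[[dict], bool]] = None,
    enrichment: Optional[Callable[[dict], dict]] = None,
) -> int:
    """Move events from source to target; returns the number delivered."""
    delivered = 0
    for event in source:
        if event_filter and not event_filter(event):
            continue  # dropped by the filter stage
        if enrichment:
            event = enrichment(event)  # optional enrichment stage
        target(event)
        delivered += 1
    return delivered


# Usage: pipe "order" events to a consumer, keeping only large orders.
received = []
count = run_pipe(
    source=[{"type": "order", "amount": 5}, {"type": "order", "amount": 50}],
    target=received.append,
    event_filter=lambda e: e["amount"] >= 10,
    enrichment=lambda e: {**e, "priority": "high"},
)
```

The point of the sketch is that the `run_pipe` plumbing is identical for every producer/consumer pairing; with Pipes, that part moves into the platform.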
The last announcement - which was chronologically actually the first one - was Step Functions Distributed Map.
> I am excited to announce the availability of a distributed map for AWS Step Functions. This flow extends support for orchestrating large-scale parallel workloads such as the on-demand processing of semi-structured data.
Something I am excited about as well, because it aims to simplify. Instead of having to use EMR or Kafka to perform MapReduce on large amounts of unstructured or semi-structured data, simple Lambda functions can now be used. This again seems like something to consider when designing new architectures, so as not to introduce complicated (or even complex) systems when they are not needed.
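The distributed-map idea can be sketched locally: fan a batch of items out to parallel workers (in Step Functions, each worker would typically be a Lambda invocation, with the map state’s `MaxConcurrency` capping the fan-out) and collect the results in order. This is a conceptual Python sketch, not the Step Functions API; the item-processing function is a made-up stand-in.

```python
# Conceptual sketch of a distributed map: parallel per-item workers
# with bounded concurrency, results collected in input order.
from concurrent.futures import ThreadPoolExecutor


def process_item(record: str) -> int:
    # Stand-in for the per-item Lambda: here, count fields in a CSV line.
    return len(record.split(","))


def distributed_map(records, max_concurrency=4):
    # Bounded fan-out, analogous to MaxConcurrency on the map state.
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        return list(pool.map(process_item, records))


results = distributed_map(["a,b,c", "d,e", "f"])
```

Swap the thread pool for Lambda invocations and `process_item` for your per-item handler, and you have the mental model of what the managed feature orchestrates at much larger scale.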
And that was it with the announcements and my take on them! I’m looking forward to using the new products in the solutions we build for our clients. If you are eager to try them out and don’t have the chance in your current job, apply to us 😉