Finding an optimal strategy for implementing modern, fast web applications has been likened to the search for the elusive Holy Grail: every current way to implement such applications comes with its own set of drawbacks.
During a recent workshop with a customer, we discussed the different approaches to implementing modern and fast web applications. After demonstrating what is possible with client-side and server-side rendering approaches, the following question was raised:
This question sparked a very positive discussion and motivated us to look into the reasoning of some of those companies in detail.
In this article we focus solely on the different approaches to rendering pages within web applications; a detailed look at the features of current web stacks (or the patterns used therein) is beyond its scope. To distinguish the basic principles of the different approaches, we concentrate on page rendering and speed.
Let's start with the basic principles. The pages of a modern web application can either be:

- rendered on the server and delivered to the client as complete HTML, or
- rendered on the client from raw data (e.g. JSON) delivered by the server.
Server-side rendering is still widely used in current web frameworks (e.g. ERB templating in Ruby on Rails). The major drawback of a pure server-side approach is the full round-trip required on every page change, which hurts performance: a user's request (e.g. clicking a link) causes the page to be re-rendered completely (although server-side caching might speed up the response).
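To make the round-trip cost concrete, here is a minimal sketch of pure server-side rendering in Node.js. The function name and page structure are illustrative, not taken from any framework; the point is that every navigation produces a fresh, complete HTML document on the server.

```javascript
// Pure server-side rendering sketch: the full page -- layout, head, and
// content -- is assembled on the server for every single request.
function renderProductPage(product) {
  return [
    '<!DOCTYPE html>',
    '<html><head><title>' + product.name + '</title></head>',
    '<body>',
    '  <h1>' + product.name + '</h1>',
    '  <p>Price: ' + product.price + ' EUR</p>',
    '</body></html>'
  ].join('\n');
}

// Each click on a link triggers a new request and a complete re-render:
const html = renderProductPage({ name: 'Widget', price: 42 });
console.log(html);
```

A click on any link discards this document and repeats the whole process, which is exactly the round-trip cost discussed above.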
- Caching: Caching on the client is limited in most environments (localStorage defaults vary from 5MB to 25MB depending on the user agent) and is also hard to invalidate manually
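One common workaround for the invalidation problem is to tag every cache entry with a version. Below is a minimal sketch assuming only an object with the localStorage `getItem`/`setItem`/`removeItem` API; an in-memory stand-in is used so the snippet also runs outside a browser, and all names are illustrative.

```javascript
// Version-based cache invalidation for client-side storage.
const CACHE_VERSION = '2'; // bump this to invalidate all old entries at once

function makeStorage() { // in-memory stand-in for window.localStorage
  const data = {};
  return {
    getItem: (k) => (k in data ? data[k] : null),
    setItem: (k, v) => { data[k] = String(v); },
    removeItem: (k) => { delete data[k]; }
  };
}

function cachedSet(storage, key, value) {
  storage.setItem(key, JSON.stringify({ version: CACHE_VERSION, value: value }));
}

function cachedGet(storage, key) {
  const raw = storage.getItem(key);
  if (raw === null) return null;
  const entry = JSON.parse(raw);
  if (entry.version !== CACHE_VERSION) { // stale: written by an older release
    storage.removeItem(key);
    return null;
  }
  return entry.value;
}

const store = makeStorage();
cachedSet(store, 'search:widgets', [{ id: 1 }]);
console.log(cachedGet(store, 'search:widgets'));
```

This does not lift the size limits, but it makes manual invalidation a one-line change instead of a per-key cleanup.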
Implementation is not limited to pure server-side or pure client-side approaches. A mixture of both is perfectly possible, and the degree of what is rendered on the server versus on the client varies between approaches: e.g. only the initial page is rendered on the server, and from there on a client-side rendering approach takes over.
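The hand-over in such a mixed approach can be sketched as follows: the server renders the initial page and embeds the model data as JSON, so the client-side code can boot without an extra API round-trip. The function names and markup here are illustrative, not from any particular framework.

```javascript
// Server side: render the initial HTML and embed the bootstrap data
// the client-side app will need to take over rendering.
function renderInitialPage(items) {
  const list = items.map((i) => '<li>' + i.title + '</li>').join('');
  return '<ul id="results">' + list + '</ul>' +
         '<script id="bootstrap" type="application/json">' +
         JSON.stringify(items) + '</script>';
}

// Client side: read the embedded JSON instead of re-fetching it from the API.
function readBootstrappedData(html) {
  const match = html.match(/<script id="bootstrap"[^>]*>(.*?)<\/script>/);
  return match ? JSON.parse(match[1]) : null;
}

const page = renderInitialPage([{ title: 'First' }, { title: 'Second' }]);
console.log(readBootstrappedData(page)); // same data, no second request
```

From this point on, the client-side application owns the page and only raw data travels over the wire.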
This is where rendr, the new kid on the block, comes in. It intends to give developers the freedom to choose what should be rendered where. The idea is to provide a base for creating modern and fast web applications that overcome the known issues outlined before (e.g. performance, SEO, maintainability) and combine the advantages of both worlds.
Let's look at client-side vs. server-side performance first.
In a blog post, Karl Seguin pointed out that client-side rendering must, by definition, be slower when it comes to page rendering.
He goes on to state that it doesn't really matter whether you transfer gzipped HTML or gzipped JSON (e.g. for a search results page). Leaving aside that content plus markup is larger than content alone, he might have a point here as well.
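The payload point is easy to illustrate: serializing the same search results as JSON and as server-rendered HTML shows the extra bytes that markup adds on top of content. The data and markup below are made up purely for the comparison.

```javascript
// The same search results, once as raw JSON and once as rendered HTML.
const results = [
  { title: 'First result', url: '/a' },
  { title: 'Second result', url: '/b' }
];

const json = JSON.stringify(results);
const html = results
  .map((r) => '<li class="result"><a href="' + r.url + '">' + r.title + '</a></li>')
  .join('\n');

// The HTML version carries the markup bytes in addition to the content:
console.log('JSON bytes:', json.length, 'HTML bytes:', html.length);
```

After gzip compression the difference shrinks considerably, which is the core of Seguin's argument.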
That being said, I would not completely agree with his conclusion. Several other aspects might factor into the decision between a pure client-side and a pure server-side approach, and more importantly into the time that passes until the user is presented with a page they can interact with:
- Initial page load time: A page that relies on a huge amount of data might be better off presenting something to the user fast and loading the data asynchronously in the background, instead of delivering a page with all content rendered on the server.
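The bullet above can be sketched as a "shell first, content later" pattern. `fetchResults` here is a stand-in for a real API call; the names and the simulated delay are illustrative.

```javascript
// Present a minimal shell immediately, fill in the heavy data once it arrives.
function renderShell() {
  return '<div id="results">Loading…</div>';
}

function fetchResults() { // simulated asynchronous API call
  return new Promise((resolve) => {
    setTimeout(() => resolve(['alpha', 'beta']), 10);
  });
}

function renderResults(items) {
  return '<ul>' + items.map((i) => '<li>' + i + '</li>').join('') + '</ul>';
}

// The user already sees (and can interact with) the shell while data loads:
console.log(renderShell());
fetchResults().then((items) => {
  console.log(renderResults(items)); // swapped in once the data arrives
});
```

The perceived load time is the time until the shell appears, not the time until the last byte of data has been rendered.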
What to choose then?
Well then, what to choose performance-wise? Pure server-side? Pure client-side? Mixed?
The good news is that you don't necessarily have to choose between pure server-side and pure client-side approaches. For most use cases something in the middle (a mixed approach) makes the most sense and allows for the highest degree of freedom.
- The initial page load is faster (less markup, no content), so the user sees the page earlier.
This means that even though more payload is transferred in total (until the page has been assembled completely), with client-side rendering approaches the user sees content earlier. This is especially true for modern browsers running on modern hardware. For mobile contexts, or a user base known to have less powerful hardware and limited browsers, a server-side approach might be the better choice.
So what does such a mixed solution look like? Let's examine some examples from the field.
Tales from the field
airbnb (Relaunch of their mobile app in January 2013)
In January 2013, airbnb rewrote their Backbone.js (client) + Ruby on Rails (server) based mobile application. They now use Node.js + Backbone.js on the server, making use of their newly developed rendr library.
The advantage of the new approach is a faster initial page load, because real HTML is served by the server on the first request, and that HTML is also fully crawlable.
In extreme cases their previous search results page took up to 10 seconds until the results had loaded completely. With the new approach they brought that down to 2 seconds. This was achieved by serving HTML directly and loading most of the required JavaScript asynchronously. The user can interact with the page even before everything has loaded (the time to content we discussed earlier).
Twitter (Performance improvement in May 2012)
In 2010, Twitter completely overhauled their architecture toward an all client-side approach (#NewTwitter), relying heavily on front-end rendering with a REST API delivering the content. This offered a lot of advantages but lacked support for server-side optimizations, so in 2012 they moved rendering back to the server. As a result, initial page load times were cut to one-fifth of what they had been.
It’s agnostic on how requests are routed, which templating language you use, or even if you render your HTML on the client or the server
Basecamp (Basecamp Next in February 2012)
So what is the Holy Grail approach to developing modern and fast web applications? The most realistic answer is: it depends.
Leaving performance optimization techniques (e.g. caching, domain sharding, SPDY integration) aside, you have to look carefully at the nature of your application: there is certainly no one-size-fits-all solution.
What kind of application are you going to develop? An application that consists of one single view? Something like Google Mail? Or a more complex application with a multitude of different views? What kind of performance is important to you? Overall page speed? Time to content? Scalability?
Apart from that, other factors might be important, e.g. the existing skills of your team.
Even though the three examples from airbnb, Twitter, and Basecamp all involve different technology stacks, they share common ideas:
- get the time to first content (or time to first tweet respectively) down to a minimum, mostly by rendering the initial page on the server
- present the user with a highly interactive interface by using a client-side approach where necessary. This happens to varying degrees, from replacing HTML partials within a page to more complex client-side applications.
- try to reuse as much as possible on the client and on the server.
I guess the last point is probably the most important one. One of the nicest things about the approach airbnb chose is that you can use the same language on the client and the server, and can shift parts of the application freely between client and server while maintaining only a single codebase. Of all the mixed approaches presented, this seems to be the most interesting one.
As stated before, the rendr library is still pretty new and it's yet unclear where the journey will end. We still believe in the proven stack of Rails (or Sinatra) with a Backbone.js front end for future applications, but will definitely evaluate the rendr approach in one of our next projects.