Velocity 2012: Building a Stronger and Faster Web, Part 2

This is the second in a two-part series of articles on the O’Reilly conference Velocity 2012. The first article in the series is available here: Part One

O’Reilly’s Velocity Web Performance and Operations Conference, Velocity 2012, wrapped up after three days, June 25 through June 27. Now in its fourth year, Velocity is often described as the premier conference for web operations and web performance professionals. The theme of this year’s conference was “Stronger and Faster.”

The conference had four tracks, covering:

  • Web performance
  • Operations
  • Mobile performance
  • Culture

In the first article in the series, we focused on talks from the operations and culture tracks, which are of primary interest to web engineers whose roles are closer to operations. In this article, we focus on web and mobile performance, along with other topics of interest primarily to front-end web engineers.

We’ll focus on three talks in the web and mobile performance areas:

  • Patrick Lightbody, “Gathering Insights from Real User Monitoring”
  • Ben Galbraith & Dion Almaer, “The Performance of Web vs. Apps”
  • Matt Atterbury and Mustafa Tikir, “Using Google SiteSpeed and PageSpeed Products to Debug, Improve, Measure, Iterate”

Gathering Insights from Real User Monitoring

Patrick Lightbody

As an opening, Lightbody made a show of changing the title of the talk from Real User Monitoring (RUM) to Real User Measurement, because the point of RUM is to make objective measurements of real systems’ performance in order to learn about and optimize those systems. He presented a framework for evaluating monitoring tools in terms of how well they can assess a system’s performance, availability, and functionality, the three core attributes of systems that users care about:

  • Performance: how fast is it?
  • Availability: can users get to it?
  • Functionality: does it work the way it is supposed to?
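To make the performance dimension concrete, here is a minimal RUM sketch of our own (not code from the talk) that measures a real user’s page load time with the W3C Navigation Timing API, the standard browser mechanism at the time, and reports it via an image beacon. The /rum-beacon endpoint is hypothetical.

```javascript
// Minimal RUM sketch: measure a real user's page load time and beacon it
// back to the server. The /rum-beacon endpoint is made up for this example.
window.addEventListener('load', function () {
  // Wait one tick so that loadEventEnd has been populated.
  setTimeout(function () {
    var t = window.performance && window.performance.timing;
    if (!t) { return; } // older browsers lack Navigation Timing

    var pageLoadMs = t.loadEventEnd - t.navigationStart;

    // Report via an image beacon, a common RUM reporting technique.
    var img = new Image();
    img.src = '/rum-beacon?loadTime=' + pageLoadMs;
  }, 0);
});
```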

After establishing that framework, he described several categories of monitoring tools that can be evaluated within it, including:

  • Synthetic HTTP requests (external performance)
  • Network and application performance monitoring (APM)
  • Monitoring real users
  • Watching JavaScript errors to identify functionality problems (sketched below)
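For the last category, here is a hedged sketch of the basic idea, again ours rather than the talk’s: hook window.onerror and report uncaught JavaScript errors as a functionality signal. The /js-error-beacon endpoint is hypothetical.

```javascript
// Report uncaught JavaScript errors as a signal of broken functionality.
// The /js-error-beacon endpoint is made up for this example.
window.onerror = function (message, url, lineNumber) {
  var img = new Image();
  img.src = '/js-error-beacon' +
    '?msg=' + encodeURIComponent(message) +
    '&src=' + encodeURIComponent(url || '') +
    '&line=' + lineNumber;
  return false; // allow the browser's default error handling to run too
};
```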

Ultimately, evaluating the different monitoring solutions within that framework led to the conclusion that a combination of all of these approaches is required to get complete coverage across the three dimensions of performance, availability, and functionality. To get the complete picture, you can watch the video below.


The Performance of Web vs. Apps

Ben Galbraith & Dion Almaer

Galbraith and Almaer evaluated the differences between, and relative merits of, web-based and native applications on mobile devices. They explored the differences in terms of:

  • Ease of maintenance
  • Size of audience reached
  • Performance
  • Advanced visual effects vs. user interface responsiveness
  • Ease and frequency of delivery of new releases

The speakers were clear up front that they may have some bias toward web applications, as opposed to native ones, on mobile platforms. Still, their observations felt balanced in this regard, as well as grounded in fact, and they presented a number of counterpoints where the facts favor native applications. For example, while it is generally true that web applications have a larger reach than native ones, they noted that some native apps, such as Angry Birds, have a larger audience than almost anything else on mobile platforms.

They listed a few disadvantages of building native applications:

  • Need to cover many platforms
  • More markets will lead to more fragmentation
  • Difficult to get a consistent feel across fragmented platforms

They also listed one advantage of native applications: nothing is better for raw user interface performance. However, they pointed out that raw UI performance generally matters less than responsiveness, and there are many techniques for getting better responsiveness out of web applications. To illustrate, they mentioned a particular advantage of Node.js, which runs JavaScript on the server: since the application logic is implemented in the same language on both the client and the server, some of the page rendering burden can be shifted to wherever it will produce the most responsive application (see the sketch after this list):

  • Server-side rendering
  • Conditional rendering based on the client platform
  • Server rendering for the initial page load, client-side rendering for dynamic content updates
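Here is a minimal sketch of that pattern, ours rather than code from the talk: a hypothetical template function shared verbatim between a Node.js server, which renders the initial page, and the browser, which renders dynamic updates.

```javascript
// --- shared/render.js ----------------------------------------------------
// Hypothetical template function; runs unchanged in Node.js and the browser.
function renderItems(items) {
  return '<ul>' + items.map(function (item) {
    return '<li>' + item + '</li>';
  }).join('') + '</ul>';
}

// Export for Node.js, or expose globally in the browser.
if (typeof module !== 'undefined' && module.exports) {
  module.exports = renderItems;
} else {
  window.renderItems = renderItems;
}

// --- server.js (Node.js) -------------------------------------------------
// Render the initial page on the server for a fast first load; later,
// dynamic updates reuse the same renderItems() in the browser.
var http = require('http');
var renderItems = require('./shared/render.js');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<html><body>' + renderItems(['alpha', 'beta']) + '</body></html>');
}).listen(3000);
```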

Other server platforms can do this too, of course, but few if any can do it without maintaining duplicate implementations of portions of the software.

In the end, the conclusion seems to be that for most applications, starting with a web-based platform is the logical choice. For much more detail, and to evaluate for yourself, the conference video is available below.


Using Google SiteSpeed and PageSpeed Products to Debug, Improve, Measure, Iterate

Matt Atterbury and Mustafa Tikir

Matt Atterbury and Mustafa Tikir explained how to use Google’s PageSpeed and SiteSpeed products to improve web application performance. First, Tikir talked about using Google Analytics to get detailed statistics on visitors and how they interact with your web applications. He demonstrated how Google Analytics provides three important views of your performance data:

  • Overview: summary statistics with averages
  • Page timings: page load statistics with data in histograms with drill down
  • User timings: like page timings, but for any discrete events you instrument yourself (see the sketch after this list)
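As an illustration of user timings, here is a hedged sketch using the classic ga.js asynchronous API’s _trackTiming call, which was the user-timings mechanism in Google Analytics at the time. The loadSearchResults callback and the category/variable names are hypothetical, and the standard Google Analytics snippet is assumed to be on the page.

```javascript
// Sketch: report a custom "user timing" to Google Analytics (classic ga.js).
// Assumes the standard async GA snippet has defined _gaq on this page.
var start = new Date().getTime();

loadSearchResults(function onLoaded() {        // hypothetical app callback
  var elapsed = new Date().getTime() - start;  // milliseconds

  _gaq.push(['_trackTiming',
             'search',           // category
             'results-loaded',   // variable
             elapsed,            // time in milliseconds
             'homepage']);       // optional label
});
```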

Next, Atterbury discussed how Google PageSpeed can be used to optimize web applications without making actual changes to those apps. PageSpeed’s open source Apache module, mod_pagespeed, automatically rewrites HTTP responses on the fly, implementing a set of optimizations and web performance best practices. The module is structured as a framework with a set of filters, applied in series: each performs one function, and each can operate on the output of previous filters in the chain.

The filters’ benefits depend heavily on the application to which they are applied, and the main focus of Atterbury’s portion of the talk was determining which filters will improve performance for your applications and which won’t. He recommended performing a set of experiments using A/B testing to gather enough data to make an informed decision about which filters to use and how to adjust their settings, citing a number of benefits (see the sketch after this list):

  • Based on real world data
  • Tests are double-blind, because A/B decisions are made via random, weighted selection, and neither the administrators nor the users are aware of the selection.
  • Leverages Google Analytics for viewing and analyzing results, in SiteSpeed reports
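To make the “random, weighted selection” concrete, here is an illustrative sketch, not code from the talk, of how a request might be assigned to a variant with configured probabilities and no human in the loop.

```javascript
// Sketch: weighted random assignment of a user/request to an A/B variant.
// Variant names and weights here are illustrative.
var variants = [
  { name: 'control',      weight: 0.5 }, // filters disabled
  { name: 'pagespeed-on', weight: 0.5 }  // filters enabled
];

function pickVariant() {
  var r = Math.random(); // uniform in [0, 1)
  var cumulative = 0;
  for (var i = 0; i < variants.length; i++) {
    cumulative += variants[i].weight;
    if (r < cumulative) { return variants[i].name; }
  }
  return variants[variants.length - 1].name; // guard against rounding error
}
```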

Next, Atterbury talked about a number of specific filters and their observed effect on a real-world site (though with identifying marks removed). Mostly focusing on new filters, he discussed the following (a configuration sketch follows the list):

  • Inline preview images: substitutes low-resolution placeholder images, then replaces them with the full-resolution images after the page has initially rendered
  • Lazy loading images: delays loading images until they are needed (e.g., scrolled into view)
  • Defer JavaScript: one of the more aggressive filters; it prevents JavaScript from blocking the rest of the page load and runs the scripts afterward
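For reference, here is a sketch of what enabling these three filters looks like in mod_pagespeed’s Apache configuration; the directive and filter names follow mod_pagespeed’s conventions, but verify them against the version you run.

```apache
# Sketch: enable the three filters discussed above in mod_pagespeed.
ModPagespeed on
ModPagespeedEnableFilters inline_preview_images
ModPagespeedEnableFilters lazyload_images
ModPagespeedEnableFilters defer_javascript
```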

Some of the filters have the potential to change the functionality of the pages served in unexpected ways. However, with testing and some tweaking, PageSpeed can significantly increase both the real and apparent speed of your applications. Atterbury closed with a significant result: for the example site, applying the three filters above produced a page that is nearly twice as fast, with no changes to the application itself.

No YouTube video is available for this talk. However, it is included in the Velocity Conference 2012: Complete Video Compilation, and a list of free videos is available in the Velocity 2012 US YouTube playlist.