10 questions about web performance – Sander Temme at Apache
We now have the great privilege to present an interview about web performance with Sander Temme, a member of the Project Management Committee and a contributor to the Apache HTTP Server project.
This is part of our continuing series of interviews about web performance.
Pingdom: Why is web performance such a hot topic right now?
Sander: Oh, web performance has always been a hot topic! Remember that the Apache Group was originally founded by a group of site administrators who found themselves with performance requirements that the software at hand could not meet. So they started exchanging patches to the existing NCSA server software just to keep up with performance requirements.
What has changed recently is that more people are online, online more of the time, and we are using the web for more and more time-sensitive things like driving directions, time-limited “flash” sales, etc. The increased audience on the web has also led to unprecedented scale: companies that find themselves with a hit application face scaling requirements and growth rates far beyond anything seen before.
Another consideration is that the meaning of the word ‘performance’ has bifurcated. Web applications used to be the domain of specialized startups and relatively autonomous teams within established brick-and-mortar companies, and they were focused mainly on technical performance: make the app go fast. Recently, businesses have adopted the web as part of their core business, so now ‘performance’ can also have a business meaning like sales revenue or lead conversion.
Pingdom: There’s a lot of evidence that web users tend to leave a website if it loads slowly. Are users getting more demanding and impatient, or are there other reasons behind this?
Sander: I think there are two things going on here: one is impatience or distraction that leads to abandonment if a site’s response is less than snappy. We need the information for which we are browsing now, not five minutes from now. When browsing links from our Twitter or Facebook feed, we are faced with such an information overload that we have no patience for slow response. We’ll just click on the link in the next tweet or post in the stream, because it’s probably just as worthwhile a read. If a site loads slowly, we are likely to switch to another tab or window, and may never return.
The other aspect to this is that web site purveyors are getting better at measuring their users’ experience on their site. The tools are maturing, and increased focus on a site’s yield in terms of time spent, ads viewed or shopping transactions completed encourages operators to detect and address cases where slow response leads to abandonment. These patterns may have always been there, but the current crop of analytics tools allows for their detection.
Pingdom: Web performance involves a lot of testing and numbers. But at the end of the day, isn’t a user’s experience a personal and subjective experience? How do you reconcile the two?
Sander: Metrics analysis and subjective testing are different. One can watch users browsing a site or application, observe their behavior, or survey them afterwards. When analyzing metrics, the trick is to link data and behavior. Since you are likely to have a lot more data, statistics become important, and individual experience disappears. It becomes more about finding out which paths users most likely follow through your site, and which features lead to engagement or abandonment. A/B testing can help you find out whether changes actually result in measurable improvement of your metrics.
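As a minimal sketch of what evaluating an A/B comparison might involve, the snippet below applies a standard two-proportion z-test to hypothetical conversion counts; the visitor and conversion numbers are invented for illustration, and a real analytics pipeline would, of course, involve much more than this.

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how significant is the difference
    between variant B's conversion rate and variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: variant A converted 200 of 1000 visitors,
# variant B converted 260 of 1000.
z = ab_z_score(200, 1000, 260, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 95% level
```

Here the six-point lift yields z ≈ 3.19, well past the 1.96 threshold, so under these made-up numbers the change would count as a measurable improvement.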
Pingdom: Could, at least part of, the answer to improved web performance for end users be tighter integration between the components involved, like hardware, software, networking, etc.?
Sander: Theoretically, perhaps. It depends on how you define ‘tighter integration’. Having an entire application served by an in-process environment like the traditional mod_php or mod_perl may sound reasonable: the web server does not have to make any trips to the network to parse requests and create responses.
However, segregating web applications into tiers has technical and organizational advantages that outweigh any performance degradation incurred by the split. A web server process with an embedded PHP or Perl interpreter can be quite resource-intensive. Using it to serve static content is a waste of CPU and memory. The traditional three-way segregation of a web application cluster allows each tier to be scaled and tuned for performance. Skinny web heads serve static content and route requests for dynamic resources to fat application servers, which in turn access a robust and scalable data store. It’s the Model-View-Controller pattern writ large, and each tier can be designed specifically for the needs of the application in question.
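As a sketch of that segregation, an httpd front end might serve static files itself and hand dynamic paths to an application tier via mod_proxy. This is an illustrative fragment, not a recommended configuration; the host names and paths are invented.

```apache
# Skinny web head: serve static assets directly from disk.
DocumentRoot "/var/www/static"

# Route dynamic requests to a pool of fat application servers
# (app1/app2 are hypothetical back-end hosts).
<Proxy "balancer://appcluster">
    BalancerMember "http://app1.internal:8080"
    BalancerMember "http://app2.internal:8080"
</Proxy>
ProxyPass        "/app" "balancer://appcluster"
ProxyPassReverse "/app" "balancer://appcluster"
```

Each tier can then be scaled independently: add cheap web heads for static traffic, add balancer members as application load grows.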
Pingdom: What’s the relationship between web performance and scalability?
Sander: For the technical definition of performance, success means serving resources with low latency. Scalability means serving those resources to many users concurrently, without sacrificing performance. The trick is to perform well and scale well. To focus blindly on one or the other will cause unpleasant surprises. Web performance tuning used to concentrate on optimizing source code, network stacks, disk access etc. Today’s kernels, drivers and software are generally good enough for adequate performance, which allows application developers to concentrate on time-to-market and on scaling their application across multiple servers.
Pingdom: Best practice in mobile web performance isn’t as well established as in other fields like desktop. Are we getting closer to a sort of universal agreement or understanding of performance in the mobile space as well?
Sander: Mobile is much newer than desktop. The web has been coming to our desktops for close to twenty years, and we as a community have a wealth of experience in how to feed those desktops the data their users request. Mobile started in earnest only about five years ago, and presents a whole new set of challenges.
Mobile platforms are not as easily instrumented as desktop browsers, a situation compounded by the wide variety of networks and devices in use, and by the fact that mobile users, by definition, move around and switch networks without lowering their expectations of app responsiveness.
Pingdom: For someone who is going to start working with their web site and performance, where do you suggest they start? What should they do first?
Sander: The first thing they should do is take a step back and define what they actually mean by ‘performance’. They should define their goals: are they looking for a certain number of concurrent requests or users, or a certain response time? What constitutes ‘success’? Is it time users spend on the site (which translates into articles read, ads viewed and click-through rate), or orders taken and goods sold? Find metrics that quantify these goals, and measure them. Implement changes, and re-measure: do they result in an improvement to those metrics?
Pingdom: With everyone talking about cloud, it seems to be everywhere. What’s your view on cloud and web performance?
Sander: Cloud hosting allows application deployments to rapidly scale up and scale down by starting and stopping application instances or virtual machines on-demand. While cloud environments are not necessarily great for raw software execution speed, they do offer adequate performance to run applications. And the scaling mechanisms offered are compelling: especially in situations where demand is expected to fluctuate strongly, cloud deployment offers a great advantage.
From the point of view of a web app administrator, not all applications translate seamlessly to cloud deployments. Sometimes software design changes need to be made to accommodate the new platform. There might be security concerns, requiring that adequate controls be in place to safeguard application and customer data. Instead of code performance optimization, the accent shifts to packaging the application for easy deployment of new instances.
Cloud computing treats servers and software as commodities, to be added and removed on-demand. Application design should follow this paradigm shift in order to take optimal advantage of this new deployment environment.
Pingdom: What are we going to see happen in the next few years in terms of web performance?
Sander: In the short term, I think we will see more rich user interfaces with live updates within the loaded resource; more requests from apps with real-time response requirements, and less tolerance by users for sites and apps that update slowly. The advent of Web Sockets means we have to rethink scalability on the server side: the paradigm of dedicating a worker to each client connection breaks down when we move from stateless to stateful connections.
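The shift Sander describes, from dedicating a worker to each connection to event-driven handling, can be sketched with an asynchronous echo server. This is a generic illustration using Python's asyncio, not a depiction of Apache httpd's actual event MPM; the protocol and port selection are invented for the example.

```python
import asyncio

async def handle(reader, writer):
    # Each connection is a cheap coroutine rather than a dedicated
    # worker process or thread, so many long-lived (WebSocket-style)
    # connections can share a single event loop.
    data = await reader.readline()
    writer.write(b"echo: " + data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Port 0 asks the OS for any free ephemeral port.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    async def client(i):
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        writer.write(f"hello {i}\n".encode())
        await writer.drain()
        reply = await reader.readline()
        writer.close()
        await writer.wait_closed()
        return reply

    # Exercise the server with several concurrent clients.
    replies = await asyncio.gather(*(client(i) for i in range(5)))
    server.close()
    await server.wait_closed()
    return replies

replies = asyncio.run(main())
print(replies[0])  # b'echo: hello 0\n'
```

The same single-threaded loop would serve five thousand clients as readily as five; with one blocking worker per connection, each stateful client would pin a process or thread for its whole lifetime.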
Pingdom: Finally, is there something exciting that you or your company is working on in terms of web performance that you can tell us about?
Sander: The most exciting thing about the Apache Software Foundation is that it is not a company: we are not beholden to revenue or shareholders. This means we can focus on technologies and standards in the long term. It also means that we are free from the pressures of commercial competition: several alternative web servers, like nginx and lighttpd, have emerged that address specific user needs.
We welcome these efforts because they enrich the software landscape and offer increased choice to users. At the same time, we have recently released Apache HTTP Server version 2.4, with enhancements like event-driven connection handshakes; runtime loadable processing modules and FastCGI support to remove heavyweight scripting language interpreters like PHP from the httpd core processes; and experimental support for the lightweight Lua scripting language. We think this release is an important step on our way to support today’s deployment environments.
Web performance has never been more important. However, the definition of performance is in transition. Previously performance was measured strictly by quantitative factors like the speed and efficiency of web server software. Today it is increasingly defined by qualitative yet measurable user experience metrics like page responsiveness, optimal navigation through a site and continued low latency under increasing load. From operating system and web server software through the design and code of the application itself, the entire stack should be designed with scalability in mind.
About Sander Temme
Sander Temme applies hardware controls to the use of cryptography for Thales e-Security. In his scant spare time, Sander is a Project Management Committee member and contributor to the Apache HTTP Server project. He is a frequent presenter regarding Apache HTTP Server security and performance tuning at ApacheCon and other conferences. Sander has a degree in Experimental Physics from the University of Amsterdam and is owned by Murphy, the wonder cat.
About the “10 questions about web performance” interview series
We have gathered some of the best and brightest minds in the web and IT industry for a discussion about web performance. Over the next few weeks and months, we’ll be rolling out a series of interviews, bringing together people from web design, mobile and computer hardware, web hosting, software, and other areas. You can find all the interviews in this series on the Royal Pingdom blog.