
Web performance – Weekend must-read articles #25

This week we focus on web performance. There’s something about HTTP pipelining, scalable web architectures, a pure Python memcache client, and web performance testing with Fiddler.

Every week we bring you a collection of links to places on the web that we find particularly newsworthy, interesting, entertaining, and topical. We try to focus on some particular area or topic each week, but in general we will cover Internet, web development, networking, performance, security, and other geeky topics.

This week’s suggested reading

HTTP Pipelining – not so fast…(nor slow!)

HTTP pipelining is an old optimization technique that’s been getting some renewed interest recently. I’ve written in the past about how pipelining is broadly used on mobile, and recently Chrome and Firefox have been considering enabling it by default.
I set out to assess the value of pipelining for page load times, and surprisingly found it to have very little effect. This result surprised me, so I dug deeper and looked at the data from multiple angles – the rest of this blog summarizes my research and findings.
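To get an intuition for why pipelining *should* help, here’s a minimal back-of-the-envelope latency model (our own sketch, not from the linked article): without pipelining each request pays a full round trip, while an idealized pipeline pays one round trip up front and then streams responses back-to-back. Times are in milliseconds.

```python
def serial_time(n_requests, rtt_ms, transfer_ms):
    """Without pipelining: each request waits a full round trip
    before the next one is sent on the connection."""
    return n_requests * (rtt_ms + transfer_ms)

def pipelined_time(n_requests, rtt_ms, transfer_ms):
    """Idealized pipelining: all requests go out back-to-back,
    responses stream in order after a single round trip."""
    return rtt_ms + n_requests * transfer_ms

# Example: 10 requests, 100 ms RTT, 20 ms per response body
print(serial_time(10, 100, 20))     # 1200 ms without pipelining
print(pipelined_time(10, 100, 20))  # 300 ms in the idealized case
```

The gap looks huge on paper, which is exactly why the article’s finding is interesting: this model ignores head-of-line blocking, server processing order, and parallel connections – the real-world factors that eat most of the theoretical gain.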

We had another article on HTTP pipelining in last week’s roundup.

Improving UX through front end performance

This is a presentation by Lara Swanson, User Experience Manager at Dyn.

Concurrent programming for scalable web architectures

Web architectures are an important asset for various large-scale web applications. Being able to handle huge numbers of users concurrently is essential, thus scalability is one of the most important features of these architectures. Multi-core processors, highly distributed backend architectures and new web technologies force us to reconsider approaches for concurrent programming in order to implement web applications and fulfil scalability demands.
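As a tiny illustration of one of the concurrency models such architectures rely on – not code from the thesis itself – here is a worker-pool sketch in Python, where a fixed pool of threads serves many simulated requests at once (the request handler is a made-up placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Stand-in for real work: parsing, querying a backend, rendering.
    return f"response for request {request_id}"

# A pool of worker threads lets one process serve many requests
# concurrently; event loops and actor-style message passing are
# the other common models for the same problem.
with ThreadPoolExecutor(max_workers=8) as pool:
    responses = list(pool.map(handle_request, range(100)))

print(len(responses))  # 100
```

With blocking I/O in the handler, the pool size caps how many requests are in flight at once – which is precisely the scalability pressure that pushes large systems toward event-driven and distributed designs.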

We went down, so we wrote a better pure python memcache client

Memcache is great. Here at Mixpanel, we use it in a lot of places, mostly to cache MySQL queries but also for other data stores. We also use kestrel, a queue server that speaks the memcache protocol. Because we use eventlet, we need a pure python memcache client so that eventlet can patch the socket operations to be non-blocking. The de-facto standard for this is python-memcached, which we used until recently.
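The pattern Mixpanel describes – caching MySQL query results in memcache – is the classic cache-aside pattern. Here’s a minimal sketch of it; a plain dict stands in for the memcache client, and `query_db` is an illustrative placeholder, not Mixpanel’s actual code:

```python
cache = {}    # stand-in for a memcache client; real code would call
              # client.get()/client.set() against a memcached server
db_hits = 0

def query_db(sql):
    # Placeholder for an actual MySQL query.
    global db_hits
    db_hits += 1
    return f"rows for: {sql}"

def cached_query(sql):
    """Cache-aside: check the cache first, fall back to MySQL on a miss,
    then populate the cache so the next caller gets a hit."""
    result = cache.get(sql)
    if result is None:
        result = query_db(sql)
        cache[sql] = result  # a real client would also set a TTL here
    return result

cached_query("SELECT * FROM users")  # miss: hits MySQL, fills the cache
cached_query("SELECT * FROM users")  # hit: served from the cache
print(db_hits)  # 1
```

With eventlet in the picture, the important property is that the client’s socket operations can be monkey-patched to be non-blocking – which is exactly why a pure Python client matters to them.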

Design and implementation of web prefetching solution

The enormous potential of locality-based strategies like caching and prefetching to improve web performance motivates us to propose a new algorithm for performance evaluation in scenarios where different parts of the web architecture interact. Web prefetching is a technique for reducing web latency: it predicts the next web object the user will access and fetches it during idle time, so if the user does request it, the object is already in the client’s cache. The technique takes advantage of the spatial locality shown by web objects – prefetching processes user requests before they are actually issued, hiding the request latencies and reducing the time the user must wait for the requested documents.
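The prediction step can be as simple as a first-order model over the access history: remember which object most often follows the current one, and prefetch that. This sketch is our own illustration of the idea, not the paper’s algorithm:

```python
from collections import Counter, defaultdict

class PrefetchPredictor:
    """First-order predictor: suggest the object most often requested
    right after the current one, so it can be fetched in idle time."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # url -> follower counts
        self.last = None

    def record(self, url):
        """Feed the predictor one observed request."""
        if self.last is not None:
            self.transitions[self.last][url] += 1
        self.last = url

    def predict(self, url):
        """Return the most likely next object, or None if unseen."""
        followers = self.transitions[url]
        if not followers:
            return None
        return followers.most_common(1)[0][0]

p = PrefetchPredictor()
for url in ["/", "/style.css", "/", "/style.css", "/", "/about"]:
    p.record(url)

print(p.predict("/"))  # /style.css – fetch it before the user asks
```

Real prefetchers weigh the prediction’s confidence against wasted bandwidth: fetching an object the user never requests costs bytes for no latency win.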

Creating a web performance test using Fiddler

As you all know (or should know), Fiddler is a really powerful web debugging proxy which logs all HTTP(S) traffic between your computer and the internet. Fiddler allows you to inspect traffic, set breakpoints, and “fiddle” with incoming or outgoing data. One cool thing I didn’t know before is that you can export Fiddler sessions to a Visual Studio Web Test. It’s really easy, and I’ll show you how.

And, to finish off with, head over to Internet-map.net and type in a URL. Enjoy!

You can also subscribe to these articles

Subscribe and have these weekly articles delivered straight to your email inbox.

Sign up here!

Image (top) via Shutterstock.
