
Fixing A Slow Website, Part 2

This is the part-two continuation of my last post on Fixing A Slow Website. I wrote about how websites are often slow because of their design, not so much because of the network or servers, which are most commonly blamed for performance bottlenecks. It’s funny that many IT folks don’t really understand this, even though the knowledge isn’t new at all.

Let’s take a look at how bloated many websites have become.

Our main newspaper, the Straits Times, requires a browser to fetch a whopping 343 resources, totalling 6.88 MB, to render its front page. The page took 72.44 seconds to load. Wow, really, wow. To be fair, most of the page loaded much quicker, but it was consistently stuck on one resource that had to time out before the page could be considered loaded. (See GTmetrix report.)

CNN, on the other hand, needed 168 resources, totalling 3.33 MB, and just 11.13 seconds to load the page.

News websites tend to be rather heavy, because they have too much content, need plenty of visuals, and are also heavy on advertising. Fine. Let’s look at other types of websites. The Singapore Government website: 107 resources, 1.43 MB, 7.28 seconds (see GTmetrix report). Or the NDP website which really ought to be very simple: 47 resources, 1.55 MB, 3.98 seconds (see GTmetrix report).

In the table below, you’ll see a bunch of websites I’ve tested using GTmetrix.

| Website | Requests | Size (MB) | Time (s) | Page Speed Grade | YSlow Grade | Report |
|---|---|---|---|---|---|---|
| Straits Times | 343 | 6.88 | 72.44 | D (65%) | E (59%) | link |
| CNN | 168 | 3.33 | 11.13 | B (82%) | D (68%) | link |
| Singapore Government | 107 | 1.43 | 7.28 | B (88%) | C (75%) | link |
| NDP Website | 47 | 1.55 | 3.87 | D (68%) | C (71%) | link |
| IDA | 77 | 3.55 | 2.49 | B (88%) | B (80%) | link |
| Co.Design | 102 | 19.8 | 5.79 | A (98%) | C (74%) | link |
| HardwareZone | 245 | 6.21 | 17.55 | B (83%) | D (67%) | link |
| Mr Brown | 164 | 12.9 | 9.79 | D (65%) | D (69%) | link |
| Zit Seng’s Blog | 22 | 0.29 | 2.29 | A (98%) | A (91%) | link |

Apart from the fact that webpages often pull in too many resources, there are many more reasons why a webpage renders slowly, or cannot be fully rendered, because of the way it loads its resources and how those resources are delivered.

The above table is not intended to grade or compare one website against another, although admittedly I’ve inserted a shameless self-plug for my impressive double-A graded blog. The point is that there is plenty that is not optimal with all these websites, and their owners either don’t seem to know or they don’t care.

Much of this knowledge was captured by Yahoo about a decade ago. They published a set of rules, Best Practices for Speeding Up Your Web Site, which also became the book High Performance Web Sites that you can buy on Amazon. Incidentally, the YSlow Grade in the above table is based on the rules from Yahoo’s research.

The GTmetrix analysis is a very good starting point for learning about optimising your website, because its report also tells you exactly what you need to do.

The other score in GTmetrix is the Google Page Speed grade. It’s not exactly the same thing, but you can also run Google PageSpeed Insights directly on your own. Similarly, PageSpeed Insights makes some recommendations about what you can do.
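For those who like to script things, the same data can be pulled programmatically. Here’s a rough TypeScript sketch of fetching a performance score from the PageSpeed Insights API; I’m assuming the current v5 endpoint and response shape, and a Node 18+ runtime with built-in fetch.

```typescript
// Sketch only: query the PageSpeed Insights API (v5 endpoint assumed) for a
// performance score. An API key is optional for light use.
const PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

async function pageSpeedScore(url: string, strategy: "mobile" | "desktop" = "mobile") {
  const query = new URLSearchParams({ url, strategy });
  const res = await fetch(`${PSI_ENDPOINT}?${query}`);
  if (!res.ok) throw new Error(`PSI request failed: ${res.status}`);
  const data = await res.json();
  // The v5 response wraps a Lighthouse report; the performance score is 0..1.
  const score = data.lighthouseResult?.categories?.performance?.score;
  console.log(`${url} (${strategy}): performance score ${score}`);
}

pageSpeedScore("https://example.com").catch(console.error);
```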

Most of these recommendations to speed up your website revolve around three goals:

  1. Make as few requests as possible.
  2. Make the best of each request.
  3. Make it easy for the web browser to do its work.

Goals 1 and 2 above sound pretty similar. In fact, even 3 seems to be somewhat related. They basically ask you to put yourself in the shoes of a web browser and organise your website so that the browser can work faster. These recommendations aren’t really rocket science, but it did take some people to think about them and structure them as a set of guidelines, or rules, that others can easily follow.

Let me try to give an example. Both Yahoo and PageSpeed Insights suggest combining files in order to minimise HTTP requests. If a webpage refers to three CSS stylesheets, combine them into a single stylesheet, so that the browser only needs to make one request. Otherwise, with three separate stylesheets, the browser has to make three HTTP requests. No doubt many web browsers and servers support HTTP 1.1 pipelining, but breaking things up into too many parts still results in less efficient resource fetching, and most web browsers also impose a default limit on simultaneous connections to the same web server.
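To make this concrete, here’s a minimal sketch of the build-time half of the fix: a little Node/TypeScript script that concatenates several stylesheets into one, so the page needs only a single link to a stylesheet. The file names are made up for illustration, and a real build would probably also minify the result.

```typescript
// Minimal sketch: combine several stylesheets into one file at build time.
// Input file names are hypothetical; order matters because of the CSS cascade.
import { readFileSync, writeFileSync } from "node:fs";

const parts = ["reset.css", "layout.css", "theme.css"];
const combined = parts
  .map((file) => `/* ${file} */\n` + readFileSync(file, "utf8"))
  .join("\n");

writeFileSync("site.css", combined);
// The page now needs only: <link rel="stylesheet" href="site.css">
console.log(`Wrote site.css from ${parts.length} files (${combined.length} bytes)`);
```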

How did the three stylesheets come about in the first place? Perhaps someone thought it was much more modular to organise things this way, or the stylesheets came from three independently developed parts of the website. Programmers are often taught that modularity is good. Put common code in libraries, they’re told. But this results in lots of extra work for the web browser. It is not unlike how an OS program loader has to struggle with runtime linking of dynamic libraries, except that instead of pulling stuff from local disks, the web browser is fetching stuff from 200 ms away.

When explained this way, it may just make perfect sense.

There are other little nuances, such as avoiding HTTP redirects. One common cause of unintended redirects is a missing trailing slash in a URL that should otherwise have one. This causes the web server to send an HTTP 301 redirect response telling the web browser to retry with the properly constructed URL. It’s a wasted HTTP request.
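If you want to spot these wasted round trips yourself, a quick sketch like the one below will do. It assumes a Node 18+ runtime, where fetch with redirect set to "manual" hands back the raw 3xx response; the example URL is made up.

```typescript
// Sketch: check whether a URL costs an extra round trip because the server
// answers with a 301/302 (for example, to add a missing trailing slash).
// Run with Node 18+; browsers hide manual-redirect responses.
async function checkRedirect(url: string): Promise<void> {
  const res = await fetch(url, { redirect: "manual" });
  if (res.status === 301 || res.status === 302) {
    console.log(`${url} -> ${res.status} redirect to ${res.headers.get("location")}`);
  } else {
    console.log(`${url} served directly with status ${res.status}`);
  }
}

// A URL missing its trailing slash often triggers the extra hop:
checkRedirect("https://example.com/blog").catch(console.error);
```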

Then there are other more complex considerations, such as whether stylesheets and scripts should be externalised. There are certainly pros and cons either way. Like the earlier example with multiple stylesheets, externalising such resources could be good modular practice. However, depending on the situation, such externalisation could result in unnecessary HTTP requests.

In fact, one common optimisation recommendation is to inline small images into the HTML file itself, i.e. instead of loading an image externally, the image is encoded directly into the HTML file. Does that sound like unnecessary bloat added to the HTML file? Well, you’ve got to weigh it against the cost of an extra HTTP request.
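For illustration, here’s one way of doing the inlining at build time, sketched in TypeScript with a made-up file name. Keep in mind that base64 encoding adds roughly a third to the image size, so this only pays off for small images.

```typescript
// Sketch: encode a small image as a base64 data URI so it ships inside the
// HTML instead of costing a separate HTTP request. File name is hypothetical.
import { readFileSync } from "node:fs";

function toDataUri(path: string, mimeType: string): string {
  const base64 = readFileSync(path).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}

const icon = toDataUri("icon.png", "image/png");
// Drop the result straight into the markup in place of an external reference:
console.log(`<img src="${icon}" alt="icon">`);
```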

Unfortunately, programming school may teach one thing, but real life tells you that you’ve got to adapt.

It can get quite frustrating trying to convince people that you understand the problem better than they do. Like how the problem isn’t with the network or the server, but simply a poorly designed website. Sometimes you don’t know the solution, but at least you know what the problem is. However, people in the upper layers prefer to throw out random questions and pin the blame on a variety of things. The firewall, for example: perhaps it is unable to cope with the traffic. Or the load balancer: perhaps there’s something wrong with it.

A common template for such arguments goes like this. The website worked fine in development. It also worked fine in UAT. It passed all tests in QAT. It was even subjected to stress testing. Everything checked out fine. However, when it was moved into production, the website died. Obviously, some hardware in production is different and is therefore to blame for all the problems. In most organisations, QAT is not an exact replica of production, so those differences automatically get assigned the blame.

It is true that switches, routers, firewalls and load balancers can all contribute some latencies and delays. It is also probably true that their configurations may be non-optimal. The same can be said of storage and servers.

However, any marginal performance gains you get out of tweaking these layers are likely to have a negligible impact on the end-user experience, particularly when PageSpeed Insights and YSlow tell you your website has major issues. Convincing the website designer, or the architect who orchestrated the entire system, that they are wrong, however, can prove to be a real challenge.

Another comment I want to make about Google PageSpeed Insights is that it helps you test your website’s user experience in other ways, apart from simply its page load speed. A large proportion of Internet users are now on mobile devices instead of traditional computers. Google has confirmed that in 10 countries, including the US and Japan, there are more Google searches from smartphones than from desktop computers. If you haven’t begun to do so, it’s high time you checked whether your website works for mobile users. Does your webpage resize properly for narrower screens? Are links clickable with fat fingers, as opposed to highly accurate mouse pointers?

Unlike packaged software, which is typically tested only in the lab and in limited field trials, websites and web apps have the benefit of living on the Internet, where telemetry can be beamed back to provide real-time analysis of app performance and user behaviour, and updates can be pushed out universally and automatically. Hence, website designers cannot think of their work as just the one-time development effort before launch, plus the occasional maintenance required after launch. They should instead expect a lot of testing, optimising and tweaking to continue once the website has gone live.
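To give a flavour of how light such telemetry can be, here’s a small browser-side sketch using the standard Navigation Timing and sendBeacon APIs. The /telemetry endpoint is made up, and a real setup would add sampling, consent and more fields.

```typescript
// Sketch (browser-side): beam basic page-load telemetry back to the site once
// loading has finished. The /telemetry endpoint is hypothetical.
window.addEventListener("load", () => {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return;
  const payload = JSON.stringify({
    page: location.pathname,
    loadTimeMs: Math.round(nav.loadEventEnd - nav.startTime),
    transferBytes: nav.transferSize,
  });
  // sendBeacon queues the request without holding up the page or the user.
  navigator.sendBeacon("/telemetry", payload);
});
```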

I digress. Those are good points to think about, though. It’s important, however, to remember that what counts is the experience of the end user. There is little point in improving latency, requests per second, and other such metrics that many of us love to look at, when at the end of the day it doesn’t translate into an improved end-user experience. You, the web developer, web designer or system administrator, are not the end user, not even when you think you’re mucking around with a web browser like other users do.

There are a couple of test sites that I turn to. I’ve already mentioned GTmetrix and Google PageSpeed Insights. Another favourite of mine is the Pingdom Website Speed Test. It actually seems to be a subset of what GTmetrix does, but there’s no harm in having yet another site providing a view of your website’s performance. You can also look at WebPageTest and WebSiteTest.

Are there quick solutions for fixing slow websites? There’s no fix that will make magic happen. However, if I had to make a recommendation, the Google PageSpeed Module does a reasonably good job of fixing some things automatically for you. I blogged about using PageSpeed Module recently, and it’s really worthwhile if you have your own Apache or Nginx web server that you can configure.

Allow me to share one last case study of a website I had to look at. It was a web-based game where users had to click lots of things rapidly in their browser, easily at a rate of about 3 to 5 clicks per second. Every click translated into an AJAX call, which meant one HTTP request for every click. This is another great example of how such a website could work in development and testing, where at most there were a handful of users, all of them local. But what happened when the website went live? There were maybe a thousand “players” online simultaneously, connecting from all over the world, sending an aggregate of many thousands of HTTP requests per second, some of them from over 300 ms of Internet latency away. Surely the designers could have designed the game to work better? Yup, if only they deeply understood what goes on behind the scenes.
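One way out, sketched below purely as an illustration, is to batch the clicks on the client and flush them in a single request every few hundred milliseconds. The /clicks endpoint and payload shape are made up, and the real game would also need server-side changes to accept batches.

```typescript
// Sketch (browser-side): accumulate rapid clicks and send them as one batched
// HTTP request every 250 ms, instead of one request per click.
const pending: { x: number; y: number; t: number }[] = [];

document.addEventListener("click", (e) => {
  pending.push({ x: e.clientX, y: e.clientY, t: Date.now() });
});

setInterval(() => {
  if (pending.length === 0) return;
  const batch = pending.splice(0, pending.length); // take everything queued so far
  fetch("/clicks", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch),
  }).catch(console.error);
}, 250);
```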

I’ve not begun to touch on more enterprise-y topics, like using Content Distribution Networks to push content closer to users, or even simply distributing content across multiple domains to take advantage of more parallelism in HTTP fetches. But I think this has been an interesting enough eye-opener on why some websites are slow and, to some extent, how they can be fixed.
