Burberry X Google Hackathons

While working at Burberry I had a few opportunities to work directly with some of the lovely people at Google, focussing on site speed, resilience, and improving the user experience.

Working with Google representatives, we organised a series of fascinating hackathon events on themes such as improving perceived performance and not losing a user who goes offline.

Perception of speed

These sessions started with an introduction to that hackathon’s theme. For example, a “Perceived Performance” session described a 2009 study at an airport in Texas, where complaints were abnormally high even though the baggage delivery time was below the average wait time. The cause was eventually traced to the arrival terminal being only one minute from the baggage claim area: passengers were fast to disembark the plane and get through the terminal, and then stood around waiting for their luggage.

The solution that most improved passenger happiness was to increase the distance between the arrival terminal and baggage reclaim. Although the delay between arriving and receiving baggage stayed largely the same, it was now spent actively moving; passengers no longer felt they were wasting their time passively waiting around.

This was a great example of a useful principle: if you can’t make something actually fast, make it feel fast enough.

Offline (or is it?)

Some other interesting hacks related to the user being offline; something that is more common than teams might expect. Mobile traffic has soared over the past few years, leaving desktop traffic in its wake.

One huge difference between desktop and mobile is connectivity: Wi-Fi and ethernet are far more reliable than traversing the physical space that makes up a mobile network – jumping between cell towers, moving through built-up areas, and dropping into underground transport dead zones.

The default website reaction to this would be a browser error page; however, given how often a mobile user might see this on your site, it’s certainly worth considering mitigations.

The first hack worth mentioning was one where I suggested a couple of new engineers attend; surprised at the invite, they asked what they should do!

My suggestion was to implement client-side caching via a service worker for any visited product detail page (PDP), and to alter the product listing page (PLP) to greyscale the images of any products that were not in the cache when the user is offline. This allows the user to keep hopping between the listing page and the detail pages of products they’ve already seen – which they can clearly identify – until they’re back online.

(Mock-up of how a “greyscale if not cached” product listing page could look)
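A rough sketch of how the two pieces could fit together (the URL pattern, class names, and cache name here are illustrative, not the actual hackathon code):

```js
// sw.js – cache every visited product detail page (PDP) so it can be
// re-opened while offline. '/products/' is an assumed URL pattern.
const PDP_CACHE = 'pdp-cache-v1';

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.pathname.startsWith('/products/')) {
    event.respondWith(
      fetch(event.request)
        .then((response) => {
          // Clone the response: one copy for the cache, one for the page.
          const copy = response.clone();
          caches.open(PDP_CACHE).then((cache) => cache.put(event.request, copy));
          return response;
        })
        // Offline? Fall back to the cached copy, if we have one.
        .catch(() => caches.match(event.request))
    );
  }
});

// plp.js – on the product listing page (PLP), grey out any product whose
// PDP isn't in the cache once the user goes offline.
async function markUncachedProducts() {
  const cache = await caches.open('pdp-cache-v1');
  for (const link of document.querySelectorAll('a.product-card')) {
    if (!(await cache.match(link.href))) {
      link.classList.add('greyscale'); // e.g. CSS: filter: grayscale(1)
    }
  }
}
window.addEventListener('offline', markUncachedProducts);
```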

This suggestion was implemented beautifully, and that team won the multi-company hackathon! Go team!


The second mentionable hack, although developed at a much later hackathon, would turn out to be complementary to the first.

By utilising a service worker and custom JavaScript (the Background Sync API doesn’t work in Safari, which was the biggest mobile market for ecommerce), the user could still add products to their basket while offline, with the basket synchronising automatically once the connection was restored. This was slick, and using it by default could avoid annoying “wait, did that add to my bag? Should I click again?” situations. (I also gave a talk about Optimistic UI at some point, which relates to this.)
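A minimal sketch of the approach, assuming a hypothetical /api/basket endpoint and using localStorage as the offline queue (the real hack was rather more polished):

```js
// basket-queue.js – try the add-to-bag call; if it fails (e.g. offline),
// queue it in localStorage and replay once back online.
// '/api/basket' is a placeholder endpoint.
const QUEUE_KEY = 'pending-basket-adds';

const loadQueue = () => JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
const saveQueue = (q) => localStorage.setItem(QUEUE_KEY, JSON.stringify(q));

async function addToBasket(productId) {
  try {
    await fetch('/api/basket', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ productId }),
    });
  } catch {
    // Network failed: queue the add, and update the bag UI optimistically
    // so the user isn't left wondering whether to click again.
    saveQueue([...loadQueue(), productId]);
  }
}

// Replay anything still queued once the connection is restored.
window.addEventListener('online', () => {
  const pending = loadQueue();
  saveQueue([]);
  pending.forEach((productId) => addToBasket(productId));
});
```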

This could potentially be progressively enhanced further, simplifying it to use the newer Periodic Background Sync API, or the Service Worker Background Sync API, in browsers that support them. I’ll leave it as an exercise for the reader! Fancy trying it?
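For anyone taking up the exercise, a possible starting point in browsers that do support the Background Sync API (the tag name and replay helper are hypothetical):

```js
// In the page: after queuing the add-to-bag request, ask the browser to
// fire a 'sync' event once it's back online (feature-detect first;
// Safari doesn't support this).
navigator.serviceWorker.ready.then((registration) => {
  if ('sync' in registration) {
    registration.sync.register('basket-sync');
  }
});

// In the service worker: replay the queue when connectivity returns,
// even if the page has since been closed.
self.addEventListener('sync', (event) => {
  if (event.tag === 'basket-sync') {
    event.waitUntil(replayQueuedBasketAdds()); // hypothetical replay helper
  }
});
```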

Being able to involve a large subset of the digital department – including all roles, not just engineers – was fascinating. Getting everyone onto a shared context, and seeing what is actually possible in browsers with a relatively small amount of code and effort, was an eye-opener for many.

If you are unsure of the value of a hackathon, or not sure how to run one, contact someone who has done it before to help define the desired outcomes, set the agenda, and keep things running smoothly. The lovely people at Google did this beautifully for us at Burberry, and I’m happy to advise too!

Google’s Chrome User Experience Data in WebPageTest

This article assumes that you know the basics of AWS, WebPageTest, SSH, and at least one Linux text editor.

(Hero image: Chrome User Experience Report logo + WebPageTest logo + Core Web Vitals’ LCP thresholds. Quite a busy hero image, I’ll admit.)

When talking to people about website performance stats, I’ll usually split it into Real User Metrics (RUM) and Automated (Synthetic/Test Lab):

  • RUM is performance data reported from the website you own into the analytics tool you have integrated.
  • Automated covers scripted tests that you run from your own performance testing tool against any website you like.

RUM is great: you get real performance details from real user devices, and can investigate how performance differs across many different segments.

For example, iPhone vs Android, Mac vs Windows, Mobile vs Desktop, Chrome vs Firefox, UK vs US, even down to ISP and connection type, in order to see who is getting a good experience and whose experience can be improved.
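As a rough illustration of the RUM side, here is a sketch that beacons Largest Contentful Paint – plus a couple of the dimensions above – to a placeholder collector endpoint:

```js
// rum.js – report LCP from real user devices to your own analytics
// endpoint ('/analytics/rum' is a placeholder).
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lcp = entries[entries.length - 1]; // latest LCP candidate
  navigator.sendBeacon('/analytics/rum', JSON.stringify({
    metric: 'LCP',
    value: lcp.startTime,
    ua: navigator.userAgent,                         // device/browser splits
    connection: navigator.connection?.effectiveType, // e.g. '4g' (Chrome only)
  }));
}).observe({ type: 'largest-contentful-paint', buffered: true });
```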

This data is invaluable in prioritising performance improvements, since you can tie it back to the approximate number of users it will affect, and therefore the impact on your business.

There are loads of vendors who can provide this for you (I’ve used many of them), or you can write your own – if you’re a glutton for punishment (and high AWS bills – ask me how I know…😁)

However, since this is measured on your own website and reported into your own tooling, you can only see such real-world performance detail for your own website; no real-world user experience data from your competitors.

Automated tests are great: you get detailed measurements of any website you can access – your own, your competitors’, or basically anyone’s – in a repeatable fashion, so that you can track changes over time.

You can have as many automated tests as you like, you can test from wherever you’re able to set up a test agent, and with whatever device you’re able to automate or emulate.
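For example, kicking off a run against any URL through the WebPageTest HTTP API only takes a few lines (the API key and location below are placeholders):

```js
// wpt-run.js – kick off a WebPageTest run via its HTTP API.
const params = new URLSearchParams({
  url: 'https://www.example.com/', // yours, or a competitor's
  k: 'YOUR_API_KEY',
  location: 'London:Chrome',       // wherever you have a test agent
  runs: '3',                       // repeat runs for more stable medians
  f: 'json',
});

fetch(`https://www.webpagetest.org/runtest.php?${params}`)
  .then((res) => res.json())
  .then(({ data }) => console.log('Results will appear at:', data.userUrl));
```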

However, since these are all automated tests running because you said so, you can’t use them to understand how users are actually using your site: on which devices, which devices underperform others, and from where.

Again, there are loads of vendors who can provide this for you; writing your own is a bit more of a headache though – I wouldn’t recommend it, especially while WebPageTest continues to be free to self-host.

What if you could get some of the usual key performance metrics you’re used to with RUM, but for sites you don’t own such as your competitors?

In this article I’ll talk about the Google Chrome User Experience dataset and how to use it in your performance test setup to find the intersection of RUM and Automated performance test results, wiring it all up into your WebPageTest setup! Continue reading

Virtual Shop Assistant Chatbot with Amazing Image Recognition

There has been significant progress in “deep learning”, AI, and image recognition over the past couple of years; Google, Microsoft, and Amazon each have their own service offering. But what are those services like? How useful are they?

Everyone’s having a go at making a chatbot this year (and if you’re not, perhaps you should contact me for consultancy or training!) – and although there are some great examples out there, I’ve not seen much in the e-commerce sector worth talking about.

In this article I’m going to show you a cool use case for an image recognition e-commerce chatbot, via a couple of clever APIs wired together with Bot Framework.

Continue reading