WebPageTest Private Instance: 2021 Edition


The fantastic WebPageTest, free to use and public, has supported setting up your own private instance for many years; I wrote this up a while back, and scripted a Terraform version to make it as easy and automated as possible.

For AWS it was just a case of creating an EC2 instance (other installation options are available) with a predefined WPT server AMI (Amazon Machine Image), adding a few configuration options, and boom – your very own, autoscaling, globally distributed, website performance testing solution! New test agents would spin up automatically in other AWS regions, all based on WebPageTest Agent AMIs.

In 2020 WebPageTest was bought by Catchpoint and we finally saw improvements being made, pull requests being closed, and the WebPageTest UI getting a huge update; things were looking great for WebPageTest enthusiasts! If you haven’t heard of Catchpoint before, they’re a company all about global network and web application monitoring, so a good match for WebPageTest.

Unfortunately, this resulted in the handy WebPageTest server EC2 AMIs no longer existing. If you want your own private installation, you now need to build your own WebPageTest server from a base OS. It can be a bit tricky, though it gives you a greater understanding of how it works under the hood, so hopefully you’ll feel more confident extending your installation in future.

In this article, I’ll show you how to create a WebPageTest private instance on AWS from scratch (no AMI), create your own private WebPageTest agents using the latest and greatest version of WebPageTest, and wire it all up.

Continue reading

Automating WebPageTest via the WebPageTest API


WebPageTest is incredible. It allows us to visit a web page, enter a few values, and then produce performance results from any destination around the world. Best of all, you can do this in many different browser configurations – even on many different real devices.

If you’re doing this a lot, then using that simple web form can become the bottleneck to rapidly iterating on your web performance improvements.

In this article I’ll show you how to easily execute your web performance tests in a simple, repeatable, automated way using the WebPageTest API.
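As a flavour of what the article covers, here’s a minimal PowerShell sketch driving a test through the public webpagetest.org API; the API key and target URL are placeholders you’d swap for your own:

    # Kick off a test via the WebPageTest HTTP API (placeholder API key and URL)
    $apiKey = "YOUR_API_KEY"
    $url    = "https://www.example.com"
    $run    = Invoke-RestMethod "https://www.webpagetest.org/runtest.php?url=$url&k=$apiKey&f=json"

    # Poll until the test completes (statusCode 200 means done)
    do {
        Start-Sleep -Seconds 10
        $status = Invoke-RestMethod "https://www.webpagetest.org/testStatus.php?f=json&test=$($run.data.testId)"
    } while ($status.statusCode -lt 200)

    # Fetch the full JSON results and pull out a metric
    $result = Invoke-RestMethod $run.data.jsonUrl
    $result.data.median.firstView.loadTime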

Continue reading

A Step by Step Guide to using Terraform to define an AutoScaling Private WebPageTest instance in code

Update: November 2021
This article is out of date; there are no longer WebPageTest Server AMIs, so you need to install WPT on a base OS. There is an updated article here: Automate Your WebPageTest Private Instance With Terraform: 2021 Edition


In a previous article I went through the steps needed to create your own private, autoscaling, WebPageTest setup in Amazon AWS. It wasn’t particularly complicated, but it was quite manual; I don’t like pointing and clicking in a GUI since I can’t easily put it in version control and run it again and again on demand.

Fortunately, whatever you create within AWS can be described using a language called CloudFormation which allows you to define your infrastructure as code.

Unfortunately it’s not easy to understand (in my opinion!) and I could never quite get my head around it, which annoyed me no end.

In this article I’ll show you how to use Terraform to define your private autoscaling WebPageTest setup in easily understandable infrastructure as code, enabling an effortless and reproducible web performance testing setup, which you can then fearlessly edit and improve!

Continue reading

Unit Testing PowerShell with Pester

I write a lot of PowerShell these days; it’s my go-to language for quick jobs that need to interact with external systems, such as an API, a DB, or the file system.

Nothing to configure, nothing to deploy, nothing to set up really; just hack a script together and run it. Perfect for one-off little tasks.

I’ve used it for all manner of things in my career so far, most notably for Azure automation back before Azure had decent automation in place. We’re talking pre-Resource Manager environment creation stuff.

I would tie together a suite of separate scripts which would individually:

  • create a Storage account,
  • get the key for the Storage account,
  • create a DB instance,
  • execute a DB initialisation script,
  • create a Service Bus,
  • execute a Service Bus initialisation script,
  • deploy Cloud Services,
  • start/stop/restart those services

Tie that lot together and I could spin up an entire environment easily.
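The article digs into testing scripts like these with Pester. As a minimal sketch of what that looks like (using modern Pester v5 syntax; the function under test is hypothetical):

    BeforeAll {
        # Hypothetical function under test
        function Get-StorageAccountName ($environment) {
            "mystorage$environment"
        }
    }

    # Describe/It blocks with a Should assertion
    Describe "Get-StorageAccountName" {
        It "appends the environment to the base name" {
            Get-StorageAccountName "dev" | Should -Be "mystoragedev"
        }
    }

Run it with Invoke-Pester against the script file and you get a pass/fail result per It block.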


Continue reading

Upload to Azure Blob Storage using PowerShell

I needed to automate the process of uploading images to Azure Blob Storage recently, and found that using something like the excellent Azure Storage Explorer would not set the Content Type correctly (defaulting to “application/octet-stream”). As such, here’s a little script to loop through a directory and do a basic check on extensions to set the content type for PNG or JPEG.
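Something along these lines, assuming the classic Azure PowerShell module; the account name, key, container, and directory are placeholders:

    # Placeholder storage account details – swap in your own
    $context = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "YOUR_KEY"

    Get-ChildItem "C:\images" -File | ForEach-Object {
        # Basic extension check to pick the right content type
        $contentType = switch ($_.Extension.ToLower()) {
            ".png"  { "image/png" }
            ".jpg"  { "image/jpeg" }
            ".jpeg" { "image/jpeg" }
            default { "application/octet-stream" }
        }

        Set-AzureStorageBlobContent -File $_.FullName -Container "images" -Blob $_.Name `
            -Properties @{ "ContentType" = $contentType } -Context $context
    }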

The magic is in Set-AzureStorageBlobContent.

Don’t forget to do the usual dance of calling the following!

These import your publish settings file and set which subscription is currently active:

  • Import-AzurePublishSettingsFile
  • Set-AzureSubscription
  • Select-AzureSubscription
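In practice that dance looks something like this (the file path, subscription name, and storage account are placeholders):

    # Import the .publishsettings file downloaded from the Azure portal
    Import-AzurePublishSettingsFile "C:\azure\mysub.publishsettings"

    # Associate a default storage account with the subscription, then make it active
    Set-AzureSubscription -SubscriptionName "MySubscription" -CurrentStorageAccountName "myaccount"
    Select-AzureSubscription -SubscriptionName "MySubscription"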

Update

Actually, the Aug 2014 version of Azure Storage Explorer already sets the content type correctly upon upload. Oh well. Still a handy automation script though!

DevOpsDays

My thoughts from the recent #devopsdays conference in London; notes, inspiration, todos, ahas, and OMGs.

(Since my handwriting was so poor on the first day and my name badge was illegible, this was the badge for day #2!)
@rposbo name badge

Wow. There’s a lot to learn about this whole DevOps movement, but I actually feel that I can contribute to it, having a pretty broad range of experience; I’m mainly development, but over the past decade or so I have turned my hand to basic sysadmin-ing, DBA-ing, QA-ing, BA-ing, dev managing – almost all of the areas that need to be covered. I like to think of myself as a catalyst for implementation 🙂

I had initially been concerned that I lacked depth in skills like virtualisation and automation, but I’ve realised that systems like Puppet, Chef, and Vagrant are potential solutions to a problem of configuration management (CM) and automation. Understanding the need for these tools is half the battle; being a specialist in them isn’t strictly necessary, but understanding them to the level of a basic implementation would be useful.

I’m already enrolled on a bunch of Puppet webinars, so I will be getting stuck into that more soon.

Even though I’m pretty new to this, chatting with some key people did validate my thoughts that people shouldn’t be referring to “devops” in their presentations as a person or a role; it’s an approach/framework/culture.

Intro

What happened at #devopsdays?

After being drawn in to the Basho booth by a raspberrypi/arduino/riak mars-rover-esque robot and chatting with John Clapham, who I didn’t realise was about to do the second presentation of the day (!), I ended up deciding to sit right at the front (like a swot).

This turned out to be a great move as I ended up chatting with the organisers and even one of the founders of the DevOps movement, Gene Kim. Really nice guy, looks exactly like his Twitter avatar (unlike some of us.. ahem..), and he even took a picture of the two of us together and let me know how to get onto the book review group for the long-awaited DevOps Cookbook – bloody nice chap.

Chris Little, from BMC, is a really interesting guy to talk to as well – gave me a great overview of the background to DevOps, its “founders”, and helped me understand both its role in a company and in the future of the industry.

A pic Chris took from the stage
Try to play Where’s Wally and find lil old meeee…!

Presentations

Each day there were four main presentations to start with, then some “ignite” talks.

I’m not going to try to go into the details of these presentations as 1) I’m lazy, 2) I tried to take notes on my phone and the battery died, and 3) other people have done it much better than I would have anyway!

Essentially, I found several of the presentations difficult to really grasp, as I felt I was lacking a frame of reference. Perhaps this was just me not having enough of an Ops background, but I felt that the presentations could have benefited from a slide or two at the start laying the foundation for the remainder of the talk.

I was very glad to see at least one full presentation being completely non-technical, and instead focussing on the culture side of DevOps.

OpenSpaces

The afternoon of each day was put aside for OpenSpaces: everyone has the opportunity to propose a discussion topic and pop it on a post-it; anyone who wants to vote takes a marker and adds a dot to the post-its they’re interested in. Those with the most dots get allocated a room and a time, and the discussions commence.

I initially thought this was a bit of a cop-out: I paid to attend a conference and half of it is the other people who paid talking to each other?! Rip off!

However, the discussions that I chose to attend helped me understand much more about monitoring, logging, database CI, and Puppet/Chef/CFEngine/Vagrant basics than I could have got from the main presentations, so I’m a convert.

Summary

DevOps is an exciting opportunity to technically innovate around CI, automation, delivery pipelines, etc., and also to work with the business to introduce concepts like Impact Mapping.

This is where I’ll be focussing my professional efforts for the near future. I think there’s a lot of potential to relieve stress on development and operations teams, and to ease frustration for the business teams that define the work.