Introduction to GruntJS for Visual Studio

As a developer, there are often tasks that we need to automate to make our daily lives easier. You may have heard about GruntJS or even Gulp before.

In this article, I am going to run through a quick intro to successfully using gruntjs to automate your build process within the usual IDE of .Net developers: Visual Studio.

gruntjs (Grunt)

gruntjs logo

What is it?

Gruntjs is a JavaScript task runner; one of a few that exist, but one of only two to become mainstream – the other being Gulp. Both do pretty similar things, and both have great support and great communities.

In short: gulp = tasks defined in code, whereas grunt = tasks defined in configuration.

It’s been around for a while – check out this first commit from 2011!

What does it do?

A JavaScript task runner allows you to define a set of tasks, subtasks, and dependent tasks, and execute these tasks at a time of your choosing; on demand, before or after a specific event, or any time a file changes, for example.

These tasks range from CSS and JS minification and combination, image optimisation, HTML minification, and HTML generation, through to redacting code and running tests. A large number of the available plugins are in fact grunt wrappers around existing executables, meaning you can now run those programs from a chain of tasks; for example: LESS, WebSocket, ADB, Jira, XCode, SASS, RoboCopy.

The list goes on and on – and you can even add your own to it!

How does it work?

GruntJS is a nodejs module, and as such is installed via npm (the node package manager), which also means you need both npm and nodejs installed to use Grunt.

nodejs logo npm logo

By installing it globally or just into your project directory you’re able to execute it from the command line (or other places), and it will check the current directory for a specific file called “gruntfile.js”. It is in this gruntfile.js that you will specify and configure your tasks and the order in which you would like them to run. Each of those tasks is also a nodejs module, so will also need to be installed via npm and referenced in the package.json file.

The package.json is not a grunt-specific file, but an npm-specific file; when you clone a repo containing grunt tasks, you must first ensure all development dependencies are met by running npm install, which installs the modules referenced within this package.json file. It can also be used by grunt to pull in project settings, configuration, and data for use within the various grunt tasks; for example, adding a copyright header to each file with your name and the current date.

Using grunt – WITHOUT Visual Studio

Sounds AMAAAAYYZING, right? So how can you get your grubby mitts on it? I’ve mentioned a few dependencies before, but here they all are:

  • nodejs – grunt is a nodejs module, so needs to run on nodejs.
  • npm – grunt is a nodejs module and depends on many other nodejs packages; sort of makes sense that you’d need a nodejs package manager for this job, eh?
  • grunt-cli – the grunt command line tool, which is needed to actually run grunt tasks
  • package.json – the package dependencies and project information, for npm to know what to install
  • gruntfile.js – the guts of the operation; where we configure the tasks we want to run and when.

First things first

You need to install nodejs and npm (npm comes bundled with the nodejs installer).

grunt-cli

Now you’ve got node and npm, open a terminal and fire off npm install -g grunt-cli to install grunt globally. (You could skip this step and just create a package.json with grunt as a dependency, then run npm install in that directory.)

Configuration

The package.json contains information about your project and its various package dependencies. Think of it as a slice of NuGet’s packages.config and a sprinkle of your project’s .sln file; it contains project-specific data, such as the name, author’s name, repo location, and description, as well as defining the modules on which your project depends in order to build and run.

Create a package.json file with some simple configuration, such as that used on the gruntjs site:

{
  "name": "my-project-name",
  "version": "0.1.0"
}

Or you could run npm init, but that asks for lots more info than we really need here, so the generated package.json ends up a bit bloated:

npm init

So, what’s going on in the code above? We’re setting a name for our project and a version. Now we could just add in a few more lines and run npm install to go and get those modules for us, for example:

{
  "name": "my-project-name",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-contrib-jshint": "~0.10.0",
    "grunt-contrib-nodeunit": "~0.4.1",
    "grunt-contrib-uglify": "~0.5.0"
 }
}

Here we’re saying what we need to run our project; if you’re writing a nodejs or iojs project then you’ll have lots of your own stuff referenced in here, but for us .Net peeps it’s just the things our grunt tasks need.

Within devDependencies we’re firstly saying we use grunt, and we want at least version 0.4.5; the tilde versioning means we want version 0.4.5 or above, up to but not including 0.5.0.

Then we’re saying this project also needs jshint, nodeunit, and uglify.

A note on packages: “grunt-contrib” packages are those verified and officially maintained by the grunt team.
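A quick aside on version ranges, since both tilde and caret prefixes crop up in these examples; roughly speaking, npm’s semver ranges behave like this (comments are for illustration only – real JSON doesn’t allow them):

"grunt": "0.4.5"    // exactly 0.4.5
"grunt": "~0.4.5"   // >= 0.4.5 and < 0.5.0 (patch updates only)
"grunt": "^0.4.5"   // also >= 0.4.5 and < 0.5.0, since the major version is 0
"grunt": "^1.2.3"   // >= 1.2.3 and < 2.0.0 once a package passes 1.0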

But what if we don’t want to edit the file by hand, look up the right version on the npm website, and then run npm install each time to actually pull the packages down? There’s another way of doing this.

Rewind back to when we just had this:

{
  "name": "my-project-name",
  "version": "0.1.0"
}

Now if you were to run the following commands, you would have the same resulting package.json as before:

npm install grunt --save-dev
npm install grunt-contrib-jshint --save-dev
npm install grunt-contrib-nodeunit --save-dev
npm install grunt-contrib-uglify --save-dev

However, this time they’re already installed and their correct versions are already set in your package.json file.

Below is an example package.json for an autogenerated flat-file website:

{
  "name": "webperf",
  "description": "Website collecting articles and interviews relating to web performance",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-directory-to-html": "^0.2.0",
    "grunt-markdown": "^0.7.0"
  }
}

In the example here we’re starting out by depending on grunt itself plus two other modules: one that creates an html listing from a directory structure, and one that generates html from markdown files.

Last step – gruntfile.js

Now you can create a gruntfile.js and paste in something like the example from the gruntjs site:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    uglify: {
      options: {
        banner: '/*! <%= pkg.name %> <%= grunt.template.today("yyyy-mm-dd") %> */\n'
      },
      build: {
        src: 'src/<%= pkg.name %>.js',
        dest: 'build/<%= pkg.name %>.min.js'
      }
    }
  });

  // Load the plugin that provides the "uglify" task.
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Default task(s).
  grunt.registerTask('default', ['uglify']);

};

So what’s happening in here? The standard nodejs module.exports pattern is used to expose the configuration as a function. That function reads in the package.json file and stores the resulting object under the pkg key of the grunt configuration.

Then it gets interesting; we configure the uglify task (provided by the grunt-contrib-uglify package), setting a banner for the minified js file that contains the package name – as specified in package.json – and today’s date, then specifying a “target” called build with source and destination file paths.

After the configuration is specified, we tell grunt to load the grunt-contrib-uglify npm module (which must already be installed locally or globally) and then register a default task that calls uglify.

BINGO. The JavaScript file src/my-project-name.js will be minified, have the banner added, and the result written to build/my-project-name.min.js any time we run grunt.
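With grunt-cli installed, you run the tasks from the project directory on the command line; the following should all work with the gruntfile above:

grunt               # runs the "default" task, which we pointed at uglify
grunt uglify        # runs the uglify task directly
grunt uglify:build  # runs only the "build" target of the uglify task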

Example gruntfile.js for an autogenerated website

module.exports = function(grunt) {

  grunt.initConfig({
    // convert each markdown draft into an html article, using a template
    markdown: {
      all: {
        files: [
          {
            cwd: '_drafts',
            expand: true,
            src: '*.md',
            dest: 'articles/',
            ext: '.html'
          }
        ]
      },
      options: {
        template: 'templates/article.html',
        preCompile: function(src, context) {
          // pull the title out of an "@-title: ..." line in the draft
          var matcher = src.match(/@-title:\s?([^@:\n]+)\n/i);
          context.title = matcher && matcher.length > 1 && matcher[1];
        },
        markdownOptions: {
          gfm: false,
          highlight: 'auto'
        }
      }
    },
    // build a listing page from the generated articles directory
    to_html: {
      build: {
        options: {
          useFileNameAsTitle: true,
          rootDirectory: 'articles',
          template: grunt.file.read('templates/listing.hbs'),
          templatingLanguage: 'handlebars'
        },
        files: {
          'articles.html': 'articles/*.html'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-markdown');
  grunt.loadNpmTasks('grunt-directory-to-html');

  grunt.registerTask('default', ['markdown', 'to_html']);

};

This one will convert all markdown files in a _drafts directory to html based on a template html file (grunt-markdown), then create a listing page based on the directory structure and a template handlebars file (grunt-directory-to-html).
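If you’re wondering what that preCompile step is doing, here’s a quick throwaway node snippet (using a made-up draft string) that exercises the same regex; the captured group is what ends up as the article title in the template:

var src = '@-title: My Example Article\nSome markdown content...';
var matcher = src.match(/@-title:\s?([^@:\n]+)\n/i);
console.log(matcher && matcher.length > 1 && matcher[1]); // "My Example Article"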

Using grunt – WITH Visual Studio

Prerequisites

You still need nodejs, npm, and grunt-cli, so make sure you’ve installed nodejs and run npm install -g grunt-cli.

To use task runners within Visual Studio you first need to have a version that supports them. If you already have VS 2015 you can skip these install sections.

Visual Studio 2013.3 or above

If you have VS 2013 then you need to make sure you have at least Update 3 (free upgrades!). Go and install it from your pals at Microsoft.

This is a lengthy process, so remember to come back here once you’ve done it!

TRX Task Runner Explorer Extension

This gives your Visual Studio an extra window that displays all available tasks, as defined within your grunt or gulp file. Go and install it from the Visual Studio Gallery.

NPM Intellisense Extension

You can get extra powers for yourself if you install the intellisense extension, which makes using grunt in Visual Studio much easier. Go get it from the Visual Studio Gallery.

Grunt Launcher Extension

Even more extra powers: right-click on certain files in your solution to launch grunt, gulp, bower, and npm commands using the Grunt Launcher Extension.

Tasks Configuration

Create a new web project, or open an existing one, and add a package.json and a gruntfile.js.

Example package.json

{
  "name": "grunt-demo",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-contrib-uglify": "~0.5.0"
 }
}

Example gruntfile.js

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    uglify: {
      options: {
        banner: '/*! <%= pkg.name %> <%= grunt.template.today("yyyy-mm-dd") %> */\n'
      },
      build: {
        src: 'Scripts/bootstrap.js',
        dest: 'Scripts/build/bootstrap.min.js'
      }
    }
  });

  // Load the plugin that provides the "uglify" task.
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Default task(s).
  grunt.registerTask('default', ['uglify']);

};

Using The Task Runner Extension in Visual Studio

Up until this point there has been no real difference between working with or without Visual Studio; but here’s where it gets pretty cool.

If you installed everything mentioned above, then you’ll notice some cool stuff happening when you open a project that already contains a package.json.

The Grunt Launcher extension will “do a nuget” and attempt to restore your “devDependencies” npm packages when you open your project:

npm package restore

And the same extension will give you a right click option to force an npm install:

npm package restore - menu

This one also allows you to kick off your grunt tasks straight from a context menu on the gruntfile itself:

grunt launcher

Assuming you installed the intellisense extension, you now get things like auto-suggestion for npm package versions, along with handy tooltip explainers for what the version syntax actually means:

npm intellisense

If you’d like some more power over when the grunt tasks run, this is where the Task Runner Explorer extension comes in to play:

task runner

This gives you a persistent window that lists your available grunt tasks and lets you kick any one of them off with a double click, showing the results in an output window.

task runner explorer output

This is the equivalent of running the same grunt tasks outside of Visual Studio.

What’s really quite cool with this extension is being able to configure when these tasks run automatically; your options are:

  • Before Build
  • After Build
  • Clean
  • Solution Open

task runner explorer

This means you can ensure that when you hit F5 in Visual Studio, all of your tasks will run to generate the output required to render your website before it is launched in a browser; or, when you execute a “Clean” on the solution, a task can fire off to delete temp directories or the output from the last task execution.
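For reference, the Task Runner Explorer stores these bindings as a comment at the top of the gruntfile itself; with the example above it would look something like this (the exact attribute names may differ between extension versions):

/// <binding BeforeBuild='default' />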

Summary

Grunt and Gulp are fantastic tools to help you bring automation into your projects; and now they’re supported in Visual Studio, so even we .Net developers have no excuse not to play around with them!

Have a go with the tools above, and let me know how you get on!

Top 5 Biggest Queries of 2014

During this year I became slightly addicted to the fantastic community site bigqueri.es; a site to help people playing around with the data available in Google’s BigQuery share their queries and get help, comments, and validation of their ideas.

A query can start a conversation which can end up refining or even changing the direction of the initial idea.

BigQuery contains a few different publicly available large datasets for you to query, including all of Wikipedia, Shakespeare’s works, and GitHub metadata.

HTTP Archive

The main use of bigqueri.es is for discussing the contents of the HTTP Archive (there are a few about other things, however) and that’s where I’ve been focussing my nerdiness.

What follows is a summary of the five most popular HTTP Archive queries created this year, by page views. I’m hoping that you find them as fascinating as I do, and perhaps even sign up at bigqueri.es to continue the conversation – or sign up for BigQuery and submit your own query for review.

Here they are, in reverse order:

5) 3rd party content: Who is guarding the cache? (1.5k views)

http://bigqueri.es/t/3rd-party-content-who-is-guarding-the-cache/182

Doug Sillars (@dougsillars) riffs on a previous query by Ilya Grigorik, investigating what percentage of requests come from 3rd parties, the total size of those requests (in MB), and how much of that content is cacheable.

I’ve run what I believe to be the same query over the entire year of 2014 and you can see the results below:

We can see that there’s a generally good show from the 3rd parties, with June and October being particularly highly cacheable; something appears to have happened in September though, as there’s a sudden drop-off after 80 of the top 100 sites, whereas in the other months we see that same drop-off after 90 sites.

4) Analyzing HTML, CSS, and JavaScript response bodies (2.4k views)

http://bigqueri.es/t/analyzing-html-css-and-javascript-response-bodies/442

Ilya Grigorik (@igrigorik) gets stuck into a recent addition to the HTTP Archive (in fact, it only exists for ONE run due to the sheer volume of data); the response bodies! Mental.

By searching within the response bodies themselves – such as raw HTML, Javascript, and CSS – you’re able to look inside the inner workings of each site. The field is just text and can be interrogated by applying regular expressions or “contains” type functions.

The query he references (actually created as an example query by Steve Souders (@souders)) examines the asynchronous vs synchronous usages of the Google Analytics tracking script, which tells us that there are 80,577 async uses, 44 sync uses, and a bizarre 6,707 uses that fall into neither category.

I’m working on several queries myself using the response body data; it’s amazing that this is even available for querying! Do be aware that if you’re using BigQuery for this you will very quickly use up your free usage! Try downloading the mysql archive if you’re serious.

3) Sites that deliver Images using gzip/deflate encoding (4.4k views)

http://bigqueri.es/t/sites-that-deliver-images-using-gzip-deflate-encoding/220

Paddy Ganti (@paddy_ganti) starts a great conversation by attempting to discover which domains are disobeying a guideline for reducing payload: don’t gzip images or other binary files, since their own compression algorithms will do a better job than gzip/deflate which might even result in a larger file. Yikes!

The query looks at each response’s content type, checking that it’s an image, and compares this with the content encoding, checking whether compression has been used.

There are over 19k compressed image responses coming from Akamai alone in the latest dataset:

Although you can see the results suggest a significant number of requests are gzip or deflate encoded images, the great discussion that follows sheds some light on the reasons for this.

2) Are Popular Websites Faster? (4.9k views)

http://bigqueri.es/t/are-popular-websites-faster/162

Doug Sillars (@dougsillars) has another popular query where he looks into the speed index of the most popular websites (using the “rank” column).

We’re all aware of the guideline of keeping page load time to a maximum of around 2 seconds, so do the “big sites” manage that better than the others?

If we graph the top 1000 sites – split into top 100, 100-500, and 500-1000 – and get a count of sites per Speed Index (displayed as a single whole number along the x-axis; e.g. 2 = SI 2000), we can see the relative performance of each group.

Top 100

Among the top 100 sites, between 25 and 30 sites have Speed Indexes around 2000-3000, then the distribution drops off sharply.

Top 100-500

Although the next 400 have over 60 sites each with a Speed Index of 2000 or 4000, and almost 90 sites at 3000, their drop-off is smoother and there’s a long tail out to 25000.

Top 500-1000

The next 500 have a similar pattern but a much less dramatic drop off, then a gentle tail out to around 25000 again.

This shows that although there are sites in each range which achieve extremely good performance, the distribution of the remainder gets more and more spread out. Essentially the percentage of each range who achieve good performance is reduced.

The post is very detailed with lots of great visualisations of the data, leading to some interesting conclusions.

1) M dot or RWD. Which is faster? (7.6k views)

http://bigqueri.es/t/m-dot-or-rwd-which-is-faster/296

The most popular query by quite a way is another one from Doug Sillars (@dougsillars).

The key question he investigates is whether a website which redirects from the main domain to a mobile-specific domain performs better than a single responsive website.

He identifies those sites which may be mobile-specific using the cases below:

 WHEN HOST(requests.url)  LIKE 'm.%' then "M dot"
 WHEN HOST(requests.url)  LIKE 't.%' then "T dot"
 WHEN HOST(requests.url)  LIKE '%.mobi%' then "dot mobi"
 WHEN HOST(requests.url)  LIKE 'mobile%' then "mobile"
 WHEN HOST(requests.url)  LIKE 'iphone%' then "iphone"
 WHEN HOST(requests.url)  LIKE 'wap%' then "wap"
 WHEN HOST(requests.url)  LIKE 'mobil%' then "mobil"
 WHEN HOST(requests.url)  LIKE 'movil%' then "movil"
 WHEN HOST(requests.url)  LIKE 'touch%' then "touch"

The key is this clause, used to check when the HTML is being served:

 WHERE requests.firstHtml=true

These are then compared to sites whose urls don’t significantly change (such as merely adding or removing “www.”).

The fascinating article goes into a heap of detail and ultimately reaches the conclusion that responsively designed websites appear to outperform mobile-specific websites. Obviously, this is only true for well-written sites, because it is still easy to make a complete mess of an RWD site!

bigqueri.es

Hopefully this has given you cause to head over to the http://bigqueri.es website, check out what other people are looking into, and possibly help out or try some web performance detective work of your own over the holiday season.

Automatic Versioning & “Cache Busting”

Implemented a CDN/caching layer but haven’t had time to get the versioning of assets worked out properly? Try some basic “cache-busting” using a querystring parameter which gets updated each time you build:

Update AssemblyInfo.cs to end with an asterisk (“*”):
[csharp]
[assembly: AssemblyVersion("1.0.0.*")]
[/csharp]

Create a little helper method:
[csharp]
using System.Reflection;

public static class AppHelper
{
    private static string _version;

    public static string SiteVersion()
    {
        // any type from the web project's assembly will do here
        return _version ?? (_version =
            Assembly.GetAssembly(typeof(HomeController))
                .GetName().Version.ToString());
    }
}
[/csharp]

And use this value in static file references:

[html]
<img src="/img/[email protected]()" alt="Logo" />
[/html]

Which will render something like:

[html]
<img src="/img/logo.png?v=1.0.0.20123" alt="Logo" />
[/html]

This number will change with every build, so should force retrieval of updated static – cached – content.

London Web Performance Group meetup

London Web Performance Group Meetup

I’ll be speaking with my cohort, Dean Hume, at the next London Web Performance meetup on Oct 14th at the Financial Times offices!

We’re presenting an extended (director’s cut?) version of our Velocity NY 2014 session, The Good, the Bad, and the Ugly of the HTTP Archive where we investigate a selection of websites exposed by the HTTP Archive as well as talk about how to use Google’s Big Query and dig into some awesome example queries to explore what’s happening in the interwebs.

The LWPG is where you can “meet with other web site system administrators, developers, designers and business people who’re interested in making their sites work fast to get better user experience, lower abandonment rates and make more money.

If you’ve read Steve Souders’ books and you use ySlow & PageSpeed and you want to learn more or share your knowledge please please sign up!”

Go on – you know you want to!

My face is in a video! Velocity Conference Interview

As part of my appearance at this year’s Velocity Conference NYC, I have been interviewed for the O’Reilly youtube channel and also the podcast.

In it I’m covering the contents of the upcoming talk that I’m doing with Dean, such as the HTTP Archive and Google’s BigQuery, and how people should approach these in order to get the most out of them.

I also mention some of the common pitfalls that poorly performing sites fall into, as well as what the good and the great are doing – some of their sneaky tricks.

If you’re attending Velocity Conf in NYC right now, then why not have a little look and get a sneak peek before attending the full session on Wednesday at 5pm?

What I’m looking forwards to at VelocityConf NYC 2014

There’s a good mixture of Performance and Mobile sessions in my list, and a couple of Operations and Culture ones too. However, there are so many conflicting sessions that are awesome, so please let me know your thoughts to help me decide!

Day One

I saw Tammy Everts’ vconf session last year, Understanding the neurological impact of poor performance, which was fascinating. Her tweets are great resources for web performance.

As such, I’ll start the conference with Everything You Wanted to Know About Web Performance (But Were Afraid to Ask)

Then I’ve got to decide between Colin Bendell’s (Akamai) Responsive & Fast: Iterating Live on Modern RWD Sites and Patrick Meenan’s (Google) Deploying and Using WebPagetest Private Instances

Unfortunately, then my ability to make a decision completely fails! I can’t decide between these three:

Finding Signal in the Monitoring Noise with Flapjack, RUM: Getting Beyond Page Level Metrics, and an Etsy-powered mobile session – Building a Device Lab. Help!

I’m planning to finish day 1 with W3C Web Performance APIs in Practice

Day Two

Following the ever-impressive opening sessions, I’ll head over to see a Twitter session: Mitigating User Experience from ‘Breaking Bad’: The Twitter Approach, then it’s decision time again: either The Surprising Path to a Faster NYTimes.com or A Practical Guide to Systems Testing.

Following that, how could I miss Yoav Weiss talking about how Responsive Images are Coming to a Browser Near You?!

Then I’m deciding between another Etsy session – It’s 3AM, Do You Know Why You Got Paged? – and another Tammy Everts (and Kent Alstad, also of Radware) session – Progressive Image Rendering: Good or Evil?.

I’ll finish the day with another session from Yoav – Who’s Afraid of the Big Bad Preloader? – and Etsy’s Journey to Building a Continuous Integration Infrastructure for Mobile Apps, probably.

Day Three

After some more kickass sessions opening the day, I’ll head over to see Signal Through the Noise: Best Practices for Alerting from David Josephsen (Librato).

After that will be Handling The Rush, then another decision between How The Huffington Post Stays Just Fast Enough and Creating a Culture of Quality: How to Sell Web Performance to Your Organization.

During the break there will be a couple of jokers talking about the Http Archive, Big Query, and Performance using .Net and Azure! I’ll not miss that for the world! They’re amazing! And ridiculously handsome. kof
Office Hour with Dean Hume (hirespace.com) and Robin Osborne (Mailcloud)

The afternoon will be made up of Test Driven Mobile Development with Appium, Just Like Selenium; Making HTTP/2 Operable and Performant; AND THE AMAZING LOOKING HTTPARCHIVE+BIGQUERY SESSION The Good, the Bad, and the Ugly of the HTTP Archive (which is going to be AMAZINGLY EPIC).

DONE!

Then beers, chatting with other like-minded nerds, and a spare day to wander around NYC before an overnight (I believe the term is “red eye”) return flight.

My schedule

You can check my full schedule here, should you want to be creepy and stalk me.

I’ll be speaking at Velocity Conference New York 2014!


Velocity New York 2014

For the second year running I’ve been invited to speak at the fantastic web performance, optimisation, dev ops, web ops, and culture conference VelocityConf; in fact, it has become the essential training event and source of information for web professionals over the years it has been running.

Last year was the Europe leg of the conference, in London. This year I’ll be jetting off (via the cheapest possible flights known to google..) to New York City!

The Good, the Bad, and the Ugly of the HTTP Archive

I’ll be speaking once again with Dean Hume (who has literally written the book on website performance) about The Good, The Bad, and the Ugly of the HTTP Archive; we’ll be talking about technologies like the HTTP Archive and Google’s BigQuery, but mainly about the secrets learned from some of the great sites and their dev teams, as well as some of the traps that the not-so-great sites fall into. In most cases we’ll look at one small change which could help those not-so-great sites become a bit more great!

Where? When?

  • 09/17/2014 5:00pm
  • Room: Sutton South

Come and see us!

Venue

New York Hilton Midtown
1335 Avenue of the Americas
New York, NY 10019
map

Office hours

Dean and I will also be hosting an Office Hours session where you’re invited to come and say hello, and talk to us about your Windows, .Net, Azure, EC2, web performance concerns or ideas; we’d love to have the opportunity to meet and speak with you, so please come and say hi!

Where? When?

  • 09/17/2014 2:45pm
  • Table 1 (Sponsor Pavilion)

DISCOUNTS!

There’s never been a better year to attend Velocity Conference; the line up is amazing, the contents are incredible, and the opportunity to talk with experts and passionate developers is priceless.

If you’re not sure how to convince your manager to send you – try the official Convince your Manager steps!

Once you’ve sorted that out, register and use the code given to you by your speaker friend (me!), for a whopping 20% discount: SPKRFRIEND.

You want more?


Velocity New York 2014

Velocity Conference EU 2013

The 3-day conference on web performance, operations, and culture wrapped up recently; having had the honour of presenting a session with my partner in crime Dean Hume called Getting The LEAST Out Of Your Images, and of wandering around with the green-underscored “Speaker” lanyard, here’s a brief summary of the event and some of my personal highlights.

Keynotes

First up, here are all of the keynote videos over on youtube; there were some really great keynotes, including several from various sections of the BBC. Some highlights for me were Addy Osmani’s UnCSS demo, Ilya Grigorik’s BigQuery lightning demo, and the fantastic Code Club session from John Wards.

Presentations

There were a large number of sessions across three streams (web perf, mobile, and devops) covering all manner of topics from extreme anomaly detection in a vast torrent of data, through to optimising animation within a browser.

Some of the stand out sessions for me were:

Making sense of a quarter of a million metrics

Jon Cowie gave a brain melting 90 minute session taking us through how Etsy make sense of all of their monitoring data; given that they graph everything possible, this is no easy task.

Understanding the neurological impact of poor performance

Tammy Everts not only gives us an insight into the poor aeroplane connectivity where she lives, but also into how people react emotionally to a poorly performing website.

Rendering Performance Case Studies

Unfortunately this session clashed with the Etsy metrics one, but from what I heard it sounds like Addy Osmani had one of the best sessions at the whole conference.

High Performance Browser Networking

Another brain-melt session; Ilya gave an incredible insight into the complexities of fine tuning performance when taking into account what HTTP over TCP (and also over 3G) actually does.

Other Resources

All of the slide decks are here, all of the keynotes are here, and there’s even a free online version of Ilya Grigorik’s High Performance Browser Networking book.

Summary

I probably enjoyed the 90-minute tutorial session on Wednesday more than the rest of the conference, but Thursday and Friday were really jam-packed with excellent sessions and impressive keynotes.

I loved speaking there and will certainly be pitching for more such conferences next year!

#velocityconf notes part 3: network performance amazingness

An absolutely brain-melting session from Ilya Grigorik, talking about the intricacies of TCP, HTTP (0.9, 1.1, and 2.0), the speed of light, how the internet infrastructure works, how mobile network browsing works, how HTTP 1.1 doesn’t support current use cases, and – most fascinating for me – what mobile browsers actually do under the hood.

Amazing how an analytics beacon on a webpage or app could cause your entire battery to be zapped in a matter of hours.

It’s going to take me a few days to decompress this information in my fuzzy brain, so why not check the slides yourself here: http://bit.ly/hpbn-talk