Automated web analytics testing with Selenium WebDriver and Sauce Labs

I've been vaguely aware of Selenium and WebDriver for years through my mate Simon, but hadn't dipped my toe in yet. Last night I had a little time to check it out.

For those who don't know, Selenium is a framework for automated testing of web applications. WebDriver is the component that allows you to drive individual browsers. There are now hosted services that will spin up browsers for your tests at your command, one of which is Sauce Labs, so you don't have to go through the pain of setting up and maintaining all the browsers and platforms you want to test.

There are two ways you can test analytics tagging in this environment. The ideal is to set up a proxy (see Steve Stojanovski's approach) and inspect the beacons as they pass through, but that introduces another moving part to deal with. My approach is a bit simpler: get WebDriver to run a bit of JavaScript that finds and outputs the beacon URLs for us to then handle.
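
The JavaScript itself can be as blunt as something like this. It's only a rough sketch, assuming the page view beacon ends up as an img element in the DOM and points at the standard /b/ss/ SiteCatalyst collection path:

function findOmnitureBeacons() {
        // Look for image requests pointing at the SiteCatalyst
        // data collection path (/b/ss/) and return their URLs
        var beacons = [];
        var imgs = document.getElementsByTagName('img');
        for (var i = 0; i < imgs.length; i++) {
                if (imgs[i].src && imgs[i].src.indexOf('/b/ss/') !== -1) {
                        beacons.push(imgs[i].src);
                }
        }
        return beacons;
}

WebDriver's execute_script can run that in the browser and hand the array of URLs back to the Ruby script, which only has to check it isn't empty.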

To run this script you'll need Ruby, a Sauce Labs account and the WebDriver Ruby bindings. Follow Sauce Labs' instructions to get set up. The Sauce Labs free account gives you plenty of minutes to get started. You'll need to replace the username and API key with your details. Then run the script.

This script is incredibly simplistic. It just loads the page, runs the JavaScript, and checks whether there was anything in the output. I'm having trouble using it in IE6, so it'll need some improving. At the moment it only looks for Omniture beacons, but it wouldn't be difficult to get it looking at other types of beacon.

So this allows a simple sanity check that your analytics beacons are actually firing. A good start. I'll start doing more soon, though I suppose I should learn some Ruby first.

Omniture: execute plugin code only on page views

Omniture uses the term "plugin" to refer to a few different things. In this example, I'm talking about the plugins placed in the s_doPlugins function in your s_code.js file. This area runs whenever Omniture code is called, whether it be an s.t() page view call, an s.tl() link call or something like ClickMap.

My problem was that I wanted to trigger some code to run only when there's a page view and not in any other circumstances. Useful, for example, if you want to fire off something to another analytics system only for page views, or you've got some code that only makes sense for pages.

The trick is to test the value of s.eo, which contains the object associated with the link. So when you do an s.tl() and pass in the linked item (usually with "this"), it becomes the value of s.eo. Page views don't have this link, and items that don't define it end up with a value of "0". That means this is an effective test to see if the current call is a page view.

Anyone know a better way to do this? Perhaps a way to definitively differentiate between the different types of call?


function s_doPlugins(s) {
        // Only run this code on page views
        if (s.eo === undefined) {
                // Your code goes here
        }
}
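
For reference, here's roughly how the two call types look and what s.eo ends up as inside s_doPlugins (the link name is just an example):

// Page view call: s_doPlugins runs with s.eo undefined,
// so the block above fires
s.t();

// Custom link call passing the clicked element: inside s_doPlugins,
// s.eo is that element, so the block above is skipped
s.tl(this, 'o', 'Example link');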


Video performance reporting

We were asked if there was some way to measure how users were experiencing video playback. That is, how long do people spend buffering, and how often does the player run out of video and have to rebuffer? We're using Omniture SiteCatalyst for this, and already do the standard video reporting.
It's important to get this kind of information from the client side. We can do server-side reporting, but all we'll know from that is how many times the files were requested. The approach we took gives us information direct from our customers, so we can quantify what they actually experience.
I started out by creating an eVar (conversion variable) for the video identifier and another for the video player. Handily this is also what we'll need to do to start using the SiteCatalyst 15 video solution, which saves us one task there. Another eVar captures the buffer time. I also created four new custom events to capture the different steps in video playback. These events are recorded alongside the eVars and so we can report by video identifier (and its classifications) or player.
When the user first requests the video, we send the "Video request" event. When the video has finished buffering, we send the "Video first frame" event, alongside the number of milliseconds spent buffering, rounded to the nearest 100 milliseconds.
If, during playback, the player runs out of video and has to start buffering, we send the "Buffer underrun" event. Again when the video starts playing again, the "Buffer underrun first frame" event is sent, alongside the rounded buffering time.
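
From the player's side, each of these steps ends up as a SiteCatalyst custom link call along these lines. This is only a sketch: the eVar and event numbers are made up, and the real calls sit inside our player integration.

var bufferMs = 1300; // however long the player spent buffering, in milliseconds (example value)

// "Video request" when the user asks for the video
s.linkTrackVars = 'events,eVar10,eVar11';
s.linkTrackEvents = 'event20';
s.eVar10 = 'VOD:12345';    // video identifier (hypothetical)
s.eVar11 = 'Player name';  // video player (hypothetical)
s.events = 'event20';
s.tl(true, 'o', 'Video request');

// "Video first frame" once buffering finishes, with the buffer time
// rounded to the nearest 100 milliseconds
s.linkTrackVars = 'events,eVar10,eVar11,eVar12';
s.linkTrackEvents = 'event21';
s.eVar12 = String(Math.round(bufferMs / 100) * 100);
s.events = 'event21';
s.tl(true, 'o', 'Video first frame');

The "Buffer underrun" and "Buffer underrun first frame" events follow the same pattern.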

So now we've got our eVars for video ID, video player and buffer time, alongside these video events. We can report on any of these events happening with those eVars.
A simple report using a "buffer underruns / first frame" calculated metric shows us the videos that people are finding most problematic.

The problem is that the "worst" videos according to this report are viewed by a minuscule number of people. Some poor sucker has been desperately trying to watch VOD:37367 on his dialup connection, with 521 buffer underruns. That doesn't sound like much fun, but it doesn't really help us optimise our service for the majority of people.
So I came up with a simple calculated metric that adds a weighting: it still shows us poorly performing videos, but favours the ones where the poor performance is widespread.

So now we can see there are a few videos that are problematic, and for a reasonable number of people. We can start optimising and seeing if we can improve this experience.

That calculated metric:
[Video buffer underrun] / [Video first frame] * ( [Video first frame] / [Total Video first frame] ) * 1000
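
To make that weighting concrete, with made-up numbers: if the suite recorded 100,000 first frames in total, the lone dialup viewer with 521 underruns against a single first frame scores 521 / 1 * ( 1 / 100,000 ) * 1000 = 5.21, while a video with 2,000 underruns against 10,000 first frames scores 2,000 / 10,000 * ( 10,000 / 100,000 ) * 1000 = 20, so the widespread problem floats to the top of the report.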