
Processing Rules vs Satellite: do you want a band-aid or a doctor?

Cross-post: I originally published this on SearchDiscovery’s blog in April 2013.

There is still one big topic from this year's Adobe Summit standing out in my mind. I attended a few great sessions that discussed Processing Rules, and I had a hard time not interrupting when I heard attendees ask: "When should I use Processing Rules instead of a Tag Management System? Can't a good Tag Management tool do practically everything Processing Rules can?" Coincidentally, someone asked a very good question on my previous post (the first in a series on moving away from using plugins), wondering how using Satellite to set a ?cid tracking parameter was different from using a Processing Rule within the SiteCatalyst admin settings. Obviously, some clarification is in order.

Do you want to spend your efforts treating the symptoms of a limited implementation, or do you want to heal the disease?

Processing Rules, while an awesome tool, cannot replace a good TMS: they have a few functions in common, but the methods, intention, and potential are very different. When people wonder what the difference is, it makes me wonder if we're sometimes missing the forest because we're too busy focusing on the trees: the little things that a processing rule or tag management system can both fix, and the day-to-day headaches that plague those responsible for maintaining a SiteCatalyst implementation. But when you step back and look at the bigger picture, you realize you've been looking at the little issues so much that you never were able to push your implementation to the next stage: a stage that doesn't require as much maintenance, giving you time to actually use the data you've worked so hard to get.

This isn’t to say you don’t need to fix the little things- most certainly you do, and both Processing Rules and Tag Management Systems have some things in common to help you do so. But never forget the bigger picture.

Treating the Symptoms

For those not very familiar with Processing Rules: they are a set of conditions and corresponding actions applied between data collection and VISTA rules/data processing. For example, you can use a Processing Rule if you want to copy an eVar to a prop without changing your code, set an event whenever a specific eVar has a specific value, or assign a context data variable to a variable. Processing Rules are free for all Adobe customers, but do require a certification before they can be set up.
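To make that last use case concrete, here is a sketch of the developer side of a context data rule. The `s` object stands in for the SiteCatalyst tracking object (stubbed here so the snippet is self-contained), and the variable name is just an example:

```javascript
// Developer side: send a context data variable on the page instead of a
// hard-coded prop or eVar. ('s' is stubbed for illustration.)
var s = { contextData: {} };
s.contextData['section'] = 'products';

// Admin side (not code): a processing rule with the condition
// "context data 'section' is set" and the action
// "overwrite prop1 with the value of context data 'section'".
```

The developer never needs to know which prop or eVar the value lands in; that mapping lives in the admin console.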

processing rules example

The Advantages

What can a processing rule offer that a Tag Management System cannot? Here are some ideas we came up with:

  1. Most obviously, they are included for free in your Adobe product. They are already accessible, waiting to be used without any big changes to your overall approach to implementation. (Of course, some might argue that big changes *are needed* for most organizations out there.) You don't need to overhaul or fix your current implementation in order to use them.
  2. Processing rules require zero code knowledge: there is no need to touch JavaScript or Regular Expressions. Many Tag Management Systems claim this "code-less implementation" as well, but we all know that unless you have a simple, well-constructed website and the most basic of reporting requirements, some code work is necessary even with a great TMS. Of course, some Tag Management Systems are less code-heavy than others; Satellite, for example, was designed to need as little code as possible without losing any of the power and flexibility that a little bit of code can give you.
  3. Processing rules provide control in places where there is no access to a tag management tool, such as third-party pages where you can’t change your code to put a Tag Management snippet on.
  4. Processing rules can be enabled without a test cycle, completely independent of IT or QA teams. This could be both a good and a bad thing (a little testing is a good thing before touching live data), but there is no need to prove there are no adverse effects on the page to a nervous IT team, and the changes can be immediate.
  5. Processing rules are set at the report-suite level. In some cases, like if you are multi-suite tagged, it can be useful to have a way to change one data set without changing another.

I’m thrilled that this relatively new piece of SiteCatalyst functionality exists. It’s already been used to solve previously show-stopping problems, to clean up implementations, and to slowly move the whole industry to a not-page-code-dependent place—all good things. They are great for what they are intended for, but they are not a magical pill to heal all that ails your implementation.

How Satellite and Processing Rules differ

  1. Satellite can work with your cookies, DOM, meta tags, JavaScript objects, custom code, or data elements from previous pages.
    A processing rule can only work with the data sent by your code, such as context data or existing SiteCatalyst variables. It can't pull data out of your cookies, DOM, or meta tags. While it may be easier to tell your developer to set s.contextData['section'] to "products" than to explain what a prop is and which prop is used for what, your developer must still know what must be captured, where to get the values for the variable, and how to set it consistently across the site. A tag management system allows you to access elements in your page, using your existing code. For instance, Satellite could "grab the value of this form field and put it into eVar40" or "on the search results page, grab the search term out of the header 1 tag in the content".
  2. *Satellite requires no certification, has a support team, and you can focus your learning on the things you need.*
    To enable processing rules, you must be certified. The test is free but asks a lot of questions that require a deep knowledge of not just processing rules, but of SiteCatalyst as a whole. I've seen clients farm out their processing rules work because it is simpler than training someone to pass the test, or because they don't have enough confidence in their ability to use the tool and don't want to mess with real, live data. Of course, this limitation could be a problem even with a TMS: there is a certain amount of knowledge and confidence you must have when making changes to live data, which leads to the next point.
  3. *Satellite has flexible, fast testing capabilities.*
    Processing rules can be difficult to test before letting them affect live data. You can't see them being set on your pages- with the debugger or Charles, for instance- and it can take a day or two to be sure they are working the way intended. You also can't copy just one rule from one report suite to another, which can add another layer of difficulty to setup and testing.
  4. *Satellite can scale for the most complex enterprise needs.*
    Each report suite is limited to only 50 processing rule sets. I know that sounds like it should be more than sufficient, but I've already seen two or three clients hit that mark- it's easy to do, since processing rules currently don't allow nested conditions (as in "if … then … else…"). Also, if an organization is using many context variables (which the newest AppMeasurement libraries do), this limit can become even more problematic.
  5. *Since Satellite code lives client-side, it can work with plugins and other code solutions you already have.*
    Depending on other reporting requirements, you may still have plugins that require certain variables like s.campaign to be set client-side, so that it can be used in script on the page- for instance, for the cross-visit-participation plugin. Many of these plugins could be worked around with a good TMS, but not with processing rules alone.
  6. *Satellite can access any SiteCatalyst variable- including RSIDs.*
    Processing rules can't access or change certain variables and settings, such as the report suite, hit type (page view vs. link), link type (custom, download, or exit link), or products string.
  7. *Satellite leaves your implementation visible to existing free tools like Charles and Firebug, and has user roles and workflow so you can manage who has visibility into and control of your rules.*
    Visibility into processing rules is restricted to the Admin Console in SiteCatalyst. SiteCatalyst users must have admin access to see how they are set, and must know specifically the one report suite where they might be set, since you can’t view how they are set across multiple report suites like you can with your custom variables. This may seem like it isn’t a big deal, but it is definitely a pet peeve of mine to not be able to see how variables are being set, for instance while QAing an implementation.
  8. *Satellite allows for regular expressions, giving much greater flexibility.* You don't have to use them, but they're there if you want them.
    Processing rules don't currently allow for Regular Expressions. I know I listed this as a perk ("no coding skills needed"), but it can also be very limiting. You do have the standard SiteCatalyst options in the conditions (equals, contains, starts with, is set…), but the actions do not allow you to parse out or match values from a string. For instance (and this is a real-life recent use case), I couldn't set up a condition to only capture the first 5 digits of the value of a query string parameter.
  9. Satellite can totally change the way your organization approaches digital measurement.
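To make the first point above concrete, here is roughly what a Satellite-style rule or data element boils down to: reading a value straight off the page. The function name and the eVar number are my own illustration, not Satellite's API; the document object is passed in so the sketch is self-contained:

```javascript
// Grab the search term out of the first <h1> on a search results page.
// 'doc' is the page's document object (passed in for testability).
function getSearchTermFromHeading(doc) {
  var h1 = doc.querySelector('h1');
  return h1 ? h1.textContent.replace(/^\s+|\s+$/g, '') : '';
}

// In a Satellite page rule you might then do:
//   s.eVar40 = getSearchTermFromHeading(document);
```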

In truth, listing the things that processing rules lack really misses the point: Processing Rules are a tool to help Band-Aid (or in some cases expand) existing implementations. And they're good for that: I'll freely admit I have used them, and been grateful for them as a Band-Aid. But even when used with context variables to create a clean new implementation, even with many of the above limitations removed, they are *still meant to be just a mode of deployment*.

Healing What Ails You

We need to get the industry to stop viewing Tag Management as just another "mode of deployment". As Evan rightfully put it in his post a while back, Tag Management shouldn't resemble another ticketing system. Satellite was built so that it doesn't merely fill in the gaps of your SiteCatalyst deployment guide: it helps you iteratively rewrite it, from the business requirements on down. So while yes, many of my tactical blog posts will be about "treating symptoms", don't think for a moment that that is all there is to Satellite. We need to change the game, not just make it easier to play the current game. The business stakeholder shouldn't have to find someone certified in processing rules to tell them what they can and can't track. Data collection shouldn't be so far removed from your website that the user experience doesn't even factor into the collection process.

So if you find yourself asking “how is this different/better than processing rules”, please reach out and let us show you that both can be good: while they do have a few tasks in common, processing rules are still a method of implementation; Satellite addresses a much bigger picture. It not only makes the day-to-day implementation easier, it can change the way you view digital measurement: the data you can bring in, the questions you can ask, and the actions you can take.

(A big thanks to Bret Gundersen and the folks at Adobe for their insight into processing rules for this post, and for the work they’ve done on that tool.)

More information on how to set up processing rules can be found at Kevin Rogers' blog or in the Processing Rules section of the Adobe Marketing Cloud Reference site.

SiteCatalyst Implementation Series: Breaking up with SiteCatalyst plugins

Cross-post: I originally published this on SearchDiscovery’s blog in February 2013.

We’ve all had a love/hate relationship with our SiteCatalyst plugins. I mean, where would we be today without good ol’ s.getQueryParam or s.getTimeParting? And when you have a new tracking parameter from your marketers, who doesn’t love to get your developers and implementation consultants to change and test those fateful lines in the doplugins section? But it’s time to move on. Technology has finally caught up to our needs. Plugins you previously couldn’t have lived without are now obsolete, since their functionality is built right into the Satellite interface.

I'll admit, when Rudi Shumpert posted about how to get a SiteCatalyst base code (s_code) into Satellite before I started working here, I thought: "Well, it will be nice to be able to manage and make changes to the s_code right within the Satellite interface, but Rudi left off a pretty critical piece- plugins! That's where all the hard work is, anyway." That plain little s_code you get from the admin console doesn't include any plugins- yet somehow pretty much every s_code on the web includes lines and lines of plugin code. You can't live without them! Or at least, you couldn't in the past. But Rudi and Satellite were already a few steps ahead of me.

Where did all those s_codes get their plugin code? Some can be found in the SiteCatalyst knowledge base documentation, and some can be found smattered across the web on various blogs (I can't tell you how often I've referred clients to Kevin Rogers and Jason Thompson's blogs). Either way, you need someone familiar with the plugins, their use-cases, and their gotchas to implement them for you: the implementation consultants. As someone who has made a living for six years as an s_code jockey, I can tell you that while there is always plenty of s_code work to keep an IC busy, we'd much rather focus our efforts on streamlining processes, creating better governance and standards, and helping bring in data that can really change the way an organization does digital business.

Now that I've had more of a chance to work with Satellite, I see Rudi wasn't wrong to omit plugins from his post: he just knows that with Satellite, you may simply not need them! Take getQueryParam, for example.

Query Param Tracking – The Old Way

In the past, if I used a query string parameter such as "cid", "cmpid", or "source" in my marketing URLs to track campaigns, I'd need to include this in my s_code in the s_doPlugins function:


Then in my plugins section, I’d need to include the whole getQueryParam plugin (hopefully the most recent version, which is currently 2.3).

/*
 * Plugin: getQueryParam 2.3
 */
s.getQueryParam=new Function("p","d","u",""
+"var s=this,v='',i,t;d=d?d:'';u=u?u:(s.pageURL?s.pageURL:s.wd.locati"
+"on);if(u=='f')u=s.gtfs().location;while(p){i=p.indexOf(',');i=i<0?p"
+".length:i;t=s.pgpv(p.substring(0,i),u+'');if(t){t=t.indexOf('#')>-"
+"1?t.substring(0,t.indexOf('#')):t;}if(t)v+=v?d+t:t;p=p.substring(i="
+"=p.length?i:i+1)}return v");
s.pgpv=new Function("k","u",""
+"var s=this,v='',i=u.indexOf('?'),q;if(k&&i>-1){q=u.substring(i+1);v"
+"=s.pt(q,'&','pgvf',k)}return v");
s.pgvf=new Function("t","k",""
+"if(t){var s=this,i=t.indexOf('='),p=i<0?t:t.substring(0,i),v=i<0?'T"
+"rue':t.substring(i+1);if(p.toLowerCase()==k.toLowerCase())return s."
+"epa(v)}return ''");

Now you'd wait for developers to test it, and/or for the next code release cycle. Not exactly the most clear-cut solution for something so simple, is it? In truth, this is probably the simplest plugin you can implement in your s_code. And if you wanted to keep it in your s_code, you still could, and manage it within Satellite for easier control.

Query Param Tracking -The New Way

Now, you can accomplish the exact same purpose with a few keystrokes and a mouse click. In either your page rules (to apply to only a subset of pages) or your overall tool settings (to apply site-wide), you'll find the campaign variable:

set up campaign tracking in one click

Done! Yep, that’s it.

Additional Uses

If you need to capture different query string parameters into different variables- for example, an internal promotion code passed in ?icid- this is done with one additional step: the creation of a simple data element (under "Rules", then "Data Elements"). Give it a friendly name for the Satellite interface, select "URL Parameter" under type, enter the parameter, and save.

Creating a data element from a query string parameter

All that's left to be done is to call that data element in the eVar itself. This can be done site-wide (in your tool settings) or for a specific page or subset of pages- either way, the setup is the same and very simple. I set the eVar number, type "%" to let Satellite know I am calling a data element, choose my already-created element from the list, and hit save:

Adding the data element to an eVar.

I save my rule or my settings, test it, and publish it- a process that could be completed in as few as 5 minutes. Done. No code touched. The s_code file is still that basic, plain s_code straight from the admin console.

Conclusion and Next Steps

Obviously this is a very simple (if very common) use case. I implemented both campaign and internal promotion tracking on my own site in under 15 minutes. I'll follow this post by looking at other plugins Satellite can free you from, including the linkHandler plugins, getPreviousPageName, getAndPersistValue, appendToList, Form Analysis, and forceLowercase.

Are you ready to move on? Which plugins are you ready to leave behind?

Using Google Analytics Settings to Properly Identify Pages

This year, I've been involved in many Google Analytics implementations and audits, and there has been a recurring theme of misunderstood GA configuration settings, mostly regarding how a page is identified. For instance, one recent client of mine had a 350-page site, but because of missed configuration settings, those 350 pages were showing up as literally 28,000 URIs! Can you imagine pulling a report on any given page of that site? So to clear the air, and hopefully save some GA users out there from future headaches, here are three quick ways to use GA configuration settings to properly identify your pages:

The Default Page Setting
The Default Page setting is used whenever a page URI ends in a trailing slash without specifying a file name. For instance, if you used this setting to specify that "index.html" is your default file name, "/folder/" and "/folder/index.html" would merge into just "/folder/index.html" in your content report, making analysis on that single page much easier.

Unfortunately, the name of the setting is misleading, and tempts people into entering what they consider the "default page" of their site: their home page URL. But if you enter your full home page URL as your Default Page, the real result is that any page ending in a trailing slash will have that entire URL appended to it:

"/folder/" would become "/folder/" in the reports.

This is obviously not desirable, so please do not put your full home page URL as your “Default Page”. If you have a site that sometimes uses index.html or index.php, then you may want to specify THAT as your default page, so all pages with a trailing slash would consistently have index.html appended to them. Otherwise, leave the setting blank.

Standardizing Trailing Slashes
The Default Page setting cannot be used for what most people WANT to use it for: to standardize whether or not a page ends with a trailing slash. If you give in to the temptation to simply put a "/" in this setting, then "folder/" and "folder" wouldn't merge together as desired- rather, "folder/" would become "folder//", and "folder" would stay the same (remember, the setting only looks at which pages have a trailing slash, then appends the setting value to them).

If you would like to have all trailing slashes removed as the standard, so that "/folder" and "/folder/" would appear as the same line item in the Content report- and who wouldn't want that?- you will need to set up a custom filter that removes all trailing slashes:


Field A -> Extract A should be set to “^/(.*?)/+$”
Output To -> Constructor should be set to “/$A1”
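In JavaScript terms, the filter behaves like the sketch below. GA applies this server-side; the code is only an illustration of the pattern, with GA's $A1 written as JavaScript's $1:

```javascript
// The filter's Extract A pattern: capture everything between the leading
// slash and any run of trailing slashes, then rebuild the URI as "/$A1".
var extractA = /^\/(.*?)\/+$/;

function stripTrailingSlash(uri) {
  return uri.replace(extractA, '/$1');
}

stripTrailingSlash('/folder/');   // → '/folder'
stripTrailingSlash('/folder//');  // → '/folder'
stripTrailingSlash('/folder');    // → '/folder' (no trailing slash: unchanged)
```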

Please note, much to my chagrin, such a filter would prevent your profile from being eligible for the not-filter-friendly Real Time Analytics (for now), but I promise this isn't as big a deal as you might think, though I'll save my reasoning for the unimportance of "real time" analytics for another blog post.

Excluding Query String Parameters
Most GA implementations I've seen have at least a few query string parameters excluded, but I don't think I've seen anyone get it "just right" yet (admittedly, my level of nitpickiness may be a tad unrealistic). The problem with not excluding all non-content-identifying parameters is that those parameters will cause one page to show up as separate line items in the content report. For instance, if I want to report on how many page views promotions/springlanding.html got, I might need to pull three separate line items- for example:

promotions/springlanding.html
promotions/springlanding.html?cid=spring2013
promotions/springlanding.html?sessionid=12345

into my reports, just to report on one piece of content. This isn't the end of the world; using filters in my reports I can usually get the info I need, though it does make trending harder. But it's such an easy fix!

To see which query parameters might have escaped your settings, go to your Top Content report and do a search for “?”. If there are a variety of those pernicious params in there, you may want to use an advanced filter to filter them out one at a time, to be sure you’ve got them all. Now you have a handy list of parameters you can take to your configuration settings for exclusion. If you want to track one of the parameters, but not necessarily in your content report, don’t forget you can always use a Profile Filter if you want to extract a query parameter and put it into another field, like a user defined variable, or just clean up parameters in general.
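What excluding a parameter effectively does to a request URI can be sketched like this (illustrative only; GA handles this at processing time, and the parameter names are examples):

```javascript
// Drop the named query parameters from a URI, keeping everything else.
function excludeParams(uri, excluded) {
  var parts = uri.split('?');
  if (parts.length < 2) return uri;  // no query string: nothing to do
  var kept = parts[1].split('&').filter(function (pair) {
    return excluded.indexOf(pair.split('=')[0]) < 0;
  });
  return kept.length ? parts[0] + '?' + kept.join('&') : parts[0];
}

excludeParams('/promotions/springlanding.html?cid=abc&sku=12345', ['cid']);
// → '/promotions/springlanding.html?sku=12345' (sku identifies content, so it stays)
```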

Be careful to not exclude parameters that actually have importance in identifying content. For instance, a products page may have a ?sku=12345 that specifies which product is being viewed- this is a rather critical piece of information for some types of analysis, and should not be excluded.

Please be aware that users can add whatever parameters they want to your URLs, so you will never have full control here. Tools like Google Translate like to wreak havoc on URIs, but generally account for a very small percentage of page views.

Cleaning up your Content report is an easy quick win- it doesn't take a lot of effort and can make analysis much easier. For questions about identifying content in Google Analytics or SiteCatalyst, contact me on Twitter: @Jenn_Kunz.

Campaign Tracking Tips and Tricks for Omniture SiteCatalyst

The standard Omniture Campaign reporting offering, based solely on a query string/tracking code pair, is pretty solid: campaign reports and all the possible classifications, paid/natural search detection and reporting, and marketing channel reporting, managed from right there in the SiteCatalyst admin console. But over time we’ve found some tips and tricks that can maximize the value of your campaign tracking efforts.

All of these tips rely on granular but intuitive tracking codes, which your s_code grabs automatically out of a query string parameter. If you do not have a well-founded system for setting your tracking codes, then that should be your focus before expanding your campaign reports. Assuming you have your marketing efforts well identified, here are some ideas to implement:


Out of the box, SiteCatalyst has the “click-throughs” metric in any campaign report, which is essentially the same as “instances” in any other conversion report. The problem is that you cannot pull your campaign click-throughs as a metric into non-campaign reports- for instance, into a campaign landing pages report, or a breakdown for any other eVar that lasts between sessions or is set on a campaign landing page. A common need is to see Campaign Click-throughs in a conversion funnel:

For this reason, many of my past clients have implemented a “custom” campaign clickthrough event, which they then have full control over.

The code for this is simple. I recommend using the Append List plugin (s.apl) to ensure the custom event does not overwrite any other events on the page. In this example, event3 has been set aside and named “Custom Campaign Click-throughs”, and the append list plugin code is included in the plugins section:



This isn't so much a tip as it is a best practice. While the general recommendation has always been to just have a granular tracking code, with all other information uploaded afterwards using SAINT classifications, you might save yourself a lot of headache by including a marketing channel identifier in each tracking code. SAINT is a wonderful tool, but classifications cannot be used to define the (fantastic) Marketing Channel reports. So instead of relying solely on classifications to specify that tracking code "2364FS" was an email campaign, turn the tracking code into "em-2364FS". Not only does this make it easy to filter your tracking codes report, but you can also then set up a Marketing Channel processing rule specifying that any tracking code that begins with "em" should go into the "Email Campaign" channel:
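The same idea expressed in code terms (the "em" prefix comes from the example above; the other prefix and channel names are my own illustration):

```javascript
// Route a tracking code into a marketing channel based on its prefix.
function channelFor(trackingCode) {
  if (/^em-/.test(trackingCode)) return 'Email Campaign';
  if (/^ppc-/.test(trackingCode)) return 'Paid Search';
  return 'Other';
}

channelFor('em-2364FS');  // → 'Email Campaign'
```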


What? You haven’t set up your Marketing Channel reports? Well, get on it! It doesn’t take long, this blog post will wait.


I’ll admit, of all the tips listed here, this is my favorite. The standard pathing reports are wonderful but not a lot of help for marketers. Campaign Landing Page Pathing uses a pathing prop to show how campaigns play into paths that users take through the site. Any time the s.campaign variable is set, it is appended to the pageName and put into a prop that has pathing enabled (“Promo Landing Page: em1435”). For all other pages, just the pageName is set:

if(s.campaign){s.prop11=s.pageName+": "+s.campaign}

As a freestanding page views report, I’ll admit the data is nearly useless- a jumble of campaigns and pageNames. BUT, once viewed in a pathing report, you can now see for each campaign and landing page, what next steps a user took. You might see paths that look like this:

Landing Page A: em12543 > Page B > Page C > Exit

You can now create Fall-Out reports based on particular campaigns. You could also see a next page flow report like this, showing the full range of possible paths users are taking from a campaign landing page:


Is your campaign taking your users somewhere you didn’t expect? Is that something you can capitalize on?

You can hit two birds with one stone and use the same logic to also set a “Campaign Landing Page” eVar, if you want to see how individual landing pages contribute to conversion (see the final code of this post for an example).

Please note, this kind of trick becomes less necessary in SiteCatalyst 15 because you can now segment your pathing reports based on campaigns, but it would still be useful if you wanted to compare campaigns to each other quickly without setting up a segment for each.


Campaign Cross-visit Participation, also known as “campaign stacking”, has been well covered elsewhere on the web, so I won’t go into too much detail, but no campaign “tips” post would be complete without it. The Marketing Channel reports can show you First and Last Touch, but not much in between. Using the crossVisitParticipation plugin, you can see all of the different campaigns a certain user touched before accomplishing a certain event. Using this plugin for my tracking codes, I could do a report filter for one specific tracking code and see how other tracking codes contributed to it leading to a conversion:


The code looks like this:

s.eVar26=s.crossVisitParticipation(s.campaign,'s_cpmcvp','90','5','>','');

While this plugin is wonderful for campaigns, the possible uses of it are limitless- any time you want to see the “path” a user takes through the values of any of your variables, and how that path affects conversion.


If I were to use all of the above tips and standard best practices, my final code might look something like this:

/* Campaign tracking */
if (s.getQueryParam('cid')) {
  s.campaign=s.getQueryParam('cid');                                         //capture campaign ID,'event7',',',1);                                    //custom Campaign Click-throughs
  s.eVar26=s.crossVisitParticipation(s.campaign,'s_cpmcvp','90','5','>',''); //campaign "stacking"
  s.eVar24=s.prop11=s.pageName+" : "+s.campaign;                             //campaign landing page eVar and pathing prop
  s.eVar25=s.campaign=s.getValOnce(s.campaign,'cid_cookie',0);               //campaign eVar with original allocation
} else {
  s.prop11=s.pageName;                                                       //campaign landing page pathing
}

The final result is a set of marketing reports that paint a fuller picture of campaign behavior on your site.

Code for the Append List and getValOnce plugins can be found in the Help section of SiteCatalyst, under Product Documentation > Implementation > Plugins. Contact your friendly implementation consultant if you need information about implementing Cross-Visit Participation.

Omniture Form Analysis Pitfalls – How to Avoid and Optimize

There is an Omniture plugin, Form Analysis, that will automatically flag which form field a user last touched before abandoning a form. Sounds great, right? This post, which does a great job of going over setting up the plugin, even claims the plugin will make you an office hero! The problem is, this plugin is inherently Evil. And not the good kind of evil. We'd like to tell you how to avoid the evil and keep your Form Analysis on the side of Good.

This plugin is notorious for being implemented incorrectly, and the implications go beyond just one report not working. Since the plugin sends a custom link call ( each time a user abandons a form, it has the potential to eat up your secondary server calls (which you pay Omniture for). A recent client of mine had accidentally set one extra secondary server call on every single page view since 2008 because of this plugin. Sending two server calls so close together (one with the click of a link and one on the load of the next page) has been known to cause problems: every once in a while, the two calls compete and only one makes it through to Omniture, decreasing data accuracy. Worst case scenario, the plugin can actually break your form.

And what bugs me most- and this is in no way the fault of the plugin- is that folks use this as a safety crutch: they know they should optimize forms, so they put effort into setting up form tracking, then pat themselves on the back for a job well done. But, feeling comfortable that the data is there, they don't take it to the next step and actually optimize their forms.

In this situation, you'd be better off skipping the plugin and going right to the "optimize your form" step by using your God-given intelligence. Use common sense. Think like a user: keep forms simple and easy to use. Simply running a plugin or tool will do nothing to improve your forms- remember, brains are more important than tools.

Don’t get me wrong, the plugin has much potential to do good. You just have to put some thought into it. Educate yourself on using this plugin and you can get great value out of it. To help you, here are some implementation pitfalls to avoid:


The configuration of the plugin includes two critical variables:
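They are set in the s_code alongside the plugin code. A sketch (the form names are examples, and the real configuration block has a few more options than shown here):

```javascript
// Form Analysis configuration, in the s_code plugins section.
// ('s' is the SiteCatalyst tracking object; stubbed so this runs standalone.)
var s = s || {};
s.trackFormList = true;                 // only track forms named in formList
s.formList = 'contactForm,signupForm';  // comma-separated "name" attributes of the forms to track
```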


When trackFormList is set to true, it will only track forms specified (using the form’s HTML “name” attribute) in the formList (if formList is blank, then nothing will be tracked). So far so good. When set to false, the plugin will track every form on your site that is NOT specified in the formList. This includes:

  • User login forms (even if they are on every page)
  • Some “forms” that are not actually user forms
  • Find-a-location-near-you and other single-field forms
  • And, the biggest culprit: Internal Search forms

So if a user hits a page with an internal search form field without hitting the search “submit” button, that’s one server call to Omniture letting you know a “form” was abandoned. If each page has that internal search field, that’s one server call for every page view on your site. Not only is that information useless to you, it has potentially terrifying billing implications, depending on your contract.


My suggestion? ALWAYS set trackFormList to "true". This will require going through your site and finding the forms that matter most. Focus on one or two key forms on your site. This isn't just a work-around to get the plugin to work- it's a great idea for focusing your analysis efforts and avoiding the Data Squirrel pitfall.
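As a sketch, the relevant piece of the configuration inside your s_code.js might look like this. The trackFormList and formList variables are the two discussed above; the "s" object prefix and the form names are illustrative assumptions, not values from any real implementation:

```javascript
/* Hypothetical excerpt from an s_code.js Form Analysis configuration.
   The form names below are examples; use the HTML "name" attributes of
   your own one or two key forms. */
var s = s || {};                           // the SiteCatalyst measurement object
s.trackFormList = true;                    // ONLY track the forms named in formList
s.formList = "checkoutBilling,contactUs";  // comma-separated HTML form names
```

With trackFormList set to true, an internal search field or login form that isn't in the list never fires a server call, which is exactly the billing protection described above.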


If you do not have the "name" attribute in the HTML <form> tag, the plugin won't work. And if you do have names but they look like "form134", they probably won't have much meaning for you in the reports. And if every form on your site has the same name, the report will be useless. The same goes for form field/input names.
On another note, if your form script references the “this” object at all, the plugin may break your form, or vice versa. Oops. The only way around it is to remove the plugin, or remove the references to “this”.


I'm afraid there is no quick JavaScript fix here: if you want to automate form abandonment tracking using this plugin, you may have to manually change the "name" attribute of the HTML forms. BUT, if you are focusing on only your 1-2 key forms, as mentioned above, the changes to your HTML should be minimal.
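If editing the markup truly isn't an option, one workaround is a small script that assigns a meaningful name before the plugin reads it. This is only a sketch- the element ID "form134" and the name "checkoutBilling" are hypothetical:

```javascript
// Give a form a meaningful name for reporting if it doesn't have one.
function nameForm(form, meaningfulName) {
  if (form && !form.name) {
    form.name = meaningfulName; // "checkoutBilling" reads better than "form134"
  }
  return form;
}

// In the browser you would pass document.getElementById("form134");
// a plain object stands in for the element here:
var checkout = nameForm({ name: "" }, "checkoutBilling");
```

Run it before the Form Analysis plugin initializes, or the plugin will still pick up the old (or missing) name.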


The plugin doesn't tell you which field was the "turn-off": it tells you the last field touched, which may be counter-intuitive. For example, suppose a user abandoned my form partway through:

Our friend Mal only made it through the 4th field- the one with the name "email"- so "email" is the field the plugin would report.

Which might make one think the "email" field was to blame for the abandon, when in truth, he filled out that field successfully and was (presumably) "turned off" by the FOLLOWING field, which asked him to confirm his email- he's much too busy to waste time entering the same information twice.


One can get around this by 1) educating users of the report on what the data means (which you should be doing anyway) and/or 2) creating a classification report that shows the FOLLOWING field for each item. Either way, please be aware that a report can only give hints- it cannot actually tell you what the user was thinking when they left the form, or whether they looked ahead and were "turned off" by a later field.
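The classification idea boils down to a simple lookup: given the form's field order, map the last field touched to the field AFTER it. A sketch in plain JavaScript, with hypothetical field names matching Mal's form:

```javascript
// The plugin reports the LAST field touched; the likely "turn-off" is the
// field AFTER it. Field names and their order here are hypothetical.
var fieldOrder = ["firstName", "lastName", "phone", "email", "confirmEmail"];

function followingField(lastTouched) {
  var i = fieldOrder.indexOf(lastTouched);
  // If the user stopped mid-form, point at the next field; otherwise
  // fall back to the reported field itself.
  return (i >= 0 && i < fieldOrder.length - 1) ? fieldOrder[i + 1] : lastTouched;
}

followingField("email"); // -> "confirmEmail": the real turn-off in Mal's case
```

In practice you would upload this mapping as a classification of the plugin's report items rather than run it as code, but the logic is the same.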

Which brings us back to the idea that your brain is a critical part of making this plugin valuable. If you have a form on your site, now is a great time to optimize it. We’ve removed the roadblocks; what’s stopping you?

Tracking ~ Why Business Intelligence leads to User Benefits

A few weeks ago, I received an email from a respected consumer watchdog group campaigning for support for two online privacy bills currently being discussed in Congress. Here’s a sample of what it contained:

“Hidden ‘cookies’ in your computer that track your Internet movements and record what you buy online. Smartphones that secretly log the places you go. Even your personal information on Facebook may have been accidentally leaked to advertisers, according to media reports…”

“…The latest bill from Sen. Rockefeller would require companies to honor your decision if you do not want to be tracked online. And it would put the full force of the law behind your decision, so tracking companies would be held accountable if they go against your wishes.”



Things like Facebook data leaks and the recent Sony data-security fiasco should be taken very seriously. Users should be able to feel confident that their personal information is secure. But this email, along with some of the legislation it references, lumps online tracking cookies in with major security breaches, despite the fact that they are two VERY separate issues. *Ironically, the links in the email take you to a site that uses Google Analytics tracking cookies.*


The first bill, proposed by McCain/Kerry, seems to me to make a lot of sense: if a website is collecting personal information, it has a responsibility to the users to notify them and keep their information secure and private. This falls well in line with how the Web Analytics industry treats data (see the Google Analytics or Omniture Terms of Service, or the Web Analytics Association’s Code of Ethics).

Another piece of legislation, led by Senator Rockefeller, is on the floor right now: a "Do-Not-Track" bill that would legislate a user's ability to opt out of having any of their online activity tracked. Along with this bill comes much propaganda, like the above-mentioned email, scaring people into thinking that technology they don't understand must be harming them.

I'm not saying that users should not be able to opt out of online tracking. Most web analytics vendors already offer an opt-out for the data mentioned above, and most websites already have a privacy policy detailing what is being tracked. The industry is already self-regulating things like opt-outs- I have yet to hear of a case where a user's request to opt out has not been respected. Odds are, you go about your online business every day without ever looking for opt-outs and privacy policies, because you don't feel threatened until politicians bring it up.


The problem with the legislation and the information going out with it is that it lumps all kinds of tracking together, from security leaks of personal data (very bad) and data-mining and sharing (potentially bad) to web analytics tracking (good). Web analytics tracking, when done in accordance with industry standards, leaves practically no room for harm to the user and plenty of room for benefit- but by legislating it and scaring the public with mixed-up information, the bill may lead internet users to opt out of services that are actually beneficial to them.

Let’s take a look at just how “evil” these tracking cookies are and what information they store. On my blog currently (unless you have blocked it) I have set an Omniture cookie that may look something like this:


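A sketch of such a cookie- the name is the conventional Omniture visitor cookie, while the value, domain, and expiry below are purely illustrative:

```javascript
// An illustrative analytics visitor cookie: the value is an opaque random
// ID that carries no personal information. All fields here are examples.
var visitorCookie = {
  name: "s_vi",
  value: "afb5ced3bc5c8c3767728316b1db17f2", // arbitrary letters and numbers
  domain: ".mysite.example",
  expires: "5 years from first visit"
};

visitorCookie.value.length; // -> 32: a hex string, not a name or an email address
```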
The “value” (what the cookie contains) is just a string of arbitrary letters and numbers that serve no purpose other than to identify the user from page to page. “User afb5ced3bc5c8c3767728316b1db17f2 came in on the Google keyword ‘blue wugs’ and went from the home page to a content page then left the site.” If I had a retail site, I might use this cookie to track what items were purchased and how much revenue I generated, so that I could tie it back and see which marketing efforts are returning value.


Every internet user should WANT this kind of tracking on them. It’s like voting by secret ballot: every time you click (or don’t click) on an item, you are submitting an anonymous vote for how the website could be better.

These are the kinds of messages you are anonymously sending to the website you are on, without even knowing about it, when you are tracked:

  • “It took me 18 clicks to get to the content I was looking for. Not cool. Please streamline your website.”
  • “Your advertising partner’s “interactive media” ad on your homepage is so annoying that I could only handle being on your site for two seconds before I closed my browser. You may make money off of hosting that ad, but you just lost my business.”
  • “Your video got really boring about 2 minutes into it. Best keep it short and sweet.”
  • “That article was fantastic, I clicked your share button to pass it along! Please make more content like that!”
  • “Asking me to verify my email address for a third time on your 16-field form probably wasn’t a great idea. You won’t find me hitting the submit button on your form until you ask for less info.”
  • “I can’t find the Technical Support information I needed online. Instead, I’m going to have to call one of your employees, wasting both my time and your staffing budget.”
  • “I came from a Google advertisement on one of my favorite blogs and ended up finding just what I needed. Thanks!”

When these pieces of feedback are looked at in aggregate- thousands of users all trending one way or another- Web Analysts can see not only ways to improve their site but also where they should spend marketing dollars.


Much of the current legislative hullabaloo is around targeted marketing efforts. Targeting is when websites use tracking technology (usually something beyond basic web tracking) to automatically present a user with content that is particularly relevant to them, based on information the website has learned about that user. For instance, Facebook recently showed me an ad aimed squarely at my line of work.

Either that was highly coincidental, or Facebook has learned (probably from my user profile, where I state my profession as a web analytics professional) that I have an email-based job and that I am obsessed with measuring/analyzing data. This is targeted advertising, and it comes in many forms. Once you get past feeling mildly creeped out (it helps to remember it's all anonymous and automated), targeted advertising benefits you in all sorts of ways.

All internet users want online marketers to succeed, whether they are aware of it or not. The existence of (and free access to) the sites we know and love relies on successful marketing. I know no one loves an internet cluttered with ads, but I'm pretty sure we'd hate it even more if it were ad-free with the caveat of having to pay for each site we visited. The more relevant ads are to users, the more successful they are- meaning fewer ads are needed to keep the internet afloat. Targeting not only helps keep the internet free, it makes the internet more relevant to users.

With web analytics tracking technology as it is right now, you can send an online marketer a message that their ad is annoying or ineffective by simply not clicking on it. Marketers can see if their ads are a waste of your internet space and a waste of their marketing dollars.

These are just a few of the ways that, by volunteering a little bit of anonymous information about their web usage, users contribute to a better internet- not just better for marketers and analysts, but better for internet users everywhere. Regardless of current legislation, the Web Analytics Industry must put effort into educating the public about what is being tracked, why, and how it benefits the average user. Where do you stand?

Worthy TED watch: A Moral Operating System

At a TEDx conference in Silicon Valley, Damon Horowitz (who recently joined Google as In-House Philosopher / Director of Engineering, heading development of several initiatives involving social and search) reviews the enormous new powers that technology gives us: to know more — and more about each other — than ever before.

A deep-dive into the ethics and philosophy behind the power of data.

The video is a bit long (around 16 minutes) but worth the watch if it gets you thinking.

Collecting Data vs Using Data

You’ve heard of report monkeys—I’m going to tell you about data squirrels. Data squirrels spend lots of energy collecting data; then they hoard it. They carry it around in their cheeks, feeling well-prepared for the winter, but their mouths are so full they can’t process and swallow their hoarded goods.


I'm thinking of one former client who was using nearly every single variable available at the time- virtually every available report in SiteCatalyst was filled with data. According to "Best Practices", they had a weekly meeting to discuss their implementation and keep it up to date- the data was tested, current, and abundant. Their dashboards were vibrant and plentiful. But they were so busy collecting and hoarding data that they had no time to process and analyze it. The idea of taking action on the data was never part of the process.

For all of their effort, how much value do you think they were getting out of their analytics tool?

Sidenote: Data Squirrels and Report Monkeys make great friends. The Squirrel hands data to the Monkey and the Monkey hands "reporting requirements" back to the Squirrel. This cycle can go on indefinitely without any real value.

You know what makes the web analyst in me want to go cry in a corner? This all-too-common situation:

A major company spends thousands of dollars and dozens of man-hours on an implementation project to grab and report on some new data. The implementation consultant gets the data, shows them the working reports, signs off on the project, then walks away. Six months later, the company calls the consultant and says “this report doesn’t work anymore, someone disabled the variable five months ago and it took us until now to notice. Can we bring you back on to fix the problem?” (Presumably, so the data can go another 5 months without being used).

If you can go 5 months without looking at a report, why-oh-why did you throw money at someone to make the report available? Sadly I know the answer: it’s easier to throw money at something and feel that sense of accomplishment than it is to spend time analyzing and making potentially risky decisions based on the data.

It’s a lovely feeling to wrap up an implementation and feel like the hard part is done. But if you really want value out of your tool, the work is far from done. Fortunately, the post-implementation work is the fun stuff: the stuff that doesn’t count on your IT department to roll out new code; the stuff that has the potential to realize actual, provable value for your company.

If you must, on the day your implementation project is completed, block off a few hours two weeks or a month out to play with the data until you get an action out of it. Schedule a meeting with the analyst-minded folks of your organization and go over the data- NOT to discuss how complete your implementation is, but to discuss what insight the data has given you and what the next steps are. Don't let USING your data fall down the priority list just because it doesn't have a project and a budget attached to it.


This may just be a matter of semantics, but as an industry we need to change our mindset from “I want to track…” to any of the following:

  • I want to HYPOTHESIZE…
  • I want to TEST…
  • I want to COMPARE…
  • I want to BRAINSTORM…
  • I want to OPTIMIZE…
  • I want to DECIDE…
  • I want to ACT UPON…

“Tracking” or merely collecting data should never be the goal or even a main focus. ALWAYS implement with this question in mind: “How am I going to USE this data to optimize my site?”

A major healthcare website recently hired an agency with which I am familiar. The client has used Adobe Analytics for years and noticed a few things that were a little off in their reports (have I mentioned how much I hate the Form Analysis plugin?), so they hired this agency to do an implementation audit. As they sign the implementation audit contract, they say, "You'll notice we don't use conversion variables or events, and that's fine. We're really content with just the traffic data; just help us make sure the traffic data is coming through properly." In other words, "This solution worked for us four years ago, please just keep it working as is."

Oh, how this breaks my heart! Because I know a year from now they might say, "Gee, we're spending a chunk of money on our web analytics, but we've never done a thing with the data. Let's bring on an expert to help us analyze." And that expert will say, "Analyze? Analyze what?" Then they'll need to re-implement, then wait even more time before they have real, actionable data.

If you’re using the same set of reports that you were 18 months ago, you are very likely losing value on your implementation. That is, unless your website hasn’t changed, in which case there may be bigger problems. The web is constantly evolving: so should your site, and so should your implementation.

The problem, of course, is finding a balance between “We don’t need any newfangled reports” and “Let’s track everything in case someday we might need it”.  The best way around this? Any time you are changing your implementation, don’t think of what data you want to track now (again: never think “I want to track”). Think of which reports you’ve actively used in the last 3 months. Think of what reports you will use in the next 3 months. If current reports don’t fall in that list, scrap them or at least hide them from your menu so they don’t distract you.

See what analytics trends are on the rise. Check out some of the top blogs in the industry- particularly your vendor's blog, like Omniture's Industry Insights- to see what's up-and-coming in analytics. If you're hiring an implementation consultant- either from a vendor or an agency- don't just tell them what reports you'd like. Ask them which cool reports others have been using. Use their experience. They may be content to take orders or fulfill the requirements of the contract (which are usually written by salespeople, not analysts), but it's very likely that they have great ideas if you'll give them a couple hours to incorporate them.

As an implementation consultant, this is something I’ve always struggled with. I get excited by the challenge of finding unique ways to bring in new data. When a client says “I want to track order confirmation numbers broken down by coupon codes with campaign click-throughs as a metric”, my natural inclination is to say “sure, we can do that!” then whip out my javascript editor. And if that report NEVER gets used, I’ll never know unless the client calls me back in.

I write this post as a step in the direction of repentance. I have been a Data Squirrel enabler, and I know it. Learn from my past and don’t allow these mistakes to diminish the value of your implementation.

Challenge: 30 Minutes of Action

Because right now my role is mostly in implementation, it sometimes feels like the world of analytics is nothing but figuring out how to report- either how to get the data in there or how to present it. I want to see some action!

I challenge you all to spend a mere 30 minutes in your tool of choice to find one- just ONE- actionable piece of data, and *here’s the kicker*: actually take steps towards that action (even if it’s just making a plan). I don’t expect the action to only take 30 minutes, but you should definitely be able to find your piece of data and start a plan in that time.

Here are a few ideas that range from simple to ambitious to help you get started:

  • Look at your Error Pages report and fix the most-clicked broken link. If needed, use a pathing report to find the page sending people to your error page. This is practically a freebie.
  • Look at your top internal search keyword. Figure out a way to make content on that topic more easily findable from your homepage. Ask yourself: would this make a good internal promotion?
  • Look at your top 5 highest-traffic landing pages, then see which is converting the least. Make a hypothesis about what could improve (compare to highest-converting page if you need ideas), then make a game plan to A/B test it.
  • See which paid search keyword has the highest bounce rate. Hypothesize on how to make your landing page appeal to users clicking on that keyword more, or reword your keyword so it brings in more qualified traffic. Make an A/B test out of it.
  • Think of the one thing on your site you wish users were doing more of. Now put it in a fall-out report. Find the point of highest abandonment. Hypothesize about why users are falling out. Test it.
  • Find a low-performing call-to-action. Figure out a different way to present it: perhaps a different graphic or reworded text. Test it. (Are you noticing a "test it" theme here?)
  • Take your highest-performing campaign. Play with segments until you see who the campaign appeals to the most. Earmark that segment for more marketing efforts.
  • Find a video with a high conversion rate. Feature it in an area with higher visibility.
  • Look at your top Organic Search Terms. Do you see a lot of your brand name in there? Find a high-converting product page and focus some SEO efforts there so you can reach users looking for your products, not your brand.

If you reach the end of your 30 minutes with no action plan, don’t give up. Spend some time finding a recent success or failure. What’s trending up? What’s trending down? Try segmenting the data different ways. Make some theories, then plan some tests. Not to sound like a broken record here, but you can’t go awry with a well-executed test.

Ready, Set, Go!


I’ll happily take suggestions for more ideas, too. I’d love to make one huge long list of your ideas for actionable data.

Ready? Go team!

When you’re done, please come back here and tell me how it went.

PS: If you can’t find one piece of actionable data to move with, then either you need to revamp your implementation, or congratulate yourself on a perfect website and implementation. In which case, you have free time on your hands to volunteer at the Analysis Exchange!