Referencing “this” in Event-Based Rules

Not many people know that you can pull out information about the element an Event-Based Rule fires on, without any custom code. Let’s say I want to fire a rule on a link that looks like this, and I want to capture the domain of the clicked link in eVar3:

<div partner="adobe">
     <a href="http://www.adobe.com" class="partnerExit" alt="go to our partner Adobe" target="_blank">This is an example link.</a>
</div>

I would set my rule up to correctly fire on that link (with something like “a.partnerExit”), then for eVar3 I would put %this.hostname%, where “this” refers to “this thing that the rule fired on”.

I don’t have to do any custom code or have a data element set up (in fact, data elements are NOT particularly useful at pulling out information specific to the element that fired an Event-Based Rule).

Putting this in the interface… | Would let me access… | Which would yield this…
------------------------------ | -------------------- | -----------------------
%this.hostname% | The domain of the link that was clicked | www.adobe.com
%this.href% | The full URL of the link that was clicked | http://www.adobe.com/
%this.src% | The source of the element that was clicked (works for images, not links) | (Not applicable here.)
%this.alt% | The “alt” value of the element that was clicked | go to our partner Adobe
%this.@text% | The internal text of the element that was clicked | This is an example link.
%this.@cleanText% | The internal text of the element that was clicked, trimmed to remove extra white space | This is an example link.
%this.className% | The class of the element that was clicked (less handy in reports, but very handy for DTM troubleshooting) | partnerExit

For more advanced “DOM-scraping” you may need to resort to the custom code section. I find jQuery often simplifies things greatly for this. For instance, in the above example, what if I wanted to get the ID not of the anchor tag, but of the <div> that HOLDS the anchor tag?
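
Something like this in the custom code would do it (a minimal sketch: it assumes jQuery is available, that the wrapping <div> has an id attribute set — the sample <div> above would need one added — and that the made-up eVar4 is where the value should go; “this” is again the element the rule fired on):

// Grab the id of the div that contains the clicked link
s.eVar4 = jQuery(this).closest("div").attr("id");
// Register the variable so the s.tl() beacon doesn't ignore it
s.linkTrackVars = s.linkTrackVars ? s.linkTrackVars + ",eVar4" : "eVar4";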

Note that I remembered to also add it into s.linkTrackVars, since this is an s.tl beacon (DTM automatically creates s.linkTrackVars for any variables you configure directly in the interface, but can’t know which variables you are adding to the custom code section, so you must be sure to add them to linkTrackVars or linkTrackEvents yourself, or the s.tl() beacon will ignore those variables).

How to get a global “s” object in DTM

At this point in time, by default, DTM creates your analytics object (usually an “s”, as in “s.pageName”) with a local scope. This means it can be referenced from any custom code block within DTM that is tied to your analytics tool, but NOT from code on the page (or in a developer console), or even from non-analytics code blocks in DTM (like Third Party Tags). This can cause some pretty big problems if you aren’t aware.
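
For example, here is roughly how that plays out (a sketch; the variable values are made up):

// In a DTM Analytics custom code block, the local "s" is in scope:
s.eVar1 = "this works";

// But in the browser console, or in a Third Party Tags block in DTM:
console.log(s.pageName); // ReferenceError: s is not defined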

To get around this, you need to define your own s object within your library. This does mean you can’t let DTM manage your library, but the change you need to make is pretty minor. You need a “Custom” configuration, and you’ll need to “Set report suites using custom code below” (since you’re essentially going to be overwriting the “s” object that DTM created, which is where it set your report suites for you).

When you open the editor, add this code to the top:

s = new AppMeasurement(); // no "var", so "s" becomes a global object
if (_satellite.settings.isStaging === true) {
  s.account = "myDevSuite";
} else {
  s.account = "myProdSuite";
}

Make sure to replace “myDevSuite” and “myProdSuite” with the correct report suites; these should match what you have in the interface. This uses _satellite.settings.isStaging to detect whether the staging or production library is currently loaded and set the appropriate report suite.

With that in place, you should be able to access the “s” object from anywhere in DTM or on your page.

UPDATE: Because of a current quirk in DTM where it looks for the H code version of your s.account, I recommend also adding this line below the ones above:

var s_account = s.account;

This should prevent any report suite confusion on s.tl beacons from Direct Call Rules and Event Based Rules.

Why do my Adobe Analytics numbers differ so much from my bit.ly stats?

Link shorteners like bit.ly, goo.gl, and t.co generally keep their own metrics for how many times users click through. Unfortunately, those numbers rarely match any other analytics system’s, including Adobe Analytics. We’ve seen situations where the bit.ly number was as much as 10 times the analytics number. With such a big difference, both numbers look a little suspect. So, what gives?

Bots

First, there’s non-human traffic. Link shorteners’ numbers do not always account for whether the click came from a real live human being or a bot. Goo.gl and bit.ly DO try to eliminate bot traffic, but the list they use to identify bots is likely different from the list Adobe uses (which is maintained by the IAB).

However, link shorteners are often used on Twitter, and bot traffic on Twitter is a whole different beast. Web bots generally crawl pages, sometimes searching for specific things. Adobe and the IAB keep a good list of the worst web-crawling bot offenders. But on Twitter, the environment is more predictable (tweets, unlike webpages, can all be accessed and interacted with through the same methods) and the actions bots perform are simpler: click, retweet, reply. This means many more “hackers” out there can create a bot with much less effort, and since they aren’t doing much visible harm, they fly under the radar of the IAB and other bot-tracking organizations.

I found a book, “TWITTER: The Dark Side – Does Bit.ly Enable a Massive Click Fraud?” (by Roman Latkovic and Robert LaQuay, Ph.D.), which takes a thorough look at this issue. Basically, when you start to break down individual clicks by IP and user agent, it becomes clear many bots are indeed inflating bit.ly tracking.

JavaScript Disabled

Second, there’s JavaScript. The majority of Adobe implementations these days rely on the user’s browser firing JavaScript (some older implementations or mobile implementations may fire a hard-coded beacon, but this is increasingly rare). Link shorteners, however, work WITHOUT JavaScript, meaning devices that have JavaScript disabled would count in bit.ly but NOT in Adobe.

Lost Referrer Information

Third, bit.ly (or Twitter traffic in general) may cause you to lose referrer information. Someone coming from a Twitter app rather than a webpage won’t have a referrer; it may look like “Direct Traffic”. Query string parameters SHOULD always carry through, though, so make sure your shortened links include tracking codes.
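
For example (a made-up link), the long URL behind the short one should already carry the tracking code:

http://bit.ly/1a2b3c → http://www.example.com/landing?cid=twitter_spring_promo

That cid parameter survives the redirect even when the referrer does not.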

Conclusion

With this information in mind, I’d be more surprised to see bit.ly numbers that MATCH an Analytics tool than I am to see wide discrepancies. What’s your experience, and what are your theories behind them?

Beacon Parser again!

Some new updates in the beacon parser:

  • Fixed a bug that prevented heartbeat beacons from being parsed
  • Made it possible to remove beacon columns
  • Added API functionality, which can pull down your report suite IDs and custom variable names using the Adobe Admin API.

A huge thanks to this summit lab session for hints on using JSON/AJAX on the API.

Let me know how it works for you!

Beacon Parser round 3

First, you’ll notice a new look and feel for both the blog and the parser; I’m trying to give them a more uniform look.

Functionally, the biggest changes in this go-around are:

  • Variables and parameters should now keep a clean, sorted order.
  • You can highlight values in older beacons that don’t match the most current beacon.
  • You can change display options before submitting your first beacon.
  • Display options now include friendly names and variable descriptions, taken largely from this Adobe help article.
  • I fixed some bugs with exporting to CSV and the products string.

Setting the Products String in DTM

Enough people have asked for an example of a products string that I decided to post my example page on my server.

On the example page, you can see:

  • How to build a W3C-standard data layer object for a cart with multiple items.
  • How to create a simple s.products string (see the sketch below).
  • How to create more elaborate s.products strings, including merchandising eVars.
  • What the final beacon looks like.
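
As a taste, the simple version might look like this (a sketch only: the data layer follows the W3C draft’s structure, but the exact object names and formatting are illustrative, not the code from my example page):

// Build "category;product;quantity;total price" entries from a
// W3C-style digitalData cart, then join them with commas
var items = digitalData.cart.item;
var products = [];
for (var i = 0; i < items.length; i++) {
  products.push([
    items[i].category.primaryCategory,
    items[i].productInfo.productID,
    items[i].quantity,
    (items[i].price.basePrice * items[i].quantity).toFixed(2)
  ].join(";"));
}
s.products = products.join(",");

For a two-item cart, that might yield something like “Shoes;AB123;2;59.98,Hats;CD456;1;19.99”.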

I hope someone out there finds it helpful!

Beacon Parser: Round One

I’ve had two major frustrations while troubleshooting and doing QA for clients lately:
1. The new POST beacons don’t split up prettily in debuggers (like Charles proxy).
2. Even if they did, it can be hard to compare beacons to each other if they don’t have the same number of variables.
So, as a pet project in the evenings, I’ve been working on a tool to solve for these problems. Today I introduce: the Beacon Parser!

I don’t consider it done yet, but it’s ready for Beta testing, perhaps. I’m sure there are better ways to accomplish what I’ve done (I’m no development genius), but I must say I’m pretty happy with the result. I would love for a few folks to provide feedback, if possible.

Current Features:

  • If you enter multiple beacons, it aligns rows for similar variables across beacons.
  • You can export the table to a CSV file. Note that in certain browsers (including Firefox), you will need to add the “.csv” extension to the downloaded filename.
  • If your beacon includes “varName.”/”.varName” variables (ClickMap, context data), it will concatenate all the currently “open” variables to output one variable/value pair:
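    For instance, context data in a POST beacon arrives with opening and closing markers, something like this (a made-up example):

        c.
          page.
            section=products
          .page
        .c

    The parser recombines those into a single pair: c.page.section = products.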

Features I’m still working on (in order of ambitiousness):

  • I definitely still need to work out some bugs (see bottom of this post).
  • It doesn’t currently work great with incomplete beacons.
  • A “Clear” button.
  • I’d like to sort the variables back to their original order; currently, as you add new beacons that have variables the previous beacons did not, it just adds them to the end.
  • I’d like to have columns for notes on the purpose of the variable, and data quality if there’s anything clearly broken (like character limits being exceeded).
  • I’d love to connect to the Admin API and return the names of custom variables.

I’d also love to create a W3C data layer parser.

Known bugs (already), which I will try to fix sometime this evening:

  • It breaks when the current or referring URL has query params in it. Kind of a big problem.
  • It struggles with some products strings.

Processing Rules vs Satellite: do you want a band-aid or a doctor?

Cross-post: I originally published this on SearchDiscovery’s blog in April 2013.

There is still one big topic from this year’s Adobe Summit standing out in my mind. I attended a few great sessions that discussed Processing Rules, and I had a hard time not interrupting when I heard attendees ask: “When should I use Processing Rules instead of a Tag Management system? Can’t a good Tag Management tool do practically everything Processing Rules can?” Coincidentally, someone asked a very good question on my previous post (the first in a series on moving away from using plugins), wondering how using Satellite to set a ?cid tracking parameter was different from using a Processing Rule within the SiteCatalyst admin settings. Obviously, some clarification is in order.

Do you want to spend your efforts treating the symptoms of a limited implementation, or do you want to heal the disease?

Processing rules, while an awesome tool, cannot replace a good TMS: they have a few functions in common, but the methods, intention, and potential are very different. When people wonder what the difference is, it makes me wonder if sometimes we’re missing the forest because we’re too busy focusing on the trees: the little things that either a processing rule or a tag management system can fix, and the day-to-day headaches that plague those responsible for maintaining a SiteCatalyst implementation. But when you step back and look at the bigger picture, you realize you’ve been looking at the little issues so much that you never were able to push your implementation to the next stage: a stage that doesn’t require as much maintenance, giving you time to actually use the data you’ve worked so hard to get.

This isn’t to say you don’t need to fix the little things; most certainly you do, and both Processing Rules and Tag Management Systems have some things in common to help you do so. But never forget the bigger picture.

Treating the Symptoms

For those not very familiar with Processing Rules, they are a set of conditions and corresponding actions that are applied between data collection and VISTA rules/data processing. For example, you can use a Processing Rule if you want to copy an eVar to a prop without changing your code, set an event whenever a specific eVar has a specific value, or assign a context variable to a variable. Processing rules are free for all Adobe customers, but do require a certification before they can be set up.*
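
The typical flow looks something like this (a sketch; the context variable and eVar are made up):

// Page code sends a friendly context data variable instead of a prop or eVar:
s.contextData['page.section'] = 'products';

// A processing rule in the admin interface then maps it after collection:
//   IF   context data "page.section" is set
//   THEN overwrite value of eVar5 with context data "page.section"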

The Advantages

What can a processing rule offer that a Tag Management System cannot? Here are some ideas we came up with:

  1. Most obviously, they are included for free in your Adobe product. They are already accessible, waiting to be used without any big changes to your overall approach to implementation. (Of course, some might argue that big changes are needed for most organizations out there.) You don’t need to overhaul or fix your current implementation in order to use them.
  2. Processing rules require zero code knowledge: there is no need to touch JavaScript or Regular Expressions. Many Tag Management Systems will claim this “code-less implementation” as well, but we all know that unless you have a simple, well-constructed website and the most basic of reporting requirements, even with a great TMS some code work is necessary. Of course, some Tag Management Systems are less code-heavy than others; Satellite, for example, was designed to need as little code as possible without losing any of the power and flexibility that a little bit of code can give you.
  3. Processing rules provide control in places where there is no access to a tag management tool, such as third-party pages where you can’t change your code to put a Tag Management snippet on.
  4. Processing rules can be enabled without a test cycle, completely independent of IT or QA teams. This could be both a good and a bad thing (a little testing is a good thing before touching live data), but there is no need to prove there are no adverse effects on the page to a nervous IT team, and the changes can be immediate.
  5. Processing rules are set at the report-suite level. In some cases, like if you are multi-suite tagged, it can be useful to have a way to change one data set without changing another.

I’m thrilled that this relatively new piece of SiteCatalyst functionality exists. It’s already been used to solve previously show-stopping problems, to clean up implementations, and to slowly move the whole industry to a not-page-code-dependent place—all good things. They are great for what they are intended for, but they are not a magical pill to heal all that ails your implementation.

How Satellite and Processing Rules differ

  1. Satellite can work with your cookies, DOM, meta tags, JavaScript objects, custom code, or data elements from previous pages.
    A processing rule can only work with the data sent by your code, such as context data or existing SiteCatalyst variables. It can’t pull data out of your cookies, DOM, or meta tags. While it may be easier to tell your developer to set s.contextData['section'] to “products” than to explain what a prop is and which prop is used for what, your developer must still know what must be captured, where to get the values for the variable, and be consistent in HOW it is set across the site. A tag management system allows you to access elements in your page, using your existing code. For instance, Satellite could “grab the value of this form field and put it into eVar40” or “on the search results page, grab the search term out of the header 1 tag in the content” (see the sketch after this list).
  2. Satellite requires no certification, has a support team, and you can focus your learning on the things you need.
    To enable processing rules, you must be certified. The test is free but asks a lot of questions that require a deep knowledge of not just processing rules, but of SiteCatalyst as a whole. I’ve seen clients farm out their processing rules work because it is simpler than training someone to pass the test, or because they don’t have enough confidence in their ability to use the tool and don’t want to mess with real, live data. Of course, this limitation could be a problem even with a TMS: there is a certain amount of knowledge and confidence you must have when making changes to live data, which leads to the next point.
  3. Satellite has flexible, fast testing capabilities.
    Processing rules can be difficult to test before letting them affect live data. You can’t see them being set on your pages with the debugger or Charles, for instance, and it can take a day or two to be sure they are working as intended. You also can’t copy just one rule from one report suite to another, which can add another layer of difficulty to setup and testing.
  4. Satellite can scale for the most complex enterprise needs.
    Each report suite is limited to only 50 processing rule sets. I know that sounds like it should be more than sufficient, but I’ve already seen 2 or 3 clients hit that mark; it’s easy to do, since processing rules currently don’t allow nested conditions (as in “if… then… else…”). Also, if an organization is using many context variables (which the newest AppMeasurement libraries do), this limit can become even more problematic.
  5. Since Satellite code lives client-side, it can work with plugins and other code solutions you already have.
    Depending on other reporting requirements, you may still have plugins that require certain variables like s.campaign to be set client-side so that they can be used in script on the page; for instance, for the cross-visit-participation plugin. Many of these plugins could be worked around with a good TMS, but not with processing rules alone.
  6. Satellite can access any SiteCatalyst variable, including RSIDs.
    Processing rules can’t access or change certain variables and settings, such as the report suite, hit type (page view vs. link), link type (custom, download, or exit link), or products string.
  7. Satellite leaves your implementation visible to existing free tools like Charles and Firebug, and has user roles and workflow so you can manage who has visibility and control of your rules.
    Visibility into processing rules is restricted to the Admin Console in SiteCatalyst. SiteCatalyst users must have admin access to see how they are set, and must know specifically the one report suite where they might be set, since you can’t view how they are set across multiple report suites like you can with your custom variables. This may seem like it isn’t a big deal, but it is definitely a pet peeve of mine to not be able to see how variables are being set, for instance while QAing an implementation.
  8. Satellite allows for regular expressions, giving much greater flexibility. You don’t have to use them, but they’re there if you want them.
    Processing rules don’t currently allow for Regular Expressions. I know I listed this as a perk (“no coding skills needed”), but it can also be very limiting. They do have the standard SiteCatalyst options in the conditions (equals, contains, starts with, is set…), but the actions do not allow you to parse out or match values from a string. For instance (and this is a real-life recent use case), I couldn’t set up a rule to capture only the first 5 digits of the value of a query string parameter (see the sketch after this list).
  9. Satellite can totally change the way your organization approaches digital measurement.
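
To make items 1 and 8 concrete, here is the kind of custom code Satellite enables (rough sketches; the selectors, parameter name, and variable numbers are all made up, and the first two lines assume jQuery is on the page):

// 1. DOM-scraping: grab a form field value, or a search term from an H1
s.eVar40 = jQuery("#searchBox").val();
s.eVar41 = jQuery(".searchResults h1").text();

// 8. Regular expressions: keep only the first 5 digits of a query parameter
var promo = (location.search.match(/[?&]promo=(\d{5})/) || [])[1];
if (promo) { s.eVar22 = promo; }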

In truth, listing the things that processing rules lack really misses the point: Processing Rules are a tool to help Band-Aid (or in some cases expand) existing implementations. And they’re good for that: I’ll freely admit I have used them, and been grateful for them as a Band-Aid. But even when used with context variables to create a clean new implementation, even with many of the above limitations removed, they are still meant to be just a mode of deployment.

Healing What Ails You

We need to get the industry to stop viewing Tag Management as just another “mode of deployment”. As Evan rightfully put it in his post a while back, Tag Management shouldn’t resemble another ticketing system. Satellite was built so that it doesn’t merely fill in the gaps of your SiteCatalyst Deployment Guide. It helps you iteratively rewrite it, from the Business Requirements on down. So while yes, many of my tactical blog posts will be about “treating symptoms”, don’t think for a moment that that is all there is to Satellite. We need to change the game, not just make it easier to play the current game. The business stakeholder shouldn’t have to find someone certified in processing rules to tell them what they can and can’t track. Data collection shouldn’t be so far removed from your website that the user experience doesn’t even factor into the collection process.

So if you find yourself asking “how is this different/better than processing rules”, please reach out and let us show you that both can be good: while they do have a few tasks in common, processing rules are still a method of implementation; Satellite addresses a much bigger picture. It not only makes the day-to-day implementation easier, it can change the way you view digital measurement: the data you can bring in, the questions you can ask, and the actions you can take.

(A big thanks to Bret Gundersen and the folks at Adobe for their insight into processing rules for this post, and for the work they’ve done on that tool.)

* More information on how to set up processing rules can be found on Kevin Rogers’ blog or in the Processing Rules section of the Adobe Marketing Cloud Reference site.