Adobe’s Opt-In Service for Consent Management

This is cross-posted from the 33 Sticks blog.

Adobe’s Opt-In Service helps you decide which Adobe tools should or should not fire, based on the user’s consent preferences. It has some advantages I don’t see folks talk about much:

  1. It manages your Adobe tags (Analytics, ECID, Target) at a global level. If you’ve got it set up right, you don’t need to make any changes to your logic that sets variables and fires beacons (you don’t need a ton of conditions in your TMS)- any rule’s Analytics or Target actions will still run, but not set cookies or send info to Adobe.
  2. When your existing variable-and-beacon-setting logic runs while the user is opted out, Adobe queues that logic up, holding on to it just in case the user does opt in. Any Adobe logic can run, but no cookies are created and no beacons get fired until Adobe’s opt-in service says it’s ok.

This latter point is very important, because in an opt-in situation, consent frequently happens after the initial page landing – the page that has campaign and referrer information. If you miss tracking on that page altogether, you lose that critical traffic source information. Some folks just re-trigger the page load rule, which (depending on how you do it) can mess up your data layer or rule architecture, or at bare minimum add complexity to it. So I’m a big fan of this “queue” of unconsented data.

You can even experiment and see for yourself- this site has a globally-scoped s object, and an already-instantiated Visitor object. Open the developer console, keep an eye on your network tab, and put this in, to opt out of analytics tracking on my site:

adobe.optIn.deny('aa');//opt out analytics

Then run code that would usually fire a beacon:

s.pageName="messing up Jenn's Analytics data?" //you can tell me a joke here, if you want
s.t()

No beacon!

Now let’s pretend you’ve clicked a Cookie Banner and granted consent. Fire this in the console:

adobe.optIn.approve('aa');//opt in analytics

And watch the beacon from earlier appear! Isn’t that magical?

Side note: The variables sent in the beacon will reflect the state at the time the beacon would have fired. I tested this- if I do this sequence:

adobe.optIn.denyAll()
s.prop15="firstValue"
s.t() //no beacon fires because consent was denied
s.prop15="secondValue"
adobe.optIn.approveAll() //now my beacon fires

…the beacon that fires will have prop15 set to “firstValue”- it reflects things at the time the beacon would have fired.

To reiterate: if you have the ECID service set up correctly, it will still look like your Analytics/Target rules fire, but the cookies and beacons won’t happen until consent is granted.

How to Use within Adobe Launch*

Most of Adobe’s documentation assumes you’re using Adobe Launch*, and even then, leaves out some key bits. The Experience Cloud ID extension has what you need to get things set up, but is not the complete solution (you can skip this and go straight to the Update Preferences section below if you want the part that’s not particularly well documented). Let’s walk through it, end-to-end:

Opt-In Required

First, tell Launch when to require Opt-in.

If it’s just “yes” or “no”, then it’s simple enough. If you want to decide dynamically (based on region, for example), then you need to create a separate Data Element that returns true or false. For example:

if(_satellite.getVar("region")=="Georgia"){
     return false //Georgia doesn't care about consent. Yet.
}else{
     return true
}

To be honest, to me it’s always made more sense to just click the “yes” option, then further down, change up people’s default pre opt-in settings based on region.

Store Preferences

I’ll admit, I’ve never used these, but I think they’re self-explanatory enough: the ECID is offering to store the user’s consent settings for you. This seems convenient, but I’ve never seen a setup where the overall CMP solution doesn’t already store the consent preferences somewhere, usually in a way that is not Adobe-specific.

Previous Permissions/Pre-Opt-In Approvals:

The “Previous Permissions” and “Pre-Opt-In Approval” settings work very similarly, but “Pre-Opt-In Approval” is more of a global setting, and “Previous Permissions” is user-specific (and overwrites “Pre-Opt-In Approval”). In other words: Pre-Opt-In Approval settings are used when we don’t have Previous Permissions for the user.

The key here is the format of the object, usually in a Custom Code Data Element, that Adobe is looking for:

{
   aam:true,
   aa:true,
   ecid:true,
   target:true
}

…where true/false for each category would be figured out dynamically based on the user. So your data element might have something like this:

var consent={}
if(_satellite.cookie.get("OptInInfoFromMyCMS").indexOf("0002")!=-1){
    consent.aa=true
    consent.target=true
    consent.ecid=true
}else{
    consent.aa=false
    consent.target=false
    consent.ecid=false
}
return consent
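
If you went the region-based route I mentioned earlier, that same data element could key off region instead of a CMP cookie. A rough sketch, reusing the hypothetical “region” data element from the Opt-In Required example above:

var consent={}
if(_satellite.getVar("region")=="Georgia"){ //hypothetical: Georgia gets opted in to everything by default
    consent.aa=true
    consent.target=true
    consent.ecid=true
}else{
    consent.aa=false
    consent.target=false
    consent.ecid=false
}
return consent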

Important but often missed: updating preferences

All of the above is great for when the ECID extension first loads. But what about when there is a change in the user’s preferences? As far as I know, this can’t be done within the ECID extension interface, and it’s not particularly well documented… this is where you have to use some custom code which is kind of buried in the non-Launch optIn service documentation or directly in the Opt-In reference documentation. (I think we’re up to 4 different Adobe docs at this point?)

When your user’s preferences change, perhaps because they interacted with your banner, there are various methods on adobe.optIn that you can use:

adobe.optIn.approve(categories, shouldWaitForComplete)
adobe.optIn.deny(categories, shouldWaitForComplete)
adobe.optIn.approveAll();
adobe.optIn.denyAll();
adobe.optIn.complete(); //have adobe register your changes if you set shouldWaitForComplete to true

The “shouldWaitForComplete” bit is a boolean- if true, then Adobe won’t register your changes until you fire adobe.optIn.complete(). If false, or omitted, then Adobe will recognize those changes immediately.

So I might have a rule in Launch that fires when the user clicks “approve all” on my banner (or, if I’m using OneTrust, within their OptanonWrapper function), with a custom code action that looks something like this:

adobe.optIn.approve('target',true);//opt in target 
adobe.optIn.approve('aa',true);//opt in analytics
adobe.optIn.approve('ecid',true);//opt in the visitor ID service
adobe.optIn.complete(); 

Which could also be accomplished like this:

adobe.optIn.approve(['target','aa','ecid']);

Or even just this (in my tests, this did not require adobe.optIn.complete()):

adobe.optIn.approveAll();

What if I’m Not Using Launch?

If you’re not using the Launch ECID extension, then the initial stuff- the stuff that WOULD be in the extension, as detailed above- is all handled when you instantiate the Visitor object. This is all covered in Adobe’s Opt-In Service documentation, which I personally found a bit confusing, so I’ll go through it a bit here.

As far as I can tell, this bit from their documentation:

adobe.OptInCategories = {
    AAM: "aam",
    TARGET: "target",
    ANALYTICS: "aa",
    ECID: "ecid",

};

// FORMAT: Object<adobe.OptInCategories enum: boolean>
var preOptInApprovalsConfig = {};
preOptInApprovalsConfig[adobe.OptInCategories.ANALYTICS] = true;

// FORMAT: Object<adobe.OptInCategories enum: boolean>
// If you are storing the OptIn permissions on your side (in a cookie you manage or in a CMP),
// you have to provide those permissions through the previousPermissions config.
// previousPermissions will overwrite preOptInApprovals.
var previousPermissionsConfig = {};
previousPermissionsConfig[adobe.OptInCategories.AAM] = true;
previousPermissionsConfig[adobe.OptInCategories.ANALYTICS] = false;

Visitor.getInstance("YOUR_ORG_ID", {
    "doesOptInApply": true, // NOTE: This can be a function that evaluates to true or false.
    "preOptInApprovals": preOptInApprovalsConfig,
    "previousPermissions": previousPermissionsConfig,
    "isOptInStorageEnabled": true
});

Could be simplified down to this, if we skip the extra OptInCategory mapping and the preOptInApprovalsConfig and previousPermissionsConfig objects:

Visitor.getInstance("YOUR_ORG_ID", {
    "doesOptInApply": true, 
    "preOptInApprovals":{
         "aa":true,
     },
    "previousPermissions": {
         "aam":true,
         "aa":false,
     },
    "isOptInStorageEnabled": true
});

Literally, that’s it. That’s what that whole chunk of the code in the documentation is trying to get you to do.

Beyond that, updating preferences happens just like it would within Launch, as detailed above.

Side note- what’s up with adobe.OptInCategories?

I’m not sure why Adobe has that adobe.OptInCategories object in their docs. These two lines of code have the same effect, but one seems much more straightforward to me:

previousPermissionsConfig[adobe.OptInCategories.AAM] = true;
previousPermissionsConfig["aam"] = true;

The adobe.OptInCategories mapping is set by Adobe- despite it being in their code example, you don’t need to define it; it comes predefined.

I’m guessing this was to make it so folks didn’t have to know that analytics is abbreviated as “aa”, but… if you find the extra layer doesn’t add much, you can bypass it entirely if you want.

Conclusion

Once I understood it, I came to really like how Adobe handles consent.

I by no means consider myself an expert, and I’m not trying to make a comprehensive guide. But the documentation is lacking or spread out, so most of this took some trial and error to really understand, and I’m hoping my findings will prevent others from having to do that same trial and error. That said, I’d love to hear if folks have experienced something different, or have any best practices or gotchas they feel I left out.

*still not calling it “Adobe Experience Platform Data Collection Tags”

The Adobe Launch “Rule Sandwich”

I’ve mentioned before how much I love that Launch* can “stack” all sorts of rules- by which I mean multiple rules of varying scopes can combine together into a single analytics beacon.

This means (using an example from that old post) I can have a global rule that sets my universal variables, a rule for search results, a rule for filters, a rule for null search results, and a rule that fires the beacon, all resulting in a single beacon where all the variables might be coming from a different rule. I call this a “Rule Sandwich”:

That particular example may be a bit overkill, but this ability to divide up your variables by scope can be key to a scalable implementation.

As a programmer, there are some coding best practices we should all try to follow:

  • Don’t Repeat Yourself (DRY): If you can find a single place to set a variable, do it. Don’t set site section in every page that has a site section- set it in a global place that can dynamically set the right value from the data layer.
  • Keep It Simple, Stupid (KISS): Daisy-chaining Direct Call Rules can quickly complicate an implementation and introduce multiple points of failure. Huge Switch statements in code can make it hard to find specific dimensions or events.
  • Principle of Least Astonishment: People who encounter your setup shouldn’t have any surprises. If the next person who signs in to your Launch property says “oh, wow” or needs a “eureka” moment, you may have over-engineered things.

Following those principles, here’s the “Sandwich” setup I most like to use:

The bottom slice: Global Variables

My first rule sets my universal variables- things like site section, campaign, login state, user ID, etc. Stuff where I can say “if it exists in the data layer, I want it in my beacon”. It is set up to fire on all triggers that might end up in an analytics beacon. If you’re using the Adobe Client Data Layer extension and all your rules are triggered by your ACDL data layer, then this can be pretty simple, thanks to “Listen to All Events”:

But I’ve also had setups like this:

I add “#1” to the trigger name and rule name to indicate that these are triggered with a rule order of 1:

Which means for any given trigger**, this will be the first rule to fire. In this rule, I set any variables that can be applied globally- usually things like page name, site section, page type, language, etc.

**A note about sequencing: the order for rules only applies when rules share a common trigger. If you have a rule that fires on DOM Ready, and a rule that fires on a “page view” event in your data layer when DOM Ready occurs, even if they happen at the exact same time, Launch won’t compare their order to see which comes first. The rules would either both have to be triggered on DOM Ready, or both triggered on the “page view” event.

The Analytics extension does a great job of keeping everything in sequence. In other words, if I have a rule with an order of 1 that sets analytics variables I can be confident that a rule with the same trigger with an order of 50 will apply its analytics variables only after the first rule has finished. (However, Custom Code blocks will not wait for earlier-ordered custom code blocks from other rules to finish. If I have a rule with an order of 1 that runs some script in a custom code action, I can’t know for sure that script will have finished before code in a separate rule with an order of 50 happens.)

You might ask “Why don’t you use the Adobe Analytics Extension’s global variables?”

The problem with these is they only evaluate once, when the extension first kicks in on page load (go upvote my idea if you, too, want it to behave differently). So if you clear variables, they disappear and won’t come back until the next page load. And if you try to pass new values- maybe the language preference changed, or maybe you have a SPA and need to pass a new page name- the extension won’t pick those up.

Many folks use the doPlugins function for this purpose, because it fires on every beacon (I’ve included a rough sketch of that approach after this list). There are a few reasons I avoid it when I can:

  1. I try to keep my variables out of code as much as possible- the more transparency in the interface, the better.
  2. I try to keep doPlugins light. With default settings, it runs on every click, whether that click results in a beacon or not.
  3. doPlugins is the LAST thing to fire before a beacon is sent to Adobe, meaning any customization I’ve done in other rules might be overwritten.
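
For reference, here’s a rough sketch of what that doPlugins approach tends to look like (the eVar and data element here are hypothetical placeholders):

s.usePlugins=true
s.doPlugins=function(s){
    //runs right before every beacon (and, with default settings, on every click)
    s.eVar10=_satellite.getVar("login state") //hypothetical data element
}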

Which brings me back to my #1 rule that fires on all of my common triggers. I set my global variables, and that’s it- no sending a beacon (yet).

The middle of the sandwich- the “ingredients”

Next come all my sandwich ingredients: the rules that set variables based on more specific scenarios. Any logic that needs to fire only under certain conditions goes here. It’s where I set events, set any user-action-specific dimension, and hard code any values.

These rules might be triggered by a page view under certain conditions (like “pageType of article”), or based on a specific event or element interaction.

So I might have a rule that fires on page view if the page type is equal to “article”:

(In theory, I could set something like content.title in my global variables rule, because it will only set if the data layer currently has a value, but I’d rather keep all my blog stuff together and not make my global rule evaluate data elements unless they’re needed.)

I may also use these rules to customize/fix anything that had been set in my global variables. In an ideal world, data layers would be perfect and we’d never have to “fix” anything in Launch. But this isn’t that ideal world. My global variables might set the page type as “search results”, then later I realize our other search results page has a page type of “search”. Obviously, I should go to the devs and ask them to make it consistent. But in the meantime, I can use an “ingredient” rule to overwrite what was set in my global variables rule.

Not everything merits its own rule. If the scope only calls for a single extra event to be set, I could probably just do that in my global rule with a conditional JavaScript statement. But something like Purchase Confirmation, which has its own product string, purchaseID, etc… that makes sense as an “ingredients” rule. I try to find a balance between not having too many rules, and not having any single rule be super complicated.
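
For that “single extra event” case, the conditional statement in my global rule’s custom code could be as simple as this (the event number is a hypothetical placeholder):

if(_satellite.getVar("page type")=="article"){
    s.events=s.events?s.events+",event10":"event10" //only add the event in this one scenario
}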

The top slice: Send Beacon and Clear Vars

For any situation where many rules might all contribute to the same beacon- for example, page views- you can have the top piece of the sandwich as a rule with an order of 100, which fires the beacon and clears the variables.

If you have global page view variables- stuff you want on all page views but not necessarily all beacons- you could also put them here.

Note, splitting the “set variables” (our ingredients) and the “send beacon” (our top slice) like this makes the most sense in a situation where you’ll have many potential rules all needing a beacon. Page Views is the best use case. Having a single separate “Send Beacon” rule means that if you have a few dozen page-specific Page Load rules, you don’t have to add the beacon and clear vars actions to all of them. It’s a time-saver, and a way to guarantee that, whatever the combination of ingredient rules, a single beacon will fire.

But in cases where there’s just one rule contributing to a beacon- something like a certain button click, for instance- then there isn’t much advantage to having a single “set variables” rule and a separate single “send beacon/clear vars” rule. You can combine your “ingredients” and your “top slice” into a single rule, like this:

Clear Variables makes sure that variables (and particularly events) from earlier beacons won’t accidentally get attached to something they don’t belong to. Different folks handle “Clear Variables” differently. Some people set it at the beginning of rules (or sequences of rules). I find it easiest to always set it any time I fire a beacon, so I don’t have to worry about where in the sequence of things I’m clearing my variables.

Put It All Together (Examples)

Following this approach, my rules list might look like this:

(I tend to not put “#50” in the name of rules- it’s the defaultiest option; it can be assumed if no other order is specified. I also don’t specify “clear vars” in rule names anywhere- to me, it’s implied with any beacon being sent. See my post about rule naming conventions.)

On my search results page (which fires on a “page view” ACDL event where page type=”search results”), these rules would fire, in this order:

  • All Events | ACDL Any Event | Analytics: Set global vars #1
  • Search Result Views | ACDL Page View | Analytics: Set vars (#50)
  • All Page Views | ACDL Page View | Analytics: Send s.t, clear vars #100

On a Search Filter click, the following would fire:

  • All Events | ACDL Any Event | Analytics: Set global vars #1
  • Search Filters | ACDL Search Filter | Analytics: Set vars, send s.tl beacon (#50)

Conclusion

There are, of course, many “correct” ways to architect a TMS; this is just the one that has worked best for me across many organizations, particularly if they have a reliable event-driven data layer. I will say, it can get complicated if you’re using a wide variety of rule triggers, like page bottom, clicks on this but NOT clicks on that, form submission, element enters viewport, etc.- the “Top Slice”/Global Vars rule’s list of triggers can get very unwieldy. And sadly, if you have a very Direct-Call-Rule-based implementation, since there is no out-of-the-box “fire on all direct calls”, you may find that resorting to doPlugins is the simplest, most scalable route. Your mileage may vary.

I’d love to hear what has worked for others!

No AI was used in this post. Images are good old-fashioned paid stock images.

*I’m still not going to call it “Adobe Experience Platform Data Collection Tags”.

Problems with the “Autoblock” approach to Consent Management

In my post on setting up OneTrust, I mention how problematic Autoblock can be, but it comes up often enough that I decided it merited its own post, because enabling any Consent Management Platform (CMP)’s Autoblock functionality is a big deal, with far-reaching implications.


There are two approaches to consent management:

  1. Use your Consent Management Platform to create banners and keep track of users’ consent status. The CMP doesn’t affect what tags fire at all, it merely keeps track of users’ consent. Then you use your Tag Manager to get info from the CMP about what has been consented, then use conditions in your marketing tag rules, and the ECID opt-in service for Analytics and Target, to determine what tags are ok to fire.
  2. Use the Consent Management Platform to create banners, track users’ consent status, and block any files or cookies that try to load without consent. Your Tag Management System has essentially no control. You don’t have to mess with extra conditions in Launch (which some see as a perk), but someone has to maintain the CMP so it knows which scripts and cookies to allow to fire.

“Autoblock” is what we call that CMP functionality that proactively blocks any non-consented technology from loading. Instead of counting on you to manage which vendor code gets to run, it lets you run all your tag code, then it just blocks any non-consented files or cookies from actually loading.

While that post was specific to OneTrust, this is actually relevant for most Consent Management Platforms. I know for sure Cookiebot, Osano, Trustarc and Ensighten can use it, and I’m pretty sure every other CMP does too, because it lets their salespeople say that setting up their tool “takes no effort at all!”

Call it what it is: consent management is complicated

We often get ourselves in trouble in our industry by thinking a simple solution will solve a complicated problem. But simple solutions often require complicated workarounds to actually work in the real world. Going with the “simple” solution may actually create more work and more points of failure.

The alternative to using Autoblock is to use the consent preferences gathered by your CMP to create conditions in your TMS to decide which vendor’s code gets to run. This sounds like more work, and sounds more prone to human error, but after reading this post I hope you’ll understand that Autoblock doesn’t remove human effort (and the possibility for error), it just shifts it.

Part of the complexity comes from the fact that for all CMPs I’ve worked with, Autoblock doesn’t just block cookies, it blocks scripts that might be setting cookies. The theory behind this is sound enough: privacy compliance shouldn’t just be about cookies, it should be about data collection. Lots of tracking can happen without cookies, and cookie names can change even if script locations do not. If a user hasn’t consented to Analytics tracking, we don’t want to just block the Adobe AMCV cookie, we want to block the AppMeasurement script from trying to identify them and build beacons about their user experience. So even though everyone talks about CMPs in terms of cookies, and even most consent banners only really mention cookies, we’re usually talking about much more. Just to drive it home, because this is key to understanding how CMPs work: CMPs affect scripts AND cookies.

Also, Autoblock is a binary thing: though you can have a “hybrid” approach that uses both TMS conditions and Autoblock, Autoblock is either on (and potentially affecting everything) or off (leaving it entirely up to your TMS).

Now that we’re clear on that, let’s talk about reasons to avoid Autoblock (regardless of which Consent Management Platform you use). Each CMP implements Autoblock slightly differently, and some approaches work better than others, but they all have some common pitfalls.

Complication #1: Many Points of Failure

As I said in my other post, most problems I’ve seen with CMPs have been because of Autoblock functionality. In fact, I have only seen one deployment where it didn’t cause any major problems. Even though consent management is NOT my main role, I somehow keep hearing about Autoblock issues:

  • In a client’s Osano POC, we saw consented analytics beacons firing in duplicate or triplicate with no explanation, inflating our Analytics data.
  • That same POC caused a bunch of JavaScript errors, even on site-essential scripts, because Autoblock was changing the order things loaded in or the speed in which they loaded.
  • One org I know of lost 2 months of consented Adobe Analytics data because of a misconfiguration in Ensighten. They didn’t lose ALL data, or they would have spotted the problem sooner- they just thought they had abysmally low opt-in rates, not that 30% of their data was blocked even when users opted in.
  • Another org using Cookiebot has a consumer lawsuit on their hands because an old Adobe visitorApi.js file somehow snuck through their Autoblock.
  • Just now, trying to test out just how OneTrust would handle AudioEye, nothing I did could get my AudioEye Script to fire if Autoblock was on. (The problem may very well have been user error- I didn’t spend too much time troubleshooting.)
  • A week or two back, I was brought into a call with a non-client to discuss a timing issue that was causing cookies to be created even with Autoblock on. The vendor had suggested a manual workaround that involved writing a lot of custom code to manually manipulate cookies.
  • The next day, someone else reached out in response to my OneTrust blog post to let me know they had just discovered that because of an issue with hashing, their Autoblock file had exploded in size, causing a sudden traffic deflation issue and impacting their page performance for months. It took a lot of resources to track down the issue, and even then they might not have spotted it if they hadn’t been running regular Observepoint scans.

Time and time again, I see troubleshooting and maintaining Autoblock taking more resources than a TMS-condition-based approach would. Again, I’ll admit a number of these problems could have been from user error. But users do err.

Complication #2: Cookie/script inventory management

Regardless of the CMP, Autoblock functionality requires some sort of inventory of all the cookies and scripts your marTech vendors may be using, so that it can know what to block and what to allow. This is often where most of the “work” of maintaining a CMP comes from. There is usually some list of all possible cookies on your site that someone is going to have to sort through and maintain. Such lists can take a lot of effort and knowledge to maintain. I’ve seen these inventory lists get hundreds of items long and require weekly upkeep, forcing orgs to shift their efforts from making sure they’re using their data in compliant ways, to doing detective work on where obscure cookies like “ltkpopup-session-depth” come from and whether or not they are necessary for your site to function.

Some CMPs allow you to filter this list down to the user, so by clicking through somewhere on your banner, the user can see documentation on all the types of cookies possibly set on your site:

I am not a lawyer, but a bit of research seems to suggest this is not legally required by GDPR or any of the other major regulations (though I’ll admit there may be some reasons lawyers want you to). In fact, from my non-lawyerly view, such lists may be a liability, because keeping one 100% accurate would be very difficult for most sites. So I don’t actually see this in-depth public-facing cookie documentation as a worthy benefit.

Most CMPs (including OneTrust) will scan your site and let you know what cookies and scripts it found. Some may fill in some knowledge- for example, OneTrust’s Cookiepedia knows that “_gcl_aw” is likely tied to Adwords and belongs in the Marketing category- but others may rely entirely on you to categorize each cookie appropriately. Some CMPs may only look at cookies and scripts set by your site; others (like one TMS/CMP that rhymes with “enlighten”) may make you categorize all cookies and scripts that a user’s browser extensions may be bringing with them- things that have nothing to do with your site (or what you’re liable for).

One reason keeping such an inventory is so difficult is the often-unclear relationship between scripts/cookies and what purpose they serve. The marTech vendors don’t make it easy- because of acquisitions, concern about optics, or any number of other reasons, most vendors use multiple domains and set multiple cookies which often don’t have a clear association with their vendor. As part of my Tagging Overload presentation last summer, I made an inventory of 200+ common domain/vendor associations but even that is far from comprehensive (and probably far out of date by now). And if cookies are set on your own domain, it can be even harder to know if they require consent, or if your own developers use them to make your site function properly. Miscategorization could mean a lawsuit, or it could mean broken website functionality.

Let’s take two examples:

Many sites use a tool called AudioEye, which makes a site more accessible (for instance, by making it easier for screen readers to describe content to blind users), helping it stay compliant with the Americans with Disabilities Act (ADA). I’m not a lawyer, but a strong case could be made for AudioEye being considered Strictly Necessary and therefore not requiring opt-in.

To add AudioEye to my site, I essentially just add a couple lines of code that reference a single .js file (https://ws.audioeye.com/ae.js). That file then brings in a bunch of other files:

Between those scripts, a few cookies get set. OneTrust picked up these third-party cookies in its scans:

(Though it did not seem to pick up on the first-party _aeaid cookie, so I may need to flag that myself.) These cookies and domains are not currently in OneTrust’s Cookiepedia, and/or are incorrectly categorized as Marketing cookies. So now, instead of keeping track of the single line of code I added to the site, I’ve got a dozen+ JavaScript files, 5 third-party cookies, and 1 first-party cookie, for a single vendor. I’m going to need to make sure that Autoblock knows that these are all “Strictly Necessary”, not for marketing or targeting.

Let’s look at Facebook for a Marketing Pixel example next: the pixel script they give me loads one JS file, fbevents.js. But that file actually loads another file from “connect.facebook.net/signals/config/”. On the site I’m currently looking at, between those two files, Facebook sets four third-party “.facebook.com” cookies (“usida”, “fr”, “wd”, and “datr”) and one first-party cookie (“_fbp”).

If I’m relying on my TMS conditions to manage tags based on consent (instead of Autoblock), I can nip that all in the bud by never letting the code that loads that fbevents.js file run in the first place. I don’t need to know that Facebook uses multiple JS files and sets a _fbp cookie on my domain- I just don’t run the script that might set any cookies to begin with.
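
In Launch, that can be as simple as a condition on my Facebook rule- either a Value Comparison condition in the interface, or a Custom Code condition like this rough sketch (assuming a data element that reflects the CMP’s marketing-category consent, like the one I walk through in my OneTrust post):

return _satellite.getVar("onetrust consent: marketing")==="true"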

By contrast, with most Autoblock setups, the CMP might let me know it found two FB scripts, 4 third-party cookies and 1 first-party cookie on my site. It might suggest categorizations for known cookies for me (OneTrust is pretty good about that; some other CMPs are not), or it may count on me to know which categories they belong to, and which cookies are “strictly necessary” or not. I’d need to keep this list updated- whether or not I’m changing the tags on my site, Facebook might start setting cookies under a different name.

From what I can tell (because it’s not transparent in the interface), OneTrust then finds the scripts setting those cookies and blocks those scripts. I can’t really categorize the scripts themselves in the OneTrust interface. While other CMPs may focus on the scripts and not the cookies, OneTrust’s cookie-centric approach leaves me with very little control in the interface over which scripts I want to have fire.

(To be fair, if Autoblock blocked the fbevents.js file, it would stop all the rest from happening… but the cookie inventories seldom understand that nuance, so you still have 5+ cookies and 2+ scripts to categorize.)

But even once this list is in place and correct, you still may need to do more work to get it to work with your site.

Some CMPs, like OneTrust, use a one-size-fits-all approach: things are either “strictly necessary” or they get blocked. You can categorize cookies as strictly necessary or not, but as far as I can tell, there is nowhere within the OneTrust interface to tell it a script is necessary (and if my AudioEye JavaScript is blocked, then it doesn’t matter that I’m allowing AudioEye to set cookies). Instead, if I need to allow a script, I can add a custom attribute to the HTML of the tag:

<script data-ot-ignore src="examplescript.com/javascript.js"></script>

(This goes to show: never believe a vendor that says you can implement their solution without developer work. It’s too good to be true that you could become compliant, not break your site, and not have to talk to a developer.)

Govern Tags, Not Cookies

You can be privacy-compliant without a complete list of third-party cookies. Don’t get me wrong, such a list could be handy, especially if you can keep it accurate and up-to-date. And by all means, you should have some sort of inventory of the tags on your site.

If you know what tags you are firing on your site, you can stop tracking at the source, by stopping those tags. Would you rather take the time to have a condition on your Salesforce tag in your TMS that stops it from ever firing, or take the time to find out that cookies on “cdn.krxd.net” come from Salesforce? There’s no magic bullet solution, either way.

Complication #3: Ownership

Most companies are struggling to find the right owner for maintaining a CMP. Lawyers know the regulations but may have no clue what an AMCV cookie does; marketers may know that they are using TradeDesk for advertising, but have no idea that it uses the domain adsrvr.org; developers may know that the third-party scripts from fonts.googleapis.com affect the visual appearance of the site and have nothing to do with marketing, but have no clue about Facebook cookies.

I won’t pretend that moving away from Autoblock will clear up all the ownership issues. BUT, when the person deploying the tag (in the TMS) also has primary control of not firing the tag (via conditions in the TMS), it does simplify things.

Complication #4: No Nuance in Categories

For OneTrust at least, Autoblock is all-or-nothing; from what I understand, there are no categories beyond “strictly necessary” and “not strictly necessary”. Someone can’t opt for Analytics tracking but no Marketing tracking.

Complication #5: Timing

If you don’t have OneTrust’s library loading early enough, it may not be there in time to block early-loading scripts and cookies, leaving you with incomplete coverage. TMS conditions, on the other hand, can have some defaults built in that don’t rely on the OneTrust script having fully loaded (I write about one way to do that in my other post).

Complication #6: It can needlessly lose Adobe Data by not taking advantage of the Adobe Opt-In Service.

For Adobe Analytics tracking in an opt-in-only scenario, Autoblock limits the advantages of using the ECID Opt-In Service. The ECID Opt-In service doesn’t just kill any tracking until you have consent- it kind of holds “preconsented” tracking to the side, so once the user DOES consent, you don’t lose the data that would have been sent when the page loaded. If you’re relying only on Autoblock, you can never get that pre-consented data back. You’d need to re-run any logic that would have fired on your page view.

Complication #7: Page Performance

Autoblock has the potential to have a major impact on page performance, though this varies widely by CMP vendor. Depending on how they work, they may be adding a “gate keeping process” to every asset that loads on your site.

I haven’t tested OneTrust, but I did test another CMP’s autoblock feature and found it slowed the page’s load time by up to 40%! (To be fair, I do think it was an inferior tool and hopefully other CMPs aren’t that bad.)

This is a very hard thing to test- did any changes in performance come about because of tags/scripts not firing, or because of the inherent weight of the CMP library, or because of Autoblock? I’d love to hear if anyone else has found a good way to test or observed any changes in performance based on Autoblock functionality.
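
One crude starting point, if you can toggle Autoblock between test environments, is comparing basic load timings in the console on the same page with it on and off (this only scratches the surface, but it’s better than eyeballing it):

var nav=performance.getEntriesByType("navigation")[0]
console.log("DOM content loaded: "+Math.round(nav.domContentLoadedEventEnd)+"ms")
console.log("Load event finished: "+Math.round(nav.loadEventEnd)+"ms")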

On the other hand, one case FOR Autoblock:

There is at least one situation in which you may need to rely on Autoblock: if you have any scripts or cookies you need to block that are NOT managed by your TMS. In order to use a condition-based approach, you have to be able to add some conditional logic to every script that needs consent. Autoblock doesn’t care if a script comes from your TMS or not- it applies to everything. If you are not confident that your TMS has full control over all tags that need consent, Autoblock may be for you.

Conclusion

I am not saying no one should use Autoblock. It has its uses. Just make the decision with a full understanding of the implications, and an awareness of what other options may look like.

I’m curious to hear others’ thoughts and experiences.

Silly Launch Builder Chime

I spend way too much time on a daily basis waiting for Launch libraries to build- on particularly bloated libraries, I’ve had it take up to two minutes. And, with my ADHD, I get too easily distracted during that wait time. It’s a major productivity killer.

So I figured out I can throw this code in as a Live Expression in my Chrome Console:

//only run on Launch (data collection) pages
if (document.location.href.indexOf("data-collection/tags/") != -1 || document.location.href.indexOf("reactor-lens/tags/") != -1) {
  window.launchChimer = window.launchChimer || false;
  if (document.querySelectorAll(".library .spectrum-CircleLoader.spectrum-CircleLoader--small").length > 0 && !window.launchChimer) {
    //the loading spinner just appeared: remember that a build is in progress
    window.launchChimer = true
  } else if (document.querySelectorAll(".library .spectrum-CircleLoader.spectrum-CircleLoader--small").length == 0 && window.launchChimer == true) {
    //the spinner is gone: the build finished, so play the chime
    var snd = new Audio("http://soundbible.com/grab.php?id=1599&type=wav");
    snd.play();
    window.launchChimer = false
  }
}

Like this:

And it chimes when the loading icon stops spinning.

I got the sound file from soundbible– you could replace it with whatever you want. Originally I had it pull from a sound that already exists on any mac (as found on stack overflow) but it made the script ugly and long.

If it isn’t working for you, double check which frame your Live Expression is trying to be on. For me, it automatically went to the “Main Content” frame, which is where it works best, but it sounds like for some folks, you may need to manually select it:

There are better ways to do this. I know this. If I had more hours in the day, I could even make a decent Chrome extension out of it. But this works for now, and I’ve already held on to this for months without blogging it, waiting for time to make it cooler, and it would appear that time is never going to come, so… here’s my imperfect solution.

Consider this a beta. I fully expect something about it will not work in some situation. Please let me know how it goes for you. Or if anyone wants to take it and improve upon it, I’d appreciate it!

How I got OneTrust to Work with Adobe Launch

Disclaimer: I’m not a lawyer. Please don’t take any of this post as legal advice. I claim no responsibility for how the information in this post is used. Each organization needs to work with their lawyers to ensure their setup is compliant. Please test thoroughly and often to make sure your OneTrust set up is working as expected.

You’ll note I’m not calling this “The Best Way to set up OneTrust in Launch”, because I don’t know if this IS the best way; it’s just the only way I could get it all to work in my particular situation. If folks have found other potentially better ways I’d love to exchange knowledge! I will be updating this post if and/or when I learn new things.

To be honest, this is one of the scarier posts I’ve written (and not just for legal reasons)- I fully expect someone to say “this is a really weird way of doing it” (heaven knows I’ve said to myself “there must be a better way”). And I do know there are probably a few places where I could simplify/optimize. But I’ve talked to a lot of different people trying to set up OneTrust, and no one has ever been able to give me a different/better way yet, and I’m getting asked weekly how I’ve managed to get it to work, so here goes!

While this post is very specific to Adobe Launch*, a lot of the script and logic could apply to any TMS/Adobe Analytics setup. Some of it could even apply to other Consent Management Platforms (CMPs).

This post is long, and while the topic is complicated, much of what I’ve included is just for reference, so please don’t feel overwhelmed. We’ve got this.

Also, don’t be evil. I primarily work with first-party analytics (ie, Adobe Analytics), and I have no ethical compunctions about doing so. The only impact we have on the user is we improve website experiences (and I don’t mean that in a “we show better ads” way… I mean our focus is the actual site user experience). That said, I don’t want to “trick” the user into allowing it, and I want them to be informed about it. But consent management (and tag management) also applies to stuff that can be very ethically (not to mention legally) questionable. Prioritize your users’ privacy. Keep them informed. Remember, most regulations don’t care specifically about cookies; they care about what you are doing with users’ data (which often includes but is not limited to cookies). There is no “getting around” requests for privacy. (That goes for server-side tracking, too, by the way.)

My Goals With This Setup (Opt-In and Opt-Out)

I’ve worked with OneTrust*** and Launch a fair amount, but for one project in particular, I had nearly complete ownership of it- that’s the project I’m basing this post on.

Their setup had to work for a Launch property that has users throughout the US, as well as in a few EU countries, meaning we needed to account for situations where we can track only if the user has opted in (GDPR), AND situations where we can track until the user has opted out (most non-GDPR regulations, like California’s CCPA**).

A diagram showing groups of people before and after being presented with two types of prompts: one which opts the user in to tracking (GDPR-style), and another that allows them to opt out. Colors indicate which groups get to be tracked.

We’ve set up two OneTrust templates and two Geolocation rules. Fortunately, OneTrust uses geolocation to tell which banner they need to show the user, and whether they should be considered opted-in or opted-out by default. Unfortunately, on the very first page view, if OneTrust hasn’t had a full chance to run its script and define those categories, you can have timing issues, so we’ll have to work around that in this solution.

Some organizations choose to just treat EVERYONE as opt-in-only, including in the US where that is not legally necessary. This definitely is the legally safest choice but given that most orgs get opt-in rates of 30-50%, you may end up losing 50-70% of your data. I’ve seen one site in the UK that only had 11% of users opt in, though if you optimize your banner and really think through how you present information to your users, you should be able to get it much much higher. Either way, our goal here was to opt-in as many people as we legally/ethically could.

OneTrust Setup

I’m not going to get into too much detail here about the actual configuration in OneTrust; OneTrust’s documentation is pretty thorough (though guilty of the atrocity of requiring you to log in to access it***) and this post is more than long enough already. This whole post assumes you have an entirely correct and compliant setup in OneTrust, including that you’ve correctly created your “Categorizations”, you’ve created banners (though I do have tips for working with their CSS- ping me on twitter or slack if it’s giving you trouble), and you’ve made sure your Geolocation rules are set up correctly (make sure you check the “Do Not Track” and “Global Privacy Control” options!)

A screenshot from OneTrust's geolocation rules showing where you can check boxes to honor Do Not Track or Global Privacy Control.

Don’t use Autoblock if you can help it

As you publish a script in OneTrust, it gives you an option to toggle on Automatic Blocking of Cookies, also known as Autoblock:

A screenshot of the toggle in OneTrust's publishing flow which allows you to enable Automatic Blocking of Cookies

Autoblock basically allows itself to become a middle man between your user and anything your site is trying to load. Everything gets routed through it and it gets to choose which requests to block or allow, based on what the user has consented to. This may be tempting… in theory, you could leave all of your tags and cookies in OneTrust’s hands and not have to worry about setting up conditions in your TMS. (I’ve heard a few people mention how much this feels like when TMSes were touted as “one simple line of code that does all the work for you!”)

But please consider it only as a last resort! Yes, turning off Autoblock and instead using conditions to decide which tags to fire looks like a lot more work, but it’s more flexible, and gives you more control and visibility. (And, to be honest, getting Autoblock to work the way you want may end up being more work than you expect, anyways.) I had a whole section of this post devoted to Autoblock, but it was enough it merited a post of its own. So please, before deciding to use Autoblock, get enough information to make an informed decision.

Why not use OneTrust to block Launch altogether?

This is another kind of lazy (or overzealous) solution I’ve seen folks try: just don’t load your TMS library at all until the user has consented to some tracking. Certainly, it is a fairly safe approach (assuming you don’t have anything on your site, outside of your TMS, that needs consent). It really limits your flexibility, though. Especially if you have different categories for consent, you may want to fire analytics (performance category) but not Doubleclick (marketing/targeting category). Or you may have script in your TMS that doesn’t require consent… for instance, I know of one site that uses Launch to deploy some ADA compliance updates (hardly a best practice, but it happens). Or your TMS may have code in it that helps keep you compliant by managing cookies, etc.

In an opt-out scenario, such an approach seems a bit silly- by the time they opt out, your TMS has already loaded and may continue tracking link clicks, etc.

In an opt-in/GDPR scenario, there is a key reason I don’t like this approach: the ECID Opt-In Service, when set up properly, is very good at holding on to tracking until there is consent.

So if my page loads, and we don’t have consent, but my Page Load rule normally would have set variables and fire a beacon… those actions get queued up, so to speak, for once the user DOES consent. Once they click the “allow all” button, my Page Load rule doesn’t re-run, but the things it would have done initially, like sending page view data to Adobe, finally get to fire. That way, you don’t have to either fire a new rule after acceptance, or wait until the next page view, to get information about that first critical page view (which often includes things like Campaign tracking codes or session referrer information).

Why not use the OneTrust Launch Extension?

I hate to say it, but the OneTrust extension doesn’t offer many advantages***. There is still enough you have to set up and tweak outside of it that in the end, you may as well just skip using it. Not using it keeps everything transparent, too, which I like.

Global “True” Page Top Rule

The setup we used needed a rule that runs before anything else possibly even starts. This is particularly important if you are firing your OneTrust library from within Launch, but you may want it either way, to set up compliance logic before anything else fires.

We didn’t even use “Page Top” as the trigger; rather, we used a little trick that ensures it really is the first thing to run: we create a Custom Code Event that just contains one line of code:

trigger();

And we set the order on that to -1 (or 0 or 1… just make sure it’s a lower number than anything else):

A screenshot from Adobe Launch showing Event Configuration based on the custom code of "trigger();"

That may all seem overkill (and it may very well be) but it gives us some peace of mind, without negative side effects.

Inserting the OneTrust library (from within Launch, if you must)

Ideally you would have your devs insert the OneTrust library directly on the page, as high on the page as possible. This will give you the best chance to not have timing issues. To be clear: this is my recommendation. If you’re going that route, skip to the “Set up consent data elements” section.

But we live in the real world, where sometimes we have to work with what’s available. In this particular client case, a hard-coded reference to OneTrust wasn’t a practical option at the time, so we needed to add our OneTrust libraries into the Page Top rule we just created.

We put a single Custom Code Action in this rule, which calls on our OneTrust library. You can get these code snippets from within OneTrust- the important part is the data-domain-script. Note, you can get both a test script and a production script snippet from OneTrust, similar to how Launch has a dev environment to test your changes before you go to prod. Just make sure you’re using the right one for the stage you’re at.

A screenshot from a Custom code action in Adobe Launch, showing HTML references to the oneTrust testing and production libraries.
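
For reference, the snippet OneTrust gives you generally looks something like this (use the exact snippet from your own OneTrust account- the data-domain-script value is your unique ID, and the test-environment version of that ID typically has a “-test” suffix):

<script src="https://cdn.cookielaw.org/scripttemplates/otSDKStub.js" type="text/javascript" charset="UTF-8" data-domain-script="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"></script>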

We’ll also use this rule and code block to fire our Adobe/ECID consent, but we’ll come back to that later.

Set up consent data elements

I tend to have one master “Onetrust Consent Groups” data element, then data elements for each category:

A screenshot from Launch showing 4 data elements: "onetrust consent groups", "onetrust consent: marketing", "onetrust consent: performance" and "onetrust consent: targeting"

For the master consent groups data element, we have a few things we need to consider. OneTrust provides two places to see what groups your user is in:

1. The most supported-by-OneTrust approach is to use a Javascript object, window.OnetrustActiveGroups, which returns a value like this:
‘,C0001,C0003,C0007,C0004,C0002,’
These codes correspond to the Categories you set up in OneTrust. You can view YOUR Categories by going to Categorizations>Categories.

2. An “OptanonConsent” cookie, which I’m pretty sure they don’t particularly want us using, but it can be handy because once it is created, it’s accessible as a page loads, before that window.OnetrustActiveGroups object is. It returns a value like this:

isGpcEnabled=0&datestamp=Tue+Mar+14+2023+12:10:28+GMT-0400+(Eastern+Daylight+Time)&version=202212.1.0&isIABGlobal=false&hosts=&landingPath=NotLandingPage&groups=C0001:1,C0003:1,C0007:1,C0004:1,C0002:1&AwaitingReconsent=false&geolocation=US;GA

…which is a bit messy, but potentially usable. I don’t currently use it in my solution, but I figured I’d note it because I have seen others use it.
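
If you did want to use it, a rough sketch of pulling the groups out of that cookie might look like this (assuming the value comes back URL-decoded, as in the example above):

var rawCookie=_satellite.cookie.get("OptanonConsent")||""
var groups=new URLSearchParams(rawCookie).get("groups")||"" //e.g. "C0001:1,C0003:1,C0002:0"
//keep only the categories flagged ":1" (consented)
var activeGroups=groups.split(",").filter(function(g){return g.split(":")[1]==="1"}).map(function(g){return g.split(":")[0]})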

Important: Timing Issues for opt-in land (GDPR)

(To be clear, much of this section is only needed for folks who have timing issues because their OneTrust library is loaded by Launch or isn’t loading before Launch starts trying to do stuff.)

The problem is, if your OneTrust library is not called as high on the page as possible, there is a chance that OnetrustActiveGroups JavaScript object won’t exist when you need it; on each page load, it has to wait for the OneTrust library to create it. And the cookie only exists after the first time the OneTrust library loads… meaning if it’s the user’s first page view, you won’t know whether they count as opted-in or opted-out until after the OneTrust library loads! That’s one reason you should put the OneTrust library as high on your page as possible. But if you can’t, or you just want a failsafe, you can define some default consent groups. This would be simple if you were in an everyone-is-auto-opted-in scenario, or an everyone-is-auto-opted-out scenario. Where it gets fun is when you need to account for both.

Fortunately, my client already has a reliable method to tell me the user’s country (and even continent!). I’ll be honest, I’m not sure what I would have done otherwise, because we have to be able to detect the user’s jurisdiction to decide if they should be auto-opted-in or auto-opted-out. I have some theories of other ways we could do this, but I’m glad I already had a reliable way to do it. Here is what I did for my “onetrust consent groups” data element:

try {
    var consentGroups=""
    if(window.OnetrustActiveGroups){
        consentGroups=window.OnetrustActiveGroups //if this exists and is accurate, use it. We often have a timing issue though. So, our back up options:
    }else if( _satellite.cookie.get("OptConsentGroups")){
        consentGroups=_satellite.cookie.get("OptConsentGroups") //if we've previously gotten the consent groups, we don't have to worry about the timing with onetrust, we just grab it from a cookie that we create
    }else if(_satellite.getVar("continentCode")=="eu" || navigator.globalPrivacyControl==1 || navigator.doNotTrack==1){//if no previous permissions, but in the EU or have GPC or DNT enabled, opt OUT of all not-strictly-necessary
        consentGroups=',C0001,'
    }else{ //everyone NOT in the EU who doesn't have GPC or DNT gets auto-opted-in to all categories
        consentGroups=',C0003,C0001,C0007,C0004,C0002,'
    }

    //store groups in a cookie 
    _satellite.cookie.set("OptConsentGroups", consentGroups, {expires: 360, samesite:"lax", secure:true, domain: "example.com"})

    return consentGroups;

} catch (e) {
	_satellite.logger.info("onetrust error:" + e);
}

Basically, if OneTrust has loaded, great, let’s use their JS object. If not, then based on the user’s location or privacy settings, we auto-opt them in or out of all categories (aside from “strictly necessary”). Then we create our OWN cookie, OptConsentGroups, so it’ll be easily accessible early on in the next page load.

NOTE: YOUR CATEGORIES WILL VARY. C0002 is for performance cookies for this particular client, based on how they have their categories set up within OneTrust. You will need to figure out how to identify the categories for your set up.

This data element gets referenced in a few places, meaning it is constantly getting updated, in case preferences change.

Set up data elements for each category

Once that data element exists, I create data elements for each category. My “onetrust consent: marketing” data element looks like this:

var consentGroups=_satellite.getVar("onetrust consent groups")||"" 

if(consentGroups.indexOf("C0007")!=-1){
  return "true"
}else{
  return "false"
}

I’ll use these to create conditions for any of my rules that contain non-Adobe tracking. Adobe rules don’t need conditions, they’re handled by the Experience Cloud Opt-in service.

Set up the ECID Opt-in Service

This is how I configured the Experience Cloud ID Service extension, essentially shifting all the work to my “onetrust consent: adobe” data element:

A screenshot of the ECID extension configuration from within Launch. Enable Opt In is toggled to "yes", and the previous permissions field says %onetrust consent: adobe%"

That “onetrust consent: adobe” data element looks like this:

var consentGroups=_satellite.getVar("onetrust consent groups")
var consent=false //false by default

//set per adobe category (we're treating all three- analytics, ecid and target- as performance. Your situation may merit a different split)
if(consentGroups.indexOf("C0002")!=-1){ //C0002= performance cookies
	consent=true
}

//_satellite.logger.info("onetrust consent groups:" + consentGroups) //handy for debugging
return { aa: consent, target: consent, ecid: consent };

It checks if the user has consented to “performance” tracking, and if so, returns an object like this:

{aa: true, target: true, ecid: true}

This is the format that the “previous permissions”/pre-opt-in approvals piece of the ECID is looking for. I’d love to say I learned that in the Adobe Documentation, but unless I’m missing something obvious, you have to reverse-engineer their code example to figure it out. Fortunately, this blog post by Abhinav Puri documents it all nicely.

(But wait, shouldn’t Adobe Target be categorized as “Targeting”? You should definitely ask your lawyers, but by my reasoning, the Targeting category is for targeted advertising and retargeting. Adobe Target may be named Target, but it is primarily a user experience/performance tool. It doesn’t chase you around the internet with personally-relevant advertising.)

Setting up the extension with this data element will cover Adobe permissions on page load, before the user interacts with the banner.

(In theory, this ECID extension part is the main thing the OneTrust extension would help with. But I like keeping all of the logic very transparent. And I’m a control freak that doesn’t trust vendor code I haven’t thoroughly tested.)

Updating Permissions (Banner Interaction)

To update permissions when the user has changed their preferences (including their initial reaction to the banner), I added code to my Page Top rule, within the OptanonWrapper() function, which is functionality provided by OneTrust that fires when the OneTrust library first loads, as well as whenever there is a change to the user’s permissions:

<script type="text/javascript">
function OptanonWrapper() { //this function runs any time there are changes to one trust
  var consentSettings=_satellite.getVar("onetrust consent: adobe")   
  if(consentSettings.ecid===true){ //experience cloud ID service
      adobe.optIn.approve('ecid',true);
  }else{adobe.optIn.deny('ecid',true);}
  
  if(consentSettings.aa===true){ //analytics
      adobe.optIn.approve('aa',true);
  }else{adobe.optIn.deny('aa',true);}
  
  if(consentSettings.target===true){ //target, obv
      adobe.optIn.approve('target',true);
  }else{adobe.optIn.deny('target',true);}
  
  adobe.optIn.complete();
}
</script>

(In theory I could simplify this a bit since all three categories pretty much always have the same consent, but I wanted this to be flexible for potential changes in the future.) So now that rule looks like this:

A screenshot from Adobe Launch showing the above code block placed within the same rule/code that referenced our OneTrust library.

(If you’re not deploying your OneTrust Library through Launch, then lines 1-7 don’t apply.)

Set up Rule Conditions

Fortunately, the ECID Opt-in Service will handle everything Adobe-related: you do NOT need to take further action, such as adding Rule Conditions to all rules with Analytics actions, to keep Analytics from firing when it shouldn’t. The rules will still fire, but they won’t send anything to Analytics or Target (yet).

Marketing tags, however, need to be explicitly told to pay attention to the user’s consent preferences. This is, unfortunately, still a very manual process. First, you may need to make your rules a little more category-based. In general, it’s easiest if, for any particular rule event/condition combo, you keep your performance-category actions together in one rule and your marketing/targeting actions in a separate rule. We were already generally following that, but a few rules had both analytics and marketing tags. For those, I copied the rule, deleted the marketing tags out of one copy and the analytics out of the other, resulting in one analytics-only rule and one marketing-tag-only rule. The analytics-only rule is fine as-is (thanks, ECID!) but the marketing tag rule needs an extra condition.

If your entire rule falls into one consent category, then it’s as simple as adding a condition where “%onetrust consent: marketing%” equals “true”:

A screenshot from Adobe Launch which shows a rule with the condition of "%onetrust consent: marketing% equals true"

If, however, you need to use conditions within your code to decide what fires and what doesn’t (perhaps your rules don’t neatly align with your categories), that might look like this:

if(_satellite.getVar("onetrust consent: marketing")=="true"){
  //insert code for marketing tag here
}

If someone out there has some extra time on their hands and wants to write a Launch API function that would automatically add such a condition to all rules, that sure would be swell. In the meantime, I tend to take this approach: open my list of rules, and starting at the top, left-click on each rule while holding down cmd (or ctrl on a PC). This will open each rule in a new tab, while leaving your Rule List tab alone so you can keep track of what you’ve done and not done.

A screenshot showing a ridiculous number of Adobe Launch tabs open in Chrome.

Then work your way through the tabs, adding your condition, saving, then closing that tab. I’ll often open 10-15 at a time, being careful to note where in the rule list I stopped. When you’re done, you can use something like Tagtician to export your whole library and make sure you didn’t miss anything.

Side Note: Don’t forget to honor non-banner-based opt-outs!

Part of the California Consumer Privacy Act (CCPA), and other laws like it, is that sites must have a "Do Not Sell My Personal Info" page where the user can opt out, or view/delete/modify what data you have on them. Lawyers have generally been pretty good about making sure orgs get these pages in place- you probably already have one. Often, these pages are set up to sync with the org’s CRM… if r.swanson@pawnee.gov fills out a form to opt out of tracking, then the CRM may delete all the info they have for dear R. Swanson, and/or someone may use the Adobe Privacy API or the Adobe Privacy JS to delete/edit/modify/view the Adobe data on that user. That’s all as it should be. The problem is, this process often exists completely separately from the Consent Management Platform. You may have deleted the person from your backend systems, but you’re still setting cookies and collecting data on the site!

Make sure that your Tag Manager knows when these kinds of opt-outs occur and honors them. OneTrust does have a way of tapping into those forms, though I can’t find any public documentation other than this general good advice about CCPA. But within their knowledge base, look up "Do Not Sell for AdTech Vendors"- it has tips and scripts you can use to help OneTrust know what happens on those forms.

Side note: Cookie Deletion

Note: right now, when you tell the ECID Opt-in Service to opt a user out of tracking, it does not remove any existing cookies. So in an opt-out situation, where everyone gets cookies to begin with, you may have users who have declined tracking and won’t be getting analytics beacons, but who still have Adobe-created cookies in their browser. Legally, this is a bit of a gray area. Even if it is technically legal (I’ll leave that to the lawyers to confirm), unfortunately the general public cares a lot about appearances. I’ve already seen one instance where Adobe cookies that were set but not even used (the site didn’t fire analytics beacons) caused a lawsuit in Spain, where they were accused of being "one of the most invasive types of cookie for user privacy". This is complete nonsense, but the authorities and all the lawyers involved may not understand that without a bunch of effort. Even if the lawsuit fails, it has consumed a lot of that company’s resources.

At bare minimum, you should consider adding verbiage to your banner or privacy policy that instructs users how to delete cookies on their own. The regulations strongly encourage this.

If you want to delete the AMCV and AMCVS cookies on behalf of the user, it can be a little tricky: most of the time there is a domain mismatch between the www.example.com site you are on and the ".example.com" domain the AMCV cookies are set on, so you can’t just use _satellite.cookie.remove or some such. But you can do something like this to delete them:

_satellite.cookie.set("AMCV_21C9EXAMPLE123450101@AdobeOrg", "delete", {domain:'.example.com', expires: -1})

Note, many plugins also create cookies- you might see s_ppv or s_vnum or any number of other "s_" cookies that are the default names of cookies created by Adobe plugins. Generally, these don’t contain "personal information" and in theory you shouldn’t have to worry too much about them (though I am not a lawyer, so please don’t base your decisions off of me), but you may want to delete them or at least be aware of them.

Depending on your ECID settings, you may also be setting a demdex cookie. Since this is a third-party cookie (and can affect how users are identified on other sites) it’s even more legally dodgy, and unfortunately even more difficult to delete. You can either stop that cookie from being set to begin with, using the ECID’s disableThirdPartyCookies setting (a sketch follows below), or be very careful in how you inform your users about it and their ability to opt out and/or delete it.
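If you go the disableThirdPartyCookies route and are configuring the Visitor library yourself (rather than through the ECID extension’s settings), a minimal sketch looks like this- the org ID and tracking server are placeholders, not real values:

var visitor = Visitor.getInstance("1234567890ABCDEF@AdobeOrg", {
  trackingServer: "example.sc.omtrdc.net",
  disableThirdPartyCookies: true //keeps the ID service from making the demdex call that sets the third-party cookie
});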

Validation

Here is a non-comprehensive list of things to check to confirm you have things set up correctly:

  • Validate that your Analytics, Target, ECID, marketing tags, targeting tags, etc. all only fire with user consent. Check both for cookies (in the "Application" tab if you’re in Chrome) and beacons/scripts (in the "Network" tab).
  • Double check that your setup honors the Global Privacy Control signal (I’ve found this easiest to do in Firefox).
  • Check that the user experience isn’t hindered.
  • Be careful to check things that may be using third party technology, like video players- both that they work, and that they comply with consent policies.
  • If you have iframes anywhere on your site, make sure the content within them is also being controlled by your CMP.
  • Make sure you test for each of your Geolocation rules in OneTrust.

Ideally, before you roll out major OneTrust changes, you’d do a full regression test (the type of QA typically done before a major site release, where you make sure none of your pre-existing stuff is impacted). If you are resorting to using Autoblock for anything, a full regression test is a necessity.

Things I’ve found useful for testing:

OneTrust does provide a preview mode, but the most important thing I got out of that is that you can use query parameters (even in production) to change how OneTrust behaves:

?otreset=true&otpreview=false&otgeo=gb

For example, the above combo resets my preferences so I’ll see the banner again (you can also do this by deleting the OptanonAlertBoxClosed cookie). The handiest part is the otgeo param- I can have OneTrust treat me like I’m in Great Britain (or any other country code I put there) to test that things work as expected in different jurisdictions.
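If you’d rather reset the banner from the console, something like this should work (you may need to match the path/domain OneTrust used when it set the cookie):

//delete OneTrust's "banner closed" cookie so the banner shows again on the next page load
_satellite.cookie.remove("OptanonAlertBoxClosed");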

Another useful thing: you can use this in the console to see what Adobe thinks its current permissions are:

adobe.optIn.permissions

I console.logged the heck out of that when figuring out the timing of things.

Tracking Opt-In/Opt-Out Rates

This is a tricky subject: how do you track that someone said they don’t want to be tracked? It may be worth a chat with your lawyers. It’s important to remember that most regulations are more about having personal information, such as an ID in a cookie (even anonymous IDs), sent to third parties. Cookies are generally needed for keeping track of a session/visitor from page view to page view (or domain to domain, in some cases). If you are only tracking one user action without context, and there is nothing that ties that action to a particular user in any way, then the user has far less to be concerned about when it comes to their privacy. It’s sensitive and definitely worth discussing with your lawyers, but most regulations do allow for the most basic of tracking if there are no IDs used, the data isn’t shared or sold, it’s stored and processed correctly, the user is informed, etc. The data is useless for journey analysis and attribution, but it can actually be used to make sure you are respecting users’ privacy properly. But TALK TO LAWYERS before deciding on any such tracking.

OneTrust does have the option to get high-level reporting on consent rates. I would link to the documentation on it but I’ve never gotten a link to Onetrust documentation to work. Instead, I have to click the ? icon and read any documentation within that panel:

A screenshot showing how to access OneTrust's knowledge base

I’ll admit I have not enabled or used their reporting. I’m curious how they get the info they do in cases where the user has consented to zero tracking. I’m assuming that it is free of personal information such as a cookie ID, etc.

We implemented another solution, where for the first 2 weeks after launching OneTrust, we fired hard-coded, identifier-less/cookie-less Adobe Analytics beacons to a Report Suite set up for that purpose, just to get a rough estimate of how it was working. I hope to have a separate blog post about that in the near future.

Adobe Analytics Settings for Privacy

Adobe does offer some admin settings that also help with privacy. You may want to consider these in addition to what you have set up in your TMS:

  • Your General Report Suite Settings include a "Replace the last octet of IP addresses with 0" option and an "IP Obfuscation" option. The first happens before any processing on a hit, meaning the IP never gets stored or processed, and that user’s true IP is not accessible anywhere in Adobe (including Data Warehouse and Data Feeds). Accordingly, it has an impact on your reporting, especially geolocation reporting. The IP Obfuscation option, on the other hand, happens after some processing is done, so it has less impact. You can use it to merely obfuscate the IP address, or to remove it altogether.
  • I only just learned of this back-end setting that respects browser privacy settings (presumably, the older Do Not Track option or the newer Global Privacy Control setting). In theory, if your OneTrust/TMS implementation is configured right, nothing with those flags will ever even make it to Adobe, but there’s no harm in a little redundancy, right?

Conclusion

As I said at the beginning, I would love to learn more and improve my approach where possible. The best solution may end up being a crowd-sourced one. Please let me know what has worked or not worked for you!

* Sorry, Adobe. No one is going to convince me to call it “Data Collection Tags”.

** CCPA: I know there are many laws out there beyond the California Consumer Privacy Act; however, since CCPA tends to be one of the most expansive laws (for now), I tend to use "CCPA" as an umbrella stand-in term for US laws. It’s much more complicated than that, but I don’t want to open that can of worms every time I need to reference an opt-out-based regulation.

*** If it wasn’t clear, this post was not done in partnership with OneTrust, and I have no formal relationship with them. I’d be happy to talk to them about anything in this post, though. And for what it is worth, of the 5 CMPs I’ve worked with, OneTrust was the least awful, in my experience.

Adobe Launch’s Rule Ordering is a big deal for Single Page Apps

In November, I posted about some of the ways that Launch will make it easier to implement on Single Page Apps (SPAs), but I hinted that a few things were still lacking.
In mid-January, the Launch team announced a feature I’ve been eagerly awaiting: the ability to order your rules. With this ability, we finally have a clean and easy way to implement Adobe Analytics on a Single Page App.

The historical problem

As I mentioned in my previous post, one of the key problems we’ve seen in the past was that Event-Based Rules (EBRs) and Direct Call Rules (DCRs) can’t “stack”. Let me explain what I mean by that.

Not a single page app? Rule Stacking rocks!

For example, let’s say I have an internal search “null results” page, where the beacon that fires should include:

  • Global Variables, like “s.server should always be set to document.hostname”
  • Variables specific to the e-commerce/product side of my site with a common data layer structure (pageName should always be set to %Content ID: Page Name%)
  • Search Results variables (like my props/eVars for Search Term and Number of Search Results, and a custom event for Internal Searches)
  • Search Results when a filter is applied (like a listVar for Filter Applied and an event for User applied Search Filter)
  • Null Results Variables (another event for Null Internal Searches, and a bit of logic to rewrite my Number of Search Results variable from "0" to "zero", because searching in the reports for "0" would show me 10, 20, 30… whereas "zero" could easily show me my null results)

With a non-SPA, when a new page loads, DTM would run through all of my page load rules and see which had conditions that were matched by the current page. It would then set the variables from those rules, and AFTER all the rules were checked and variables were set, DTM would send the beacon, happily combining variables from potentially many rules.

Together, those five rules’ variables would become a single beacon.

If you have a Page Load Rule-based implementation, this allows you to define your rules by their scope, and really use the power of DTM to only apply code/logic when needed.

Single Page App? Not so much.

However, if I were in a Single Page App, I’d either be using a Direct Call Rule or an Event-Based Rule to determine a new page was viewed and fire a beacon. DCRs and EBRs have a 1:1 ratio with beacons fired- if a rule’s conditions were met, it would fire a beacon. So I would need to figure out a way to have my global variables fire on every beacon, and set site-section-specific and user-action-specific variables, for every user action tracked. This would either mean having a lot of DCRs and EBRs for all the possible combos of variables (meaning a lot of repeat effort in setting rules, and repeated code weight in the DTM library), or a single massive rule with a lot of custom code to figure out which user-action-specific variables to set:

Or leaving the Adobe Analytics tool interface altogether, and doing odd things in Third Party Tag blocks. I’ve seen it done, and it makes sad pandas sad.

The Answer: Launch

Launch does two important things that solve this:

  1. Rules that set Adobe Analytics Variables do not necessarily have to fire a beacon. I can tell my rule to just set variables, to fire a beacon, or to clear variables, or any combination of those options.
  2. I can now order my rules to be sure that the rule that fires my beacon goes AFTER all the rules that set my variables.

So I set up my 5 rules, same as before. All of my rules have differing conditions, and use two similar triggers: one set to fire on Page Bottom (if the user just navigated to my site or refreshes a page, loading a fresh new DOM) and one on Data Element Changed (for Single Page App “virtual page views”, looking at when the Page Name is updated in the Data Layer).

UPDATE: I realize now that you probably wouldn’t want to combine “Page Bottom” and “Data Element Changed” this way, because odds are, it’s going to count your initial setting of the pageName data element as a change, and then double-fire on page load. Either way, it’s less than ideal to use “data element changed” as a trigger because it’s not as reliable. But since this post is already written and has images to go with it, I’ll leave it, and we can pretend that for some reason you wouldn’t be updating your pageName data element when the page initially loads. 

When I create those triggers, I can assign a number for that trigger’s Order:


One rule, my global rule, has those triggers set to fire at "50" (the default, right in the middle of the recommended range of 1-100). The rule with this trigger not only sets my global variables, it also fires my beacon and then clears my variables:

Most of my other rules, I give an Order number of “25” (again, fairly arbitrary, but it gives me flexibility to have other rules fire before or after as needed). One rule, my “Internal Search: Null Results” rule is set to the Order number “30”, because I want it to come AFTER the “Internal Search: Search Results” rule, since it needs to overwrite my Number of Search Results variable from “0” (which it got from the data layer) to “Zero”.

This gives me a chance to set all the variables in my custom rules, and have my beacon and clearVars fire at the end in my global rule (the rule’s Order number is in the black circles):

I of course will need to be very careful about using my Order numbers consistently- I’m already thinking about how to fit this into existing documentation, like my SDR.

Conclusion

This doesn’t just impact Single Page Apps- even a traditional Page Load Rule implementation sometimes needs to make sure one rule fires after another, perhaps to overwrite the variables of another, or to check a variable another rule set (maybe I’m hard coding s.channel in one rule, and based on that value, want to fire another rule). I can even think of cases where this would be helpful for third party tags. This is a really powerful feature that should give a lot more control and flexibility to your tag management implementation.

Let me know if you think of new advantages, use cases, or potential “gotchas” for this feature!

Adobe DTM Launch: Improvements for Single Page Apps

For those following the new release of Adobe’s DTM, known as Launch, I have a new blog post up at the Cognetik blog, cross-posted below:

It’s finally here! Adobe released the newest version of DTM, known as “Launch”. There are already some great resources out there going over some of the new features (presumably including plenty of “Launchey Launch” puns), which includes:

  • Extensions/Integrations
  • Better Environment Controls/Publishing Flow
  • New, Streamlined Interface

But there is one thing I’ve been far more excited about than any other: Single Page App compatibility. I’ve mentioned on my personal blog some of the problems the old DTM has had with Single Page Apps:

  • Page Load Rules (PLRs) can’t fire later than DOMready
  • Event-Based Rules (EBRs) and Direct Call Rules (DCRs) can’t “stack” (unlike PLRs, there’s a 1:1 ratio between rules and analytics beacons, so you can’t have one rule set your global variables, and another set section-specific variable, and another set page-specific variables, and have them all wrap into a single beacon)
  • It can be difficult to fire s.clearVars at the right place (and impossible without some interesting workarounds)
  • Firing a “Virtual Page Load” EBR at the right time (after your data layer has updated, for instance) can be tricky.

So much of this is solved with the release of DTM Launch.

  • You can have one rule that fires EITHER on domReady OR on a trigger (Event-based or Direct Call).
  • You have a way to fire clearVars.
  • You can add conditions/exclusions to Direct Call rules

There are other changes coming that will improve things even further, but for now, these changes are pretty significant for Single Page apps.

Multiple Triggers on a Single Rule

If I have a Single Page App, I’ll want to track when the user first views a page, the same as for a “traditional” non-App page. So if I’m setting EBRs or DCRs for my “Virtual Page Views”, I’d need to account for this “Traditional Page Load” page view for the user’s initial entry to my app.
In the past, I’d either have a Page Load Rule do this (if I could be sure my Event-Based Rules wouldn’t also run when the page first loaded), or I could do all my tracking with Event-Based Rules, and I’d have to suppress that initial page view beacon. I may end up with an identical set of rules- one for when my page truly loads, and one for “Virtual Page Views”.

Now, I can do this in a single rule:

Where my “Core- Page Bottom” event fires when the page first loads (like an old Page Load Rule):

…and another “Page Name Changed” event that fires when my “page name” Data Element changes (like an old Event-Based Rule):

No more need to keep separate sets of rules for Page Load Rules and Virtual page views!

Clearing variables with s.clearVars()

Anyone who has worked on a Single Page App, or on any Adobe Analytics implementation with multiple s.t() beacons on a single DOM, has felt the pain of variables carrying over from beacon to beacon. Once an “s” variable (like s.prop1) exists on the page, it will hang around and be picked up by any subsequent page view beacon on that page.

                       Page 1     Page 2           Page 3          Page 4
s.pageName             Landing    Search Results   PDP > Red Wug   Product List
s.events               (blank)    event14          prodView        prodView
s.eVar1 (search term)  (blank)    Red Wug          Red Wug         Red Wug

My pageName variable is fine because I’m overwriting it on each page, but my Search Term eVar value is hanging around past my Search Results page! And on pages where I don’t write a new events string, the most recent event hangs around!

In the old DTM, I had a few options for solving this. I could do some bizarre things to daisy-chain DCRs to make sure I could get the right order of setting variables, firing beacons, then clearing variables. Or, I could use a hack in the "Custom Code" conditions of an Event-Based Rule, to ensure s.clearVars would run before my variables were set. Or, more recently, I could use s.registerPostTrackCallback to run the s.clearVars function after the s_code detected that an s.t call had fired.
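That last workaround looked roughly like this (a minimal sketch, assuming an AppMeasurement version that includes both clearVars and registerPostTrackCallback):

//run s.clearVars() automatically after every beacon AppMeasurement sends
s.registerPostTrackCallback(function (requestUrl) {
  //requestUrl is the beacon URL that was just sent; here we only care that a beacon fired
  s.clearVars();
});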

Now, it’s as simple as specifying that my rule should set my variables, then send the beacon, then clear my variables:

Directly in the rule- no extra rules, no custom code, no workarounds!

Rule Conditions on ALL Rule Types (including Direct Call)

If I were using Direct Call Rules for my SPA, in the past, I’d have to account for Direct Call Rules having a 1:1 relationship with their trigger. If I had some logic I needed to fire on Search Results pages, and other logic to fire on Purchase Confirmation pages, I could have my developers fire a different “_satellite.track” function on every page:
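For example (these rule names are hypothetical), developers might end up firing a different call on each page type:

//on the search results page
_satellite.track("search results page view");

//on the purchase confirmation page
_satellite.track("purchase confirmation page view");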

Then in each of those rules, I’d maintain all my global variables as well as any logic specific to that beacon. This could be difficult to maintain and introduces extra work and many possible points of failure for developers.

Or, I could have my developers fire a global _satellite.track(“page view”) on every page, and in that one rule, maintain a ridiculous amount of custom code like this:
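A rough sketch of what that single massive rule’s custom code ends up looking like (the paths, data elements, and variables here are all made up for illustration):

//one giant custom code block, branching on the current URL
var path = window.location.pathname;
if (path.indexOf("/search") !== -1) {
  s.pageName = "search results";
  s.eVar1 = _satellite.getVar("search term"); //hypothetical data element
  s.events = "event14";
} else if (path.indexOf("/order-confirmation") !== -1) {
  s.pageName = "purchase confirmation";
  s.events = "purchase";
  s.products = ";SKU123;1;19.99"; //would really be built from order data
}
//...and so on, for every page type and user action on the site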

This would take me entirely out of the DTM interface, and make some very code-heavy rules (not ideal for end-user page performance, or for DTM user experience — here’s hoping your developer leaves nice script comments!)

Now, I can still have my developers set a single _satellite.track("page view") (or similar) on every page, and set a myriad of rules in Launch, each using that same "page view" trigger but each with its own condition. That way, directly in the interface, I can set different variables when developers fire _satellite.track("page view") on your Search Results page versus on your Purchase Confirmation page:
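The condition itself can be a simple Value Comparison on a data element, or a one-line custom code condition, something like this (the data element name is hypothetical):

//custom code condition: only run this rule's actions on search results pages
return _satellite.getVar("page name") === "search results";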

I’d love to say all my SPA woes were solved with this release, but to show I haven’t entirely drunk the Kool-aid, I will admit some of my most wished-for features (and extensions) aren’t in this first release of Launch. I know they’re coming, though- future releases of Launch will add additional features that will make implementing on a Single Page App even simpler, but for now, it still feels like Christmas came early this year.

Coming to Adobe Summit 2017

I’ll be at Adobe Summit in Las Vegas next week, Monday March 19th through Friday March 24th. If you happen to be out that way, shoot me a comment here and hopefully we can meet up! I’ll be attending a lot of the DTM sessions and will be ready to help folks understand what the DTM updates mean for them.
I’ll also be presenting at Un-Summit at UNLV on Monday, speaking about Marketing Innovation and Cognetik’s new Tableau data connector. Come check it out!

Deploying Google Marketing Tags Asynchronously through DTM

I had posted previously about how to deploy marketing tags asynchronously through DTM, but Google Remarketing tags add an extra consideration: Google actually has a separate script to use if you want to deploy asynchronously. The idea is, you could reference the overall async script at the top of your page, then at any point later on, you would fire google_trackConversion to send your pixel. However, this is done slightly differently when you need your reference to that async script file to happen in the same code block as your pixel… you have to make sure the script has had a chance to load before you fire that trackConversion method, or you’ll get an error that “google_trackConversion is undefined”.

Below is an example of how I’ve done that in DTM.

//first, get the async google script, and make sure it has loaded
var dtmGOOGLE = document.createElement('SCRIPT');
var done = false;

dtmGOOGLE.setAttribute('src', '//www.googleadservices.com/pagead/conversion_async.js');
dtmGOOGLE.setAttribute('type', 'text/javascript');

document.body.appendChild(dtmGOOGLE);
dtmGOOGLE.onload = dtmGOOGLE.onreadystatechange = function () {
  if (!done && (!this.readyState || this.readyState === "loaded" || this.readyState === "complete")) {
    done = true;
    callback();

    // Handle memory leak in IE
    dtmGOOGLE.onload = dtmGOOGLE.onreadystatechange = null;
    document.body.removeChild(dtmGOOGLE);
  }
};

//then, create that pixel
function callback() {
  if (done) {
    window.google_trackConversion({
      google_conversion_id: 12345789,
      google_custom_params: window.google_tag_params,
      google_remarketing_only: true
    });
  }
}

Why (and why not) use a Data Layer?

What’s a Data Layer?

Tag Management Systems can get data a variety of ways. For instance in DTM you can use query string parameters, meta tags, or cookie values- but in general, data for most variables comes from one of two sources:

  • To really take advantage of a tag management system like DTM, I may choose to scrape the DOM. I’m gonna call this the MacGyver approach. This uses the existing HTML and styles on the site to pull out the values we need. For instance, DTM could use CSS selectors to pull the values out of a <div> with the class of “breadcrumb”, and end up with a value like “electronics>televisions>wide-screen” (see the sketch after this list). This relies on my site having a reliable CSS structure, and there being elements on the page that include the values we need for reporting.
  • If I want even more flexibility, control and predictability, I may work with developers to create a data layer. They would create a JavaScript object, such as “universal_variable.pageName”, and give it a value based on our reporting needs, like “electronics | televisions | wide-screen > product list”. This gives greater control and flexibility for reporting, but requires developers to create JavaScript objects on the pages.
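As a quick illustration of the MacGyver approach, a DOM-scraping custom code data element might look something like this (the .breadcrumb selector and formatting are hypothetical):

//scrape the breadcrumb trail out of the DOM and normalize it for reporting
var crumb = document.querySelector("div.breadcrumb");
if (!crumb) return "";
//"Electronics > Televisions > Wide-screen" becomes "electronics>televisions>wide-screen"
return crumb.textContent
  .toLowerCase()
  .split(">")
  .map(function (part) { return part.trim(); })
  .join(">");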

Conceptually speaking, a data layer is page-specific (but tool-agnostic) metadata that describes the page and the actions a user may take on it. Practically speaking, a data layer typically consists of a JavaScript object that contains all of the values we’d want to report on for a given page or user.

Data layers are important because they save developers time by allowing them to abstract out the metadata into a tool-agnostic syntax that a TMS like DTM can then ingest and set as data elements. Whereas once I would have told IT “please set s.prop5 and s.eVar5 to the search term on a search results page, and set s.events to event20” now I can just say “please put the search term in a javascript object such as digitalData.page.onsiteSearchTerm and tell me what object it is.” Then the TMS administrators could easily map that to the right variables from there.
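For contrast, a minimal, hypothetical data layer snippet a developer might render on a search results page could look like this:

window.digitalData = {
  page: {
    pageInfo: { pageName: "search results" },
    onsiteSearchTerm: "red wug",
    onsiteSearchResults: 0
  }
};

A data element in the TMS then just points at digitalData.page.onsiteSearchTerm- no CSS selectors required.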

You can see an example data layer if you’d like, or you can pull open a developer console for this very blog and look at the object “digitalDataDDT” to see the data layer that is automatically created from Search Discovery’s wordpress plugin.

Why a Data Layer?

My friends at 33 Sticks also have a great blog post on the subject, but I’ll list out some of the reasons I prefer clients to use a Data Layer. To me, it’s an upfront investment for a scalable, easily maintained implementation going forward. It does mean more work upfront- you have to first design the data layer to make sure it covers your reporting requirements, then you’ll need developers to add that to your site. But beyond those upfront tasks, configuration in your TMS will be much simpler, and it will save you many hours of CSS guess work and DOM scraping, and it may prevent broken reporting down the line.

Route                          Amount of Control   Upfront LOE (Dev / Analytics)   Maintenance LOE (Dev / Analytics)
Old fashioned "page on code"   Medium              Heavy / Heavy                   Heavy / Heavy
DTM + "Macgyver"               Low                 Minimal / Heavy                 Minimal / Heavy
DTM + Data Layer               High                Heavy / Medium                  Minimal / Minimal

Another potential benefit to a Data Layer is that more and more supplementary tools know how to use them now. For instance, Observepoint’s site scanning tool can now return data on not just your Analytics and Marketing beacons, but on your Data Layer as well. And one of my favorite debugging tools, Dataslayer, can return both your beacons and your data layer to your console, so if something is breaking down, you can tell if it’s a data layer issue or a TMS issue.

Ask Yourself

Below are some questions to ask yourself when considering using a data layer:

How often does the code on the site change? If the DOM/HTML of the site changes frequently, you don’t want to rely on CSS selectors. I’ve had many clients have reports randomly break, and after much debugging we realized the problem was the developers changed the code without knowing it would affect analytics. It’s easier to tell developers to put a data layer object on a page then leave it alone, than it is to tell them to not change their HTML/CSS.

How CSS-savvy is your TMS team? If you have someone on your team who is comfortable navigating a DOM using CSS, then you may be able to get away without a data layer a little more easily… but plan on that CSS-savvy resource spending a lot of time in your TMS.  I’ll admit, I enjoy DOM-scraping, and have spent a LOT of time doing it. But I recognize that while it seems like a simple short-term fix, it rarely simplifies things in the long run.

How many pages/page types are on the site? A very complicated site is hard to manage through CSS- you have to familiarize yourself with the DOM of every page type.

How are CSS styles laid out? Are they clean, systematic, and fairly permanent? Clearly, the cleaner the DOM, the easier it is to scrape it.

How often are new pages or new site functionality released? Sites that roll out new microsites or site functionality frequently would need a CSS-savvy person setting up their DTM for every change. Alternatively, relying on a data layer requires a data-layer-savvy developer on any new pages/site/functionality. It is often easier to write a solid Data Layer tech spec for developers to reference for projects going forward than to figure out CSS selectors for every new site/page/functionality.

How much link-tracking/post-page-load tracking do you have on your site? If you do need to track a lot of user actions beyond just page loads, involving IT to make sure you are tracking the right things (instead of trying to scrape things out of the HTML) can be extremely valuable. See my post on ways to get around relying on CSS for event-based rules for more info on options.

What is the turn-around time for the developers? Many clients move to DTM specifically because they can’t work easily within their dev team to set up analytics. A development-driven data layer may take many months to set up, stage, QA, and publish. Then if changes are needed, the process starts again. It may be worth going through the lengthy process initially, but if changes are frequently needed in this implementation, you may find yourself relying more on the DOM.

Are there other analytics/marketing tag vendors that may use a data layer? You may be able to hit two birds with one stone by creating a data layer that multiple tools can use.

Have you previously used another tag management system? Often, a data layer set up for a different tool can be used by DTM. Similarly, if the client ever moves away from DTM, their data layer can travel with them.

Does the site have jQuery? The jQuery library has many methods that help with CSS selectors (such as .parent, .children, .closest, .is…). A CSS-selector-based implementation may be more difficult without jQuery or a similar JavaScript library.

Who should create my Data Layer?

Ideally, your data layer should be created by your IT/developers… or at bare minimum, developers should be heavily involved. They may be able to hook into existing data in your CMS (for instance, if you use Adobe Experience Manager you can use the Context Hub as the basis for your data layer), or they may already have ideas for how they want to deploy. Your data layer should not be specific to just your Analytics solution; it should be seen as the basis of all things having to do with “data” on your site.

Yet frequently, for lack of IT investment, the analytics team will end up defining the data layer and dictating it to IT. These days, that’s what most Tech Specs consist of: instructions to developers on how to build a data layer. Usually, external documentation on data layers (like from consulting agencies) will be based on the W3C standard.

The W3C (with a task force including folks from Adobe, Ensighten, Microsoft, IBM…) has introduced a tool-agnostic data layer standard that can be used by many tools and vendors. The specifications for this can be found on the W3C site, and many resources exist already with examples. Adobe Consulting often proposes using the W3C standard as a starting point, if you don’t have any other plans. However, in my experience, that W3C standard is generally just that: a starting point. Some people don’t like the way the W3C is designed, and most everyone needs to add on to it. For example, folks might ask:

  • why is “onsiteSearchTerms” part of digitalData.page? Can I put it instead in something I made up, like digitalData.search?
  • I want to track “planType”- the W3C didn’t plan for that, so can I just put it somewhere logical like digitalData.transaction?
  • I don’t need “digitalData.product” to be in an array- can I just make that a simple object?

The answer is: yes. You can tweak that standard to your heart’s delight. Just please, PLEASE, document it, and be aware that some tools will be built with the official standard in mind.
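For example, a tweaked version might look like this (every node here is illustrative, not prescriptive):

window.digitalData = {
  page: { pageInfo: { pageName: "plan comparison" } },
  search: { onsiteSearchTerm: "dental" },   //moved out of digitalData.page
  transaction: { planType: "family" },      //not in the W3C spec; added for this site
  product: { productID: "SKU123" }          //a simple object instead of an array
};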

The Phased Approach

Many folks adopt a TMS specifically because they don’t want to have to go through IT release cycles to make changes to their implementation. You can still use a TMS to get a lot of what you need for reporting without a data layer and without a ton of CSS work. It may be worthwhile to put a “bare minimum” TMS deployment on your site to start getting the out of the box reports and any reports that don’t require a data layer (like something based on a plugin such as getTimeParting), then to fill in the data layer as you are able. I’d be wary though, because sometimes once that “bare minimum” reporting is in place, it can be easy to be complacent and lose some of the urgency behind getting a thorough solution implemented correctly from the start.

Conclusion

I fully understand that a properly designed data layer is a lot of work, but in my experience, there is going to be a lot of effort with or without a data layer- you can choose for that effort to be upfront in the planning and initial implementation, or you can plan on more long-term maintenance.