Stop Saying “Cookieless”

I created this very cheesy graphic for a blog post I wrote in… 2011. That’s right, I’ve been trying to stop cookie panic for 13 years.

(This is cross-posted from the 33 Sticks blog.)

I know we’re all sick of hearing about the “cookieless future”. As a topic, it somehow manages to be simultaneously boring (it gets too much air time) and terrifying (the marTech world is ending). But I want to discuss the problematic word itself, “cookieless”. I get that a lot of people use it metaphorically- they know that cookies as a general concept are not going anywhere; we just need a way to refer to all the changes in the industry. But I still argue the phrase is ambiguous at best, and misleading at worst. If someone brings up “the cookieless future” they might be talking about:

Technology Limitations

  • Browsers and ad blockers blocking 3rd party cookies (like Chrome will maybe-probably-someday do, and which Safari and other browsers have already been doing for years)
  • Browsers blocking or capping certain 1st party cookies (like Safari’s ITP)
  • Browsers and ad blockers blocking tracking technology (like Safari’s “Advanced Tracking Protection”, which blocks not just analytics, but GTM altogether)
  • Browsers and ad blockers blocking marTech query params like gclid or fbclid
  • App ecosystems blocking the use of device identifiers within apps (like Apple’s ATT)

OR Privacy and Consent
Depending on your jurisdiction:

  • Your users may have the right to access, modify, erase or restrict data you have on them. This applies to data collected online tied to an identifier, as well as PII data voluntarily given and stored in a CRM.
  • Your users have the right to not be tracked until they’ve granted permission (GDPR), or to opt-out of their data being shared or sold (CCPA and many others)
  • Data must be processed and stored under very specific guidelines

You’ll note under the “Privacy and Consent” list, I don’t mention cookies. That’s because laws like CCPA and GDPR don’t actually focus on cookies. CCPA doesn’t even mention cookies. GDPR is very focused on Personal Information (which is broader than Personally-Identifiable Information, or PII. Personal Information refers to anonymous identifiers like you might find in marTech cookies, but it also refers to IP addresses, location data, hashed email addresses, and so on). Again, for those in the back: CONSENT IS FOR ALL DATA THAT HAS TO DO WITH AN INDIVIDUAL, NOT JUST COOKIES. Cookies are, of course, a common way that data is tied to a user, but it is only a portion of the privacy equation.

I understand why we want one term to encapsulate all these changes in the industry. And in some ways, it makes sense to bundle it all together, because no matter the issue, there is one way forward: make the best of the data we do still have, and supplement with first-party data as much as possible. However, this conflation of technology (chromeCookiePocalypse) with consent (GDPR) has led to exchanges like this one I saw on #measure slack this week:

Q: “After we implemented consent, we noticed a significant drop in conversions. How can we ensure accurate tracking and maintain conversion rates while respecting user privacy?”

A: “Well, that’s expected, isn’t it? You will only have data from those who want to be tracked. If folks opt out, you will have less data.”

Q: “Yes, we expected a dip in conversion, however our affiliates are reporting missed orders and therefore commission discrepancies, which is affecting our ranking. They suggested API tracking and also said that since they are using first party cookies, they should not be blocked, and we should categorize them as functional”.

Sigh. People are understandably confused. First-party cookies don’t need consent, right? Privacy just means cookie banners, right? Losing third-party cookies will mean a lot of lost tracking on my site, right? If we solve the cookie problem, work can continue as normal, right? (Answers: no, no, no, and no). 

Even people who should know better are confused. The focus on cookies has led to silliness like this:

  • Google’s “cookieless” pings that collect data even if a user has requested to not have their data collected
  • Sites removing ALL cookies from their site (talk about throwing the baby out with the bathwater, if baby-throwing took a lot of effort and resources)
  • Server-side tag management being touted as “vital for a cookieless future” (as if adding a stopping point between the user’s browser and data collection points somehow reduces the need for lasting identifiers in the user’s browser, or for consent. Server-side tag management has advantages, but it just shifts the cookie and consent issues; it doesn’t solve them.)
  • People thinking that a JavaScript-based Facebook CAPI deployment will provide notable, sustained protection against online data loss (I have a whole other blog post about this I need to finish up)
  • Technology vendors pivoting from using anonymous online identifiers in cookies tied to one browser instance, to using online identifiers tied to personal emails and phone numbers. (One might argue this does not feel like a step forward.)
  • Agencies selling “third-party cookie audits”, to scan your site for third party cookies and help you document and quantify each one to prepare for the impending loss of data.

I want to talk specifically about this last one. This idea (that auditing your site for cookies is a key way to prevent data loss due to Chrome blocking 3rd party cookies) has been a key talking point at conferences, has been promoted by agencies and vendors selling their 3rd-party-cookie-auditing services, and is even promoted by Google’s don’t-panic-about-us-destroying-the-ad-industry documentation.

But all the ChromeCookiePocalypse dialogue (and the cookie audit recommendations) leaves out critically important context, especially for an audience of analysts and marketers:

  1. Between Safari, Firefox, Opera, and Brave (not to mention ad blockers), 3rd party cookies have not been reliable for some time. There is a good chance more than 50% of your traffic already blocks them, and the world continues to turn, partially because of so much mitigation that’s already been done: Adobe Analytics, Google Analytics, Facebook, Adwords, Doubleclick, etc… all already use 1st party cookies for tracking on your site (that “on your site” bit is important, as we’ll discuss in point #4).
  2. That said, we can’t pretend that if we fix/ignore the 3rd party cookie problem, then our data is safe and we can continue business as usual. Yes, 3rd party cookies are being blocked, but even 1st party cookies may be capped or limited because of things like Apple’s ITP.  Some browsers and/or ad blockers may block tracking even if it’s first-party. And there are other issues: bots are rampant on the web and most tools do a poor job of filtering them out. Browsers strip out query parameters used to tie user journeys together. Depending on your local privacy laws, you could be losing a significant portion of your web data due to lack of consent (I’ve seen opt-out rates up to 65%). All data collected online is already suspect.
    I suppose it’s better late than never, but even if Chrome weren’t changing anything, you should already be relying on first-party data wherever possible, supplementing with offline data as much as possible.
  3. A thorough audit of 3rd party cookies is not going to tell you what you need to know. I, as a manager-of-tags tasked with such an audit, can tell you your site has a Doubleclick cookie on it. I can’t tell you what strategies go into your Doubleclick ads. I can’t tell you how much of your budget is used on it. I can’t even tell you if anyone is still using that tracking. I can’t tell you from looking at your cookies how your analysts use attribution windows, or if you currently base decisions off of offsite behavior like view-through conversions. I can’t tell you, based on your cookies, if you have a CDP that is integrated with your user activation points.
    Even if the cookies alone were the key factor, a scan or a spot-check of your own site is likely to miss some of the more important cookies. If I come directly from Facebook or Google to a site, then I may have cookies that wouldn’t be there if I came directly to the site. If I’ve ever logged in to Facebook or Google within my browser, that will add even more 3rd party cookies on my site. It would be virtually impossible to audit all of those situations to find all of those cookies, but it’s THOSE cookies that matter most. Because…
  4. ...it’s the cookies that will go missing from *other* websites that will have the biggest impact. Where the ChromeCookiePocalypse gets real is for things like programmatic advertising, or anything that builds a user’s profile across sites, or requires a view into the full user journey. Accordingly, the 3rd party cookies on your own site might not be nearly as important as the cookies on other places on the web that enrich the profiles of your potential audience.

I think the reason I’m so frustrated by the messaging is that 1) I hate anything that resembles fear-mongering (especially when it includes the selling of tools and services), and 2) I’ve already seen so much time spent on painstaking cookie audits that don’t actually move an org forward. Focusing on the cookies encourages a bottom-up approach: a lot of reverse-engineering to figure out the CURRENT state, rather than taking steps toward 1st party data… steps you should be taking regardless. Finding a 3rd party Facebook cookie on your site shouldn’t be how you find out your organization uses targeted advertising, nor should it be the reason you update your strategies. I wonder how much of the push to scan websites for cookies and create spreadsheets is because that task, tedious as it is, sounds much more do-able than rethinking your overall data strategy.

If you’re afraid something has slipped through the cracks, then yes, do a cookie audit: go to a conversion point like a purchase confirmation page and look at the 3rd party cookies. Before you research each one, just note the vendors involved. Just seeing that a 3rd party cookie exists gives you a heads up that you have tracking on your site from a vendor that relies on 3rd party cookies for some part of their business model. Because if they’ve got a 3rd party cookie on your site, odds are they use that same cookie on other sites. That’s what you need to solve for: how will your business be affected by your vendors not collecting data on other sites? Don’t focus on the cookie, focus on your strategies. What technology do you have that relies on cross-site tracking? How much of your advertising budget is tied to behavioral profiling? Programmatic advertising? Retargeting? Does your own site serve advertisements that use information learned on other sites? Does your site do personalization or recommendations based on user data collected off your site? What 1st party data do you currently have that could be leveraged to fill in gaps? How can you incentivize more users to authenticate on your site so you can use 1st party identifiers more?

Talk to your marketers and advertising partners. Don’t ask about cookies. Ask what advertising platforms they use. Ask about current (or future) strategies that require some sort of profile of the user. Ask about analysis that requires visibility into what the user is doing on other sites (like view-through conversions, if you still think that’s useful, though you probably shouldn’t). Ask about analysis that relies heavily on attribution beyond a week (which is currently not very reliable and likely to become even less so.)

And, most importantly, talk to the vendors, because it’s going to be up to them to figure out how their business model will work without 3rd party cookies. Most of them will tell you what we already know, but may not be ready to make the most of: 1st party data is the key (which usually means supplementing your client-side online data with data from a CDP or other offline databases). Ask what options there are to supplement client-side tracking with 1st party data (like Meta’s Conversions API). Ask how they might integrate with a Customer Data Platform like Adobe Experience Platform’s Real-Time CDP.

I’m not arguing that we don’t need to make some big changes. In fact, I’m happy the ChromeCookiePocalypse pushed people into thinking more about all of this, even if it was a bit misguided. Technology is changing quickly. Consent is confusing and complicated. Analysts and marketers are having to quickly evolve into data scientists. It’s a lot to keep up with. But words are important, and it’s not just about cookies anymore. Welcome to the “consented-first-party-data future”*.


*I’m open to suggestions for other “cookieless” alternatives

Adobe’s Opt-In Service for Consent Management

This is cross-posted from the 33 Sticks blog.

Adobe’s Opt-In Service is a tool provided by Adobe to help decide which Adobe tools should or should not fire, based on the user’s consent preferences. It has some advantages I don’t see folks talk about much:

  1. It manages your Adobe tags (Analytics, ECID, Target) at a global level. If you’ve got it set up right, you don’t need to make any changes to your logic that sets variables and fires beacons (you don’t need a ton of conditions in your TMS)- any rule’s Analytics or Target actions will still run, but not set cookies or send info to Adobe.
  2. When your existing variable-and-beacon-setting logic runs while the user is opted out, Adobe queues that logic up, holding on to it just in case the user does opt in. Any Adobe logic can run, but no cookies are created and no beacons get fired until Adobe’s opt-in service says it’s ok.

This latter point is very important, because in an opt-in situation, consent frequently happens after the initial page landing – the page that has campaign and referrer information. If you miss tracking on that page altogether, you lose that critical traffic source information. Some folks just re-trigger the page load rule, which (depending on how you do it) can mess up your data layer or rule architecture, or at bare minimum add complexity. So I’m a big fan of this “queue” of unconsented data.

You can even experiment and see for yourself- this site has a globally-scoped s object, and an already-instantiated Visitor object. Open the developer console, keep an eye on your network tab, and put this in, to opt out of analytics tracking on my site:

adobe.optIn.deny('aa');//opt out analytics

Then run code that would usually fire a beacon:

s.pageName="messing up Jenn's Analytics data?" //you can tell me a joke here, if you want
s.t()

No beacon!

Now let’s pretend you’ve clicked a Cookie Banner and granted consent. Fire this in the console:

adobe.optIn.approve('aa');//opt in analytics

And watch the beacon from earlier appear! Isn’t that magical?

Side note: The variables sent in the beacon will reflect the state at the time the beacon would have fired. I tested this- if I do this sequence:

adobe.optIn.denyAll()
s.prop15="firstValue"
s.t() //no beacon fires because consent was denied
s.prop15="secondValue"
adobe.optIn.approveAll() //now my beacon fires

…the beacon that fires will have prop15 set to “firstValue”- it reflects things at the time the beacon would have fired.
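Conceptually (this is my own sketch of the behavior I observed, not Adobe’s actual implementation, and all names are hypothetical), that queueing works something like this:

```javascript
// Conceptual sketch of the opt-in "queue": beacon calls made while consent is
// denied are held, then flushed on approval, carrying a snapshot of the
// variables as they were when the beacon would have fired.
function makeBeaconQueue() {
  var pending = [];
  var approved = false;
  return {
    track: function (vars, send) {
      // Snapshot the variables now, so a later approval sends "firstValue",
      // not whatever the variables were changed to afterwards.
      var snapshot = JSON.parse(JSON.stringify(vars));
      if (approved) {
        send(snapshot);
      } else {
        pending.push({ vars: snapshot, send: send });
      }
    },
    approve: function () {
      approved = true;
      pending.forEach(function (item) { item.send(item.vars); });
      pending = [];
    }
  };
}
```

In this sketch, calling track while denied sends nothing, and a later approve flushes the held beacon with the old snapshot, mirroring the prop15 “firstValue” result.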

To reiterate: if you have the ECID service set up correctly, it will still look like your Analytics/Target rules fire, but the cookies and beacons won’t happen until consent is granted.

How to Use within Adobe Launch*

Most of Adobe’s documentation assumes you’re using Adobe Launch*, and even then, leaves out some key bits. The Experience Cloud ID extension has what you need to get things set up, but is not the complete solution (you can skip this and go straight to the Update Preferences section below if you want the part that’s not particularly well documented). Let’s walk through it, end-to-end:

Opt-In Required

First, tell Launch when to require Opt-in.

If it’s just “yes” or “no”, then it’s simple enough. If you want to decide dynamically (based on region, for example), then you need to create a separate Data Element that returns true or false. For example:

if(_satellite.getVar("region")=="Georgia"){
     return false //Georgia doesn't care about consent. Yet.
}else{
     return true
}

To be honest, to me it’s always made more sense to just click the “yes” option, then further down, change up people’s default pre opt-in settings based on region.

Store Preferences

I’ll admit, I’ve never used these, but I think they’re self-explanatory enough: the ECID is offering to store the user’s consent settings for you. This seems convenient, but I’ve never seen a setup where the overall CMP solution doesn’t already store the consent preferences somewhere, usually in a way that is not Adobe-specific.

Previous Permissions / Pre-Opt-In Approvals

The “Previous Permissions” and “Pre-Opt-In Approval” settings work very similarly, but “Pre-Opt-In Approval” is more of a global setting, while “Previous Permissions” is user-specific (and overwrites “Pre-Opt-In Approval”). In other words: Pre-Opt-In Approval settings are used when we don’t have Previous Permissions for the user.
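As I read it (this is a sketch of the precedence as described above, not Adobe’s actual code, and the function name is hypothetical), the per-category resolution works like this:

```javascript
// Hypothetical sketch of the precedence: a stored user choice
// (previousPermissions) wins over the site-wide default (preOptInApprovals).
function effectivePermission(category, preOptInApprovals, previousPermissions) {
  if (previousPermissions && category in previousPermissions) {
    return !!previousPermissions[category]; // user-specific setting wins
  }
  return !!(preOptInApprovals && preOptInApprovals[category]); // fall back to the default
}
```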

The key here is the format of the object, usually in a Custom Code Data Element, that Adobe is looking for:

{
   aam:true,
   aa:true,
   ecid:true,
   target:true
}

…where true/false for each category would be figured out dynamically based on the user. So your data element might have something like this:

var consent={}
if(_satellite.cookie.get("OptInInfoFromMyCMS").indexOf("0002")!=-1){
    consent.aa=true
    consent.target=true
    consent.ecid=true
}else{
    consent.aa=false
    consent.target=false
    consent.ecid=false
}
return consent

Important but often missed: updating preferences

All of the above is great for when the ECID extension first loads. But what about when there is a change in the user’s preferences? As far as I know, this can’t be done within the ECID extension interface, and it’s not particularly well documented… this is where you have to use some custom code which is kind of buried in the non-Launch optIn service documentation or directly in the Opt-In reference documentation. (I think we’re up to 4 different Adobe docs at this point?)

When your user’s preferences change, perhaps because they interacted with your banner, there are various methods on adobe.optIn that you can use:

adobe.optIn.approve(categories, shouldWaitForComplete);
adobe.optIn.deny(categories, shouldWaitForComplete);
adobe.optIn.approveAll();
adobe.optIn.denyAll();
adobe.optIn.complete(); //registers your changes if you set shouldWaitForComplete to true

The “shouldWaitForComplete” bit is a boolean: if true, Adobe won’t register your changes until you fire adobe.optIn.complete(). If false, or omitted, Adobe will recognize those changes immediately.

So I might have a rule in Launch that fires when the user clicks “approve all” on my banner (or, if I’m using OneTrust, within their OptanonWrapper function), with a custom code action that looks something like this:

adobe.optIn.approve('target',true);//opt in target 
adobe.optIn.approve('aa',true);//opt in analytics
adobe.optIn.approve('ecid',true);//opt in the visitor ID service
adobe.optIn.complete(); 

Which could also be accomplished like this:

adobe.optIn.approve(['target','aa','ecid']);

Or even just this (in my tests, this did not require adobe.optIn.complete()):

adobe.optIn.approveAll();

What if I’m Not Using Launch?

If you’re not using the Launch ECID extension, then the initial setup- the stuff that would otherwise live in the extension, as detailed above- is all handled when you instantiate the Visitor object. This is all covered in Adobe’s Opt-In Service documentation, which I personally found a bit confusing, so I’ll go through it a bit here.

As far as I can tell, this bit from their documentation:

adobe.OptInCategories = {
    AAM: "aam",
    TARGET: "target",
    ANALYTICS: "aa",
    ECID: "ecid"
};

// FORMAT: Object<adobe.OptInCategories enum: boolean>
var preOptInApprovalsConfig = {};
preOptInApprovalsConfig[adobe.OptInCategories.ANALYTICS] = true;

// FORMAT: Object<adobe.OptInCategories enum: boolean>
// If you are storing the OptIn permissions on your side (in a cookie you manage or in a CMP),
// you have to provide those permissions through the previousPermissions config.
// previousPermissions will overwrite preOptInApprovals.
var previousPermissionsConfig = {};
previousPermissionsConfig[adobe.OptInCategories.AAM] = true;
previousPermissionsConfig[adobe.OptInCategories.ANALYTICS] = false;

Visitor.getInstance("YOUR_ORG_ID", {
    "doesOptInApply": true, // NOTE: This can be a function that evaluates to true or false.
    "preOptInApprovals": preOptInApprovalsConfig,
    "previousPermissions": previousPermissionsConfig,
    "isOptInStorageEnabled": true
});

Could be simplified down to this, if we skip the extra OptInCategory mapping and the preOptInApprovalsConfig and previousPermissionsConfig objects:

Visitor.getInstance("YOUR_ORG_ID", {
    "doesOptInApply": true, 
    "preOptInApprovals":{
         "aa":true,
     },
    "previousPermissions": {
         "aam":true,
         "aa":false,
     },
    "isOptInStorageEnabled": true
});

Literally, that’s it. That’s what that whole chunk of the code in the documentation is trying to get you to do.

Beyond that, updating preferences happens just like it would within Launch, as detailed above.

Side note- what’s up with adobe.OptInCategories?

I’m not sure why Adobe has that adobe.OptInCategories object in their docs. These two lines of code have the same effect, but one seems much more straightforward to me:

previousPermissionsConfig[adobe.OptInCategories.AAM] = true;
previousPermissionsConfig["aam"] = true;

The adobe.OptInCategories mapping is set by Adobe- despite appearing in their code example, you don’t need to define it yourself; it comes predefined.

I’m guessing this was to make it so folks didn’t have to know that analytics is abbreviated as “aa”, but… if you find the extra layer doesn’t add much, you can bypass it entirely if you want.

Conclusion

Once I understood it, I came to really like how Adobe handles consent.

I by no means consider myself an expert, nor is this meant to be a comprehensive guide. But the documentation is lacking or spread out, so most of this took trial and error to really understand, and I’m hoping my findings will save others from that same trial and error. That said, I’d love to hear if folks have experienced something different, or have any best practices or gotchas they feel I left out.

*still not calling it “Adobe Experience Platform Data Collection Tags”

Problems with the “Autoblock” approach to Consent Management

In my post on setting up Onetrust, I mention how problematic Autoblock can be, but it comes up often enough that I decided it merited its own post, because enabling any Consent Management Platform (CMP)’s Autoblock functionality is a big deal, with far-reaching implications.


There are two approaches to consent management:

  1. Use your Consent Management Platform to create banners and keep track of users’ consent status. The CMP doesn’t affect what tags fire at all; it merely records consent. Your Tag Manager then gets consent info from the CMP, and you use conditions in your marketing tag rules (plus the ECID Opt-In Service for Analytics and Target) to determine which tags are ok to fire.
  2. Use the Consent Management Platform to create banners, track users’ consent status, and block any files or cookies that try to load without consent. Your Tag Management System has essentially no control. You don’t have to mess with extra conditions in Launch (which some see as a perk), but someone has to maintain the CMP so it knows which scripts and cookies are allowed to fire.

“Autoblock” is what we call that CMP functionality that proactively blocks any non-consented technology from loading. Instead of counting on you to manage which vendor code gets to run, it lets you run all your tag code, then it just blocks any non-consented files or cookies from actually loading.
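To make that concrete, here is a rough conceptual sketch of what an autoblocker does (the inventory entries and function are hypothetical; real CMPs are far more involved): before a script executes, it checks the script’s source against a maintained vendor inventory and neutralizes anything non-consented, commonly by rewriting the script’s type so the browser won’t run it.

```javascript
// Conceptual autoblock sketch (hypothetical names; not any real CMP's code).
var inventory = [
  { pattern: "connect.facebook.net", category: "marketing" },
  { pattern: "ws.audioeye.com", category: "necessary" }
];

function autoblock(scriptEl, consentedCategories) {
  var entry = inventory.find(function (e) {
    return scriptEl.src.indexOf(e.pattern) !== -1;
  });
  // Scripts missing from the inventory are a policy decision;
  // here we treat unknown scripts as requiring consent.
  var category = entry ? entry.category : "unknown";
  var allowed = category === "necessary" ||
    consentedCategories.indexOf(category) !== -1;
  if (!allowed) {
    scriptEl.type = "text/plain"; // browser won't execute it; the CMP can restore it later
  }
  return allowed;
}
```

Note the inventory object: that is exactly the list someone has to build and maintain, which is where much of the effort described later comes from.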

While that post was specific to OneTrust, this is actually relevant for most Consent Management Platforms. I know for sure Cookiebot, Osano, Trustarc and Ensighten can use it, and I’m pretty sure every other CMP does too, because it lets their salespeople say that setting up their tool “takes no effort at all!”

Call it what it is: consent management is complicated

We often get ourselves in trouble in our industry by thinking a simple solution will solve a complicated problem. But simple solutions often require complicated workarounds to actually work in the real world. Going with the “simple” solution may actually create more work and more points of failure.

The alternative to using Autoblock is to use the consent preferences gathered by your CMP to create conditions in your TMS to decide which vendor’s code gets to run. This sounds like more work, and sounds more prone to human error, but after reading this post I hope you’ll understand that Autoblock doesn’t remove human effort (and the possibility for error), it just shifts it.
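As a sketch of that condition-based approach (the function is hypothetical, and the ",C0001,C0002," string format resembles what OneTrust exposes as window.OptanonActiveGroups, but treat the specifics as an assumption and check your own CMP’s API):

```javascript
// Hypothetical helper for a TMS rule condition: returns true only if the
// CMP-reported consent string contains the given category ID.
function hasConsent(activeGroups, categoryId) {
  if (typeof activeGroups !== "string") {
    return false; // CMP hasn't loaded, or the user hasn't interacted yet
  }
  return activeGroups.split(",").indexOf(categoryId) !== -1;
}
```

A Launch custom condition on a marketing tag rule could then be as small as `return hasConsent(window.OptanonActiveGroups, "C0004");`- if it returns false, the rule’s actions simply never run.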

Part of the complexity comes from the fact that for all CMPs I’ve worked with, Autoblock doesn’t just block cookies, it blocks scripts that might be setting cookies. The theory behind this is sound enough: privacy compliance shouldn’t just be about cookies, it should be about data collection. Lots of tracking can happen without cookies, and cookie names can change even if script locations do not. If a user hasn’t consented to Analytics tracking, we don’t want to just block my Adobe AMCV cookie, we want to block the appMeasurement script from trying to identify me and build beacons about my user experience. So even though everyone talks about CMPs in terms of cookies, and even most consent banners only really mention cookies, we’re usually talking about much more. Just to drive it home, because this is key to understanding how CMPs work: CMPs affect scripts AND cookies.

Also, Autoblock is a binary thing: though you can have a “hybrid” approach that uses both TMS conditions and Autoblock, Autoblock is either on (and potentially affecting everything) or off (leaving it entirely up to your TMS).

Now that we’re clear on that, let’s talk about reasons to avoid Autoblock (regardless of which Consent Management Platform you use). Each CMP implements Autoblock slightly differently, and some approaches work better than others, but they all share some common pitfalls.

Complication #1: Many Points of Failure

As I said in my other post, most problems I’ve seen with CMPs have been because of Autoblock functionality. In fact, I have only seen one deployment where it didn’t cause any major problems. Even though consent management is NOT my main role, I somehow keep hearing about Autoblock issues:

  • In a client’s Osano POC, we saw consented analytics beacons firing in duplicate or triplicate with no explanation, inflating our Analytics data.
  • That same POC caused a bunch of JavaScript errors, even in site-essential scripts, because Autoblock changed the order or speed at which things loaded.
  • One org I know of lost 2 months of consented Adobe Analytics data because of a misconfiguration in Ensighten. They didn’t lose ALL data, or they would have spotted the problem sooner- they just thought they had abysmally low opt-in rates, not that 30% of their data was blocked even when users opted in.
  • Another org using Cookiebot has a consumer lawsuit on their hands because an old Adobe visitorApi.js file somehow snuck through their Autoblock.
  • Just now, while testing how OneTrust would handle AudioEye, nothing I did could get the AudioEye script to fire with Autoblock on. (The problem may very well have been user error- I didn’t spend much time troubleshooting.)
  • A week or two back, I was brought into a call with a non-client to discuss a timing issue that was causing cookies to be created even with Autoblock on. The vendor had suggested a manual workaround that involved writing a lot of custom code to manually manipulate cookies.
  • The next day, someone else reached out in response to my OneTrust blog post to let me know they had just discovered that because of an issue with hashing, their Autoblock file had exploded in size, causing a sudden traffic deflation issue and impacting their page performance for months. It took a lot of resources to track down the issue, and even then they might not have spotted it if they hadn’t been running regular Observepoint scans.

Time and time again, I see troubleshooting and maintaining Autoblock taking more resources than a TMS-condition-based approach would. Again, I’ll admit a number of these problems could have been from user error. But users do err.

Complication #2: Cookie/script inventory management

Regardless of the CMP, Autoblock functionality requires some sort of inventory of all the cookies and scripts your marTech vendors may be using, so that it can know what to block and what to allow. This is often where most of the “work” of maintaining a CMP comes from. There is usually some list of all possible cookies on your site that someone is going to have to sort through and maintain. Such lists can take a lot of effort and knowledge to maintain. I’ve seen these inventory lists get hundreds of items long and require weekly upkeep, forcing orgs to shift their efforts from making sure they’re using their data in compliant ways, to doing detective work on where obscure cookies like “ltkpopup-session-depth” come from and whether or not they are necessary for your site to function.

Some CMPs allow you to surface this list to the user, so that by clicking through somewhere on your banner, the user can see documentation on all the types of cookies possibly set on your site.

I am not a lawyer, but a bit of research suggests this is not legally required by GDPR or any of the other major regulations (though I’ll admit there may be reasons lawyers want you to have it). In fact, from my non-lawyerly view, such lists may be a liability, because keeping them 100% accurate would be very difficult for most sites. So I don’t actually see this in-depth public-facing cookie documentation as a worthy benefit.

Most CMPs (including OneTrust) will scan your site and let you know what cookies and scripts it found. Some may fill in some knowledge- for example, OneTrust’s Cookiepedia knows that “_gcl_aw” is likely tied to Adwords and belongs in the Marketing category- but others may rely entirely on you to categorize each cookie appropriately. Some CMPs may only look at cookies and scripts set by your site; others (like one TMS/CMP that rhymes with “enlighten”) may make you categorize all cookies and scripts that a user’s browser extensions may be bringing with them- things that have nothing to do with your site (or what you’re liable for).

One reason keeping such an inventory is so difficult is the often-unclear relationship between scripts/cookies and what purpose they serve. The marTech vendors don’t make it easy- because of acquisitions, concern about optics, or any number of other reasons, most vendors use multiple domains and set multiple cookies which often don’t have a clear association with their vendor. As part of my Tagging Overload presentation last summer, I made an inventory of 200+ common domain/vendor associations but even that is far from comprehensive (and probably far out of date by now). And if cookies are set on your own domain, it can be even harder to know if they require consent, or if your own developers use them to make your site function properly. Miscategorization could mean a lawsuit, or it could mean broken website functionality.

Let’s take two examples:

Many sites use a tool called AudioEye, which makes a site more accessible (for instance, by making it easier for screen readers to describe content to blind users) to stay compliant with the Americans with Disabilities Act (ADA). I’m not a lawyer, but a strong case could be made for AudioEye being considered Strictly Necessary and therefore not requiring opt-in.

To add AudioEye to my site, I essentially just add a couple lines of code that reference a single .js file (https://ws.audioeye.com/ae.js). That file then brings in a bunch of other files:

Between those scripts, a few cookies get set. OneTrust picked up these third-party cookies in its scans:

(Though it did not seem to pick up on the first-party _aeaid cookie, so I may need to flag that myself.) These cookies and domains are either missing from OneTrust’s Cookiepedia or incorrectly categorized as Marketing cookies. So now, instead of keeping track of the single line of code I added to the site, I’ve got a dozen-plus javascript files, 5 third-party cookies, and 1 first-party cookie, for a single vendor. I’m going to need to make sure that Autoblock knows these are all “Strictly Necessary”, not for marketing or targeting.

Let’s look at Facebook for a Marketing Pixel example next: the pixel script they give me loads one JS file, fbevents.js. But that file actually loads another file from “connect.facebook.net/signals/config/”. On the site I’m currently looking at, between those two files, Facebook sets four third-party “.facebook.com” cookies (“usida”, “fr”, “wd”, and “datr”) and one first-party cookie (“_fbp”).

If I’m relying on my TMS conditions to manage tags based on consent (instead of Autoblock), I can nip all of that in the bud by never letting the code that loads that fbevents.js file run in the first place. I don’t need to know that Facebook uses multiple JS files and sets a _fbp cookie on my domain- I just don’t run the script that might set any cookies to begin with.
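Sketched in JavaScript, the condition-based approach is just a guard in front of the script injection. `shouldFireTag` and `loadTag` are hypothetical helpers illustrating the pattern, not any particular TMS’s API:

```javascript
// Hypothetical TMS-style condition: a tag only fires if the user has
// consented to its category ("Strictly Necessary" always fires).
function shouldFireTag(tagCategory, consentedGroups) {
  if (tagCategory === "Strictly Necessary") return true;
  return consentedGroups.includes(tagCategory);
}

// Gate the script injection itself, so fbevents.js (and everything it
// would go on to load and set) never runs without consent.
function loadTag(url, tagCategory, consentedGroups, inject) {
  if (!shouldFireTag(tagCategory, consentedGroups)) return false;
  inject(url); // e.g. append a <script> element in a real browser
  return true;
}
```

Because the guard sits upstream of the vendor’s loader, you never need to enumerate the downstream files and cookies; blocking the one entry point blocks them all.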

By contrast, with most Autoblock setups, the CMP might let me know it found two FB scripts, 4 third-party cookies, and 1 first-party cookie on my site. It might suggest categorizations for known cookies (OneTrust is pretty good about that; some other CMPs are not), or it may count on me to know which categories they belong to and which cookies are “strictly necessary”. I’d need to keep this list updated- even if I’m not changing the tags on my site, Facebook might start setting cookies under a different name.

From what I can tell (because it’s not transparent in the interface), OneTrust then finds the scripts setting those cookies and blocks those scripts. I can’t really categorize the scripts themselves in the OneTrust interface. While other CMPs may focus on the scripts rather than the cookies, OneTrust’s cookie-centric approach leaves me with very little control over which scripts I want to have fire.

(To be fair, if Autoblock blocked the fbevents.js file, it would stop all the rest from happening… but the cookie inventories seldom understand that nuance, so you still have 5+ cookies and 2+ scripts to categorize.)

But even once this list is in place and correct, you may still need to do more work to make it cooperate with your site.

Some CMPs, like OneTrust, use a one-size-fits-all approach: things are either “strictly necessary” or they get blocked. You can categorize cookies as strictly necessary or not, but as far as I can tell, there is nowhere within the OneTrust interface to tell it a script is necessary (and if my AudioEye javascript is blocked, it doesn’t matter that I’m allowing AudioEye to set cookies). Instead, if I need to allow a script, I can add a custom attribute to the HTML of the tag:

<script data-ot-ignore src="examplescript.com/javascript.js"></script>

(This goes to show: never believe a vendor that says you can implement their solution without developer work. It’s too good to be true that you could become compliant, not break your site, and not have to talk to a developer.)

Govern Tags, Not Cookies

You can be privacy-compliant without a complete list of third-party cookies. Don’t get me wrong, such a list could be handy, especially if you can keep it accurate and up-to-date. And by all means, you should have some sort of inventory of the tags on your site.

If you know what tags you are firing on your site, you can stop tracking at the source, by stopping those tags. Would you rather take the time to have a condition on your Salesforce tag in your TMS that stops it from ever firing, or take the time to find out that cookies on “cdn.krxd.net” come from Salesforce? There’s no magic bullet solution, either way.

Complication #3: Ownership

Most companies are struggling to find the right owner for maintaining a CMP. Lawyers know the regulations but may have no clue what an AMCV cookie does; marketers may know that they are using TradeDesk for advertising, but have no idea that it uses the domain adsrvr.org; developers may know that the third-party scripts from fonts.googleapis.com affect the visual appearance of the site and have nothing to do with marketing, but have no clue about Facebook cookies.

I won’t pretend that moving away from Autoblock will clear up all the ownership issues. BUT, when the person deploying the tag (in the TMS) also has primary control of not firing the tag (via conditions in the TMS), it does simplify things.

Complication #4: No Nuance in Categories

For OneTrust at least, Autoblock is all-or-nothing; from what I understand, there are no categories beyond “strictly necessary” and “not strictly necessary”. Someone can’t opt for Analytics tracking but no Marketing tracking.

Complication #5: Timing

If you don’t have OneTrust’s library loading early enough, it may not be there in time to block early-loading scripts and cookies, leaving you with incomplete coverage. TMS conditions, on the other hand, can have defaults built in that don’t rely on the OneTrust script having fully loaded (I write about one way to do that in my other post).
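One way to build in such a default: read the consent cookie directly, and if the CMP hasn’t written it yet, fall back to deny-all (the safe default in an opt-in jurisdiction). The sketch below assumes an OneTrust-style `OptanonConsent` cookie with a `groups=` value like `C0001:1,C0004:0`- treat that format as an assumption and verify it against your own implementation:

```javascript
// Sketch: derive consented groups with a safe default when the CMP
// library hasn't loaded (or written its cookie) yet.
function getConsentedGroups(cookieString) {
  const match = /(?:^|;\s*)OptanonConsent=([^;]*)/.exec(cookieString || "");
  if (!match) return []; // no consent cookie yet: default to deny-all
  const decoded = decodeURIComponent(match[1]);
  const groupsMatch = /groups=([^&]*)/.exec(decoded);
  if (!groupsMatch) return [];
  // Keep only the groups flagged ":1" (consented).
  return groupsMatch[1]
    .split(",")
    .filter((pair) => pair.endsWith(":1"))
    .map((pair) => pair.split(":")[0]);
}
```

A TMS condition built on a helper like this behaves sensibly even on the very first hit of the page, before the CMP script has finished loading.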

Complication #6: Needlessly Losing Adobe Data by Ignoring the Adobe Opt-In Service

For Adobe Analytics tracking in an opt-in-only scenario, Autoblock limits the advantages of using the ECID Opt-In Service. The ECID Opt-In Service doesn’t just kill any tracking until you have consent- it holds “preconsent” tracking to the side, so once the user DOES consent, you don’t lose the data that would have been sent when the page loaded. If you’re relying only on Autoblock, you can never get that preconsent data back; you’d need to re-run any logic that would have fired on your page view.
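The “hold it to the side” idea can be illustrated with a generic queue. To be clear, this is a conceptual sketch of preconsent queuing, not Adobe’s actual Opt-In Service API:

```javascript
// Conceptual sketch of "preconsent" queuing: hold tracking calls
// until consent arrives, then replay them instead of losing them.
class PreconsentQueue {
  constructor() {
    this.consented = false;
    this.pending = []; // events held while awaiting consent
    this.sent = [];    // stands in for real beacons in this sketch
  }
  track(event) {
    if (this.consented) {
      this.sent.push(event);
    } else {
      this.pending.push(event); // held, not lost
    }
  }
  grantConsent() {
    this.consented = true;
    // Replay everything captured before consent, e.g. the page view.
    this.pending.forEach((event) => this.sent.push(event));
    this.pending = [];
  }
}
```

With a pure Autoblock approach there is no equivalent of `pending`: the page-view logic simply never ran, so there is nothing to replay once the user consents.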

Complication #7: Page Performance

Autoblock has the potential to have a major impact on page performance, though this varies widely by CMP vendor. Depending on how they work, they may be adding a “gatekeeping” process to every asset that loads on your site.

I haven’t tested OneTrust, but I did test another CMP’s Autoblock feature and found it slowed the page’s load time by up to 40%! (To be fair, I do think it was an inferior tool, and hopefully other CMPs aren’t that bad.)

This is a very hard thing to test: did any changes in performance come about because of tags/scripts not firing, because of the inherent weight of the CMP library, or because of Autoblock itself? I’d love to hear if anyone else has found a good way to test this, or observed changes in performance tied to Autoblock functionality.
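One crude way to isolate the effect: gather repeated load-time samples with Autoblock enabled and disabled (e.g. from the browser’s Navigation Timing API) and compare medians, since medians are more robust than means for noisy load-time data. The helpers below sketch only the comparison math; how you collect the samples is up to you:

```javascript
// Median of an array of load-time samples (in milliseconds).
function median(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Percent slowdown of the Autoblock runs relative to the baseline runs.
function slowdownPercent(baselineSamples, autoblockSamples) {
  const base = median(baselineSamples);
  return ((median(autoblockSamples) - base) / base) * 100;
}
```

Even this only tells you the combined effect; separating “CMP library weight” from “Autoblock gatekeeping” would require a third test run with the CMP loaded but Autoblock disabled.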

On the other hand, one case FOR Autoblock:

There is at least one situation in which you may need to rely on Autoblock: if you have any scripts or cookies you need to block that are NOT managed by your TMS. In order to use a condition-based approach, you have to be able to add some conditional logic to every script that needs consent. Autoblock doesn’t care if a script comes from your TMS or not- it applies to everything. If you are not confident that your TMS has full control over all tags that need consent, Autoblock may be for you.

Conclusion

I am not saying no one should use Autoblock. It has its uses. Just make the decision with a full understanding of the implications, and an awareness of what other options may look like.

I’m curious to hear others’ thoughts and experiences.