Governing your tags going forward

My first few posts in this series have established why marTech Ecosystem health matters, and shown some ways to measure the impact marTech is having on your site. Now, let’s talk about how to set up processes and standards so the tags you deploy going forward don’t become tomorrow’s problems.

Regardless of how much effort you can put into cleaning up current tags on your site, there are relatively simple things you can do, starting now, that will improve the future health of your marTech implementation.

Keep information ABOUT the tag WITHIN the tag

Within each tag, include info about the tag:

  • Who requested the tag (include any contact info for the agency, vendor support, internal sponsor)
  • The date the tag was deployed or updated or confirmed to be in use (and if known, date to remove)
  • Who deployed the tag

I like to do this directly in the JavaScript, but you can also include at least some of that info in the Rule, Tag, or Action name, or you can use the TMS “Notes” functionality (both GTM and Adobe Launch have it). Just be aware that not everyone is good at looking at notes (they can be a bit sneaky), and they don’t show up in a lot of auditing tools and such.
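
For example, a comment block like this at the top of the tag’s custom code (the names, dates, and account number here are all hypothetical):

/* Tag: Facebook pixel- account 987654321
   Requested by: Jane Doe, Agency X (jane.doe@agencyx.example)
   Deployed: 2022-01-15 by J. Kunz
   Confirm still in use by: 2023-01-15 */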

However you do it, keeping this information directly with the tag gives everyone context around the tag, and makes it much easier to keep your TMS setup healthy and current.

Use a Tag Request Intake Process

We’ve helped a lot of our clients implement a formal MarTech Tracking Request Intake process. When someone comes to you asking for a new tag to be deployed, you can have them fill out key information such as contact info, go-live date, tag vendor/account ID… This information can help in a few ways:

  • It cuts down on the back-and-forth between the implementor and the tag requester (I’ve had some email threads go 20 replies deep, trying to make sure we were all on the same page about the tag. You’d be amazed how often I get Facebook tags without an account ID, for example.)
  • It gives you a chance to set expectations with the tag requester, like the fact you need 2 weeks of turnaround time, or that you’ll follow up with them on a certain date to see if the tag is still in use or needs updates.
  • It builds an audit trail and can feed directly into documentation, like for the Tag Inventory you should be keeping.

I have an example in my governance workbook download, but it doesn’t have to be a spreadsheet. It can be a Google Form, a JIRA template, or just something you paste into an email.

Keep a “pixel menu”

I’ve found it can be a huge help to have a standard list of what user actions you usually have tags on, and what dimensions are available at those times.
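
For instance, a bare-bones (and entirely hypothetical) pixel menu might look like:

  • Page load: page name, page type, logged-in status
  • Search results: search term, number of results
  • Cart add: product ID, price, quantity
  • Purchase confirmation: order ID, revenue, list of products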

At bare minimum, this can be a great internal resource for the implementors and, say, a Center of Excellence. Sometimes it can also be helpful to show vendors/agencies, to get everyone on the same page about what’s worth tracking… just be warned: sometimes if an agency sees such a list, they’ll just say they want all of it. Which brings me to my next suggestion:

Consider having some restrictions on what you deploy

It’s ok to push back on Tag Requests. Consider:

  • Do you only deploy certain types of tags? Maybe only things that you already have a setup for, or that have passed a security review.
  • Do you only deploy on certain user actions (do they have to “stick to your list” in the pixel menu)? If they ask for a tag on both the click of the search button and the load of the search results page, ask them WHY. Sometimes they’ll have a reason, but much of the time they’ll say that the button click was just a nice-to-have, and they’ll tell you what items are actually a priority. Pro tip: mention to them “our turnaround time can be pretty quick if we stick to these particular user actions, but if you need something custom, that might slow things down. Maybe we can move forward with the priority tags for now, and circle back to your more custom requirements later?”
  • Do you ask them to justify the business value of the added weight and complexity that their tag brings? Some of the most complicated tags I’ve deployed turned out to be for something not providing a lot of value. Much of the time, the data a tag is supposed to be delivering can already be found other ways (like with an analytics tool).

Establish a follow-up cadence

Set up a calendar for your TMS. Each time you publish a tag, add a reminder one year out to follow up- something like “confirm with the requester that this tag is still in use and up to date.”

It takes very little effort or commitment in the here-and-now, and makes it easy to follow up without it being part of a massive clean-up effort.

Just… do something

Don’t wait until you can get it perfect. Do what you can, when you can. Make a commitment for the little things you can do going forward that your future self will thank you for.

Auditing your marTech tags and TMS

My first few posts in this series have established why marTech Ecosystem health matters, and shown some ways to measure the impact marTech is having on your site. Now, let’s talk about how to do a deeper audit of the tags on your site so you can start to plan a way forward.

Keep a Tag Inventory

I have a template as part of my governance workbook download, but it’s not hard to get started on your own: create a spreadsheet with one row per tag/vendor account, with columns for at least the following (there’s an example row after the list):

  • Vendor (Facebook, Doubleclick, etc)
  • Account Number- your key to the audit*
  • Date of last update to tag
  • Internal Owner/Agency Contact Info
  • Next Steps (investigate, keep, remove)
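
For example, a first row (with hypothetical values) might read: Facebook | 987654321 | 2022-01-15 | Jane Doe, Agency X | investigate.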

*That account number can become a useful tool for finding more information about the tag. If I look in my inbox for the word “Adwords”, the results would be hard to wade through. But if I search for my Adwords account number, “AW-123456”, it takes me directly to correspondence about that tag. This also helps when using your TMS’s internal search functionality, or as a filter in the network tab of your browser’s developer tools.

It’s ok if this sheet is a work in progress- in fact, I guarantee it almost always will be. It’s ok if the sheet only has information in three rows, or just has the first two columns filled out. It is better to have a mostly-blank workbook- so you at least have a place to store info as it becomes available, and to add new tags to- than to have nothing at all. Just fill in what you can, when you can.

Use Tools Like Observepoint

Observepoint has a free Chrome extension that you can run to see which known marTech tags are on any page you visit.

It will automatically map domains to known vendors and show you account numbers where applicable (again, you can see how those account numbers are handy for tying information together). Since this tool works on a page-by-page basis, I recommend running it on your home page and any key conversion points, like a purchase confirmation page, because most of your tags are likely to be represented there.

If you want a more comprehensive scan of your site, the paid Observepoint App does just that- it crawls your site and gives you a full report of everything it finds. You may need to teach it how to get to parts of the site that require interactions (like logging in, or entering credit card information) but with Observepoint, the more time you can invest in it up front, the more value you will get out of the tool.

The nice thing about the paid app is it has an under-appreciated “Tag Initiators” tool that shows you which tags are loading other tags, which is invaluable for helping figure out where your less-obvious tags are coming from.

A quick note: Observepoint may not recognize and catch some of the more obscure tags- I did come across a few that the extension didn’t pick up (though I have no doubts they have a drastically more comprehensive list than I do). I don’t think this is a shortcoming of theirs, but just the nature of the beast: there are almost 10,000 marTech vendors now, and many of them use multiple domains for tracking. But just in case, you may still want to run the webpagetest.org domains report and analysis I talked about in my post on marTech impact, and may need to supplement their mapping with information from our vendor/domain mapping database, or you may have to do some research on your own. I’ve found better.fyi to be a great resource for this type of research.

Use TMS Container Export tools

There are free tools for both Adobe Launch and GTM to help you see all your different tags (and help you get a sense of the health of your TMS set up).

For Adobe Launch, Tagtician by Jim Gordon is still one of the handiest things out there. It’s a Chrome extension you can use to download an entire Launch library: toggle the library option at the top of the extension, then click “export”.

This will give you a spreadsheet with information on every rule, data element, and extension in your Launch environment. I tend to move the “extensions” and “data elements” info to their own tabs, then add some columns to the rules data:
  • Tag Vendor
  • Vendor ID
  • Event
  • Potential Last Date Updated (or Date Implemented)
  • Notes

I turn the rules data into a table, organize it by Extension, and then just start at the top and work my way down, filling in those columns as I go.

Sometimes the tag code is right there in the exported file under “action detail”, but much of the time custom code resides in external JS files, and frankly, I haven’t found an easier way to get at that code than to just open Launch and go looking for it.

Urs Boller also has a great tool for Launch: the Launch Parser. It makes it easy to see the relationships between the different components (what data element is getting used where, etc) and also helps spot problems and make recommendations.

For GTM, Simo Ahava has a great tool for showing all the different components (tags, triggers, variables, containers) and their relationships to each other. Then, if you want to dive deeper, you can export your container directly from the GTM Admin console (Admin > Export Container).

This spits out a JSON file that can be a bit intimidating, but you can paste it into a tool like json2table.com to turn it into a much more manageable spreadsheet.
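
If you’d rather skip the spreadsheet step, here’s a minimal Node.js sketch (assuming the standard container export format, with a hypothetical filename) that lists what’s in the container:

const container = require('./GTM-XXXXXX_workspace.json'); // hypothetical export filename
const { tag = [], trigger = [], variable = [] } = container.containerVersion;
console.log(tag.length + ' tags, ' + trigger.length + ' triggers, ' + variable.length + ' variables');
tag.forEach(t => console.log(t.name + ' (' + t.type + ')')); // each tag's name and template type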

Do what you can

I know an audit sounds like a lot of work. At the bare minimum, set up that Tag Inventory and just start documenting things going forward, then audit past stuff when you can. I recommend taking on one vendor at a time (eg, all your Facebook tags at once). Apply a new naming convention/metadata standard as you go, to keep track of what you’ve updated/inventoried and what you haven’t.

It’s better to have something in place, even if it’s imperfect.

Why does it matter to have a “healthy” marTech ecosystem?

This is part of a series of blog posts on keeping marTech tags from becoming a problem on your site. See the preceding post (which serves as the “table of contents”) or following post on the impact that marTech has on your site.

The health of your website depends on the health of your marTech ecosystem, which is determined by a combination of the following:

  1. The number of tags on your site. I’d say most companies I work with have between a dozen and 150 tags on their site, though I have seen it go as high as 1500. Each additional tag adds a little to the weight and complexity of your marTech implementation.
  2. The age of the tags on your site. This matters both because you want the most updated versions of tags, which play best with modern browsers and current regulations, but also because the older the tag, the less likely there is someone out there still getting value out of it. A few weeks ago I came across an Adwords gtag from 2018. No one had any record of it, or any idea who might be using it. And instead of that meaning they could safely delete it, it meant they didn’t know who to ask for permission to delete it, so they left it in place “for now” and just kept piling tags on top. It adds up.
  3. The complexity of your Tag Management System. Duplicate rule triggers/conditions, repeated logic, poor naming, unnecessary complexity… these all can make an ecosystem unmaintainable. We’ve seen good technical employees leave companies because they felt they were spending all their time deploying tags that weren’t providing value, or worse yet, felt they were tasked with keeping a sinking ship afloat. These issues can all make a TMS library unnecessarily heavy (in another post, I show how one site I looked at had 15% of its site weight come just from the TMS library… not the tags, but the library itself… you know, the part that has to load on every page.)
  4. Documentation. Every site has quirks, workarounds, edge cases, and places where the implementation had to do something unanticipated. And every implementor has their own way of doing things. If you don’t have documentation, then every small change to your implementation has the potential to bring the whole thing tumbling down.
  5. Resource Turnover. This goes hand-in-hand with documentation. The more fingers you have in the pie (or cooks in the kitchen… whichever metaphor resonates), the higher the possibility of conflicts. The messiest implementations I see are the ones that have changed hands many times.

And these problems build on themselves: if you don’t have a place to put documentation, no one is going to document things. If someone new goes into your TMS to put a new tag on purchase confirmation, and they see eight rules that look like they could all be for purchase confirmation, they may very well make a ninth rule that they know is what they need. The whole ecosystem can slowly spiral out of control until it’s no longer sustainable.

Why ecosystem health matters: Security

Some tags inherently carry a bit of risk. Anything that allows a third party to change the code on your site has the potential for trouble. For a while a few years back, Doubleclick for Publishers had a known cross-site scripting vulnerability that could allow malicious parties to inject script onto a site. Even now, agencies can (and do) use Google Ad Manager to deploy tracking scripts.

Many tags allow other tags to “piggy back” on them. For instance, if you had a Bing tag on your site last November (2021), you may or may not have noticed a new ~23KB file, clarity.js, loading on your site. The only reason my client noticed it is that it was causing JavaScript errors on their site. Since “clarity.js” wasn’t something we ever put in place, it took some tracking down to figure out that it was tied to our Bing tag. Apparently Microsoft had rolled out an “Automatic Clarity Integration” for anyone with an existing Bing/UET tag. Once we figured out where it was coming from, we had to quickly find all our Bing tags and disable them until we figured out how to disable the automatic Clarity integration.

Why ecosystem health matters: Fragility

Even if the tag vendor doesn’t have inherent security risks or files piggybacking on it, an unhealthy ecosystem still leaves you open to JavaScript errors, which can mean lost tracking or, worse yet, ruined user experiences. I’ve seen tag conflicts take down the rest of the TMS (meaning no analytics until it was fixed), and I’ve seen a poorly deployed Doubleclick iframe turn a checkout flow into a blank white screen (lesson learned: don’t use document.write to deploy iframes). The more tags you have, the older your tags, the more undocumented your tags, the more likely you are to run into problems like this eventually.

Why ecosystem health matters: Consumer Trust

An ever-increasing amount of (negative, fear-mongering) attention is being paid to cookies, data sharing, retargeting, and third-party tracking scripts. For instance, a user browsing in Safari on desktop will see a shield next to the URL bar, showing them all the “bad, scary” trackers that Safari is “saving” them from.

It doesn’t add any nuance- no distinction between “this tag is monetizing you and will follow you around the internet with targeted ads that you’ll think are creepy” and “that tag is used for first-party analytics, which improves your user experience without any harm to you at all”. Users ARE paying attention to who you are sharing their data with (even if they don’t really know what they’re afraid of.)

Why ecosystem health matters: Privacy Regulation Compliance

Privacy regulations such as the CCPA (California Consumer Privacy Act), which is a pretty good representative of various laws in the US, and the General Data Protection Regulation (GDPR) in the EU, are constantly changing (or at least, the way they are interpreted and enforced is constantly changing). For instance, if you’re in the EU and have users in Austria, and Austria suddenly decides that Google tracking is no longer GDPR-compliant (and therefore “illegal”), how quickly could you disable or update all of your Google tags so you could be compliant with confidence? Many companies would really struggle with that.

Why ecosystem health matters: Site Speed

The main thing folks think of when they think of Tag Bloat is the effect on site speed. There is an incredible amount of data out there showing that site speed affects conversion rates- I had a hard time choosing between studies to cite, because there are just so many of them.

Site speed is also a direct ranking factor in SEO (and an indirect ranking factor, if users bounce or spend less time on your site because of the slow user experience). 

Folks often discount the impact that marTech tracking has on site speed. It’s ironic that the technology we use to measure site success can actually decrease site success. Every now and then the SEO team or IT may come to the Analytics and Marketing folks and say “that TMS library is too heavy and causes all our problems”, and there may be some cleanup, and usually some back and forth, before it’s ultimately decided “sorry, we HAVE to have tracking. Leadership gave us their stamp of approval”. And that may be true, but that doesn’t mean we shouldn’t minimize the impact, because (as I discuss in my next post), it does have an impact…

Tag Management Best Practices: Only Use ONE tag container

My first few posts in this series have established why marTech Ecosystem health matters, and shown some ways to measure the impact marTech is having on your site. Now I’m going to talk about some ways to minimize any negative impact.

Often, an agency will ask you to add another tag container to your site. This is especially true if you’re on Adobe Launch, which many agencies aren’t as comfortable with as GTM. There are two reasons NOT to add a second tag container to your site:

1. Impact to Page Performance. TMS files and extensions/tag templates do have some inherent weight to them- code that must be run on every page so that the TMS knows what other code to bring in. An empty GTM container with no tags, triggers, or variables in it, is around 33 KB. If you add a single Google Adwords conversion pixel using the Adwords Tag Template, that adds another 20 KB. An entirely empty Adobe Launch library is nearly 10KB. Once you start adding rules and tags, a TMS can easily become one of the single heaviest assets on many sites.

2. Loss of Control. Let’s say you notice a “clarity.js” file throwing an error on your site, but to your knowledge you aren’t deploying anything called clarity.js. If you’re spread between two Tag Management containers, it can be hard enough to establish that no one is deploying that tag directly. Once you track down that that file is being deployed by Bing, you need to find all the Bing tags and disable or modify them, and you need to do it quickly to get rid of that error. That kind of agility and knowledge of your own ecosystem becomes very difficult if you’re spread between tools. You should have one point of entry for tags to your site, and you should watch that entry point carefully.

Sometimes, adding a second tag container is unavoidable, but it should never be done without thinking about the risks.

Measure the impact that MarTech has on your site

Folks often take for granted that they “must” have analytics and martech tracking and tools on their site. And that may be true, but that doesn’t mean we shouldn’t try to minimize the impact… because it DOES have an impact. I looked at the home pages of 5 brands from the top of the Fortune 500, and found that:

  • On average, 35% of the page weight was for MarTech. (Note, this does include stuff beyond just conversion pixels… it’s tag managers, optimization tools, analytics, consent management… some of which may improve the user experience, but none of which is required for the site to function.) For the Telecom site, literally half of the site weight was marTech!
  • Tag Manager Containers alone can carry a lot of weight- for the Energy site I looked at, it was 15% of the site’s total weight! I mention this in my post about TMS Best Practices, but an EMPTY Google Tag Manager container weighs 30-40 kilobytes (not to compare, because weight-when-empty isn’t the most useful measure, but an empty Adobe Launch library is 9 KB). If you add a single Google Adwords Conversion tag using GTM’s Tag Template, that adds another 20 KB. That’s more than any script on my site, including the whole jQuery library. A “full” TMS library can easily be the “heaviest” single asset on a page.
  • All that marTech tagging also means a lot of different domains are contributing something as the page loads. The Media site I looked at had almost 200 different third-party domains adding their script, pixels or iframes to the page.
  • MarTech DOES impact page speed. That media site had a speed index of 13.2 seconds for a mobile device (meaning it took about 13 seconds before the majority of VISIBLE content on the site was loaded.)

See the impact of your TMS on YOUR site with Chrome Developer Tools

If you want to get a quick sense of the impact marTech has on your site, there is an easy way to do it using the Chrome Developer console. I find the easiest way to get to the Dev Console is to right-click anywhere on a page and choose “Inspect” (you can also press Ctrl+Shift+J on Windows or Cmd+Option+J on a Mac). Once in there, go to the Network tab (#1) and find your main TMS container file. You can use a filter (#2) to get to it quickly: adobedtm.com or launch for Launch, googletagmanager.com or gtm for GTM, or utag for Tealium. Right-click on the first file that matches that filter (#3), and select “Block Request URL” (#4):

This essentially gives you an “ON/OFF” switch for your TMS:

You can load any page on your site with your TMS “on”, as it is for all your users, then turn your TMS “off” and see the difference in load times and amount of files and weight on your site. The “DOMContentLoaded” and “Load” numbers at the bottom of the network window are good ones to watch, since they show the amount of time for the main structure of your site to be built, and the time for images and other content to load.

Do make sure you’re comparing apples to apples- either use a fresh incognito window for each load, or click the “Disable Cache” checkbox in the network tab (#5 in the graphic above).

For example, with the TMS in place, this particular site I tested took 3.5 seconds to load, and had 54 files (“requests”) with a total weight of 3.2 MB. (I tend to ignore the “KB transferred” and “Finish” numbers for reasons not worth going into).

After I “blocked” the TMS, those DOMContentLoaded and Load times went down considerably:

This isn’t a perfect science- every time a user loads a page they’ll get slightly different times depending on device, bandwidth, server response times, and caching- but it can give you a quick sense of how much impact your marTech and analytics scripts have on your site. If you want to take it a step further, you can use this “Request Blocking” trick while running a Lighthouse test directly in your dev console.

Get deeper insight on the impact on your site with webpagetest.org and my marTech Governance and Impact template

If you want to dig deeper, I’ve developed an Excel template that uses webpagetest.org to export then analyze the domains that load resources on your site:

Deeper instructions are in the template itself, but at a high level: run a webpagetest.org scan of a page on your site- for instance, I ran a scan on marketingAnalyticsSummit.com. Once the scan has run, grab the URL and paste it into cell B1, just for safekeeping (you can always refer back to it later). The “good stuff” is on the Domain tab, but if you’re interested in overall metrics on the main page, you can throw this script in your developer console:

document.querySelectorAll("#tableResults tr")[0].remove() //clean up the table so we can copy the whole table but get only the row we care about
document.querySelectorAll("#tableResults tr")[0].remove()
copy(document.getElementById("tableResults")) //copy the table to your clipboard
document.location.href=document.location.href //refresh the page so you get the full table back

Then paste into cell A4 in the template (it’s bright yellow) to fill out this handy little table:

Next go to the domains portion of the scan:

Here you can see a nice view of all the different domains loading resources on your page.

In general, you can use the process of elimination to figure out what is from a marTech vendor… in most cases, if it isn’t a domain you own, it’s marTech (not always, but… often enough). Figuring out which marTech vendor can be a bit trickier. I mean, www.linkedin.com is pretty obvious, but LinkedIn also fires resources from licdn.com.

Over time, I’ve collected a mapping of domains to vendors that we’re making publicly available. It will never be comprehensive (the list of 9900+ marTech vendors is ever growing and changing), but it can at least serve as a starting point. If you have changes or additions, you can throw them on the un-curated (and publicly editable) version. Anyway, once you are on the Domains Breakdown part of your scan results, you can put this code into the developer console to copy the table on the right to your clipboard:

copy(document.querySelectorAll(".google-visualization-table-table")[1]) //the [1] grabs the second such table on the page- the one on the right

Then paste the result into cell A12 of the excel template (it’s red), which will fill out the table and allow it to check if domains match our list of known domain/vendor mappings, then summarize what it finds:

For instance, here we find that marketinganalyticssummit.com’s home page weight is 26% marTech (and 8% TMS). It’s a very light site to begin with, so 26% of not much is…. not much. But it’s interesting to see how much is going on behind the scenes, even on “light” pages.

Now that you’ve established a high-level view of marTech’s impact on your site, you may want to dive deeper and do a tag audit… I’ve got tips and tricks for how!

How to control tags so they don’t control you

I’m speaking at the Marketing Analytics Summit about marTech tracking and how to keep tags from causing problems on your site. In true Jenn fashion, I have way more that I want to convey than I can possibly cram into my 30-minute presentation, so I have a series of blog posts to store all the practical takeaways.

If there is one thing I hope folks take from my presentation, it’s to do something now. You don’t have to wait until you have a ton of resources to do a full audit and clean up. Do what you can, when you can. Implement standards for tagging going forward. Even if it only helps a few tags, your future self will thank you.

If you’re here in Vegas for the Marketing Analytics Summit this week, let me know, I’d love to meet up!

Summit Tips & Tricks: Fallout and Conversion

I’m presenting as part of a Rockstar Tips & Tricks session at Adobe’s Virtual Summit this year- if you’re registered, you’ll eventually be able to view the recordings on-demand. But in case folks miss it or want more detail, this post walks through the tips/tricks I’m presenting. With the old Reports & Analytics (R&A) reports going away, I want to look at some of the cool fallout and pathing things we can still do in Workspace.

Note, I do think this will all make more sense in a video demo than in a blog post, but I know the presentation went pretty fast, so this post can be a sort of supplemental guide for the video demo.

TL;DR

If you’re coming from my presentation and just want that way-underused TEXTJOIN Excel formula from the end of Tip #2, here you go:

=TEXTJOIN(">",TRUE,A13:E13)
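
(That formula joins the cells A13:E13 into a single “>”-delimited path string, skipping any blank cells- you’ll see it in action in Tip #2.)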

Also, in case I didn’t make it clear in my presentation (I didn’t), I recommend this Pivot Table setup to get to Full Paths: sort descending by “Sum of Path Views”, filtering on “Complete Path contains Exits”.

If that makes no sense to you, or you just want the full context, read on:

Tip #1: Fallout and Funnels

In the old R&A days, I had a few ways to track a conversion flow fallout.

The Custom Events Funnel gave a nice little graphical interface to compare numbers in. You could drag in events (no dimensions) to see them in a funnel.

[screenshot of an Adobe Reports & Analytics Custom Events Funnel]

A few things to note, though: first, the graphic did not change based on the data, so it could be misleading. Second, it didn’t actually pay attention to whether the user did the events in sequence (notice in my screenshot that I have zero checkouts because I forgot to set it, yet 60 orders). Really, it was just a way to compare differences between events, regardless of sequence.

There was also a fallout report in R&A, which could be built off of pageName or any prop that you had the foresight to have pathing enabled on. (At least its graphic matches the data.)

Both of these are replaced in Workspace by the Flow visualization, which can be used for events (like an old Events Funnel):

Or on page names (or any other dimension- not restricted to pathing props):

This is already showing more flexibility and power than the old R&A pathing reports. But you could also mix and match events and dimensions. Let’s say I have a new campaign with 4 marketing landing pages and I want to see how those 4 pages collectively affect my Fallout. Because I’m setting page type in an eVar (which I consider essential to any implementation), I can bring in the page type to represent all four of those pages together, then continue the funnel based on page name:

I could pull events, props, and eVars all into the same report and see them in sequence, and the fallout between them. But I can take that a step farther and drop dimensions and segments into the area below the visualization key, where it currently says “all visitors”, to compare how the fallout looks for just that dimension or segment. For instance, if I now want to compare those four pages, I can:

(Note, the top step has 100% for each dimension value because of course 100% of the visits with that page type have that page type.)

This visualization helps me spot something pretty immediately: the marketing page represented by the purple (sorry, the names get cut off in the screenshot, so you can’t see my Harry Potter references) got by far the most visits (75), but had the worst fallout. Whereas the one in pink had far fewer visits but much better retention. I’d probably go to the marketing or content team and see if we could get the best of both worlds somehow: whatever we’re doing to GET people to the purple marketing page, plus whatever content is on the pink one that seems to be getting people to STAY.

That’s just one made-up example, but there are so many ways this could be used. Let me know if you’ve found other fun ways.

Tip #2: Full Path Report and the Flow Visualization

One thing from Reports and Analytics that Workspace doesn’t have a direct equivalent for is the pair of Full Path and Pathfinder reports. The Full Path report would show, well, the most popular full paths that users took through the site, from entry to exit:

Personally, I didn’t see a ton of value in this report because it would be so granular… if one visitor went page a>b>c>d, and another did just b>c>d, and another just a>b>c, they’d all appear as different line items in the report. However, I do know some folks like to include this in dashboards and such. But in my mind, most of the insight/analysis would be done in a Path Finder report. This helpful little tool would let you specify certain flows and patterns you were interested in, leaving room for filling in the blanks. The Bookends Pattern was a particular favorite: what happened between point A and point B? This would help analysts see unexpected user paths.

This is the functionality that Workspace currently can’t directly replace. The closest we get is the Flow visualization… which may very well be my favorite Workspace tool. I should write another post about all the lesser-known great things about this visualization (segments! trends! pretty-and-useful colors!), but for now I’m going to focus on turning it into a full path report.

First, I bring Page in as an entry dimension. At this point, it should just be showing me my top entry pages. Since I want a more complete story, I’m going to click the “+ more” item at the bottom of the column so it includes more pages, then I’m going to right-click on that column and tell it to “Expand Entire Column”:

That’ll add a second column. I’ll keep repeating this process until I have 5 or 6 columns as expanded as can be.

Unfortunately, you probably don’t want to expand columns too deep or too far to the right because after a while Workspace starts slowing down and struggling. So this won’t cover all paths but it will help us get the most common ones.

With all of the winding and webbing between pages, it’s still hard to get at that list of paths…. for that, I need to take the report to Excel. So I send the report to myself as a CSV (either through Project>Download CSV, or Share>Send File Now). This gives me something like this:

It’s not quite useful yet. And something kinda odd happens, where it shows me all possible combinations of paths, even ones that never happened (all the lines that don’t have a number in the “Path views” column.) But that’s ok, we can ignore those for now. What I am going to do is create a new column to the right, and put this formula in it:

=TEXTJOIN(">",TRUE,A13:E13)

(Where A13:E13 are the cells of that row that contain all of my steps).

Then I copy that formula all the way down. This gives me a single string with complete paths, like “Entry (Page)>home>internal search>product details:skiving snackboxes>checkout:view cart>Exits”. I’ll also throw a name at the top of that column like “Complete Paths”.

This still isn’t really great for analysis, so I take it one step farther, and create a pivot table by selecting all the cells in my “Path Views” and “Complete Paths” columns (except for the empty ones up top), then going to the “Insert” tab in excel and clicking “Pivot Table”:

This will create a new sheet with an empty pivot table. I’ll set up my Pivot Table Fields with “Complete Paths” as my rows and “Sum of Path Views” as my values:

Then I’ll sort it, descending, by “Sum of Path views”, filtering on “Complete Path contains Exits” (otherwise you get incomplete portions of journeys, as the download will show the path “entry>a>b>c>exit” as “entry>a”, “entry>a>b”, “entry>a>b>c”, and “entry>a>b>c>exit”).

And there we go! A very rough approximation of Full Paths.

But there are some nuances and caveats here, the main one being that it only goes so deep. The Flow visualization really starts struggling when you have too many fully expanded columns, and the downloaded CSV reflects only the number of columns you had in the visualization. So paths that are longer than the 4-6 columns are going to be excluded. Accordingly, this is ideally used for the most common paths (which usually aren’t more than 5 or 6 pages anyway).

So this shows Full Paths, but where does that Path Finder stuff come in? Really, I’d expect this to serve as a starting point for a lot of further analysis in Excel, where you can use filters to find the paths that interest you most (eg “begins with” or “ends with”). Hopefully before R&A goes away, Adobe will have some way to truly replicate that Path Finder functionality, but in the meantime, this at least gives you some options. If you have a specific use case you want to achieve and don’t know how, leave a comment here or find me on #measure slack or twitter, and I’d love to help you figure it out.

If nothing else, now more of the world will know about how handy TEXTJOIN is ;).

Converting Doubleclick tags to Global Site Tag (gtag)

This is a bit anticlimactic of a post, for my triumphant return after a long while of not blogging, but this question keeps coming up so I figured I’d document it publicly so I can reference it when needed. I find many of my posts that are for my own benefit, to use as a reference, tend to be the most viewed posts, so here’s Jenn’s notes to herself about converting Doubleclick tags to gtag:

Quick Reference

If you’re generally familiar with gtag and the old Doubleclick/Floodlight/Search Ads 360 tags, and just want a quick reminder of the order of things, here is the format of the send_to variable, inspired by Google’s documentation:

'send_to' : 'DC-[floodlightConfigID (src)]/[activityGroupTagString (type)]/[activityTagString (cat)]+[countingMethod- usually standard or transactions]'

That’s src, type, and cat pulled from the src=, type=, and cat= parameters of your original Floodlight iframe or img tag. For example, using the sample values from the walkthrough below:
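
// original tag: src=12345678;type=myagency;cat=jkunz_001 (standard counting)
'send_to' : 'DC-12345678/myagency/jkunz_001+standard'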

If that doesn’t make a lot of sense to you, you need more examples, or you just want to understand it better, read on.

Walkthrough

Doubleclick’s standard tag used to look something like this (sometimes using an img instead of an iframe, but the same general idea either way):

<!--
Start of Floodlight Tag: Please do not remove
Activity name of this tag: Jenn-Kunz-Example-Tag
URL of the webpage where the tag is expected to be placed: https://www.digitaldatatactics.com/
This tag must be placed between the <body> and </body> tags, as close as possible to the opening tag.
Creation Date: 08/18/2021
-->
<script type="text/javascript">
  var axel = Math.random() + "";
  var a = axel * 10000000000000;
  document.write('<iframe src="https://12345678.fls.doubleclick.net/activityi;src=12345678;type=myagency;cat=jkunz_001;dc_lat=;dc_rdid=;tag_for_child_directed_treatment=;tfua=;npa=;gdpr=${GDPR};gdpr_consent=${GDPR_CONSENT_755};ord=' + a + '?" width="1" height="1" frameborder="0" style="display:none"></iframe>');
</script>
<noscript>
  <iframe src="https://12345678.fls.doubleclick.net/activityi;src=12345678;type=myagency;cat=jkunz_001;dc_lat=;dc_rdid=;tag_for_child_directed_treatment=;tfua=;npa=;gdpr=${GDPR};gdpr_consent=${GDPR_CONSENT_755};ord=1?" width="1" height="1" frameborder="0" style="display:none"></iframe>
</noscript>
<!-- End of Floodlight Tag: Please do not remove -->

Some agencies/deployments still use this format. You may get a request from an agency or marketer to deploy a tag documented this way. But if you already use gtag on your site, or you just want to modernize and take advantage of gtag, you can convert these old iframe tags into gtag by just mapping a few values.

Before you get started, you need to note a few things from your original tag:

1. The “src”, also known as your account ID.
2. The “type”, also known as an “Activity Group Tag String”
3. The “category”, also known as the “Activity Tag String”

Keep track of these, we’ll use them in a moment. (If you also have custom dimensions or revenue and such, hang on, we’ll get to those in a bit.)

Step 1. Global Site Tag code

First, make sure you have gtag code on your site. No matter how many tags use it, you only need to reference it once.

Note, I’m only walking through the code; if you’re in Launch, using a tool like Acronym’s Google Site Tag extension (which is great, in my opinion), hopefully this post combined with their documentation will be enough to get you what you need to know.

<!-- 
Start of global snippet: Please do not remove
Place this snippet between the <head> and </head> tags on every page of your site.
-->
<!-- Global site tag (gtag.js) - Google Marketing Platform -->
<script async src="https://www.googletagmanager.com/gtag/js?id=DC-12345678"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'DC-12345678');
</script>
<!-- End of global snippet: Please do not remove -->

The part that matters- the part that establishes which gtag accounts you want to set up- is the gtag('config', ...) line.

This is where you’d put #1- your Account ID. Add a “DC-” to the front of it (DC for Doubleclick, as opposed to AW for AdWords…), and you have your config ID. Using the example account ID from the tag above:
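
// original tag had: src=12345678
gtag('config', 'DC-12345678'); // "DC-" + the src/account ID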

Step 2. Send an event

With that, you have the gtag library on your page, and you’ve configured it. But this merely “primes” your tracking…. it sets it up, but does not actually send the information you want. For that, you need to convert your original iframey tag into a gtag event. A traditional gtag event looks like this:

<!--
Event snippet for Jenn-Kunz-Example-Tag-Home-Page on https://www.digitaldatatactics.com: Please do not remove.
Place this snippet on pages with events that you’re tracking. 
Creation date: 08/19/2021
-->
<script>
  gtag('event', 'conversion', {
    'allow_custom_scripts': true,
    'send_to': 'DC-12345678/myagency/jkunz_001+standard'
  });
</script>
<noscript>
<img src=""https://ad.doubleclick.net/ddm/activity/src=12345678;type=myagency;cat=jkunz_001;dc_lat=;dc_rdid=;tag_for_child_directed_treatment=;tfua=;npa=;gdpr=${GDPR};gdpr_consent=${GDPR_CONSENT_755};ord=1?"" width=""1"" height=""1"" alt=""""/>
</noscript>
<!-- End of event snippet: Please do not remove -->

But the part we need (and the only part that actually sends info to google) is just this:

gtag('event', 'conversion', {
  'allow_custom_scripts': true,
  'send_to': 'DC-12345678/myagency/jkunz_001+standard'
});

Here is where we do our mapping. As stated above, the format for the “send_to” variable is:

'send_to' : 'DC-[#1 floodlightConfigID (src)]/[#2 activityGroupTagString (type)]/[#3 activityTagString (cat)]+[countingMethod]'

So the src, type, and cat in your original iframe/pixel Doubleclick tag become the send_to of your gtag event. Using the example values from the tags above:
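
// original tag: src=12345678;type=myagency;cat=jkunz_001
// becomes:
'send_to': 'DC-12345678/myagency/jkunz_001+standard'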

There are two other things to pay attention to in that gtag event, and they don’t map directly to something in your original tag:

(Note, the “allow_custom_scripts” will always be true, so you don’t need to worry about it.)

The event type (“conversion”) and the counting method (“+standard”) will reflect whether or not your tag is for a purchase or not. For the Counting Method, Google says:

Standard counts every conversion by every user. This is the counting method for action activities generated in Search Ads 360.

Transactions counts the number of transactions that take place, and the cost of each transaction. For example, if a user visits your website and buys five books for a total of €100, one conversion is recorded with that monetary value. This is the counting method for transaction activities generated in Search Ads 360.

Unique counts the first conversion for each unique user during each 24-hour day, from midnight to midnight, Eastern Time (US).

In other words, most cases will be “conversion”/”+standard”, unless you’re tracking a purchase, in which case it will be “purchase”/”+transaction”. If your original tag has cost and qty in it, then it’s a purchase, otherwise, it’s likely just a conversion and you can use “+standard” as your counting method. On a rare occasion, you may be asked to use “+unique” so the conversion only counts once per day. I hear there may be other counting methods, but I don’t have examples of them.

If your tag is a conversion and you’re not sending custom variables, then the allow_custom_scripts and the send_to are all you need- you’re done! If it is a transaction and/or you have custom variables, read on.

Transactions

If your original iframey tag is intended for a purchase confirmation page, it probably includes “qty”, “cost” and “ord” parameters, with the intent for you to dynamically fill these in based on the user’s transaction.

If so, then your gtag will need a few more things, like value and transaction_id (and make sure your event type is “purchase” instead of “conversion”) :

gtag('event', 'purchase', {
  'allow_custom_scripts': true,
  'value': '[Revenue]',
  'transaction_id': '[Order ID]',
  'send_to': 'DC-12345678/myagency/jkunz_001a+transactions',
});

The mapping for these is probably fairly obvious. What was “cost” goes into “value”; what was “ord” goes into “transaction_id”:
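
// from the original tag:    in the gtag event:
// cost=[Revenue]       ->   'value': '[Revenue]'
// ord=[OrderID]        ->   'transaction_id': '[Order ID]'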

Quantity seems to have fallen by the wayside; the old iframey tags asked for it but I’ve never seen it in a gtag purchase tag.

Maybe it could go without saying, but to be clear, the expectation is that you’d dynamically fill in those values in square brackets. So if you’re in Launch, it might look like this:

gtag('event', 'purchase', {
  'allow_custom_scripts': true,
  'value': _satellite.getVar('purchase: total'),
  'transaction_id': _satellite.getVar('purchase: confirmation number'),
  'send_to': 'DC-12345678/myagency/jkunz_001a+transactions',
});

Custom Variables

Doubleclick tags might contain parameters that are the letter u followed by a number, along with the name of what they want you to send there- something like this fragment (borrowing the hypothetical flight example below):
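
;u1=[Flight Number];u2=[Departure Airport];u3=[Departure Date];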

These are custom variables. In gtag, they get set thus:
  gtag('event', 'purchase', {
    'allow_custom_scripts': true,
    'value': '[Revenue]',
    'transaction_id': '[OrderID]',
    'u1': '[Flight Number]', 
    'u2': '[Departure Airport]', 
    'u3': '[Departure Date]',
    'send_to': 'DC-12345678/sales/jkunz_001a+transactions',
  });

Again, the expectation is that you would replace anything in square brackets with dynamic values that reflect the user’s experience. I often leave the description they provided in comments, just for reference and to make sure I’m putting stuff in the right places.

gtag('event', 'purchase', {
  'allow_custom_scripts': true,
  'value': _satellite.getVar('purchase: total'),
  'transaction_id': _satellite.getVar('purchase: confirmation number'),
  'u1': _satellite.getVar('flight number'), //'[Flight Number]'
  'u2': _satellite.getVar('departure code'), //'[Departure Airport]'
  'u3': _satellite.getVar('flight date'), //'[Departure Date]'
  'send_to': 'DC-12345678/myagency/jkunz_001a+transactions',
});

What about the serialization and other gunk?

You may be wondering about some of the other bits of code in the original iframey tag, like the “axel” math and the GDPR parameters.

The “axel” stuff was a cache buster, to prevent the browser from loading the iframe or img once and then pulling it from cache for later views. The cache buster would generate a new random number on every page load, so to the browser it looked like a new file rather than the file it had in its cache. The good news is, gtag handles cache busting, as well as all that other stuff, for you; it’s not needed any longer. Gtag also handles GDPR differently, so you don’t need to carry those parameters over.

NoScript

A quick word about the noscript tags:

These are a remnant of an older internet, where mobile devices didn’t run JavaScript, and tag management systems didn’t exist. It says “if the user doesn’t run JavaScript, then run this HTML instead”, usually followed by a hard-coded pixel.

Frankly, it’s comical that anyone includes these anywhere anymore:

  1. These days, the only devices not running JavaScript are bots. Unless you WANT inflated numbers (maybe some agencies/vendors do), you don’t want bot traffic.
  2. Sometimes vendors/agencies will ask for dynamic values within the noscript tag, which is just silly- without JavaScript, the only way to dynamically set those values would be to have them set server-side. Which I’ve never seen anyone do.
  3. Finally… if you’re deploying it through a Tag Management System, and the user isn’t running JavaScript, the TMS won’t run at all, meaning the rule won’t fire, meaning the noscript tag will never get a chance to be written.

In short, delete noscript tags and their contents. Some agencies may not know these not-exactly-rocket-science nuances of the internet, and may whine at you about not including the noscript tag because they have a very specific checklist to follow. Include it if you must to appease the whiners, but… know that if it’s within a TMS, the contents of that noscript tag will never see the light of day.

How to send to multiple accounts

This isn’t tied directly to converting from old tags, but for reference…. if you want to send a gtag event to multiple accounts, instead of duplicating the tag, you can add a config in your global code:

<script async src="https://www.googletagmanager.com/gtag/js?id=DC-12345678"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'DC-12345678');
  gtag('config', 'DC-87654321');
</script>

Then in your event code, turn the “send_to” into an array:

gtag('event', 'purchase', {
  'allow_custom_scripts': true,
  'value': '[Revenue]',
  'transaction_id': '[Order ID]',
  'send_to': [
     'DC-12345678/myagency/jkunz_001a+transactions',
     'DC-87654321/agencyB/book_01a+transactions'
  ]
});

You may ask “If I only need to have the gtag library script once, and I have multiple gtags, what do I do about this id on the global script src?”

To be honest, that bit matters very little. I usually just delete it, or leave it with whatever ID was on the first gtag I added (if someone knows what purpose that ID serves and wants to tell me I shouldn’t be deleting it, please chime in). The “configs” are the important part.

Conclusion

It’s not exactly cutting-edge content, nor relaxing bedside reading, but I hope this comes in handy for those struggling to find answers in existing documentation. Cheers!

Disclaimer: I’m not a Doubleclick expert, but I have deployed many many Doubleclick tags- I daresay hundreds of them. If any of the above isn’t “gospel truth”, it’s “truth that worked for me”, but I’d love to hear if any of my reasoning/theories are incorrect or incomplete.

Industry Docs Series #1: the Solution Design Reference (the not-so-secret sauce)

(This is cross-posted from the 33 Sticks blog)

Almost every organization that uses Digital Analytics has some combination of the following documents:

  • Business Requirements Document
  • Solution Design Reference/Variable Map
  • Technical Specifications
  • Validation Specifications/QA Requirements

All of the consulting agencies use them and may even have their own special, unique versions of them. They’re often heavily branded and may be closely guarded. And it can be very difficult to find a template or example as a starting point.

In my 11 years as a consultant who focuses on Digital Analytics implementation, I’ve been through MANY documentation templates. I’ve used the (sometimes vigorously enforced) templates provided by whoever was my current employing agency; I’ve used clients’ existing docs; I’ve created and recreated my own templates dozens of times. I’ve used Word documents, Google Sheets, txt/js documents, Confluence/Wiki pages, and (much to my chagrin) Powerpoint presentations (there’s nothing wrong with Powerpoint for presentations, but it really isn’t a technical documentation tool). I’ve in turn shared out templates and examples both within agencies and more broadly in the industry, and I’ve now decided it’s time to stop hoarding documentation templates and examples, and share them publicly in a git repo, starting with the most foundational: the Variable Map (sometimes called a Solution Design Reference, or SDR).

The Variable Map

You may or may not have a tech spec or a business requirements document, but I bet you have a document somewhere that lays out the mapping of your custom dimensions and metrics. Since Adobe Analytics has up to 250 custom eVars, 75 custom props, and up to 1000 custom events, it’s practically essential to have a variable map. In fact, the need for a quick reference is why I created the PocketSDR app a few years ago (shameless plug) so you could use your phone to check what a variable was used for or what its settings are. But when planning/maintaining an implementation, you need a global view of all your variables. This isn’t a new idea: Adobe’s documentation discusses it at a higher level, Chetan Gaikwad covered it on his blog more recently, Jason Call blogged about this back in 2016, Analytics Demystified walked through it in 2015, and even back in 2014, Numeric Analytics discussed the importance of an SDR. Yet if you want a template, example, or starting point, you still generally have to ask colleagues and friends to pass you one under the table, use one from a previous role/org, or you just start from scratch. This is why 33 Sticks is making a generic SDR publicly available on our git repo.

The cat’s out of the bag

There are various reasons folks (including me) haven’t done this in the past. For practitioners, there is (understandably) a hesitance to share what you are doing- you might not want your brand to be associated with how good/not good your documentation is, and/or you may not want to give your competitors any “help” (more on that in an upcoming podcast). For agencies, there may be a desire to not show “the man behind the curtain”, or they may believe (or at least want their clients to believe) that their documentation is special and unique.

So why am I sharing it?

  • Because I don’t think the format of my variable map is actually all that special- a variable map is a variable map (but that doesn’t mean we should all have to start from scratch every time). This isn’t intended to be the end-all-be-all of SDRs, but rather, as a starting point or an example for comparison. Any aspect of it might be boring/obvious to you, overkill, or just not useful, but my hope is that there is at least a part of it that will be helpful.
  • Because while I DO think I have some tricks that make my SDRs easier to use, I recognize that most of those tricks are things I only know or do because someone in the industry has shared their knowledge with me, or a client let me experiment a bit to find what worked best.
  • Because 33 Sticks doesn’t bill by the hour and I have no incentive to hoard “easy”/low-level tasks from my clients. If a client comes to me and says “I used your blog post to get started on an SDR without you”, I can say, “Great! That leaves us more time to get into more strategic work!”
  • Because where I can REALLY offer value as a consultant isn’t in the formatting/data entry part of creating an SDR, but rather, in the thought we put into designing/documenting solutions, and in how we help clients focus on goals and getting long-term value out of data so they don’t get stuck in “maintenance mode.”
  • And finally, because I’m hoping to hear back from folks and learn from them about what they’re doing the same or differently. We all benefit from opening up about our techniques.

Enough with the pedantry, let’s get back to discussing how to create a good SDR.

SDR Best Practices

It’s vitally important you keep your variable map in a centrally-accessible location. If I update a copy of the SDR on my hard drive, that doesn’t benefit anyone but me, and by the time I merge it back into the “global” one, we may already have conflicts.

It should go without saying, but keeping it updated is also a good idea. I actually view most types of Digital Analytics documentation as fitting into one of two categories: primarily useful at the beginning of an implementation, OR an ongoing, living document. Something like a Business Requirements Document COULD be a living document, but let’s be honest: its primary value is in designing the solution, and it can have a high level of effort to keep it comprehensively up-to-date. Technical specifications for the data layer are usually a one-time deal: after it is implemented, it goes into an archive somewhere. But the simple variable map… THAT absolutely should be kept current and frequently referenced.

Tools for Creation/Tracking

If you’re already using Adobe Analytics, then you probably need to get an accurate and current list of your variables and their settings. Even if you have an SDR, you should check if it matches what’s set up in your Analytics tool. You could always export your settings from within the Admin Console, but I’ve found the format makes the output very difficult to use. I’d recommend going with one of the many other great (and free) industry tools instead.

These tools are great for getting your existing settings, but they don’t leave a lot of room for planning and documenting the full implementation aspects of your solution, so usually I use these as a source to copy my existing settings into my SDR.

What should an SDR contain?

On our git repo, you’ll see an Excel sheet that has a generic example Variable Map. Even if you already have an SDR you like, take a look at this example one- there may be some items you can get use out of (plus this post will be much more interesting if you can see the columns I’m talking about).

Pretty much ALL Variable Maps have the following columns (and mine is no different):

  • Variable/Report Name (eg, “Internal Search Terms”)
  • Variable/Report Number (eg, “eVar1”)
  • Description
  • Example Value
  • Variable Settings

But over the years I’ve found a few other columns can really make the variable map much more usable and useful (and again, this will all make more sense if you download our SDR to have an example):

A “Sorting Order” or Key

If, like me, you love using tables, sorting, and filtering in Excel, you may discover that Excel doesn’t know how to sort eVars, props, and events: because the names mix text and numbers, it thinks the proper order is “eVar1, eVar10, eVar2, eVar20”. So if you’ve sorted for some small task and want to get back to a sensible order, you pretty much have to do it manually. For this reason, I have a simple column that contains nothing but numbers indicating my ideal/proper/default order for the table.
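
To see why this happens, here’s a quick JavaScript sketch of the same sorting problem (easier to demonstrate here than in Excel):

// A plain string sort treats "eVar10" as smaller than "eVar2":
["eVar1", "eVar2", "eVar10", "eVar20"].sort();
// => ["eVar1", "eVar10", "eVar2", "eVar20"]

// Sorting on the numeric part- the same job my sorting-order column does:
var names = ["eVar1", "eVar10", "eVar2", "eVar20"];
names.sort(function(a, b){
   return parseInt(a.replace(/\D/g, ""), 10) - parseInt(b.replace(/\D/g, ""), 10);
});
// => ["eVar1", "eVar2", "eVar10", "eVar20"]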

Param

This is for those who live their lives in analytics beacons rather than reports, like a QA team. It’s nice to know that s.campaign is the variable, and it is named “Tracking Code” in the reports, but it’s not exactly obvious that if you’re looking in a beacon, the value for s.campaign shows in the “v0” parameter.
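
For example, a few rows for that column might look like this (shown as a quick JavaScript map; these particular variable-to-parameter mappings are standard for Adobe Analytics beacons):

var beaconParams = {
   "s.campaign": "v0",       // "Tracking Code" in the reports
   "s.eVar1":    "v1",       // eVars show up as v1, v2, etc.
   "s.prop1":    "c1",       // props show up as c1, c2, etc.
   "s.events":   "events",
   "s.pageName": "pageName"
};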

Variable Type

Again, I love me some Excel filtering, and I like being able to say “show me just my eVars” or “show the out of the box events”. It can also be handy for those not used to Adobe lingo (“what the heck is an eVar?”). So I have a column with something like the following possible values:

  • eVar- custom conversion dimension
  • event- custom metric (eg “event1”)
  • event- predefined metric (eg “scAdd”, “purchase”)
  • listVar- custom conversion dimension list (eg, “s.list1”)
  • predefined conversion dimension (eg, “s.campaign”)
  • predefined traffic dimension (eg, “s.server”)
  • products string
  • prop- custom traffic dimension

For things like this, where I have specific values I’ll be using repeatedly, I’ll keep an extra tab in the workbook titled “worksheet config”. Then I can use Excel’s “data validation” to pull a drop-down list from that tab.

Category

This is my personal favorite- I use it every day. It’s a way to sort/group variables and metrics that are related to each other- eg, if you are using eVar1, eVar2, prop1, event1, and event2 all in some way to track internal search, it’s nice to be able to filter by this column and see all of those internal search items grouped together.

The categories themselves are completely arbitrary and don’t need to map to anything outside of the document (though you might choose to use them in your Tech Spec or even in workspaces). Here’s a standard list I might use:

  • Content Identification
  • Internal Search
  • Products
  • Checkout
  • Visitor Segmentation
  • Traffic Sources
  • Authentication
  • Validation/Troubleshooting

Again, I could create/maintain this list in a single place on my “worksheet config” tab, then use “Data Validation” to turn it into a drop-down list in my Variable Map.

Status

This basically answers the question “is this variable currently expected to be working in production?” I usually have three possible values:

  • Implemented
  • Broken (or some euphemism for broken, like “needs work”)
  • In progress

Data Quality Notes

This is for, well, notes on Data Quality. Such as “didn’t track for month of March” or “has odd values coming from PDP”.

Last Validated

This is to keep track of how recently someone checked on the health of this variable/metric. The hope is this will help prevent variables sitting around, unused, with bad data, for months or even years. I even add conditional formatting so if it has been more than 90 days, it turns red.
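
The Excel side of this is just conditional formatting, but if you ever audit an exported SDR with a script, the same check is trivial- a minimal sketch, assuming the dates are in a parseable format:

var lastValidated = new Date("2019-01-15"); // hypothetical "Last Validated" cell value
var daysSince = (Date.now() - lastValidated.getTime()) / (1000 * 60 * 60 * 24);
if (daysSince > 90) {
   console.log("Stale: last validated " + Math.floor(daysSince) + " days ago");
}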

Scope

Where would I expect to see this variable/metric? Is it set globally? Or maybe it happens on all cart adds?

Description

I’m certainly not unique in having this column, and I’m probably not unique in how many SDRs I’ve seen where this column has not been kept up-to-date. I’d like to stress the importance of this column, though- you may think the purpose of a variable is obvious, but almost every company I’ve worked with has at least one item on their variable map where no current member of the analytics team has any idea what the original goal was.

Ideally, the contents of this column would align with the “Description” setting within the Admin Console for each variable, so that folks viewing the reports can understand how to use the data.

We ARE all setting and using those descriptions, right? RIGHT?

Logic/Formatting and Example Value

Your Variable Map needs a place to detail the type of values you’d expect in each report. This:

  • helps folks looking at the document to understand what each variable does (especially if you don’t have stellar descriptions)
  • lets developers/implementers know what sort of data to send in
  • provides a place to make sure values are consistent. For instance, if I have a variable for “add to cart location”, there’s no inherent reason why the value “product details page” would be more correct than “product view”… but I still only want to see ONE of those values in my report. If folks can look in my variable map and see that “product details page” is the value already in use, they won’t go and invent a new value for the same thing (see the sketch after this list).
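
Here’s a minimal sketch of that last point- the variable number and object name are hypothetical, but the idea is to define the canonical values once and reuse them:

// One canonical value per concept; "product details page" is the value
// already documented in the SDR, so nobody invents "product view" later:
var CART_ADD_LOCATIONS = {
   pdp: "product details page",
   quickView: "quick view",
   cartPage: "cart page"
};
s.eVar3 = CART_ADD_LOCATIONS.pdp; // hypothetical "add to cart location" eVar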

I often find it a good exercise to run the Adobe Consulting Health Dashboard and grab the top few values from each variable to fill out this column.

Source Type

What method are we using to get the value into the variable? I usually use Excel Data Validation to create this list:

  • query param
  • data layer
  • rule-based/trigger-based (like for most events, which are usually manually entered into a TMS rule based on certain conditions)
  • analytics library (aka, plugins)
  • duplicate of another variable

Dimension Source or Metric Trigger

This contains the details that complement the “source type” column: if it comes from a query parameter, WHICH query parameter is it? If it comes from the data layer, which data layer object is it? If it’s a metric, what in the data layer determines when to fire it? (For instance, a prodView event doesn’t map directly to a data layer object, but it does RELY on the data layer: we set it whenever the pageType data layer object is set to “product details page”.)
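
As a minimal sketch of that prodView example (the data layer structure here is illustrative- yours may nest pageType differently):

// Metric trigger: fire prodView whenever the data layer says we're on a PDP
var digitalData = window.digitalData || {};
if (digitalData.pageType === "product details page") {
   s.events = "prodView";
}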

Variable Settings

This is something many SDRs have but can be a pain to keep consistent, because every variable type has different settings options:

  • events
    • type
      • counter (default)
      • numeric
      • currency
    • polarity
      • Up is good (default)
      • Up is bad
    • visibility
      • visible everywhere (default)
      • builders
      • hidden everywhere
    • serialization
      • always record event (default)
      • record once per visit
      • use serialization ID
    • participation
      • disabled (default)
      • enabled
  • eVars
    • Expiration
      • Visit (default)
      • Hit/Page View
      • Event (like purchase)
      • Custom
      • Never
    • Allocation (note: this is less relevant these days now that allocation can be decided in workspace)
      • Most Recent (last) (default)
      • Original Value (first)
      • Linear
    • Type
      • Text string (default)
      • Counter
    • Merchandising
      • Disabled (default)
      • Product Syntax
      • Conversion Syntax
    • Merchandising Binding Events
  • Props
    • List Prop
      • Disabled (default)
      • Enabled
    • List Prop Delimiter
    • Pathing
      • Disabled (default)
      • Enabled
    • Participation
      • Disabled (default)
      • Enabled

As you can see, you’d have to account for a lot of possible settings and combinations- I’ve seen some SDRs with 30 columns dedicated just to variable settings. I tend to simplify and just have one column where I only call out any setting that differs from the default, such as “Expires after 2 weeks” or “merchandising: conversion syntax, binds on internal search event, product view, and cart add.”

Classifications

This should list any classifications set up on this variable. This is a good one to keep updated, though I find many folks don’t.

GDPR Considerations

Don’t forget about government privacy regulations! Use this to flag items that will need to be accounted for in your privacy policies and procedures. Sometimes merely having the column can serve as a good reminder that privacy is something we need to consider when designing our solution.

TMS Rule and TMS Element

I find these very helpful in designing a solution, but I’ll admit, they often fall by the wayside after an implementation is launched- and I don’t even see that as a bad thing. Once configured, your TMS implementation should speak for itself. (This will be even more true when Adobe releases enhanced search functionality in Launch.)

Other tabs

Aside from the variable map, I try to always have a tab/sheet for a Change Log. If nothing else, this can help with version control when someone’s local copy of the SDR has gotten out of sync with the “official” one. It also lets you know who to contact if you have questions about a certain change that was made. I also use this to flag which changes have been accounted for in the Report Suite settings (eg, I may have set aside eVar2 for “internal search terms”, but did I actually turn it on in the tool?)

If you have many Report Suites, it may be helpful to have a tab that lists them all- their friendly name, their report suite ID, any relevant URLs/Domains, and perhaps the business owner of that Report Suite.

Also, if you have multiple report suites, you may want to add columns to the variable map or have a whole separate tab that compares variables across suites (the Report Suite exporter and the Observepoint SDR Builder both have this built in).

What works for you?

As I said, I don’t think my template is going to be the ultimate, universal SDR. I’d love to know what has worked for other people- did I miss anything? Is there anything I’m doing that I should do differently? Do you have a template you’d like to share? I’d love to hear from you!

Enhanced logging for Direct Call Rules and Custom Events for Launch/DTM

(This is cross-posted from the 33 Sticks blog)

UPDATE: The wonderful devs behind Adobe Launch have seen this and may be willing to build it in natively to the product. Please go upvote the idea in the Launch Forums!

As discussed previously on this blog, Direct Call Rules have gained some new abilities: you can now send additional info with the _satellite.track method. Unfortunately, this can be difficult to troubleshoot. When you enable _satellite.setDebug (which should really just be called “logging”, since it isn’t exactly debugging) in DTM or Launch, your console will show you logs about which rules fire. For instance, if I run this JavaScript from our earlier blog post:

_satellite.track("add to cart",{name:"wug",price:"12.99",color:"red"})

I see a log message in my console telling me which rule fired.

Or, if I fire a DCR that doesn’t exist, the log tells me there was no match.

Unfortunately, this doesn’t tell me much about the parameters that were passed (especially if I haven’t yet set up a rule in Launch), and it relies on having _satellite debugging turned on.

Improved Logging for Direct Call Rules

If you want to see what extra parameters are passed, try running this in your console before the DCR fires:

var satelliteTrackPlaceholder=_satellite.track //hold on to the original .track function
_satellite.track=function(name,details){ //rewrite it so you can make it extra special
   if(details){
      console.log("DCR NAME: '"+name+"' fired with the following additional params: ", details)
   }else{
      console.log("DCR NAME: '"+name+"' fired without any additional params")
   }
   //console.log("Data layer at this time:" + JSON.stringify(digitalData))
   satelliteTrackPlaceholder(name,details) //fire the original .track functionality
}

Now, if I fire my “add to cart” DCR, I can see that additional info logged in my console, and Launch is still able to run the Direct Call Rule.

You may notice this commented-out line:

//console.log("Data layer at this time:" + JSON.stringify(digitalData))

This is for if you want to see the contents of your data layer at the time the DCR fires- uncomment it if that would also be helpful to you. I find “stringifying” a JavaScript object in console logs is a good way to capture the contents of that object at that exact moment- otherwise, what you see in the console can reflect changes made to the object after the log statement ran.
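
A quick illustration of that live-reference behavior:

var digitalData = { pageType: "product details page" };
console.log(digitalData);                 // some consoles show this "live"- expanding it later may reveal mutated values
console.log(JSON.stringify(digitalData)); // a frozen snapshot: {"pageType":"product details page"}
digitalData.pageType = "cart";            // the first log may now show "cart" when expanded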

Improved Logging for “Custom Event”-Based Rules

If you’re using “Custom Event” rules in DTM or Launch, you may have had some of the same debugging/logging frustrations. Logs from _satellite.setDebug will tell you a rule fired, but not what extra details were attached, and it really only tells you anything if you already have a rule set up in Launch.

For example, let’s say I have a link on my site for adding to cart:

<a href="#" id="cartAddButton">Add To Cart!</a>

My developers have attached a custom event to this link:

var addToCartButton = document.getElementById("cartAddButton"); 
addToCartButton.addEventListener("click", fireMyEvent, false); 
function fireMyEvent(e) { 
   e.preventDefault(); 
   var myCustomEvent = new CustomEvent("cartAdd", { detail: { name:"wug", price:"12.99", color:"red" }, bubbles: true, cancelable: true }); 
   e.currentTarget.dispatchEvent(myCustomEvent)
}

And I’ve set up a Custom Event rule in Launch to listen for it.

With my rule and _satellite.setDebug in place, clicking that link logs a message in my console confirming my rule fired.

But if this debugging doesn’t show up (for instance, if my rule doesn’t work for some reason), or if I don’t know what details the developers put on the custom event for me to work with, then I can put this script into my console:

var elem=document.getElementById("cartAddButton")
elem.addEventListener('cartAdd', function (e) { 
  console.log("CUSTOM EVENT 'cartAdd' fired with these details:", e.detail)
}, false);

Note, you do need to know what element the custom event fires on (an element with the ID of “cartAddButton”), and the name of the event (“cartAdd” in this case)- you can’t be as generic as you can with the Direct Call Rules.

With that in place, clicking the link logs the event name and those extra details in my console.

Note, any rule set up in Launch for that custom event will still fire, but now I can also see those additional details. For example, I now know I can reference the product color as event.detail.color within my Launch rule.
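
One small variation: since the developers created this event with bubbles: true, you could also listen at the document level and skip looking up the element (you still need to know the event name, though):

document.addEventListener("cartAdd", function (e) {
   console.log("CUSTOM EVENT 'cartAdd' from element '" + e.target.id + "' with details:", e.detail);
}, false);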

Other tips

Either of these snippets will, of course, only last until the DOM changes (like if you navigate to a new page or refresh the page). You might consider adding them as code within Launch, particularly if you need them to fire on things that happen early in the page load, before you have a chance to put code into the console, but I’d say that should only be a temporary measure- I would not deploy that to a production library.

What other tricks do you use to troubleshoot Direct Call Rules and Custom Events?