New industry tool: Adobe Configuration Export

An industry friend and former coworker, Gene Jones, made me aware of an awesome new tool he’s created- a tool that exports your Report Suite info into an Excel file. It compares the variable settings of multiple report suites in one tab, then creates a tab with a deeper look at all the settings for each report suite.

This is similar to the very handy Observepoint SDR Builder- I’ll freely admit I’m likely to use both in the future. Both (free) tools show you your settings and allow for report suite comparison. The Observepoint SDR Builder uses a Google Sheets extension and has a little more setup involved (partially because if you’re an Observepoint customer you can expand its functionality), but it allows you to manage your settings directly from the Google Sheet (communicating those changes back to the Adobe Admin Console).

But sometimes all you want is a quick export of your current settings in a simple, local view, in which case the Adobe Configuration Export tool is very straightforward to use.

And, it’s open source– the community can add to it and make use of it for whatever situations they dream up. I’m excited to see what features get added in the future (I see a “Grade Your Config” option that intrigues me). Nice work, Gene!

Quick poll: What should I tackle next?

Each new year, I tend to dive into some new side project (this is how the Beacon Parser and the PocketSDR came about). I have quite a few things I want to tackle right now, and one main thing I’m slowly plugging away at, but in the meantime, I’m wondering what to prioritize. So, a poll:

Any other ideas (or desired improvements to existing tools)? Let me know in the comments.

Career changes and exciting opportunities


In a surprise to many (including myself), a few weeks ago I left Adobe and joined the team at Cognetik as a Principal Analytics Engineer. I’ll continue doing much of the same kind of work I’ve been doing- Analytics (now including Google Analytics again), Tag Management (not just DTM), data layers, governance, coding, building occasional tools on the side… the full gamut.

I love the team (and the products) at Adobe, and it wasn’t easy leaving them, but I’m content that in such a small industry, I’m bound to work with many of them again. And I’m very excited about this new opportunity: Cognetik is doing some incredible work for some exciting clients, and I’m thrilled to be in a position to offer a lot of value to my clients.

I’m also excited to be a part of the team building the Cognetik Product, a data visualization and insights tool that is unlike any other I’ve seen or worked with. I’ll be keeping up the blog, of course, and my various DTM enablement materials. I’m also on the #measure slack channel.

For those who I know because of my role at Adobe, it was a great experience, and I hope to stay in touch! Here’s to working in a fantastic and ever-evolving industry, full of smart, passionate people finding new ways to answer old questions.

Why (and why not) use a Data Layer?

What’s a Data Layer?

Tag Management Systems can get data in a variety of ways. For instance, in DTM you can use query string parameters, meta tags, or cookie values- but in general, data for most variables comes from one of two sources:

  • To really take advantage of a tag management system like DTM, I may choose to scrape the DOM. I’m gonna call this the MacGyver approach. This uses the existing HTML and styles on a site to populate variables. For instance, DTM could use CSS selectors to pull the values out of a <div> with the class of “breadcrumb”, and end up with a value like “electronics>televisions>wide-screen”. This relies on my site having a reliable CSS structure, and there being elements on the page that include the values we need for reporting.
  • If I want even more flexibility, control, and predictability, I may work with developers to create a data layer. They would create a JavaScript object, such as “universal_variable.pageName”, and give it a value based on our reporting needs, like “electronics | televisions | wide-screen > product list”. This gives greater control and flexibility for reporting, but requires developers to create JavaScript objects on the pages. (A quick sketch of both approaches follows this list.)
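
Just to make the contrast concrete, here’s a rough sketch of what each approach might look like in the browser. The “.breadcrumb” selector and the “universal_variable” object are illustrative, not from any particular site:

```javascript
// The "MacGyver" approach: scrape the value out of existing markup.
// Relies on a <div class="breadcrumb"> actually existing and staying put.
var breadcrumbEl = document.querySelector("div.breadcrumb");
var category = breadcrumbEl ? breadcrumbEl.textContent.trim() : "";
// e.g. "electronics>televisions>wide-screen"

// The data layer approach: read a value developers set on purpose.
var universal_variable = window.universal_variable || {};
var pageName = universal_variable.pageName || "";
// e.g. "electronics | televisions | wide-screen > product list"
```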

Conceptually speaking, a data layer is page-specific (but tool-agnostic) metadata that describes the page and the actions a user may take on it. Practically speaking, a data layer typically consists of a JavaScript object that contains all of the values we’d want to report on for a given page or user.

Data layers are important because they save developers time by allowing them to abstract the metadata into a tool-agnostic syntax that a TMS like DTM can then ingest and set as data elements. Whereas once I would have told IT “please set s.prop5 and s.eVar5 to the search term on a search results page, and set s.events to event20”, now I can just say “please put the search term in a JavaScript object such as digitalData.page.onsiteSearchTerm and tell me what object it is.” The TMS administrators can easily map that to the right variables from there.
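
To make that request concrete, here is a minimal sketch of both halves- what IT puts on the page, and what it conceptually gets mapped to. In DTM the mapping would really happen through a data element and a rule rather than hand-written code, and the variable numbers simply follow the example above:

```javascript
// What IT delivers on the search results page:
window.digitalData = window.digitalData || {};
digitalData.page = digitalData.page || {};
digitalData.page.onsiteSearchTerm = "wide-screen tv";

// What the TMS administrator conceptually maps it to:
var s = window.s || {}; // the AppMeasurement object, assumed to already be on the page
var term = (window.digitalData.page || {}).onsiteSearchTerm;
if (term) {
  s.prop5 = term;
  s.eVar5 = term;
  s.events = "event20";
}
```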

You can see an example data layer if you’d like, or you can pull open a developer console for this very blog and look at the object “digitalDataDDT” to see the data layer that is automatically created by Search Discovery’s WordPress plugin.

Why a Data Layer?

My friends at 33 Sticks also have a great blog post on the subject, but I’ll list out some of the reasons I prefer clients to use a Data Layer. To me, it’s an upfront investment in a scalable, easily maintained implementation going forward. It does mean more work upfront- you have to first design the data layer to make sure it covers your reporting requirements, then you’ll need developers to add it to your site. But beyond those upfront tasks, configuration in your TMS will be much simpler, you’ll save many hours of CSS guesswork and DOM scraping, and you may prevent broken reporting down the line.

Route                        | Amount of Control | Upfront LOE (Dev / Analytics) | Maintenance LOE (Dev / Analytics)
Old fashioned “code on page” | Medium            | Heavy / Heavy                 | Heavy / Heavy
DTM + “MacGyver”             | Low               | Minimal / Heavy               | Minimal / Heavy
DTM + Data Layer             | High              | Heavy / Medium                | Minimal / Minimal

Another potential benefit to a Data Layer is that more and more supplementary tools know how to use them now. For instance, Observepoint’s site scanning tool can now return data on not just your Analytics and Marketing beacons, but on your Data Layer as well. And one of my favorite debugging tools, Dataslayer, can return both your beacons and your data layer to your console, so if something is breaking down, you can tell if it’s a data layer issue or a TMS issue.

Ask Yourself

Below are some questions to ask yourself when considering using a data layer:

How often does the code on the site change? If the DOM/HTML of the site changes frequently, you don’t want to rely on CSS selectors. I’ve had many clients whose reports randomly broke, and after much debugging we realized the developers had changed the code without knowing it would affect analytics. It’s easier to tell developers to put a data layer object on a page and then leave it alone than it is to tell them not to change their HTML/CSS.

How CSS-savvy is your TMS team? If you have someone on your team who is comfortable navigating a DOM using CSS, then you may be able to get away without a data layer a little more easily… but plan on that CSS-savvy resource spending a lot of time in your TMS.  I’ll admit, I enjoy DOM-scraping, and have spent a LOT of time doing it. But I recognize that while it seems like a simple short-term fix, it rarely simplifies things in the long run.

How many pages/page types are on the site? A very complicated site is hard to manage through CSS- you have to familiarize yourself with the DOM of every page type.

How are CSS styles laid out? Are they clean, systematic, and fairly permanent? Clearly, the cleaner the DOM, the easier it is to scrape it.

How often are new pages or new site functionality released? Sites that roll out new microsites or site functionality frequently would need a CSS-savvy person setting up their DTM for every change. Alternatively, relying on a data layer requires a data-layer-savvy developer for any new pages/sites/functionality. It is often easier to write a solid Data Layer tech spec for developers to reference on projects going forward than to figure out CSS selectors for every new site/page/functionality.

How much link-tracking/post-page-load tracking do you have on your site? If you do need to track a lot of user actions beyond just page loads, involving IT to make sure you are tracking the right things (instead of trying to scrape things out of the HTML) can be extremely valuable. See my post on ways to get around relying on CSS for event-based rules for more info on options.
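
As a small illustration of the IT-involvement route, a developer can fire a DTM direct call rule with a descriptive name at the moment the action happens, rather than having the TMS infer it from the clicked element’s HTML. The element ID and the rule name below are made up:

```javascript
// Fired by site code when the add-to-cart action happens,
// instead of being inferred from a CSS selector on the clicked element.
var addToCartButton = document.getElementById("add-to-cart"); // hypothetical element ID
if (addToCartButton) {
  addToCartButton.addEventListener("click", function () {
    if (window._satellite) {
      _satellite.track("cart add"); // "cart add" is a hypothetical direct call rule name
    }
  });
}
```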

What is the turn-around time for the developers? Many clients move to DTM specifically because they can’t work easily within their dev team to set up analytics. A development-driven data layer may take many months to set up, stage, QA, and publish. Then if changes are needed, the process starts again. It may be worth going through the lengthy process initially, but if changes are frequently needed in this implementation, you may find yourself relying more on the DOM.

Are there other analytics/marketing tag vendors that may use a data layer? You may be able to kill two birds with one stone by creating a data layer that multiple tools can use.

Have you previously used another tag management system? Often, a data layer set up for a different tool can be used by DTM. Similarly, if the client ever moves away from DTM, their data layer can travel with them.

Does the site have jQuery? The jQuery library has many methods that help with CSS selectors (such as .parent(), .children(), .closest(), .is()…). A CSS-selector-based implementation may be more difficult without jQuery or a similar JavaScript library.
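
For example, a CSS-savvy admin might scrape a product name out of the markup with something like this. The selectors are invented for illustration, which is exactly the fragility described above:

```javascript
// Walk from the clicked button up to a (hypothetical) ".product-tile"
// container, then down to the name element inside it.
jQuery(document).on("click", ".add-to-cart", function () {
  var productName = jQuery(this)
    .closest(".product-tile")
    .find(".product-name")
    .text()
    .trim();
  // If a developer ever renames ".product-tile", this silently becomes "".
});
```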

Who should create my Data Layer?

Ideally, your data layer should be created by your IT/developers… or at a bare minimum, developers should be heavily involved. They may be able to hook into existing data in your CMS (for instance, if you use Adobe Experience Manager you can use the Context Hub as the basis for your data layer), or they may already have ideas for how they want to deploy it. Your data layer should not be specific to just your Analytics solution; it should be seen as the basis of all things having to do with “data” on your site.

Yet frequently, for lack of IT investment, the analytics team will end up defining the data layer and dictating it to IT. These days, that’s what most Tech Specs consist of: instructions to developers on how to build a data layer. Usually, external documentation on data layers (like from consulting agencies) will be based on the W3C standard.

The W3C (with a task force including folks from Adobe, Ensighten, Microsoft, IBM…) has introduced a tool-agnostic data layer standard that can be used by many tools and vendors. The specifications can be found on the W3C site, and many resources already exist with examples. Adobe Consulting often proposes the W3C standard as a starting point if you don’t have any other plans. However, in my experience, it generally is just that: a starting point. Some people don’t like the way the W3C standard is designed, and most everyone needs to add on to it. For example, folks might ask:

  • Why is “onsiteSearchTerms” part of digitalData.page? Can I put it instead in something I made up, like digitalData.search?
  • I want to track “planType”- the W3C didn’t plan for that, so can I just put it somewhere logical, like digitalData.transaction?
  • I don’t need “digitalData.product” to be in an array- can I just make it a simple object?

The answer is: yes. You can tweak that standard to your heart’s delight. Just please, PLEASE, document it, and be aware that some tools will be built with the official standard in mind.
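
Purely as an illustration, a tweaked digitalData object answering those three questions might look something like this. The “search”, “planType”, and single-object “product” pieces are the made-up customizations from the list above, not part of the official spec:

```javascript
window.digitalData = {
  page: {
    pageInfo: {
      pageName: "electronics | televisions | wide-screen > product list"
    }
  },
  search: {                       // custom: search moved out of digitalData.page
    onsiteSearchTerms: "wide-screen tv"
  },
  transaction: {
    planType: "premium"           // custom: a field the W3C spec never planned for
  },
  product: {                      // simplified: a single object instead of the spec's array
    productInfo: {
      productID: "TV-12345"
    }
  }
};
```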

The Phased Approach

Many folks adopt a TMS specifically because they don’t want to have to go through IT release cycles to make changes to their implementation. You can still use a TMS to get a lot of what you need for reporting without a data layer and without a ton of CSS work. It may be worthwhile to put a “bare minimum” TMS deployment on your site to start getting the out-of-the-box reports and any reports that don’t require a data layer (like something based on a plugin such as getTimeParting), then fill in the data layer as you are able. I’d be wary, though: once that “bare minimum” reporting is in place, it can be easy to become complacent and lose some of the urgency behind getting a thorough solution implemented correctly from the start.

Conclusion

I fully understand that a properly designed data layer is a lot of work, but in my experience, there is going to be a lot of effort with or without a data layer- you can choose for that effort to be upfront, in the planning and initial implementation, or you can plan on more long-term maintenance.

What the DTM “top down” approach means for your page performance

Any JavaScript framework, including all Tag Management Systems like DTM, has the potential to ADD more JavaScript weight to your page. But if you approach things the right way, this JavaScript weight and its effect on your page performance can be mitigated by using DTM to optimize how your tools and tags are delivered. In a partner post, I’ll be talking about how to get the most out of DTM as far as Third Party Tags go, but I think one key concept is worth discussing explicitly.
You may have heard DTM referred to as a “Top Down” TMS. For instance, this appears in some of the marketing slide decks:
[Image: “Top Down” slide from the DTM marketing decks]

While yes, it’s worth discussing this as a holistic approach to your digital analytics, it actually has a very real effect on how you set your rules up and how that affects page performance. That’s what I hope to discuss in this post.

In a different TMS, or even in DTM if I haven’t changed my mindset yet, I may be tempted to do something like this:

[Screenshot: DTM rules list with separate rules for each tool and each scope]

Where I have differing rules for different scopes as well as for different tags (we’re pretending here that “Wuggly” is a Third Party Marketing Pixel vendor).

DTM does what it can to defer code or make it non-blocking, but there are parts of the DTM library which will run as synchronous code on all pages. Some of that is because of the way the code needs to work- the Marketing Cloud ID service must run before the other Adobe tools; older Target mbox code versions need to run synchronously at the top of the page. But there is also the code in the library that serves as a map for when and how all of the deferred code should run. For instance, my library may include the following:

[Screenshot: DTM library code for two separate rules with identical conditions]
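
Since the screenshot doesn’t reproduce well here, this is roughly what that map looks like conceptually- a simplified sketch, not the actual code DTM generates:

```javascript
// Conceptual sketch only - not DTM's real library format.
var pageType = "home page"; // however the page exposes it (data element, data layer, etc.)
var pageLoadRules = [
  {
    name: "Home Page - Analytics",
    conditions: [function () { return pageType === "home page"; }],
    scripts: ["/scripts/homepage-analytics.js"] // loaded non-sequentially
  },
  {
    name: "Home Page - Wuggly",
    conditions: [function () { return pageType === "home page"; }],
    scripts: ["/scripts/wuggly-pixel.js"]       // loaded non-sequentially
  }
];
// The identical condition has to be defined (and evaluated) twice, once per rule.
```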

All of this logic exists to say “if the current pageType is ‘home page’, run this code non-sequentially.” The name, conditions, and event code for each rule run on each page as part of the overall library- these serve as a map for DTM to know which code must run and which code it can ignore.
You’ll notice the code for the two rules is completely identical, except for the rule name (in blue) and the source of the external script (in yellow). Most importantly, the conditions (in green) are identical. If they shared a rule, we could accomplish the exact same thing with half as much code:

[Screenshot: DTM library code for a single combined rule]
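
And the combined version, again as a conceptual sketch rather than DTM’s actual output:

```javascript
// Conceptual sketch only - not DTM's real library format.
var pageType = "home page"; // however the page exposes it
var pageLoadRules = [
  {
    name: "Home Page",
    conditions: [function () { return pageType === "home page"; }],
    scripts: [
      "/scripts/homepage-analytics.js", // still delivered as separate,
      "/scripts/wuggly-pixel.js"        // non-sequential scripts
    ]
  }
];
// One rule name and one condition check per page for the same two scripts.
```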

I now have ONE rule, which would be used for ALL logic that should run on the Home Page. The part of the library that runs on every page to check against conditions only has to check if the “pageType” is “home page” once, rather than twice. And DTM still loads the two scripts as separate non-sequential javascript. This doesn’t look like a major change, but when viewed across a whole implementation, where there may be dozens of rules, it can make a big difference in how many rules and conditions DTM must check on every page.

In the DTM interface, this would look like this:
[Screenshot: DTM rules list with one rule per scope]

If I want to know which rules contain my “Wuggly” codes, I can use the  “Tag Name” filter in the rules list, which will show me all rules that have a third party tag that includes “Wuggly”:
[Screenshot: filtering the DTM rules list by Tag Name]

This is filtering based on the Tag Name specified when you add the tag code:
[Screenshot: the Tag Name field when adding tag code to a rule]

Using this approach, where your rules are based on their scope (or condition) and contain all logic- Analytics, Target, third party- that applies to that scope, can not only lighten your library weight but also keep your DTM implementation organized and scalable. It may, however, require a change of mindset for how DTM is organized: if someone else needed to deploy a tag on Product Detail Pages, they wouldn’t need to create a new rule; they could see that a “Product Detail Page” rule already exists with the scope they need, and they need only add the third party tag.

There is one potential downside to consider, though- the approval and publication flow is based on Rules in their entirety. You can’t say “publish my changes to the analytics portion of this tool, but not to the third party tag section”. Still, if you are aware of this as you plan the publication of new features, this potential drawback rarely overrides the advantages.

Beacon Parser round 3

First, you’ll notice a new look and feel for both the blog and the parser- I’m trying to give it a more uniform look.

Functionally, the biggest changes in this go-around are:

  • Variables and parameters should keep a clean, sorted order.
  • You can highlight values in older beacons that don’t match the most current beacon.
  • You can change display options before submitting your first beacon.
  • Display options now include friendly names and variable descriptions, taken largely from this Adobe help article.
  • I fixed some bugs with exporting to CSV and the products string.

Beacon Parser: Round One

I’ve had two major frustrations while troubleshooting and doing QA for clients lately:
1. The new POST beacons don’t split up prettily in debuggers (like Charles proxy).
2. Even if they did, it can be hard to compare beacons to each other if they don’t have the same number of variables.
So, as a pet project in the evenings, I’ve been working on a tool to solve for these problems. Today I introduce: the Beacon Parser!

I don’t consider it done yet, but it’s ready for beta testing, perhaps. I’m sure there are better ways to accomplish what I’ve done- I’m no development genius- but I must say I’m pretty happy with the result. I would love for a few folks to provide feedback, if possible.

Current Features:

  • If you enter multiple beacons, it aligns rows for similar variables across beacons.
  • You can export the table to a CSV file. Note that in certain browsers (including Firefox), you will need to add the “.csv” extension to the downloaded filename.
  • If your beacon includes “varName.”/”.varName” variables (clickmap, context data), it will concatenate all the currently “open” variables to output one variable/value pair (a rough sketch of the idea follows this list):
    [Screenshot: parser output with concatenated context data variables]
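
To give a feel for what that concatenation involves (this is just a sketch of the idea, not the parser’s actual code): nested parameters in an Adobe beacon are wrapped in “opening” and “closing” markers like c. … .c, and the open prefixes get joined onto each inner name.

```javascript
// Turn parameters like ["c.", "a.", "CarrierName=comcast", ".a", ".c"]
// into fully qualified pairs like { "c.a.CarrierName": "comcast" }.
function expandNestedParams(params) {
  var openPrefixes = [];
  var result = {};
  params.forEach(function (param) {
    if (/^[^=]+\.$/.test(param)) {          // e.g. "c." opens a namespace
      openPrefixes.push(param);
    } else if (/^\.[^=]+$/.test(param)) {   // e.g. ".c" closes it
      openPrefixes.pop();
    } else {                                // a normal name=value parameter
      var eq = param.indexOf("=");
      var key = eq === -1 ? param : param.slice(0, eq);
      var value = eq === -1 ? "" : decodeURIComponent(param.slice(eq + 1));
      result[openPrefixes.join("") + key] = value;
    }
  });
  return result;
}
```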

Features I’m still working on (in order of ambitiousness):

  • I definitely still need to work out some bugs (see bottom of this post).
  • It doesn’t currently work great with incomplete beacons.
  • A “clear” button.
  • I’d like to sort the variables back to their original order- currently as you add new beacons that have variables the previous beacons did not, it just adds them to the end.
  • I’d like to have columns for notes on the purpose of the variable, and data quality if there’s anything clearly broken (like character limits being exceeded).
  • I’d love to connect to the Admin API and return the names of custom variables.

I’d also love to create a W3C data layer parser.

Known bugs (already), which I will try to fix sometime this evening:

  • It breaks when the current or referring URL has query params in it. Kind of a big problem.
  • It struggles with some products strings.