Adobe DTM Launch: Improvements for Single Page Apps

For those following the new release of Adobe’s DTM, known as Launch, I have a new blog post up at the Cognetik blog, cross-posted below:

It’s finally here! Adobe released the newest version of DTM, known as “Launch”. There are already some great resources out there going over some of the new features (presumably including plenty of “Launchey Launch” puns), which include:

  • Extensions/Integrations
  • Better Environment Controls/Publishing Flow
  • New, Streamlined Interface

But there is one thing I’ve been far more excited about than any other: Single Page App compatibility. I’ve mentioned on my personal blog some of the problems the old DTM has had with Single Page Apps:

  • Page Load Rules (PLRs) can’t fire later than DOMready
  • Event-Based Rules (EBRs) and Direct Call Rules (DCRs) can’t “stack” (unlike PLRs, there’s a 1:1 ratio between rules and analytics beacons, so you can’t have one rule set your global variables, another set section-specific variables, and another set page-specific variables, and have them all wrap into a single beacon)
  • It can be difficult to fire s.clearVars at the right place (and impossible without some interesting workarounds)
  • Firing a “Virtual Page Load” EBR at the right time (after your data layer has updated, for instance) can be tricky.

So much of this is solved with the release of DTM Launch.

  • You can have one rule that fires EITHER on domReady OR on a trigger (Event-based or Direct Call).
  • You have a way to fire clearVars.
  • You can add conditions/exclusions to Direct Call rules

There are other changes coming that will improve things even further, but for now, these changes are pretty significant for Single Page apps.

Multiple Triggers on a Single Rule

If I have a Single Page App, I’ll want to track when the user first views a page, the same as for a “traditional” non-App page. So if I’m setting EBRs or DCRs for my “Virtual Page Views”, I’d need to account for this “Traditional Page Load” page view for the user’s initial entry to my app.
In the past, I’d either have a Page Load Rule do this (if I could be sure my Event-Based Rules wouldn’t also run when the page first loaded), or I could do all my tracking with Event-Based Rules and suppress that initial page view beacon. Either way, I could end up maintaining two nearly identical sets of rules- one for when my page truly loads, and one for “Virtual Page Views”.

Now, I can do this in a single rule, where my “Core- Page Bottom” event fires when the page first loads (like an old Page Load Rule), and another “Page Name Changed” event fires when my “page name” Data Element changes (like an old Event-Based Rule).

No more need to keep separate sets of rules for Page Load Rules and Virtual page views!

Clearing variables with s.clearVars()

Anyone who has worked on a Single Page App, or on any Adobe Analytics implementation with multiple s.t() beacons on a single DOM, has felt the pain of variables carrying over from beacon to beacon. Once an “s” variable (like s.prop1) exists on the page, it will hang around and be picked up by any subsequent page view beacon on that page.

                        Page 1     Page 2           Page 3          Page 4
s.pageName              Landing    Search Results   PDP > Red Wug   Product List
s.events                (blank)    event14          prodView        prodView
s.eVar1 (search term)   (blank)    Red Wug          Red Wug         Red Wug

My pageName variable is fine because I’m overwriting it on each page, but my Search Term eVar value is hanging around past my Search Results page! And on pages where I don’t write a new events string, the most recent event hangs around!

In the old DTM, I had a few options for solving this. I could do some bizarre things to daisy-chain DCRs to make sure I could get the right order of setting variables, firing beacons, then clearing variables. Or, I could use a hack in the “Custom Code” conditions of an Event-Based Rule, to ensure s.clearVars would run before I started setting beacons. Or, more recently, I could use s.registerPostTrackCallback to run the s.clearVars function after the s_code detected an s.t function was called.
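
For reference, that last workaround only takes a few lines. Here’s a minimal sketch, assuming the newer AppMeasurement library (where both registerPostTrackCallback and clearVars live) and a global tracker object named “s”:

//run after every tracking call: once the beacon for the current "page" has
//been sent, wipe the variables so they can't leak into the next beacon
s.registerPostTrackCallback(function (requestUrl) {
  //requestUrl is the image request that was just fired; we only need the timing
  s.clearVars();
});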

Now, it’s as simple as specifying that my rule should set my variables, then send the beacon, then clear my variables:

Directly in the rule- no extra rules, no custom code, no workarounds!

Rule Conditions on ALL Rule Types (including Direct Call)

If I were using Direct Call Rules for my SPA, in the past, I’d have to account for Direct Call Rules having a 1:1 relationship with their trigger. If I had some logic I needed to fire on Search Results pages, and other logic to fire on Purchase Confirmation pages, I could have my developers fire a different “_satellite.track” function on every page:
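
For example, the developer-side calls might look something like this (the rule names are hypothetical):

//on the search results template:
_satellite.track("search results page view");

//on the purchase confirmation template:
_satellite.track("purchase confirmation page view");

//...and so on: one call, and one matching Direct Call Rule, per page type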

Then in each of those rules, I’d maintain all my global variables as well as any logic specific to that beacon. This could be difficult to maintain and introduces extra work and many possible points of failure for developers.

Or, I could have my developers fire a global _satellite.track(“page view”) on every page, and in that one rule, maintain a ridiculous amount of custom code like this:
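
The sketch below is hypothetical (the data element names are made up), but it shows the shape that kind of catch-all rule tends to take:

//one "page view" Direct Call Rule carrying the global AND page-specific logic
var pageName = _satellite.getVar("page name");

//global variables, set for every beacon
s.pageName = pageName;
s.channel = _satellite.getVar("site section");

//page-specific branches pile up here over time
if (pageName === "search results") {
  s.eVar1 = _satellite.getVar("search term");
  s.events = "event14";
} else if (pageName === "purchase confirmation") {
  s.events = "purchase";
  s.products = _satellite.getVar("products string");
}
//...many more else-ifs...

s.t();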

This would take me entirely out of the DTM interface, and make some very code-heavy rules (not ideal for end-user page performance, or for DTM user experience — here’s hoping your developer leaves nice script comments!)

Now, I can still have my developers set a single _satellite.track(“page view”) (or similar) on every page, and then set up any number of rules in Launch that all use that same “page view” trigger. Each rule gets its own condition, so directly in the interface I can set different variables when that call fires on my Search Results page versus my Purchase Confirmation page.
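
And if a built-in condition doesn’t fit, a rule can use a short custom-code condition instead; here’s a minimal sketch, assuming a “page name” data element:

//custom code condition on the "Search Results page view" rule:
//the rule's actions only run when this returns true
return _satellite.getVar("page name") === "search results";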

I’d love to say all my SPA woes were solved with this release, but to show I haven’t entirely drunk the Kool-aid, I will admit some of my most wished-for features (and extensions) aren’t in this first release of Launch. I know they’re coming, though- future releases of Launch will add additional features that will make implementing on a Single Page App even simpler, but for now, it still feels like Christmas came early this year.

Coming to Adobe Summit 2017

I’ll be at Adobe Summit in Las Vegas next week, Monday March 19th through Friday March 24th. If you happen to be out that way, shoot me a comment here and hopefully we can meet up! I’ll be attending a lot of the DTM sessions and will be ready to help folks understand what the DTM updates mean for them.
I’ll also be presenting at Un-Summit at UNLV on Monday, speaking about Marketing Innovation and Cognetik’s new Tableau data connector. Come check it out!

Building a Strong Analytics Practice: #3- Putting Processes in Place

This post was originally posted on the Cognetik blog as part of a series on Building a Strong Analytics Practice.

An analytics practice has some unique challenges as far as project management goes. The team is accountable for delivering quality data, but many elements are out of its control:

  • once you deliver technical specifications, you have to “hurry up and wait” until developers have questions or are ready for validation
  • documentation and validation often happen on “moving targets”, where the site map or functionality may be in flux right up until they are released
  • release cycles rarely include a window of time with a stable site for the Analytics team to perform validation
  • projects rarely exist in a vacuum- they usually need to meld with a global solution which is itself often a work-in-progress
  • an analytics project may involve many deliverables, with many audiences:
    • Site Map/wireframes
    • Business Requirements for reporting
    • Solution design
    • Technical Specifications for IT
    • TMS Engineering
    • IT Implementation
    • IT QA/Validation
    • Report QA/Validation
    • Push to production
    • Distribute reports, provide insights, and take action

Without an official process or flow in place, it can be easy for things to slip through the cracks.

Because of these external variables, both Agile/SCRUM and Waterfall methodologies have some major drawbacks. You may need to be writing technical specifications before a design is complete; taking an iterative approach may be too resource-intensive; the analytics team may not be deeply embedded enough to collaborate with developers in real time. Some of these difficulties can be alleviated by improving communication within your org, as discussed in the second post in this series, but the most significant thing you can do to help streamline your initiatives is to have an established process in place, to be sure that all the necessary tasks are completed in the right order by the right people. You may not always be able to adhere to it, so plan on some flexibility, but it can be a good exercise just to look at your process and document “this current project is an exception because [fill in reason] and we’re going to account for the deviation from our process by [fill in alternative]”.

Take this example user flow. In grey are examples of deliverables or decisions following a single sample reporting requirement (track a new form’s ID):

When visualized in this way, it may become easier to establish what kind of timelines you need when working with developers, clarify who is going to update the global documentation, or ensure that QA/validation procedures are followed. Cognetik can help set up and document these governance practices, but we’d also love to hear from you what you’ve found works well, or what struggles you’ve encountered.

Building a Strong Analytics Practice: #2- Connecting your Organization

This post was originally posted on the Cognetik blog as part of a series on Building a Strong Analytics Practice.

Once you have clear ownership within your core team, you need to get a global view of how data is used at your company. Once you’ve accounted for all the different moving pieces, it can be easier to:

  • Communicate clearly to the right people
  • Represent your team’s priorities to the rest of the org
  • Involve the right people in relevant decision-making processes
  • Have the right scope when planning new projects
  • Get more use out of your data by increasing its audience
  • Ensure that org-wide critical tasks and relationships have clear ownership within your Core Team
  • Keep leadership informed about the value your data provides, and the level of effort it takes to maintain it
  • Enlist resources to fill in gaps in your organization

To do this, I recommend mapping out the rest of your ecosystem. This will help break down those silos and give the individuals at your company who use the data the direction and support they need to get value out of the data.

Map out your ecosystem

This task can be surprisingly revealing, and may require some creative thinking. First, map out the obvious ones: marketers, analysts, developers and consultants. Don’t forget personalization, optimization, web development, privacy, project managers, data scientists, product owners and so on. Make sure to include executive sponsors and leadership.

List your company’s data tools

Next, list out the tools your company uses that touch data: your digital analytics tool of choice (Adobe or Google Analytics for instance), Optimization (eg Adobe Target or Optimizely), Content Management (eg Demandware), Customer Relationship Management (eg Salesforce), marketing (eg Kochava, Floodlight, AdWords), User Experience (eg Clicktale), Voice of Customer (eg ForeSee, OpinionLab)… feeling overwhelmed yet? Don’t worry, you can use this as a sort of head start:

Define responsibilities for each point of contact

For each component, figure out a point of contact- for instance, for your CRM, who will your Core team be working with? Reach out to the appropriate parties in your org. At bare minimum, send them an email, highlighting how they fit into the “Big Picture” for data at your company. If you are just now establishing a governance model, it may be worthwhile to even schedule a quick touch-base with each key person/team in your ecosystem to:

  • Make sure they know how they fit into the bigger Data-driven scheme and seek out feedback for what they’d love to get out of analytics
  • Establish who on your team is their main point of contact. Encourage them to keep you in the loop for any changes they are aware of that might impact (or benefit from) analytics
  • Ensure they have access to tools and resources (like variable maps or documentation on processes) in some centrally-located repository (like Sharepoint, Confluence, or Google Drive if need be)
  • Establish reasonable expectations and scope for new initiatives. Help them understand that you have a queue for analytics initiatives and a process to follow, and that it may take __ weeks/months to change the solution or kick off something new.
  • Give them visibility into the type of work you currently have on your roadmap and how that fits into company priorities.
  • Ask if there are any areas in the company not currently using analytics that might get value out of being included in these conversations.

You may or may not want a regular meeting with them, but it’s important to make sure the relationship always exists, that they see the active role Analytics has in your org, that they feel involved, and that they have a clear line of communication with you.

“Mapping takes too much of our time!”

I understand that this may require an investment of resources to get up and running, and that it may exceed the current scope of analytics at many companies. But, similar to establishing a strong data core team, this upfront investment of time and resources will, at bare minimum, help a company get more value out of its data, and may actually reduce the amount of resources needed in the long run. Establishing communication and relationships will give focus to analytics initiatives, reduce rework, and include analytics in conversations sooner (getting rid of the pre-release scramble to get analytics added and validated).

Building a Strong Analytics Practice: #1- Your Core Team

This post was originally posted on the Cognetik blog as part of a series on Building a Strong Analytics Practice. 

Imagine this conversation:
Joe: “I just got the wireframes for the new site filtering tool. We need an analytics BRD and Tech Spec so developers can begin work.”
Anna: “But my team of developers is working on a priority project through November then goes into 3-month code freeze!”
Joe: “K, well, this new site feature is also a priority, and we need tracking on the new filtering tool. “
Mike: “Our reporting needs to focus on the KBOs that just came down from the top. How does the new filtering tool relate to conversions? What business decisions can we make if we know it’s being used?”
Susan: “Speaking of, we know that conversion tracking on mobile is broken- has been since September. Can we prioritize getting THAT fixed?”
Dan: “But, we’ve been grading our personalization efforts using that report! We need to get that fixed, like… yesterday!”

As painful as that conversation feels, can you imagine how much more painful it is in places where those conversations are NOT happening? I know that no one wants more meetings in their schedule, but a regular, FOCUSED check-in between key stakeholders can make all the difference. But who should attend such meetings?

We’ve seen a lot of value in establishing an Analytics Model- some folks may call it a Center of Excellence (CoE), for others it may still just be called the “analytics team”. Whatever you call it, the important thing is to really think out the roles, goals, processes, and responsibilities so that this team- and their data- can really drive the conversation, rather than “be driven”. I’m going to call this the Data Core Team.

To start, figure out who is going to be on your Data Core Team. I’ve seen this filled by a single person, and I’ve seen a team of 6 or more. Either way, with however many people you have, you’ll need to fill these roles:

Solution Owner

The Solution Owner is the “business requirements” gatekeeper. Their world is one of Key Business Objectives (KBOs) and Key Performance Indicators (KPIs). They:

  • gather reporting requirements
  • give focus to the solution by running reporting requests through a value-driven filter, prioritizing work that will provide truly actionable data to their organization
  • interface with executives, product managers, and analysts to make sure their data practice aligns with their company’s business objectives and roadmap
  • work with the Implementation Architect to design a solution that will suit their reporting needs
  • are in charge of keeping implementation documentation in a centrally-accessible place

Implementation Architect

The Implementation Architect owns the technical side of the solution. The Solution Owner says if something is worth tracking; the Implementation Architect figures out how to make it happen. They:

  • know the tools of their trade- for instance, for an Adobe Analytics implementation, they’d know when to use an eVar instead of a prop, or how to set the products string. For a Google Analytics implementation, they know when to use an event or a custom metric, and the best practices behind event categories, actions and labels
  • make decisions and enforce standards for variable maps and data architecture. Often, the decisions they make are a bit arbitrary- for most folks, it doesn’t REALLY matter if you identify your pageName in a JavaScript object named “digitalData.page.pageInfo.pageName” or in “universal_variable.navigation.page”, or if you use eVar41 or eVar42- the important thing is that someone is in a position to make that decision and keep it standard.
  • administer any Tag Management Solution their company uses, perhaps just controlling access and settings standards, or perhaps going so far as to be the editor and publisher of changes.
  • work with the Data Steward to document what is needed from site developers.

Data Steward

The Data Steward works with site developers to apply the analytics solution to the site. As the person charged with owning the data for your site(s), they have more of a technical understanding of analytics and how it fits into site development. They:

  • may not be a developer themselves, but they need to understand the processes developers use, the overall way the site works, and to be able to make informed decisions about data layers, tag management, JavaScript frameworks, SDKs…
  • work closely with the Implementation Architect to design and deploy a solution that works, given your site’s architecture and developer resources.
  • interface with site engineers and developers and represent their interests to the rest of the Core Data Team.
  • own Data Quality- they run the QA processes and help maintain implementation health with regular audits.

Report Administrator

The Report Administrator does a lot of the housekeeping needed to get data to the end users within their org. They:

  • interface with the report users, ensuring they have the access and training they need
  • distribute reports, create logins, and provide access to training
  • may serve a PM role within the Data Core team, keeping track of upcoming initiatives and timelines.

Conclusion

I’d say it’s rare for these responsibilities to actually be split among 4 people as I’ve described here. The important thing is that you have clear ownership of each responsibility, and that this Core Team works closely together as the single source of the “Big Picture”.

Each role may need to pull in other resources- for instance, if your company doesn’t have an Adobe Analytics implementation expert, then your Implementation Architect may hire outside consultants to help them. I’d say in general, this doesn’t mean outsourcing the OWNERSHIP of your implementation architecture- each company still needs an internal resource with the motivation and access to resources to move the solution in the right direction. No matter how excellent your consultants are, they will never be able to own your implementation as well as someone internal could. A good consultant, however, will support that internal resource, providing industry knowledge and guidance, and investing in the future of your org’s analytics practice by training internal resources. Basically, outside consultants should be tasked with making internal owners look like Rock Stars.

If this seems like a bit much, or it’s hard to sell your organization on the idea of such an investment of resources, consider this: each of the bullet points above- as well as other, more specific tasks- is non-negotiable. They are all things that inherently need to happen to have an analytics solution. What we frequently see happen is that when not enough resources are assigned to supporting these tasks, reporting can still happen, but the net amount of effort is higher (because there was no forward-thinking master plan and folks have to make it up as they go) and the value of the reporting is lower. I promise, you will get a return if you invest in getting the right resources and support for your Analytics Practice.

This, of course, doesn’t cover everything you’d see done in a healthy Analytics Practice. I’d love to hear from readers if I left off anything they view as critical, and what they’ve seen work well or not work well!

Cross-post: Intro to Building a Strong Analytics Practice

I’m blogging again! I’m doing a series over on the Cognetik blog on how to build a Successful Analytics Practice.  Here is a cross-post of the intro:

Who’s driving this thing?

Our industry is full of intelligent, motivated people. Yet it feels like so often, for the amount of effort and thought we put into our Analytics solutions, we never quite get the full value that we know is there. As an analytics/data engineer, most of the work that comes across my desk is very tactical: deep-dive audits, technical specifications, configuring variables, setting up dashboards… these are all very valid and worthy activities, yet I still often hear frustration from my clients such as:

  • We have a hard time getting others within our company to see the value and potential in our analytics.
  • I want to use new tool features but upgrading will take too much effort.
  • Many teams in my organization interact with data, but they all work in silos.
  • It takes too long to get access to requested data.
  • My organization’s report usage is scattered and doesn’t align with global KPIs.
  • I need to apply my existing solution to a new site but I can’t find documentation on my current solution.
  • We’re not collecting the data I actually need for analysis.
  • We have so many new initiatives and works-in-progress, I don’t know which data I can trust.
  • Training users and developers on our implementation or toolset uses too many resources.
  • We collect a lot of data but I rarely get to see a report.

So what’s missing? For all the effort we put into designing solutions, implementing code, and configuring dashboards, what is stopping us from providing more value with our data?

I think often the problem is a lack of central leadership providing a foundation to work on. Now, I don’t mean to say our industry is lacking in leaders… far from it. But the problem is those leaders often aren’t given the resources or the permission to transform their org. So we end up with “lots of people in the car, but no one in the driver’s seat”. Because of how fast our industry has grown, Analytics practices have popped up in every organization, often organically and without much long-term planning. This leads to all those intelligent and motivated people working in silos, without a united focus or the resources to apply a global vision.

What’s the answer?

Each of these problems could be solved with the right Governance Model in place. That means consciously establishing roles, ownership, accountability, processes, and communication. Analytics should be a proactive part of your organization, not an afterthought. I’ll be posting a three-part series on how to get the ball rolling on establishing a healthy Analytics Practice.

Career changes and exciting opportunities

In a surprise to many (including myself), a few weeks ago I left Adobe and joined the team at Cognetik as a Principal Analytics Engineer. I’ll continue doing much of the same kind of work I’ve been doing- Analytics (now including Google Analytics again), Tag Management (not just DTM), data layers, governance, coding, building occasional tools on the side… the full gamut.

I love the team (and the products) at Adobe, and it wasn’t easy leaving them, but I’m content that in such a small industry, I’m bound to work with many of them again. And I’m very excited about this new opportunity: Cognetik is doing some incredible work for some exciting clients, and I’m thrilled to be in a position to offer a lot of value to my clients.

I’m also excited to be a part of the team building the Cognetik Product, a data visualization and insights tool that is unlike any other I’ve seen or worked with. I’ll be keeping up the blog, of course, and my various DTM enablement materials. I’m also on the #measure slack channel.

For those who I know because of my role at Adobe, it was a great experience, and I hope to stay in touch! Here’s to working in a fantastic and ever-evolving industry, full of smart, passionate people finding new ways to answer old questions.

Deploying Google Marketing Tags Asynchronously through DTM

I had posted previously about how to deploy marketing tags asynchronously through DTM, but Google Remarketing tags add an extra consideration: Google actually has a separate script to use if you want to deploy asynchronously. The idea is, you could reference the overall async script at the top of your page, then at any point later on, you would fire google_trackConversion to send your pixel. However, this is done slightly differently when you need your reference to that async script file to happen in the same code block as your pixel… you have to make sure the script has had a chance to load before you fire that trackConversion method, or you’ll get an error that “google_trackConversion is undefined”.

Below is an example of how I’ve done that in DTM.

//first, get the async Google script, and make sure it has loaded
var dtmGOOGLE = document.createElement('SCRIPT');
var done = false;

dtmGOOGLE.setAttribute('src', '//www.googleadservices.com/pagead/conversion_async.js');
dtmGOOGLE.setAttribute('type', 'text/javascript');

document.body.appendChild(dtmGOOGLE);

//once the script has loaded (the readyState checks cover older IE), fire the callback exactly once
dtmGOOGLE.onload = dtmGOOGLE.onreadystatechange = function () {
  if (!done && (!this.readyState || this.readyState === "loaded" || this.readyState === "complete")) {
    done = true;
    callback();

    //handle memory leak in IE
    dtmGOOGLE.onload = dtmGOOGLE.onreadystatechange = null;
    document.body.removeChild(dtmGOOGLE);
  }
};

//then, create the remarketing pixel
function callback() {
  if (done) {
    window.google_trackConversion({
      google_conversion_id: 12345789,
      google_custom_params: window.google_tag_params,
      google_remarketing_only: true
    });
  }
}

Why (and why not) use a Data Layer?

What’s a Data Layer?

Tag Management Systems can get data in a variety of ways. For instance, in DTM you can use query string parameters, meta tags, or cookie values- but in general, data for most variables comes from one of two sources:

  • To really take advantage of a tag management system like DTM, I may choose to scrape the DOM. I’m gonna call this the MacGyver approach. This uses the existing HTML and styles on a site to get at the values I need. For instance, DTM could use CSS selectors to pull the value out of a <div> with the class of “breadcrumb”, and end up with a value like “electronics>televisions>wide-screen” (see the sketch just after this list). This relies on my site having a reliable CSS structure, and there being elements on the page that include the values we need for reporting.
  • If I want even more flexibility, control and predictability, I may work with developers to create a data layer. They would create a JavaScript object, such as “universal_variable.pageName”, and give it a value based on our reporting needs, like “electronics | televisions | wide-screen > product list”. This gives greater control and flexibility for reporting, but requires developers to create JavaScript objects on the pages.
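
As an illustration, here’s a minimal sketch of that MacGyver approach as a custom-code data element; the div.breadcrumb selector is just an assumption about the site’s markup:

//hypothetical custom JavaScript data element: scrape the breadcrumb from the DOM
//assumes markup like <div class="breadcrumb">electronics > televisions > wide-screen</div>
var crumb = document.querySelector("div.breadcrumb");
return crumb ? crumb.textContent.replace(/\s*>\s*/g, ">").trim() : "";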

Conceptually speaking, a data layer is page-specific (but tool-agnostic) metadata that describes the page and the actions a user may take on it. Practically speaking, a data layer typically consists of a JavaScript object that contains all of the values we’d want to report on for a given page or user.

Data layers are important because they save developers time by allowing them to abstract out the metadata into a tool-agnostic syntax that a TMS like DTM can then ingest and set as data elements. Whereas once I would have told IT “please set s.prop5 and s.eVar5 to the search term on a search results page, and set s.events to event20”, now I can just say “please put the search term in a JavaScript object such as digitalData.page.onsiteSearchTerm and tell me what object it is.” Then the TMS administrators can easily map that to the right variables from there.
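
As a rough illustration (the property names below are examples loosely modeled on the W3C convention, not a prescribed standard), a page’s data layer might look like this:

window.digitalData = {
  page: {
    pageInfo: { pageName: "electronics | televisions | wide-screen > product list" },
    category: { primaryCategory: "electronics" },
    onsiteSearchTerm: "red wug"
  }
};
//a data element in the TMS can then point at digitalData.page.onsiteSearchTerm
//and map it to s.eVar5/s.prop5, with no CSS selectors involved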

You can see an example data layer if you’d like, or you can pull open a developer console for this very blog and look at the object “digitalDataDDT” to see the data layer that is automatically created by Search Discovery’s WordPress plugin.

Why a Data Layer?

My friends at 33 Sticks also have a great blog post on the subject, but I’ll list out some of the reasons I prefer clients to use a Data Layer. To me, it’s an upfront investment for a scalable, easily maintained implementation going forward. It does mean more work upfront- you have to first design the data layer to make sure it covers your reporting requirements, then you’ll need developers to add it to your site. But beyond those upfront tasks, configuration in your TMS will be much simpler, and it will save you many hours of CSS guesswork and DOM scraping, and it may prevent broken reporting down the line.

Route                          Amount of Control   Upfront LOE (Dev / Analytics)   Maintenance LOE (Dev / Analytics)
Old-fashioned “on page” code   Medium              Heavy / Heavy                   Heavy / Heavy
DTM + “MacGyver”               Low                 Minimal / Heavy                 Minimal / Heavy
DTM + Data Layer               High                Heavy / Medium                  Minimal / Minimal

Another potential benefit to a Data Layer is that more and more supplementary tools know how to use them now. For instance, ObservePoint’s site scanning tool can now return data on not just your Analytics and Marketing beacons, but on your Data Layer as well. And one of my favorite debugging tools, Dataslayer, can return both your beacons and your data layer to your console, so if something is breaking down, you can tell if it’s a data layer issue or a TMS issue.

Ask Yourself

Below are some questions to ask yourself when considering using a data layer:

How often does the code on the site change? If the DOM/HTML of the site changes frequently, you don’t want to rely on CSS selectors. I’ve had many clients whose reports randomly broke, and after much debugging we realized the problem was that the developers had changed the code without knowing it would affect analytics. It’s easier to tell developers to put a data layer object on a page and then leave it alone than it is to tell them not to change their HTML/CSS.

How CSS-savvy is your TMS team? If you have someone on your team who is comfortable navigating a DOM using CSS, then you may be able to get away without a data layer a little more easily… but plan on that CSS-savvy resource spending a lot of time in your TMS.  I’ll admit, I enjoy DOM-scraping, and have spent a LOT of time doing it. But I recognize that while it seems like a simple short-term fix, it rarely simplifies things in the long run.

How many pages/page types are on the site? A very complicated site is hard to manage through CSS- you have to familiarize yourself with the DOM of every page type.

How are CSS styles laid out? Are they clean, systematic, and fairly permanent? Clearly, the cleaner the DOM, the easier it is to scrape it.

How often are new pages or new site functionality released? Sites that roll out new microsites or site functionality frequently would need a CSS-savvy person setting up their DTM for every change. Alternatively, relying on a data layer requires a data-layer-savvy developer on any new pages/site/functionality. It is often easier to write a solid Data Layer tech spec for developers to reference on projects going forward than to figure out CSS selectors for every new site/page/functionality.

How much link-tracking/post-page-load tracking do you have on your site? If you do need to track a lot of user actions beyond just page loads, involving IT to make sure you are tracking the right things (instead of trying to scrape things out of the HTML) can be extremely valuable. See my post on ways to get around relying on CSS for event-based rules for more info on options.
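
As one hedged example of avoiding CSS selectors for a user action: developers can fire a direct call at the moment the action happens (the element ID and payload here are made up; the optional second argument is a Launch feature):

//instead of a click-based rule tied to a CSS selector,
//the site's own code announces the action to the TMS
document.getElementById("add-to-cart").addEventListener("click", function () {
  _satellite.track("add to cart", { productId: "wug-123" });
});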

What is the turn-around time for the developers? Many clients move to DTM specifically because they can’t work easily within their dev team to set up analytics. A development-driven data layer may take many months to set up, stage, QA, and publish. Then if changes are needed, the process starts again. It may be worth going through the lengthy process initially, but if changes are frequently needed in this implementation, you may find yourself relying more on the DOM.

Are there other analytics/marketing tag vendors that may use a data layer? You may be able to hit two birds with one stone by creating a data layer that multiple tools can use.

Have you previously used another tag management system? Often, a data layer set up for a different tool can be used by DTM. Similarly, if the client ever moves away from DTM, their data layer can travel with them.

Does the site have jQuery? The jQuery library has many methods that help with CSS selectors (such as .parent(), .children(), .closest(), .is()…). A CSS-selector-based implementation may be more difficult without jQuery or a similar JavaScript library.

Who should create my Data Layer?

Ideally, your data layer should be created by your IT/developers… or at bare minimum, developers should be heavily involved. They may be able to hook into existing data in your CMS (for instance, if you use Adobe Experience Manager you can use the Context Hub as the basis for your data layer), or they may already have ideas for how they want to deploy. Your data layer should not be specific to just your Analytics solution; it should be seen as the basis of all things having to do with “data” on your site.

Yet frequently, for lack of IT investment, the analytics team will end up defining the data layer and dictating it to IT. These days, that’s what most Tech Specs consist of: instructions to developers on how to build a data layer. Usually, external documentation on data layers (like from consulting agencies) will be based on the W3C standard.

The W3C (with a task force including folks from Adobe, Ensighten, Microsoft, IBM…) has introduced a tool-agnostic data layer standard that can be used by many tools and vendors. The specifications for this can be found on the W3C site, and many resources already exist with examples. Adobe Consulting often proposes using the W3C standard as a starting point if you don’t have any other plans. In my experience, though, it is generally just that: a starting point. Some people don’t like the way the W3C standard is designed, and most everyone needs to add on to it. For example, folks might ask:

  • why is “onsiteSearchTerms” part of digitalData.page? Can I put it instead in something I made up, like digitalData.search?
  • I want to track “planType”- the W3C didn’t plan for that, so can I just put it somewhere logical like digitalData.transaction?
  • I don’t need “digitalData.product” to be in an array- can I just make that a simple object?

The answer is: yes. You can tweak that standard to your heart’s delight. Just please, PLEASE, document it, and be aware that some tools will be built with the official standard in mind.
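
For instance, a tweaked version answering the three questions above might look like this (purely illustrative; digitalData.search and planType are not part of the official spec):

window.digitalData = window.digitalData || {};

//search terms moved out of digitalData.page into a made-up node
digitalData.search = { onsiteSearchTerms: "red wug" };

//a custom field the W3C never planned for
digitalData.transaction = { planType: "premium monthly" };

//a simple object instead of the spec's product array
digitalData.product = { productID: "wug-123" };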

The Phased Approach

Many folks adopt a TMS specifically because they don’t want to have to go through IT release cycles to make changes to their implementation. You can still use a TMS to get a lot of what you need for reporting without a data layer and without a ton of CSS work. It may be worthwhile to put a “bare minimum” TMS deployment on your site to start getting the out of the box reports and any reports that don’t require a data layer (like something based on a plugin such as getTimeParting), then to fill in the data layer as you are able. I’d be wary though, because sometimes once that “bare minimum” reporting is in place, it can be easy to be complacent and lose some of the urgency behind getting a thorough solution implemented correctly from the start.

Conclusion

I fully understand that a properly designed data layer is a lot of work, but in my experience, there is going to be a lot of effort with or without a data layer- you can choose for that effort to be upfront in the planning and initial implementation, or you can plan on more long-term maintenance.