Any javascript framework, including any Tag Management System like DTM, has the potential to ADD more javascript weight to your page. But if you approach things the right way, that javascript weight and its effect on your page performance can be mitigated by using DTM to optimize how your tools and tags are delivered. In a companion post, I'll be talking about how to get the most out of DTM as far as Third Party Tags go, but I think one key concept is worth discussing explicitly.
You may have heard DTM referred to as a "Top Down" TMS. For instance, this appears in some of the marketing slide decks:
While that's worth discussing as a holistic approach to your digital analytics, it also has a very real effect on how you set your rules up and, in turn, on your page performance. That's what I hope to cover in this post.
In a different TMS, or even in DTM if I haven’t changed my mindset yet, I may be tempted to do something like this:
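Something along these lines, where the rule names are hypothetical but the pattern of one rule per tag, per scope, is the point:

```
Page Load Rules:
  Wuggly Retargeting - Home Page
  Wuggly Conversion - Home Page
  Analytics - Home Page
  Wuggly Retargeting - Product Detail Page
  Analytics - Product Detail Page
```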
Where I have differing rules for different scopes as well as for different tags (we’re pretending here that “Wuggly” is a Third Party Marketing Pixel vendor).
DTM does what it can to defer code or make it non-blocking, but there are parts of the DTM library that will run as synchronous code on every page. Some of that is because of how the code needs to work: the Marketing Cloud ID service must run before the other Adobe tools, and older Target mbox code versions need to run synchronously at the top of the page. But there is also the code in the library that serves as a map for when and how all of the deferred code should run. For instance, my library may include the following:
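As a rough, simplified sketch (not the exact code DTM generates), the page load rule entries for two separate home page rules might look something like this; the rule names and script URLs are made-up examples:

```javascript
// Simplified sketch of two page load rule entries in a DTM library.
// Rule names and script URLs are hypothetical; the structure is approximate.
_satellite.pageLoadRules = [
  {
    name: "Wuggly Retargeting - Home Page",
    conditions: [function() {
      // Condition: only fire on the home page
      return _satellite.getVar("pageType") === "home page";
    }],
    event: "pagebottom",
    trigger: [{
      command: "loadScript",
      arguments: [{
        scripts: [{ src: "//cdn.wuggly.example/retargeting.js" }]
      }]
    }]
  },
  {
    name: "Wuggly Conversion - Home Page",
    conditions: [function() {
      // Identical condition, evaluated a second time on every page
      return _satellite.getVar("pageType") === "home page";
    }],
    event: "pagebottom",
    trigger: [{
      command: "loadScript",
      arguments: [{
        scripts: [{ src: "//cdn.wuggly.example/conversion.js" }]
      }]
    }]
  }
];
```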
All of this logic exists to say: if the current pageType is "home page", run this code non-sequentially. The name, conditions, and event code for each rule run on every page as part of the overall library; they serve as the map DTM uses to know which code must run and which code it can ignore.
You'll notice the code for the two rules is completely identical except for the rule name and the source of the external script. Most importantly, the conditions are identical. If those tags shared a rule, we might see the exact same thing as above accomplished with half as much code:
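Again as a rough sketch under the same assumptions, the combined rule could look something like this, with one set of conditions and both scripts attached to a single rule:

```javascript
// Simplified sketch: one rule, one condition check, two non-sequential scripts.
_satellite.pageLoadRules = [
  {
    name: "Home Page",
    conditions: [function() {
      // Checked once per page instead of once per tag
      return _satellite.getVar("pageType") === "home page";
    }],
    event: "pagebottom",
    trigger: [{
      command: "loadScript",
      arguments: [{
        scripts: [
          { src: "//cdn.wuggly.example/retargeting.js" },
          { src: "//cdn.wuggly.example/conversion.js" }
        ]
      }]
    }]
  }
];
```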
I now have ONE rule, which would be used for ALL logic that should run on the Home Page. The part of the library that runs on every page to check against conditions only has to check if the “pageType” is “home page” once, rather than twice. And DTM still loads the two scripts as separate non-sequential javascript. This doesn’t look like a major change, but when viewed across a whole implementation, where there may be dozens of rules, it can make a big difference in how many rules and conditions DTM must check on every page.
In the DTM interface, this would look like this:
If I want to know which rules contain my “Wuggly” codes, I can use the “Tag Name” filter in the rules list, which will show me all rules that have a third party tag that includes “Wuggly”:
This is filtering based on the Tag Name specified when you add the tag code:
Using this approach, where your rules are based on their scope (or condition) and contain all of the logic (Analytics, Target, third party) that applies to that scope, not only lightens your library weight, it also keeps your DTM implementation organized and scalable. It may, however, require a change of mindset for how DTM is organized: if someone else needed to deploy a tag on Product Detail Pages, they wouldn't need to create a new rule; they could see that a "Product Detail Page" rule already exists with the scope they need, and they need only add their third party tag to it.
There is one potential downside to consider, though: the approval and publication flow is based on rules in their entirety. You can't say "publish my changes to the analytics portion of this rule, but not to the third party tag section." Still, if you are aware of this as you plan the publication of new features, this potential drawback rarely outweighs the advantages.