Exciting News: Self-Employment!

Bilbo Going on an Adventure

I’ve finally made the leap, and am now consulting as my own independent entity. I’ve worked at many wonderful consulting agencies over the years and happily still have a good relationship with each of them, but for some time now I’ve wanted to move more and more into building products. Unfortunately, thus far no one has wanted to hire me as a junior Product Manager or Developer for anywhere near the same salary I’ve been getting as a Principal Consultant, so in order to pursue my product dreams, I needed to reduce my commitment to consulting and find a more flexible arrangement.
I will continue consulting, because I want to stay informed and keep current, practical implementation experience (plus I’ve got to keep paying my bills). But without an agency as a “go between”, I can work fewer billable hours and have more time to work on products and projects. Don’t get me wrong: agencies provide a lot of value as that “go between”: I won’t pretend not to be daunted by marketing, sales contracts, benefits, and taxes. But thus far, it’s been a great growing experience for me. And I’m lucky to have a very supportive network as I branch into the unknown.

So now I have a chance to work on some other projects, like fixing up/modernizing the beacon parser, plus others I’ll post about shortly (stay tuned!). I’ll also continue working with Cognetik on a few exciting initiatives they have going on, so you’ll still see me on their blog occasionally. And there are still other agencies I’m eager to work with, if it doesn’t interfere with my product dreams, so this arrangement may or may not last long.

I already have a good amount of independent work to keep me busy for the next few months, so this post isn’t necessarily me soliciting more work (unless you happen to have the PERFECT project for me, in which case, let’s talk!). But if you want to talk about products and opportunities, please reach out! I’m now at jenn@digitalDataTactics.com.

Adobe DTM Launch: Improvements for Single Page Apps

For those following the new release of Adobe’s DTM, known as Launch, I have a new blog post up at the Cognetik blog, cross-posted below:

It’s finally here! Adobe released the newest version of DTM, known as “Launch”. There are already some great resources out there going over some of the new features (presumably with plenty of “Launchey Launch” puns), including:

  • Extensions/Integrations
  • Better Environment Controls/Publishing Flow
  • New, Streamlined Interface

But there is one thing I’ve been far more excited about than any other: Single Page App compatibility. I’ve mentioned on my personal blog some of the problems the old DTM has had with Single Page Apps:

  • Page Load Rules (PLRs) can’t fire later than DOMready
  • Event-Based Rules (EBRs) and Direct Call Rules (DCRs) can’t “stack” (unlike PLRs, there’s a 1:1 ratio between rules and analytics beacons, so you can’t have one rule set your global variables, another set section-specific variables, and another set page-specific variables, and have them all wrap into a single beacon)
  • It can be difficult to fire s.clearVars at the right place (and impossible without some interesting workarounds)
  • Firing a “Virtual Page Load” EBR at the right time (after your data layer has updated, for instance) can be tricky.

So much of this is solved with the release of DTM Launch.

  • You can have one rule that fires EITHER on domReady OR on a trigger (Event-based or Direct Call).
  • You have a way to fire clearVars.
  • You can add conditions/exclusions to Direct Call rules

There are other changes coming that will improve things even further, but for now, these changes are pretty significant for Single Page apps.

Multiple Triggers on a Single Rule

If I have a Single Page App, I’ll want to track when the user first views a page, the same as for a “traditional” non-App page. So if I’m setting EBRs or DCRs for my “Virtual Page Views”, I’d need to account for this “Traditional Page Load” page view for the user’s initial entry to my app.
In the past, I’d either have a Page Load Rule do this (if I could be sure my Event-Based Rules wouldn’t also run when the page first loaded), or I’d do all my tracking with Event-Based Rules and suppress that initial page view beacon. Either way, I might end up with nearly identical sets of rules: one for when my page truly loads, and one for “Virtual Page Views”.

Now, I can do this in a single rule:

Where my “Core- Page Bottom” event fires when the page first loads (like an old Page Load Rule):

…and another “Page Name Changed” event that fires when my “page name” Data Element changes (like an old Event-Based Rule):

No more need to keep separate sets of rules for Page Load Rules and Virtual page views!

Clearing variables with s.clearVars()

Anyone who has worked on a Single Page App, or on any Adobe Analytics implementation with multiple s.t() beacons on a single DOM, has felt the pain of variables carrying over from beacon to beacon. Once an “s” variable (like s.prop1) exists on the page, it will hang around and be picked up by any subsequent page view beacon on that page.
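As a rough sketch of that carryover (with a tiny stub standing in for the real AppMeasurement “s” object, since this behavior comes from the library itself):

```javascript
// Minimal stand-in for the AppMeasurement "s" object, just to illustrate
// carryover; the real s.t() builds a beacon from every currently-set variable.
var s = {
  t: function () {
    return { pageName: this.pageName, eVar1: this.eVar1 };
  }
};

s.pageName = "Search Results";
s.eVar1 = "Red Wug";            // search term, set on the results page
var beacon1 = s.t();            // includes pageName AND eVar1

s.pageName = "PDP > Red Wug";   // overwritten for the next "page"
var beacon2 = s.t();            // eVar1 is STILL "Red Wug": it carried over
```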

                        Page 1           Page 2           Page 3         Page 4
s.pageName              Search Results   PDP > Red Wug    Product List
s.eVar1 (search term)   Red Wug          Red Wug          Red Wug
My pageName variable is fine because I’m overwriting it on each page, but my Search Term eVar value is hanging around past my Search Results page! And on pages where I don’t write a new events string, the most recent event hangs around!

In the old DTM, I had a few options for solving this. I could do some bizarre things to daisy-chain DCRs to make sure variables were set, beacons fired, and variables cleared in the right order. Or, I could use a hack in the “Custom Code” conditions of an Event-Based Rule to ensure s.clearVars would run before the rule started setting variables. Or, more recently, I could use s.registerPostTrackCallback to run s.clearVars after the s_code detected that s.t had been called.
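For the curious, that registerPostTrackCallback workaround might look something like this (a sketch only; a tiny stub stands in for the real AppMeasurement object, which provides registerPostTrackCallback, clearVars, and t() in version 1.6+):

```javascript
// Stub of the AppMeasurement "s" object so the sketch runs standalone;
// the real library invokes registered callbacks after each tracking request.
var s = {
  _callbacks: [],
  registerPostTrackCallback: function (fn) { this._callbacks.push(fn); },
  clearVars: function () { this.eVar1 = undefined; this.events = undefined; },
  t: function () {
    var self = this;
    this._callbacks.forEach(function (fn) { fn.call(self); });
  }
};

// Registered once (e.g. in the s_code custom code block): clear variables
// automatically after every beacon so nothing leaks into the next one.
s.registerPostTrackCallback(function () {
  s.clearVars();
});

s.eVar1 = "Red Wug";
s.t();   // beacon goes out with eVar1 set, then clearVars wipes it
```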

Now, it’s as simple as specifying that my rule should set my variables, then send the beacon, then clear my variables:

Directly in the rule- no extra rules, no custom code, no workarounds!

Rule Conditions on ALL Rule Types (including Direct Call)

If I were using Direct Call Rules for my SPA, in the past, I’d have to account for Direct Call Rules having a 1:1 relationship with their trigger. If I had some logic I needed to fire on Search Results pages, and other logic to fire on Purchase Confirmation pages, I could have my developers fire a different “_satellite.track” function on every page:
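That page-specific pattern might have looked something like this (the rule names are illustrative, and _satellite is stubbed here so the sketch stands alone):

```javascript
// Stub of DTM's _satellite global, recording which direct call rules fire:
var firedRules = [];
var _satellite = { track: function (ruleName) { firedRules.push(ruleName); } };

// On the Search Results template, developers would fire:
_satellite.track("search results");
// ...while on the Purchase Confirmation template, they'd fire:
_satellite.track("purchase confirmation");
// ...and so on: one trigger string (and one rule to maintain) per page type.
```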

Then in each of those rules, I’d maintain all my global variables as well as any logic specific to that beacon. This could be difficult to maintain and introduces extra work and many possible points of failure for developers.

Or, I could have my developers fire a global _satellite.track(“page view”) on every page, and in that one rule, maintain a ridiculous amount of custom code like this:
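A sketch of what that single-rule custom code tends to become (the data layer shape and variable choices here are illustrative, not from any particular implementation):

```javascript
// Inside the one "page view" direct call rule's custom code block:
// branch on the data layer to set page-specific variables.
var digitalData = {
  page: { pageInfo: { pageName: "Search Results" } },
  search: { term: "Red Wug" }
};
var s = {}; // the page's AppMeasurement object, stubbed for the sketch

// global variables, set for every beacon:
s.pageName = digitalData.page.pageInfo.pageName;

// page-specific logic, all living in one ever-growing code block:
switch (s.pageName) {
  case "Search Results":
    s.eVar1 = digitalData.search.term;
    break;
  case "Purchase Confirmation":
    s.events = "purchase";
    break;
  // ...plus a new case (and more custom code) for every tracked page type
}
```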

This would take me entirely out of the DTM interface, and make some very code-heavy rules (not ideal for end-user page performance, or for DTM user experience — here’s hoping your developer leaves nice script comments!)

Now, I can still have my developers fire a single _satellite.track(“page view”) (or similar) everywhere, and set up a myriad of rules in Launch that all use that same “page view” trigger, each with its own condition. That way, directly in the interface, I can set different variables when developers fire _satellite.track(“page view”) on my Search Results page versus on my Purchase Confirmation page:

I’d love to say all my SPA woes were solved with this release, but to show I haven’t entirely drunk the Kool-Aid, I will admit some of my most wished-for features (and extensions) aren’t in this first release of Launch. I know they’re coming: future releases of Launch will add features that make implementing on a Single Page App even simpler. But for now, it still feels like Christmas came early this year.

Coming to Adobe Summit 2017

I’ll be at Adobe Summit in Las Vegas next week, Monday March 19th through Friday March 24th. If you happen to be out that way, shoot me a comment here and hopefully we can meet up! I’ll be attending a lot of the DTM sessions and will be ready to help folks understand what the DTM updates mean for them.
I’ll also be presenting at Un-Summit at UNLV on Monday, speaking about Marketing Innovation and Cognetik’s new Tableau data connector. Come check it out!

Building a Strong Analytics Practice: #3- Putting Processes in Place

This post was originally posted on the Cognetik blog as part of a series on Building a Strong Analytics Practice.

An analytics practice has some unique challenges as far as project management goes. The team is accountable for delivering quality data, but many elements are out of its control:

  • once you deliver technical specifications, you have to “hurry up and wait” until developers have questions or are ready for validation
  • documentation and validation often happen on “moving targets”, where the site map or functionality may be in flux right up until they are released
  • release cycles rarely include a window of time with a stable site for the Analytics team to perform validation
  • projects rarely exist in a vacuum- they usually need to meld with a global solution which is itself often a work-in-progress
  • an analytics project may involve many deliverables, with many audiences:
    • Site Map/wireframes
    • Business Requirements for reporting
    • Solution design
    • Technical Specifications for IT
    • TMS Engineering
    • IT Implementation
    • IT QA/Validation
    • Report QA/Validation
    • Push to production
    • Distribute reports, provide insights, and take action

Without an official process or flow in place, it can be easy for things to slip through the cracks.

Because of these external variables, both Agile/SCRUM and Waterfall methodologies have some major drawbacks. You may need to write technical specifications before a design is complete; taking an iterative approach may be too resource-intensive; the analytics team may not be deeply embedded enough to collaborate with developers in real time. Some of these difficulties can be alleviated by improving communication within your org, as discussed in the second post in this series, but the most significant thing you can do to streamline your initiatives is to have an established process in place, so all the necessary tasks are completed in the right order by the right people. You won’t always be able to adhere to it, so plan on some flexibility; even then, it can be a good exercise to look at your process and document “this current project is an exception because [fill in reason], and we’re going to account for the deviation from our process by [fill in alternative]”.

Take this example user flow. In grey are examples of deliverables or decisions following a single sample reporting requirement (track a new form’s ID):

When visualized in this way, it may become easier to establish what kind of timelines you need when working with developers, clarify who is going to update the global documentation, or ensure that QA/validation procedures are followed. Cognetik can help set up and document these governance practices, but we’d also love to hear what you’ve found works well, or what struggles you’ve encountered.

Building a Strong Analytics Practice: #2 Connecting your Organization

This post was originally posted on the Cognetik blog as part of a series on Building a Strong Analytics Practice.

Once you have clear ownership within your core team, you need to get a global view of how data is used at your company. After you’ve accounted for all the different moving pieces, it becomes easier to:

  • Communicate clearly to the right people
  • Represent your team’s priorities to the rest of the org
  • Involve the right people in relevant decision-making processes
  • Have the right scope when planning new projects
  • Get more use out of your data by increasing its audience
  • Ensure that org-wide critical tasks and relationships have clear ownership within your Core Team
  • Keep leadership informed about the value your data provides, and the level of effort it takes to maintain it
  • Enlist resources to fill in gaps in your organization

To do this, I recommend mapping out the rest of your ecosystem. This will help break down those silos and give the individuals at your company who use the data the direction and support they need to get value out of the data.

Map out your ecosystem

This task can be surprisingly revealing, and may require some creative thinking. First, map out the obvious ones: marketers, analysts, developers and consultants. Don’t forget personalization, optimization, web development, privacy, project managers, data scientists, product owners and so on. Make sure to include executive sponsors and leadership.

List your company’s data tools

Next, list out the tools your company uses that touch data: your digital analytics tool of choice (Adobe or Google Analytics, for instance), Optimization (eg Adobe Target or Optimizely), Content Management (eg Demandware), Customer Relationship Management (eg Salesforce), marketing (eg Kochava, Floodlight, AdWords), User Experience (eg Clicktale), Voice of Customer (eg ForeSee, OpinionLab)… feeling overwhelmed yet? Don’t worry, you can use this as a sort of head start:

Define responsibilities for each point of contact

For each component, figure out a point of contact- for instance, for your CRM, who will your Core team be working with? Reach out to the appropriate parties in your org. At bare minimum, send them an email, highlighting how they fit into the “Big Picture” for data at your company. If you are just now establishing a governance model, it may be worthwhile to even schedule a quick touch-base with each key person/team in your ecosystem to:

  • Make sure they know how they fit into the bigger Data-driven scheme and seek out feedback for what they’d love to get out of analytics
  • Establish who on your team is their main point of contact. Encourage them to keep you in the loop for any changes they are aware of that might impact (or benefit from) analytics
  • Ensure they have access to tools and resources (like variable maps or documentation on processes) in some centrally-located repository (like Sharepoint, Confluence, or Google Drive if need be)
  • Establish reasonable expectations and scope on new initiatives. Help them understand that you have a queue for analytics initiatives, and a process to follow, so changing the solution or kicking off something new may take __ weeks/months.
  • Give them visibility into the type of work you currently have on your roadmap and how that fits into company priorities.
  • Ask if there are any areas in the company not currently using analytics that might get value out of being included in these conversations.

You may or may not want a regular meeting with them, but it’s important that the relationship always exists: that they see the active role Analytics plays in your org, that they feel involved, and that they have a clear line of communication with you.

“Mapping takes too much of our time!”

I understand that this may require an investment of resources to get up and running, and that it may exceed the current scope of analytics at many companies. But, similar to establishing a strong data core team, this upfront investment of time and resources will, at bare minimum, help a company get more value out of its data, and may actually reduce the amount of resources needed in the long run. Establishing communication and relationships will give focus to analytics initiatives, reduce rework, and include analytics in conversations sooner (getting rid of the pre-release scramble to get analytics added and validated).

Building a Strong Analytics Practice: #1- Your Core Team

This post was originally posted on the Cognetik blog as part of a series on Building a Strong Analytics Practice. 

Imagine this conversation:
Joe: “I just got the wireframes for the new site filtering tool. We need an analytics BRD and Tech Spec so developers can begin work.”
Anna: “But my team of developers is working on a priority project through November then goes into 3-month code freeze!”
Joe: “K, well, this new site feature is also a priority, and we need tracking on the new filtering tool. “
Mike: “Our reporting needs to focus on the KBOs that just came down from the top. How does the new filtering tool relate to conversions? What business decisions can we make if we know it’s being used?”
Susan: “Speaking of, we know that conversion tracking on mobile is broken- has been since September. Can we prioritize getting THAT fixed?”
Dan: “But, we’ve been grading our personalization efforts using that report! We need to get that fixed, like… yesterday!”

As painful as that conversation feels, can you imagine how much more painful it is in places where those conversations are NOT happening? I know that no one wants more meetings in their schedule, but a regular, FOCUSED check-in between key stakeholders can make all the difference. But who should attend such meetings?

We’ve seen a lot of value in establishing an Analytics Model- some folks may call it a Center of Excellence (CoE), for others it may still just be called the “analytics team”. Whatever you call it, the important thing is to really think out the roles, goals, processes, and responsibilities so that this team- and their data- can really drive the conversation, rather than “be driven”. I’m going to call this the Data Core Team.

To start, figure out who is going to be on your Data Core Team. I’ve seen this filled by a single person, and I’ve seen a team of 6 or more. Either way, with however many people you have, you’ll need to fill these roles:

Solution Owner

The Solution Owner is the “business requirements” gatekeeper. Their world is one of Key Business Objectives (KBOs) and Key Performance Indicators (KPIs). They:

  • gather reporting requirements
  • give focus to the solution by running reporting requests through a value-driven filter, prioritizing work that will provide truly actionable data to their organization
  • interface with executives, product managers, and analysts to make sure their data practice aligns with their company’s business objectives and roadmap
  • work with the Implementation Architect to design a solution that will suit their reporting needs
  • are in charge of keeping implementation documentation in a centrally-accessible place

Implementation Architect

The Implementation Architect owns the technical side of the solution. The Solution Owner says if something is worth tracking; the Implementation Architect figures out how to make it happen. They:

  • know the tools of their trade- for instance, for an Adobe Analytics implementation, they’d know when to use an eVar instead of a prop, or how to set the products string. For a Google Analytics implementation, they know when to use an event or a custom metric, and the best practices behind event categories, actions and labels
  • make decisions and enforce standards for variable maps and data architecture. Often, the decisions they make are a bit arbitrary- for most folks, it doesn’t REALLY matter if you identify your pageName in a JavaScript object named “digitalData.page.pageInfo.pageName” or in “universal_variable.navigation.page”, or if you use eVar41 or eVar42- the important thing is that someone is in a position to make that decision and keep it standard.
  • administer any Tag Management Solution their company uses, perhaps just controlling access and settings standards, or perhaps going so far as to be the editor and publisher of changes.
  • work with the Data Steward to document what is needed from site developers.
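For instance, both data layer conventions mentioned above can carry the same page name; picking which one “wins” is exactly the kind of (somewhat arbitrary) call the Implementation Architect makes and then enforces. A hypothetical sketch:

```javascript
// Two equally valid data layer shapes exposing the same value; the point is
// that ONE standard is chosen, not which one. (Names here are illustrative.)
var digitalData = { page: { pageInfo: { pageName: "home" } } };
var universal_variable = { navigation: { page: "home" } };

// Either way, the tag manager maps one agreed-upon path into, say, eVar41:
var pageName = digitalData.page.pageInfo.pageName;
```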

Data Steward

The Data Steward works with site developers to apply the analytics solution to the site. As the person charged with owning the data for your site(s), they have more of a technical understanding of analytics and how it fits into site development. They:

  • may not be a developer themselves, but they need to understand the processes developers use, the overall way the site works, and to be able to make informed decisions about data layers, tag management, JavaScript frameworks, SDKs…
  • work closely with the Implementation Architect to design and deploy a solution that works, given your site’s architecture and developer resources.
  • interface with site engineers and developers and represent their interests to the rest of the Core Data Team.
  • own Data Quality- they run the QA processes and help maintain implementation health with regular audits.

Report Administrator

The Report Administrator does a lot of the housekeeping needed to get data to the end users within their org. They:

  • interface with the report users, ensuring they have the access and training they need
  • distribute reports, create logins, and provide access to training
  • may serve a PM role within the Data Core team, keeping track of upcoming initiatives and timelines.


I’d say it’s rare for these responsibilities to actually be split among 4 people as I’ve described here. The important thing is that you have clear ownership of each responsibility, and that this Core Team works closely together as the single source of the “Big Picture”.

Each role may need to pull on other resources freely- for instance, if your company doesn’t have an Adobe Analytics implementation expert, then your Implementation Architect may hire outside consultants to help them. I’d say in general, this doesn’t mean outsourcing the OWNERSHIP of your implementation architecture- each company still needs an internal resource with motivation and access to resources to move the solution in the right direction. No matter how excellent your consultants are, they will never be able to own your implementation as well as someone internal could. A good consultant, however, will support that internal resource, providing industry knowledge and guidance, and investing in the future of your org’s analytics practice by training internal resources. Basically, outside consultants should be tasked with making internal owners look like Rock Stars.

If this seems like a bit much, or it’s hard to sell your organization on such an investment of resources, consider this: none of the bullet points above- nor the other, more specific tasks behind them- are negotiable. They are all things that inherently need to happen to have an analytics solution. What we frequently see is that when not enough resources are assigned to supporting these tasks, reporting can still happen, but the net amount of effort is higher (because there was no forward-thinking master plan and folks have to make it up as they go) and the value of the reporting is lower. I promise, you will get a return if you invest in the right resources and support for your Analytics Practice.

This, of course, doesn’t cover everything you’d see done in a healthy Analytics Practice. I’d love to hear from readers if I left off anything they view as critical, and what they’ve seen work well or not work well!

Cross-post: Intro to Building a Strong Analytics Practice

I’m blogging again! I’m doing a series over on the Cognetik blog on how to build a Successful Analytics Practice.  Here is a cross-post of the intro:

Who’s driving this thing?

Our industry is full of intelligent, motivated people. Yet it feels like so often, for the amount of effort and thought we put into our Analytics solutions, we never quite get the full value that we know is there. As an analytics/data engineer, most of the work that comes across my desk is very tactical: deep-dive audits, technical specifications, configuring variables, setting up dashboards… these are all very valid and worthy activities, yet I still often hear frustration from my clients such as:

  • We have a hard time getting others within our company to see the value and potential in our analytics.
  • I want to use new tool features but upgrading will take too much effort.
  • Many teams in my organization interact with data, but they all work in silos.
  • It takes too long to get access to requested data.
  • My organization’s report usage is scattered and doesn’t align with global KPIs.
  • I need to apply my existing solution to a new site but I can’t find documentation on my current solution.
  • We’re not collecting the data I actually need for analysis.
  • We have so many new initiatives and works-in-progress, I don’t know which data I can trust.
  • Training users and developers on our implementation or toolset uses too many resources.
  • We collect a lot of data but I rarely get to see a report.

So what’s missing? For all the effort we put into designing solutions, implementing code, and configuring dashboards, what is stopping us from providing more value with our data?

I think often the problem is a lack of central leadership providing a foundation to work on. Now, I don’t mean to say our industry is lacking in leaders… far from it. But the problem is those leaders often aren’t given the resources or the permission to transform their org. So we end up with “lots of people in the car, but no one in the driver’s seat”. Because of how fast our industry has grown, Analytics practices have popped up in every organization, often organically and without much long-term planning. This leads to all those intelligent and motivated people working in silos, without a united focus or the resources to apply a global vision.

What’s the answer?

Each of these problems could be solved with the right Governance Model in place. That means consciously establishing roles, ownership, accountability, processes, and communication. Analytics should be a proactive part of your organization, not an afterthought. I’ll be posting a three-part series on how to get the ball rolling on establishing a healthy Analytics Practice:

Career changes and exciting opportunities


In a surprise to many (including myself), a few weeks ago I left Adobe and joined the team at Cognetik as a Principal Analytics Engineer. I’ll continue doing much of the same kind of work I’ve been doing- Analytics (now including Google Analytics again), Tag Management (not just DTM), data layers, governance, coding, building occasional tools on the side… the full gamut.

I love the team (and the products) at Adobe, and it wasn’t easy leaving them, but I’m content that in such a small industry, I’m bound to work with many of them again. And I’m very excited about this new opportunity: Cognetik is doing some incredible work for some exciting clients, and I’m thrilled to be in a position to offer a lot of value to my clients.

I’m also excited to be a part of the team building the Cognetik Product, a data visualization and insights tool that is unlike any other I’ve seen or worked with. I’ll be keeping up the blog, of course, and my various DTM enablement materials. I’m also on the #measure slack channel.

For those who I know because of my role at Adobe, it was a great experience, and I hope to stay in touch! Here’s to working in a fantastic and ever-evolving industry, full of smart, passionate people finding new ways to answer old questions.

Deploying Google Marketing Tags Asynchronously through DTM

I had posted previously about how to deploy marketing tags asynchronously through DTM, but Google Remarketing tags add an extra consideration: Google actually has a separate script to use if you want to deploy asynchronously. The idea is, you could reference the overall async script at the top of your page, then at any point later on, you would fire google_trackConversion to send your pixel. However, this is done slightly differently when you need your reference to that async script file to happen in the same code block as your pixel… you have to make sure the script has had a chance to load before you fire that trackConversion method, or you’ll get an error that “google_trackConversion is undefined”.

Below is an example of how I’ve done that in DTM.

//first, get the async google script, and make sure it has loaded
var dtmGOOGLE = document.createElement('SCRIPT');
var done = false;

dtmGOOGLE.setAttribute('type', 'text/javascript');
dtmGOOGLE.setAttribute('src', '//www.googleadservices.com/pagead/conversion_async.js');
document.body.appendChild(dtmGOOGLE); //without this, the script never actually loads

dtmGOOGLE.onload = dtmGOOGLE.onreadystatechange = function () {
 if(!done && (!this.readyState || this.readyState === "loaded" || this.readyState === "complete")) {
  done = true;

  // Handle memory leak in IE
  dtmGOOGLE.onload = dtmGOOGLE.onreadystatechange = null;

  //then, create that pixel (google_trackConversion is defined by the async script)
  window.google_trackConversion({
   google_conversion_id : 12345789,
   google_custom_params : window.google_tag_params,
   google_remarketing_only : true
  });
 }
};