All posts by Jenn Kunz

Participation and Linear Allocation in Adobe Analytics- behavior I expected, and some I did not

Despite clearly remembering learning about it my first week on the job at Omniture in 2006, I realized recently that I did not have a lot of confidence in what participation and linear allocation would do in certain situations in Adobe Analytics. So I put a good amount of effort into testing it to confirm my theories, and I figured I’d pass along what I discovered.

First, the Basics: eVar Allocation

You may already know this part, so feel free to skip this section if you do. Allocation is a setting for Conversion Variables (eVars) in Adobe Analytics, with three options:

  • Most Recent (Last)
  • Original Value (First)
  • Linear

Let's take a simple example to show how this affects things. Let's say a user visits my site with this flow:

  • Page A: s.eVar5="Page A"
  • Page B: s.eVar5="Page B"
  • Page C: s.eVar5="Page C"
  • Page D: s.eVar5="Page D"
  • Form Submit (Signup): s.events="event1"

Most Recent (Last)

Most eVars have the “defaultiest” allocation of “Most Recent (Last)”, meaning in an event1 report broken down by eVar5, “Page D” would get full credit for the event1 that happened, since it was the last value we saw before event1. So far, pretty simple.

Original Value (First)

But maybe I want to know which LANDING page contributed the most to my event1s (there are other ways of doing this, but for the sake of my example, I’m gonna stick with using allocation). In that case, I might have the allocation for that eVar set to “Original Value (First)” so then “Page A” would get full credit for this event1, since it was the first value we saw for that variable. If my eVar is set to expire on visit, then it’s still nice and straightforward. If it’s set to never expire, then the first value we ever saw for that user will always get credit for any of that user’s metrics. If it’s set to expire in two weeks, then we’ll see the first value that was passed within the last two weeks.

This setting is frequently used for Marketing Campaigns (it’s not uncommon to see s.campaign be used for “Most Recent Campaign in the last 30 days” and then another eVar capture the exact same values, but be set to “Original Campaign in the last 30 days”).
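As a mental model for how "Original Value (First)" interacts with expiration, here's a small sketch. This is entirely my own illustration (the `firstTouch` function and its inputs are hypothetical, not an Adobe API): the earliest value still inside the expiration window gets the credit.

```javascript
// Hypothetical model of "Original Value (First)" allocation with an
// expiration window. Purely illustrative; not how Adobe stores the data.
function firstTouch(hits, expirationMs, eventTime) {
  // hits: [{ value: "Page A", time: msTimestamp }, ...] in chronological order
  var windowStart = eventTime - expirationMs;
  for (var i = 0; i < hits.length; i++) {
    if (hits[i].time >= windowStart && hits[i].time <= eventTime) {
      return hits[i].value; // earliest value still inside the window wins
    }
  }
  return "none"; // every value expired before the event
}

var twoWeeks = 14 * 24 * 60 * 60 * 1000;
var hits = [
  { value: "Page A", time: 0 },
  { value: "Page B", time: twoWeeks + 1000 }
];

// An event three weeks in: "Page A" fell out of the two-week window,
// so "Page B" is now the first value seen and gets the credit.
var winner = firstTouch(hits, twoWeeks, 21 * 24 * 60 * 60 * 1000);
```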

Linear Allocation

If I’m feeling a bit more egalitarian, and want to know ALL the values for an eVar that contributed to success events, I would choose linear allocation. In this scenario, all four values would split the credit for the one event, so they’d each get one fourth of the metric:

(Though it may not actually look like this in the report- by default it would round down to 0. But I’ll talk about decimals later on).
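As a rough sketch of that split (my own model, not Adobe's actual processing), linear allocation divides one event's credit equally across every eVar value seen beforehand:

```javascript
// Hypothetical model of linear allocation: one event's credit is split
// equally across every eVar value instance seen before it in the visit.
function linearCredit(valuesSeen) {
  var share = 1 / valuesSeen.length;
  var credit = {};
  valuesSeen.forEach(function (v) {
    credit[v] = (credit[v] || 0) + share;
  });
  return credit;
}

// One event1 after Pages A through D: each page gets a quarter of the metric.
var split = linearCredit(["Page A", "Page B", "Page C", "Page D"]);
// split["Page A"] through split["Page D"] are each 0.25
```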

So, that’s allocation.

Then what is participation?

Participation is a setting you can apply to a prop, so that if you bring a Participation-enabled metric into the prop's report, you can see which values were set at some point before that event took place. To repeat: to see participation, you must have a prop that is set to "Display Participation Metrics":

And the metric you want to see needs to have participation enabled (without this, in the older Reports & Analytics interface, that event can't be brought into the prop report at all):

Unlike linear allocation for an eVar, participation for a prop means all the values for that prop get full credit for an event that happened. So, given this flow:

  • Page A: s.prop1="Page A"
  • Page B: s.prop1="Page B"
  • Page C: s.prop1="Page C"
  • Page D: s.prop1="Page D"
  • Form Submit (Signup): s.events="event1"

You would see a report like this, because each value participated in the single instance of that event:
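That full-credit behavior can be sketched in a few lines (again, my own illustrative model, not Adobe's actual processing): every distinct prop value seen before the event gets one whole unit of credit, with no splitting.

```javascript
// Hypothetical model of participation: every distinct prop value seen
// before an event gets FULL credit for it; nothing is split.
function participationCredit(valuesBeforeEvent) {
  var credit = {};
  valuesBeforeEvent
    .filter(function (v, i, arr) { return arr.indexOf(v) === i; }) // dedupe
    .forEach(function (v) { credit[v] = (credit[v] || 0) + 1; });
  return credit;
}

// One event1 after Pages A through D: each page participates fully.
var participation = participationCredit(["Page A", "Page B", "Page C", "Page D"]);
// participation["Page A"] through participation["Page D"] are each 1
```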

New Learnings (for me): Content Velocity

One thing these settings can be used for is measuring content velocity: that is, how much a certain value contributed to more content views later on. For instance, if I have a content site, and I want to know how much one piece of content tends to lead to the reading of MORE content, I might use a participation-metric-enabled prop with a participation-enabled Page View custom event, or I might use an eVar with linear allocation against a Page View custom event (whether or not the event has participation enabled doesn’t matter for the eVar). For my test, I did both:

  • Page A: s.prop1="Page 1", s.eVar1="Page 1", s.events="event1"
  • Page B: s.prop1="Page 2", s.eVar1="Page 2", s.events="event1"
  • Page C: s.prop1="Page 3", s.eVar1="Page 3", s.events="event1"
  • Page D: s.prop1="Page 4", s.eVar1="Page 4", s.events="event1"

The prop

The prop version of this report would show me that Page 1 contributed to 4 views (its own, and 3 more “downstream”). Whereas Page 2 contributed to 3 (its own, and two more downstream), etc…

The eVar

Alternatively, the eVar would show me something pretty odd:

Those weird numbers don't make sense on this small scale (how could "0" get 6.3%?), because the report is rounding and not showing decimals. If I want to see the decimals, I can create a really simple calculated metric that brings in my custom Page View event (event1) and tells it to show decimals:

The report then makes a little more sense and shows us where the rounded numbers came from (and how Page 4, with "0" Page Views, got 6.3% of the credit), but may still seem mysterious:

Those are some odd numbers, right? Here’s the math:

| Value | Credit | Why? | Explanation |
|---|---|---|---|
| Page 1 | 2.08 | 1 + 0.5 + 0.33 + 0.25 | It got full credit for its own view, then half the credit (shared with Page 2) for the event on Page 2, then a third of the credit (shared with Pages 2 and 3) on Page 3, and a quarter on Page 4. |
| Page 2 | 1.08 | 0.5 + 0.33 + 0.25 | It only got half credit for the event on its own page (shared with Page 1), then a third of the credit (shared with Pages 1 and 3) on Page 3, and a quarter on Page 4. |
| Page 3 | 0.58 | 0.33 + 0.25 | It only gets a third of the credit for the event on its own page, and a quarter of the credit for the fourth page. |
| Page 4 | 0.25 | 0.25 | The event that happened on this page is shared with all four pages. |
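That math can be reproduced with a short script (my own sketch of the logic, not anything Adobe ships): event1 fires on every page view, and each event splits its credit across all eVar values seen up to that point.

```javascript
// Hypothetical content-velocity model: event1 fires on every page view,
// and linear allocation splits each event across all values seen so far.
function velocityCredits(pages) {
  var seen = [];
  var credit = {};
  pages.forEach(function (page) {
    seen.push(page);
    var share = 1 / seen.length; // this page's event is split N ways
    seen.forEach(function (v) {
      credit[v] = (credit[v] || 0) + share;
    });
  });
  return credit;
}

var c = velocityCredits(["Page 1", "Page 2", "Page 3", "Page 4"]);
// Page 1: 1 + 1/2 + 1/3 + 1/4, which rounds to 2.08
// Page 4: 1/4 = 0.25
```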

Crazy, right? I'm not going to tell you which approach an analyst should prefer, but as always, you should ask the question: "What will you DO with this information?"

What happens when multiple values appear in the same flow?

Let’s say the user does something like this, where they hit one value a couple page views in a row (Page B in this example), or they hit a value 2 separate times (Page A in this example):

  • Page A: s.prop1="Page A", s.eVar1="Page A", s.events="event1"
  • Page B: s.prop1="Page B", s.eVar1="Page B", s.events="event1"
  • Page B: s.prop1="Page B", s.eVar1="Page B", s.events="event1"
  • Page C: s.prop1="Page C", s.eVar1="Page C", s.events="event1"
  • Page D: s.prop1="Page D", s.eVar1="Page D", s.events="event1"
  • Page A: s.prop1="Page A", s.eVar1="Page A", s.events="event1"
  • Conversion event

For the prop, it's pretty straightforward. This will look like 6 event1s, where Page A gets credit for all 6, and Page D gets credit for just 2 (its own, and the Page A that came afterwards):

For the eVar, it gets a little more complicated (I added in a calculated metric so you can see the decimals). Page A (accessed twice at separate times) got double credit for the conversion (which I might have predicted), but Page B (accessed twice in a row) ALSO gets double credit for the conversion (which I didn’t predict, probably because I’m too used to thinking in terms of the CVP plugin):
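Working the math out by hand (again with my own back-of-the-envelope model, not Adobe's code): six eVar instances were set before the conversion, so each instance collects a sixth of the credit, and values set twice collect two shares, regardless of whether the repeats were consecutive.

```javascript
// Hypothetical model: one conversion event, split linearly across every
// INSTANCE of an eVar value set beforehand; repeats collect extra shares.
var flow = ["Page A", "Page B", "Page B", "Page C", "Page D", "Page A"];
var share = 1 / flow.length; // six instances, so 1/6 each
var credit = {};
flow.forEach(function (v) {
  credit[v] = (credit[v] || 0) + share;
});
// Page A and Page B each get 2/6 (about 0.33);
// Page C and Page D each get 1/6 (about 0.17)
```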


A couple things to be aware of:

  • Settings for participation and allocation don't apply retroactively: you can't apply them to existing data. If you want to start using them, change your settings and they will apply to future data. However, changing an eVar's allocation can affect how previously collected data is reported, so be careful.
  • There appears to be a bug in Analysis Workspace for both. I’m going to follow up with Adobe, but I basically can’t get either Prop participation or Linear Allocation to work in Workspace. I’ll come back and update this post if I get more info from Adobe about this.


Neither participation nor linear allocation is used often, but they can uniquely solve some reporting requirements and provide a lot of insight, if you know how to read the data. I hope my experimentation and results here help make it clearer how you might be able to use and interpret data from these settings. Let me know if you have other use cases for these settings, and how they have worked out for you!

Quick poll: What should I tackle next?

Each new year, I tend to dive into some new side project (this is how the Beacon Parser and the PocketSDR came about). I have quite a few things I want to tackle right now, and one main thing I’m slowly plugging away at, but in the meantime, I’m wondering what to prioritize. So, a poll:

Any other ideas (or desired improvements to existing tools)? Let me know in the comments.

New tool: PocketSDR Mobile App for Adobe Analytics

I mentioned in my previous post that one of the reasons I'm going "independent" is to have more time to work on products and pet projects. One of those ongoing projects of mine has been a mobile app you can use to keep your Adobe Analytics SDR/Variable Map easily accessible on your phone.
Get it on Google Play
The first release of this actually went out in August 2016, but I didn’t let anyone know because I felt it was still too “beta” and I wanted to clean it up before making it more publicly known. A year and a half later (and various framework upgrades that required redoing the whole thing… each time learning and applying those learnings), I got it to a point where I don’t feel ashamed to share it, though of course there is always room for more improvement.

To use the app, you will need your Adobe Analytics Web Services API key. And since no one wants to enter their 32-digit API Key into their mobile device, this newest version of the app allows you to enter your API key on the web (meaning you can copy-and-paste on your desktop machine) then get a link that will allow your mobile device to open the app with those credentials already entered. I highly recommend using that API Shortcut tool before diving into the app.

I created the app for a few reasons:

  • As with the beacon parser, the main reason was because I wished a tool like this existed and figured if I was going to make it for myself, I may as well let other people use it too.
  • I have a web development background, and wanted to learn more about developing mobile apps. I'll admit on this front, I cheated a bit: rather than learning multiple native app languages (like Swift or C++), I used the Ionic Framework, which let me program the app using Angular (which fits with my JS/HTML background better than native languages), then use Cordova to turn it into a mobile app. Still, I did get to learn a lot about mobile development in general, analytics options within mobile, and the release cycle for mobile development (I can't just save a file and FTP it to my server), not to mention Angular 1/2 and TypeScript.
  • I needed a situation in which I could test out analytics tracking in various Single-Page App scenarios (yay Angular).
  • Because at heart, I am a developer. While I enjoy helping clients sort out governance and documentation issues, sometimes I just want to retreat to my basement and do some coding, for that straightforward validation of seeing your code work in real time. It's good to keep those skills alive.

All in all, I learned so much. And I've already used the app quite a bit to keep track of my clients' Variable Maps ("what did we use event40 for? Oh yeah!") However, I'm not a professional mobile developer, and this project was done entirely in my evenings/PTO as a learning exercise that happened to create a useable product. So please be forgiving of anything in the app that is less-than-ideal; there is a reason I'm not charging for the app at all. I will continue working on improvements, particularly with an eye for performance (I'm looking at you, Android….) and I'm already aware of potential aesthetic issues on iPhone X. Please let me know of any other feedback or suggestions- I'd love to hear what you think!

P.S. Before anyone asks why I didn’t use the Adobe API OAuth2 Authentication, I thought about it, and may yet move to using that, but have concerns about how that works for marketing cloud AND non-marketing cloud logins. That, and the API documentation is… lacking… so I decided for now to stick with what I know. If anyone has experience with OAuth2 authentication and wants to discuss, please reach out. 

P.P.S. A special thanks to my beta testers!

Exciting News: Self-Employment!

Bilbo Going on an Adventure

I’ve finally made the leap, and am now consulting as my own independent entity. I’ve worked at many wonderful consulting agencies over the years and happily still have a good relationship with each of them, but for some time now I’ve wanted to move more and more into building products. Unfortunately, thus far no one has wanted to hire me as a junior Product Manager or Developer for anywhere near the same salary I’ve been getting as a Principal Consultant, so in order to pursue my product dreams, I needed to reduce my commitment to consulting and find a more flexible arrangement.
I will continue consulting, because I want to stay informed and have current practical experience with implementation (plus I've got to keep paying my bills). But without an agency as a "go between", I can work fewer billable hours and have more time to work on products and projects. Don't get me wrong: agencies as a "go between" provide a lot of value, and I won't pretend not to be daunted by marketing, sales contracts, benefits, and taxes. But thus far, it's been a great growing experience for me. And I'm lucky to have a very supportive network as I branch into the unknown.

So now I have a chance to work on some other projects, like fixing up/modernizing the beacon parser, and other projects I’ll post about shortly (stay tuned!) I’ll also continue working with Cognetik for a few exciting initiatives they have going on, so you will see me on their blog still occasionally. And there are some other agencies I’m eager to work with still if it doesn’t interfere with my product dreams, so this may or may not last long.

I already have a good amount of independent work to keep me busy for the next few months, so this post isn’t me necessarily soliciting for more work (unless you happen to have the PERFECT project for me, in which case, let’s talk!) But if you want to talk about products and opportunities, please reach out! I’m now at

Coming to Adobe Summit 2017

I'll be at Adobe Summit in Las Vegas next week, Monday March 19th through Friday March 24th. If you happen to be out that way, shoot me a comment here and hopefully we can meet up! I'll be attending a lot of the DTM sessions and will be ready to help folks understand what the DTM updates mean for them.
I'll also be presenting at Un-Summit at UNLV on Monday, speaking about Marketing Innovation and Cognetik's new Tableau data connector. Come check it out!

Cross-post: Intro to Building a Strong Analytics Practice

I’m blogging again! I’m doing a series over on the Cognetik blog on how to build a Successful Analytics Practice.  Here is a cross-post of the intro:

Who’s driving this thing?

Our industry is full of intelligent, motivated people. Yet it feels like so often, for the amount of effort and thought we put into our Analytics solutions, we never quite get the full value that we know is there. As an analytics/data engineer, most of the work that comes across my desk is very tactical- deep-dive audits, technical specifications, configuring variables, setting up dashboards… these are all very valid and worthy activities, yet I still often hear frustration from my clients such as:

  • We have a hard time getting others within our company to see the value and potential in our analytics.
  • I want to use new tool features but upgrading will take too much effort.
  • Many teams in my organization interact with data, but they all work in silos.
  • It takes too long to get access to requested data.
  • My organization’s report usage is scattered and doesn’t align with global KPIs.
  • I need to apply my existing solution to a new site but I can’t find documentation on my current solution.
  • We’re not collecting the data I actually need for analysis.
  • We have so many new initiatives and works-in-progress, I don’t know which data I can trust.
  • Training users and developers on our implementation or toolset uses too many resources.
  • We collect a lot of data but I rarely get to see a report.

So what’s missing? For all the effort we put into designing solutions, implementing code, and configuring dashboards, what is stopping us from providing more value with our data?

I think often the problem is a lack of central leadership providing a foundation to work on. Now, I don’t mean to say our industry is lacking in leaders… far from it. But the problem is those leaders often aren’t given the resources or the permission to transform their org. So we end up with “lots of people in the car, but no one in the driver’s seat”. Because of how fast our industry has grown, Analytics practices have popped up in every organization, often organically and without much long-term planning. This leads to all those intelligent and motivated people working in silos, without a united focus or the resources to apply a global vision.

What’s the answer?

Each of these problems could be solved with the right Governance Model in place. That means consciously establishing roles, ownership, accountability, processes, and communication. Analytics should be a proactive part of your organization, not an afterthought. I'll be posting a three-part series on how to get the ball rolling on establishing a healthy Analytics Practice:

Career changes and exciting opportunities


In an unexpected surprise for many (including myself), a few weeks ago I left Adobe and joined the team at Cognetik as a Principal Analytics Engineer. I'll continue doing much of the same kind of work I've been doing: Analytics (now including Google Analytics again), Tag Management (not just DTM), data layers, governance, coding, building occasional tools on the side… the full gamut.

I love the team (and the products) at Adobe, and it wasn’t easy leaving them, but I’m content that in such a small industry, I’m bound to work with many of them again. And I’m very excited about this new opportunity: Cognetik is doing some incredible work for some exciting clients, and I’m thrilled to be in a position to offer a lot of value to my clients.

I’m also excited to be a part of the team building the Cognetik Product, a data visualization and insights tool that is unlike any other I’ve seen or worked with. I’ll be keeping up the blog, of course, and my various DTM enablement materials. I’m also on the #measure slack channel.

For those who I know because of my role at Adobe, it was a great experience, and I hope to stay in touch! Here’s to working in a fantastic and ever-evolving industry, full of smart, passionate people finding new ways to answer old questions.

Deploying Google Marketing Tags Asynchronously through DTM

I had posted previously about how to deploy marketing tags asynchronously through DTM, but Google Remarketing tags add an extra consideration: Google actually has a separate script to use if you want to deploy asynchronously. The idea is, you could reference the overall async script at the top of your page, then at any point later on, you would fire google_trackConversion to send your pixel. However, this is done slightly differently when you need your reference to that async script file to happen in the same code block as your pixel… you have to make sure the script has had a chance to load before you fire that trackConversion method, or you’ll get an error that “google_trackConversion is undefined”.

Below is an example of how I’ve done that in DTM.

//first, get the async google script, and make sure it has loaded
var dtmGOOGLE = document.createElement('SCRIPT');
var done = false;

//assumption: this src is Google's standard async conversion script URL
dtmGOOGLE.setAttribute('src', '//www.googleadservices.com/pagead/conversion_async.js');

dtmGOOGLE.onload = dtmGOOGLE.onreadystatechange = function () {
  if (!done && (!this.readyState || this.readyState === "loaded" || this.readyState === "complete")) {
    done = true;

    // Handle memory leak in IE
    dtmGOOGLE.onload = dtmGOOGLE.onreadystatechange = null;

    //then, create that pixel, now that google_trackConversion is defined
    window.google_trackConversion({
      google_conversion_id : 12345789,
      google_custom_params : window.google_tag_params,
      google_remarketing_only : true
    });
  }
};

//finally, append the script tag to the page to kick off the async load
document.getElementsByTagName('head')[0].appendChild(dtmGOOGLE);