An entirely too honest/frank look at lessons learned from independent consulting

I’ve been so happy the last 6 weeks or so, working at 33 Sticks. Now that the dust has settled, I want to document some of the lessons I learned from my mere 5 months of independent consulting– it’s been a very enlightening experience, even though I’ve been a salaried-but-hourly-billable analytics implementation consultant for 10 of the last 12 years.

Here are a few other things being an independent consultant (taking primarily short-term work) has taught me:

  • The medical benefits system in the US is absolutely awful if you’re self-employed. Our only option was the exchange markets (aka Obamacare)- only two insurance providers were available, and one would require ditching all of our current healthcare providers. It ended up being about $1600/month to insure my relatively healthy family of 4, and that was a fairly mediocre plan. This doesn’t include the extra money/hassle we had to go through for our medications.
  • Setting up an LLC was really easy. Setting up a business bank account so I could sign checks made out to my LLC took a bit more effort, but it wasn’t bad (though it did catch me off guard- I should have known that that would be needed).
  • I haven’t had to do self-employment taxes yet, but I chose a weird year to start, what with Trump changing the tax plan (the IRS took a while to get their “how much income to withhold” calculator working for the new tax plan).
  • There are a lot of free/cheap tools for single-person companies out there- I use Asana (free), Everhour (free), Zoho Invoicing (it’s free to a point, and I preferred it to Everhour’s invoicing options), and Google Business ($5/month- warning, the Google Business sync utility for Google Drive is even worse than the one for personal Google Drive accounts).
  • There is such a gap of implementation expertise in the digital analytics industry, there is no shortage of work out there to do. Work wasn’t hard to find. Finding the RIGHT work is the harder part- so many organizations are so short-handed that they look to outside consultants to fill some of those gaps, but it can be really hard to provide value in some of those situations. If you’re after a paycheck, there is plenty of that to go around… but if you’re fulfillment-needy like me, and need to know you are making a difference and providing value, you have to be a bit pickier about what work you take on.
  • Becoming truly profitable, and having the type of projects I want to be doing, would take time. Companies looking for a full digital transformation are far less likely to come to a single independent consultant (though for many companies, a digital transformation is needed before the data could be really valuable).
  • Financially, it pays to remove the middle man, but not as much as you’d think. I was working in a wide range of rates, depending on the project, but $160-$225/hr seems a fairly normal rate for folks with my background. Of course, that doesn’t count what I spent on administration, branding/marketing, paperwork, etc… not to mention the lack of benefits (medical/dental/vision, time off, 401k). In the end, to keep the same income I was used to, I needed to do about two thirds the billable work (and had to deal with the unpredictable flow of money).
  • Sales/procurement processes are always slow. It doesn’t matter if the client is eager to start next Monday, and you’re ready to start next Monday- the client’s org will slow things down by at least 2 weeks- and even that is only if the client has put a fire under them.
  • Payment comes slowly. If a contract is “net-45” (ie, the client has 45 days to pay after being invoiced), it really means “the check will hopefully be in the mail by the 45th day”. I didn’t get my first check until 2.5 months into working, and I will continue getting checks until probably June for work wrapping up early in April.
  • Planning vacations or major future expenses is really hard. My husband and I are not exactly financial risk-takers, and since we never knew what checks would come in when, or when projects would start/end, it was very difficult for us to commit to a vacation a few months out.
  • Scoping projects and forecasting is hard. I’ve never been good at scoping. At Adobe, I’d be asked for my opinion on how long something would take to do, and I’d say “uh, 20 hours?” Then I’d see the final estimate that went to the client was for 120 hours, not 20. Turns out, though, I really am fairly efficient (heaven knows I’ve done this long enough), and rarely came even close to the amount of time the client was prepared to pay me for. The hourly billing model penalizes efficient work, and isn’t tied to value provided. On paper, I had 50+ hours of work I could do each week. In practice, unless I fudged the numbers (I didn’t), I was able to fill all of my clients’ needs and then some, in maybe half of the expected time.
    • This confirms something the whole industry should keep in mind: you might pay more for senior/principal consultants, but odds are they will get through work much faster than their less-experienced peers, so you may save on hours billed. That is, if you are stuck on that pesky hourly billing model (see my thoughts on that model on the 33 sticks blog).
  • Even with that added flexibility, there are still not enough hours in the day.  I didn’t get even close to having the time to do all the productive things I wanted to do.
  • When you switch from salaried to hourly, and you are in charge of your own schedule, you start to see opportunity cost everywhere. “I slept 7 hours last night?! If I had been working instead, I could have made $1470!!!” I’ve always had to do weekly timesheets and keep up my utilization rates (I have two awards on my shelf for being one of the most utilized consultants in Adobe consulting), but it had never made such a tangible difference to my family’s well-being.
  • I’d miss being around my peers. I had my clients, sure, but if I accomplished something I was proud of, I’d rush downstairs to tell the only people around to hear about it: my family. They’ve long since learned to not ask what I was excited about, they just say “yay, you did the thing!” It’s not quite the same as sharing with a peer. Thank heavens for twitter and #measure slack, so I can still bounce ideas off of peers and interact with humans who aren’t related to me.
  • Independence is hard for the anxiety-ridden (and I do have plenty of anxiety). I like to think I am a fairly independent/low-maintenance employee, and hope my previous employers would agree. But having absolutely no oversight was different. There was no one to tell me that the thing I was focusing on was indeed the best use of my time; no one to tell me that my work was stellar, satisfactory, or still needed improvement; no one to justify things to if it didn’t go the way I hoped.

Conclusion

I had two main reasons for going independent:

  1. Freeing myself up so that if/when 33 Sticks was ready, I’d be available. Seriously, we’ve been trying to make this happen for years, and the timing was just never right. I wanted to make sure I didn’t miss a chance again.
  2. Having the flexibility/time to work on product ideas.

On both fronts, I’d say: mission accomplished! Clearly, the 33 Sticks thing is happening. And while I haven’t released anything new on the product side since December, I was able to learn more server-side skills, so I could prototype a few new product ideas. So progress has been made, even if I don’t have anything to publicly show for it.

But mostly, it was a very eye-opening experience: it’s nice to know now that it is an option for me, but that it probably won’t ever be my ideal working scenario. I’m very glad I had this short window of a new experience.

Why I’m so excited about joining 33 Sticks

(Cross posted from the 33 sticks blog)

33 Sticks formed shortly after I had to part ways with Hila and Jason about 5 years ago. Since the beginning, I’ve followed their story and cheered them on, excited about what they were accomplishing and hoping I’d get to be a part of it someday. Unfortunately the timing never lined up- they’d finally be ready to add someone like me to the team, but I’d have just started a commitment elsewhere. In October when I went independent, a large part of that decision was that I wanted to be free and ready when 33 Sticks was ready, and I’m thrilled that things finally lined up just in time for Summit.

I’ve either been employed or done contract work for a dozen different agencies since 2006. After this much time as an implementation consultant, I’ll admit I’m experiencing some burnout. Some folks have already heard me swear off consulting- it can just be so hard to really provide value. So why is 33 Sticks an exception?

The people

I’ve worked with Hila, Jason, and Jon before and know how awesome they are, and I can already see that I have much to learn from Jim Driscoll. There isn’t a member of the team that isn’t a principal-level consultant with years of experience with all different levels of projects. There is no offshore team we’ve committed to delegate work to. Every single member of this team is the type of person to go over the top to see clients succeed, yet they all have a rich life outside of work too. It’s a rare and incredible thing, to join a team where you already know and respect each of your coworkers, and genuinely enjoy spending time with each of them.

The model

33 Sticks contracts aren’t based on hours billed, but rather on value provided. This is a difficult model to get to work- you have to really trust the consultants and the clients to manage scope and be on the same page. It probably wouldn’t work at larger agencies, nor would it work for staff augmentation projects. It only works if the consultants can really build a relationship with the client, and have the experience to focus and drive engagements towards whatever will provide the most value.

In recent years, as I’ve been more exclusively on large enterprise projects, I’ve seen the consulting industry struggle more and more with keeping a cohesive vision for a project. You may have a dozen consultants spread between optimization, implementation, analysis, project management… then on the client-side, you may also have over a dozen folks on different parts of the project. It can feel like there are a lot of people in the car but no one is driving. With the 33 Sticks model, we can work with clients to get that project-wide focus and build a cohesive data ecosystem. You can’t truly consult and provide strategic guidance if you are just taking orders from whoever signed the contract. 33 Sticks can partner with our clients and use the experiences we have from touching hundreds of projects over the years to offer unique guidance, helping focus the engagement on what will provide the most long-term value.

The goal

I feel like Jason and Hila’s goals for 33 Sticks wouldn’t work for everyone, but they align well with my own. We aren’t going to take over the world. The goal is not to sell a lot of contracts, grow a lot of staff, influence a lot of projects, and build up wealth. There is no “exit strategy”. Instead, the goal is to do things that provide value, and do those things well. Which isn’t to say there isn’t a financial goal, but even that is much more focused on quality of life: having flexibility not only in how we spend our non-work hours, but also in being able to do the type of work we want. For me, that means continuing to work remotely from Atlanta with a flexible schedule, and also having time to keep working on the product ideas and documentation I’m passionate about.

I so appreciate all the well-wishes and congratulations- hopefully after reading this, folks can fully understand why I am so excited about this opportunity. And while we’re not looking to take over the world, I do hope I can help 33 Sticks spread their value even further.

 

New industry tool: Adobe Configuration Export

An industry friend and former coworker, Gene Jones, made me aware of an awesome new tool he’s created- a tool that exports your Report Suite info into an Excel file. It can compare the variable settings of multiple report suites in one tab, then create a tab with a deeper look at all the settings for each report suite.

This is similar to the very handy Observepoint SDR Builder– I’ll freely admit I’m likely to use both in the future. Both (free) tools show you your settings and allow for report suite comparison. The Observepoint SDR Builder uses a Google Sheets extension and has a little more setup involved (partially because if you’re an Observepoint customer you can expand its functionality), but it can allow you to manage your settings directly from the Google Sheet (communicating those changes back to the Adobe Admin Console).

But sometimes all you want is a simple export of current settings in a simple, local view, in which case the Adobe Configuration Export tool is very straightforward and simple to use.

And, it’s open source– the community can add to it and make use of it for whatever situations they dream up. I’m excited to see what features get added in the future (I see a “Grade Your Config” option that intrigues me). Nice work, Gene!

Adobe Launch’s Rule Ordering is a big deal for Single Page Apps

In November, I posted about some of the ways that Launch will make it easier to implement on Single Page Apps (SPAs), but I hinted that a few things were still lacking.
In mid-January, the Launch team announced a feature I’ve been eagerly awaiting: the ability to order your rules. With this ability, we finally have a clean and easy way to implement Adobe Analytics on a Single Page App.

The historical problem

As I mentioned in my previous post, one of the key problems we’ve seen in the past was that Event-Based Rules (EBRs) and Direct Call Rules (DCRs) can’t “stack”. Let me explain what I mean by that.

Not a single page app? Rule Stacking rocks!

For example, let’s say I have an internal search “null results” page, where the beacon that fires should include:

  • Global Variables, like “s.server should always be set to document.location.hostname”
  • Variables specific to the e-commerce/product side of my site with a common data layer structure (pageName should always be set to %Content ID: Page Name%)
  • Search Results variables (like my props/eVars for Search Term and Number of Search Results, and a custom event for Internal Searches)
  • Search Results when a filter is applied (like a listVar for Filter Applied and an event for User applied Search Filter)
  • Null Results Variables (another event for Null Internal Searches, and a bit of logic to rewrite my Number of Search Results variable from “0” to “zero”- because searching in the reports for “0” would show me 10, 20, 30… whereas “zero” could easily show me my null results)

With a non-SPA, when a new page loads, DTM would run through all of my page load rules and see which had conditions that were matched by the current page. It would then set the variables from those rules, then AFTER all the rules were checked and variables were set, DTM would send the beacon, happily combining variables from potentially many rules.

All of those stacked rules’ variables would combine into a single beacon.
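Something like this hypothetical sketch (the specific variable slots and values are made up, just to illustrate the stacking):

    // Hypothetical combined variable set, gathered from all five rules
    // before DTM fires one s.t() beacon at the end of the rule pass:
    s.server = document.location.hostname;    // global rule
    s.pageName = "search: null results";      // e-commerce/data layer rule
    s.prop10 = s.eVar10 = "red wug";          // Search Term (search results rule)
    s.prop11 = "zero";                        // Number of Search Results, rewritten from "0"
    s.list1 = "color: red";                   // Filter Applied (filter rule)
    s.events = "event10,event11,event12";     // search, filter applied, null results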

If you have a Page Load Rule-based implementation, this allows you to define your rules by their scope, and you can really use the power of DTM to apply code/logic only when needed.

Single Page App? Not so much.

However, if I were in a Single Page App, I’d either be using a Direct Call Rule or an Event-Based Rule to determine a new page was viewed and fire a beacon. DCRs and EBRs have a 1:1 ratio with beacons fired- if a rule’s conditions were met, it would fire a beacon. So I would need to figure out a way to have my global variables fire on every beacon, and set site-section-specific and user-action-specific variables, for every user action tracked. This would either mean having a lot of DCRs and EBRs for all the possible combos of variables (meaning a lot of repeat effort in setting rules, and repeated code weight in the DTM library), or a single massive rule with a lot of custom code to figure out which user-action-specific variables to set:
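A hedged sketch of what that single massive rule’s custom code tends to look like (the data layer structure and variable slots here are hypothetical):

    // One Direct Call Rule handling every tracked action, with custom
    // code branching to pick the user-action-specific variables:
    s.server = document.location.hostname;
    s.pageName = digitalData.page.pageName;
    if (digitalData.page.type === "search results") {
      s.prop10 = s.eVar10 = digitalData.search.term;
      s.events = "event10";
      if (digitalData.search.filter) {
        s.list1 = digitalData.search.filter;  // Filter Applied
        s.events += ",event11";
      }
      if (digitalData.search.resultCount === 0) {
        s.prop11 = "zero";                    // Null Results rewrite
        s.events += ",event12";
      }
    }
    // ...plus another branch like this for every other tracked action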

Or leaving the Adobe Analytics tool interface altogether, and doing odd things in Third Party Tag blocks. I’ve seen it done, and it makes sad pandas sad.

The Answer: Launch

Launch does two important things that solve this:

  1. Rules that set Adobe Analytics variables do not necessarily have to fire a beacon. I can tell my rule to set variables, fire a beacon, clear variables, or any combination of those options.
  2. I can now order my rules to be sure that the rule that fires my beacon goes AFTER all the rules that set my variables.

So I set up my 5 rules, same as before. All of my rules have differing conditions, but each uses the same two triggers: one set to fire on Page Bottom (if the user just navigated to my site or refreshed a page, loading a fresh new DOM) and one on Data Element Changed (for Single Page App “virtual page views”, looking at when the Page Name is updated in the Data Layer).

When I create those triggers, I can assign a number for that trigger’s Order:


One rule, my global rule, has those triggers set to fire at “50” (the default number, right in the middle of the recommended 1-100 range). The rule with this trigger not only sets my global variables, it also fires my beacon, then clears my variables:

I give most of my other rules an Order number of “25” (again, fairly arbitrary, but it gives me flexibility to have other rules fire before or after as needed). One rule, my “Internal Search: Null Results” rule, is set to the Order number “30”, because I want it to come AFTER the “Internal Search: Search Results” rule, since it needs to overwrite my Number of Search Results variable from “0” (which it got from the data layer) to “zero”.

This gives me a chance to set all the variables in my custom rules, and have my beacon and clearVars fire at the end in my global rule (the rule’s Order number is in the black circles):

I of course will need to be very careful about using my Order numbers consistently- I’m already thinking about how to fit this into existing documentation, like my SDR.

Conclusion

This doesn’t just impact Single Page Apps- even a traditional Page Load Rule implementation sometimes needs to make sure one rule fires after another, perhaps to overwrite the variables of another, or to check a variable another rule set (maybe I’m hard coding s.channel in one rule, and based on that value, want to fire another rule). I can even think of cases where this would be helpful for third party tags. This is a really powerful feature that should give a lot more control and flexibility to your tag management implementation.

Let me know if you think of new advantages, use cases, or potential “gotchas” for this feature!

Followup Post: Allocation in Analysis Workspace

I recently posted about some of the implications and use cases of using Linear Allocation (on eVars) and participation (props/events) and in my research, I thought I had encountered a bug in Analysis Workspace. After all, for this flow:

            Page A      Page B      Page C      Page D      Newsletter Signup event (s.tl)
    prop1   "Page A"    "Page B"    "Page C"    "Page D"    ""
    eVar1   "Page A"    "Page B"    "Page C"    "Page D"    ""
    events  "event1"    "event1"    "event1"    "event1"    "event2"

I saw this in Reports and Analytics (so far, so good):

But then in Analysis Workspace for that prop, trying to recreate the same report, I saw this, where the props were only getting credited for events that happened on their beacon (none got credit for the newsletter signup):

Basically, I lost that participation magic.

Similarly, for the eVar, I saw this report in Reports and Analytics:

And in Workspace, it behaved exactly like a “Most Recent” eVar:

Again, it lost that linear magic.

Calculated Metrics to the Rescue

With the help of some industry friends (thanks, Jim Kultgen at Kohler and Seth Burke at Adobe) I learned that this is not a bug, necessarily- it’s the future! Analysis Workspace has a different way of getting at that data (one that doesn’t require changing the backend settings for your variables and metrics).
In Analysis Workspace reports, allocation can be decided by a Calculated Metric, instead of the variable’s settings. In the calculated metric builder, you can specify an allocation by clicking the gear box next to the metric in the Calculated Metric Definition:

A Note On “Default” Allocation here

On further testing, in Analysis Workspace, it seems that eVars with the back-end settings of either “Most Recent” or “Linear” allocation are treated the same: both will act like “Most Recent” when you bring in a normal metric, and both will act like “Linear” when you bring in a calculated metric where you specified Linear Allocation. One might say that if you use Analysis Workspace exclusively, you no longer need to ever set an eVar to “Linear”.

“Default” does still seem to defer to the eVar settings when it comes to Most Recent or Original (just not Linear). So in an eVar report where the eVar’s backend setting is “Original”, whether I used my “normal” Newsletter Signups event (column 2), or my Calculated one with Linear Allocation (column 3), credit went to the first page:

So, the Calculated Metric allocation did NOT overwrite my eVar setting of “Original”.

So how do I replicate my Linear eVar report?

To get back that Linear Allocation magic, I would create a new Calculated Metric, but I would specify “Linear Allocation” for it in the Calculated Metric Definitions. Then I can see that linear metric applied to that eVar (the original metric in blue, the new calculated one with linear allocation in purple):

Note that it’s 40-20-20-20, rather than 25-25-25-25. I’ll admit, this isn’t what I expected and makes me want to do more testing. I suspect that it’s looking at my FIVE beacons (four page views, one success event) and giving that Page D double credit- one for its page view beacon, and one for the success event beacon (even though it wasn’t set on that beacon, it WAS still persisting). So it isn’t perfectly replicating my R&A version of the report, but it is helping me spread credit out between my four values.

And my participation prop?

Similarly, with the prop, when I bring in my new “Linear Allocation” calculated metrics I just set up for my eVar (in blue), I now see it behave like participation for my Newsletter Signup metric, unlike the original non-calculated metrics (in green):

…but those Page View numbers look just like linear allocation in an eVar would (2.08, 1.08, .58, .25), not the nice clean numbers (4, 3, 2, 1) I’d get for a prop with participation. At this point, I still don’t have my Content Velocity prop report, but I’m getting closer.

So how do I get my Content Velocity?

Analysis Workspace has a “Page Velocity” Calculated metric built into its Content Consumption template, which reports the same data as my Content Velocity (participation-enabled) prop did in Reports & Analytics.

 If I want to recreate this calculated metric for myself, I use the formula “Page Views (with Visit Participation)/Page Views”:

Though my friend Jim Kultgen suggested a metric he prefers:

((Page Views 'Visit Participation')/(Visits))-1

This shows you how a page contributed to later page views, discounting how it contributed to itself (because obviously it did that much- every page does), and looking at visits to that page (so repeat content views don’t count for much).
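As a quick worked sketch of that math (my own illustration, using the hypothetical single-visit flow of four pages from this post, where the prop’s participation numbers were 4, 3, 2, 1 and each page got one visit):

    // Jim's metric: (Page Views with Visit Participation / Visits) - 1
    // In a single visit of Page 1 → 2 → 3 → 4, participation credits each
    // page with its own view plus every later view in the visit.
    var pages = [
      { name: "Page 1", participationPageViews: 4, visits: 1 },
      { name: "Page 2", participationPageViews: 3, visits: 1 },
      { name: "Page 3", participationPageViews: 2, visits: 1 },
      { name: "Page 4", participationPageViews: 1, visits: 1 }
    ];
    pages.forEach(function (p) {
      var velocity = p.participationPageViews / p.visits - 1;
      console.log(p.name + ": " + velocity + " downstream page views per visit");
    });
    // Page 1: 3, Page 2: 2, Page 3: 1, Page 4: 0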

These two calculated metrics would show in an AW report like this:

Conclusion

If I use Analysis Workspace exclusively, I may no longer need to enable participation on metrics or props- I could just build a Calculated Metric off of existing metrics, and change their allocation accordingly, and that would work the same with either my eVars or my Props.

Knowing a few of these quirks and implications, I can see a future with simpler variable maps (no more need for multiple eVars receiving the same values but with different allocation settings) and the ability to change allocation without tweaking the original data set (my “Newsletter Signups” metric retains its original reporting abilities, AND I can build as many Calculated Metrics off of it as I want). I’m excited to see how Adobe will keep building more power/flexibility into Workspace!

Participation and Linear Allocation in Adobe Analytics- behavior I expected, and some I did not

Despite clearly remembering learning about it my first week on the job at Omniture in 2006, I realized recently that I did not have a lot of confidence in what participation and linear allocation would do in certain situations in Adobe Analytics. So I put a good amount of effort into testing it to confirm my theories, and I figured I’d pass along what I discovered.

First, the Basics: eVar Allocation

You may already know this part, so feel free to skip this section if you do. Allocation is a setting for Conversion Variables (eVars) in Adobe Analytics, with three options: Most Recent (Last), Original Value (First), and Linear.

Let’s take a simple example to show how this affects things. Let’s say a user visits my site with this flow:

    Page A              Page B              Page C              Page D              Form Submit- Signup
    s.eVar5="Page A"    s.eVar5="Page B"    s.eVar5="Page C"    s.eVar5="Page D"    s.events="event1"

Most Recent (Last)

Most eVars have the “defaultiest” allocation of “Most Recent (Last)”, meaning in an event1 report broken down by eVar5, “Page D” would get full credit for the event1 that happened, since it was the last value we saw before event1. So far, pretty simple.

Original Value (First)

But maybe I want to know which LANDING page contributed the most to my event1s (there are other ways of doing this, but for the sake of my example, I’m gonna stick with using allocation). In that case, I might have the allocation for that eVar set to “Original Value (First)” so then “Page A” would get full credit for this event1, since it was the first value we saw for that variable. If my eVar is set to expire on visit, then it’s still nice and straightforward. If it’s set to never expire, then the first value we ever saw for that user will always get credit for any of that user’s metrics. If it’s set to expire in two weeks, then we’ll see the first value that was passed within the last two weeks.

This setting is frequently used for Marketing Campaigns (it’s not uncommon to see s.campaign be used for “Most Recent Campaign in the last 30 days” and then another eVar capture the exact same values, but be set to “Original Campaign in the last 30 days”).

Linear Allocation

If I’m feeling a bit more egalitarian, and want to know ALL the values for an eVar that contributed to success events, I would choose linear allocation. In this scenario, all four values would split the credit for the one event, so they’d each get one fourth of the metric:

(Though it may not actually look like this in the report- by default it would round down to 0. But I’ll talk about decimals later on).

So, that’s allocation.

Then what is participation?

Participation is a setting you can apply to a prop, so that if you bring a Participation-enabled metric into the prop’s report, you can see which values were set at some point before that event took place. Repeat: to see participation you must have a prop that is set to “Display Participation Metrics”:

And the metric you want to see needs to have participation enabled (without this, in the older Reports and Analytics interface, that event won’t be able to be brought into the prop report):

Unlike linear allocation for an eVar, participation for a prop means all the values for that prop get full credit for an event that happened. So, given this flow:

    Page A              Page B              Page C              Page D              Form Submit- Signup
    s.prop1="Page A"    s.prop1="Page B"    s.prop1="Page C"    s.prop1="Page D"    s.events="event1"

You would see a report like this, because each value participated in the single instance of that event:

New Learnings (for me): Content Velocity

One thing these settings can be used for is measuring content velocity: that is, how much a certain value contributed to more content views later on. For instance, if I have a content site, and I want to know how much one piece of content tends to lead to the reading of MORE content, I might use a participation-metric-enabled prop with a participation-enabled Page View custom event, or I might use an eVar with linear allocation against a Page View custom event (whether or not the event has participation enabled doesn’t matter for the eVar). For my test, I did both:

                Page A      Page B      Page C      Page D
    s.prop1     "Page 1"    "Page 2"    "Page 3"    "Page 4"
    s.eVar1     "Page 1"    "Page 2"    "Page 3"    "Page 4"
    s.events    "event1"    "event1"    "event1"    "event1"

The prop

The prop version of this report would show me that Page 1 contributed to 4 views (its own, and 3 more “downstream”). Whereas Page 2 contributed to 3 (its own, and two more downstream), etc…

The eVar

Alternatively, the eVar would show me something pretty odd:

Those weird numbers don’t make sense on this small scale (how could 0 get 6.3%?), because it is rounding, and not showing me decimals. If I want to see the decimals, I can create a really simple calculated metric that brings in my custom Page View event (event1) and tells it to show decimals:

The report then makes a little more sense and shows us where the rounded numbers came from (and how Page 4, with “0” Page Views, got 6.3% of the credit), but may still seem mysterious:

Those are some odd numbers, right? Here’s the math:

    Value     Credit    Why?                 Explanation
    Page 1    2.08      1+0.5+0.33+0.25      It got full credit for its own view, then half the credit (shared with Page 2) for the event on Page 2, then a third of the credit (shared with Pages 2 and 3) on Page 3, then a quarter on Page 4.
    Page 2    1.08      0.5+0.33+0.25        It only got half credit for the event that took place on its page (shared with Page 1), then a third of the credit (shared with Pages 1 and 3) on Page 3, then a quarter on Page 4.
    Page 3    0.58      0.33+0.25            It only gets a third of the credit for the event that took place on its page, and a quarter of the credit for the fourth page.
    Page 4    0.25      0.25                 The event that happened on this page is shared with all four pages.

Crazy, right? I’m not going to tell you which one an analyst should prefer, but as always, you should ask the question: “What will you DO with this information?”
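If it helps to see that math generalized, here’s a small runnable sketch (my own illustration of the arithmetic, not Adobe’s actual processing) of how linear allocation spreads one event per page view across the values seen so far:

    // Each event's credit is split evenly among every eVar value seen so
    // far in the visit; credit accumulates per value.
    function linearCredit(pages) {
      var credit = {};
      pages.forEach(function (p) { credit[p] = 0; });
      for (var i = 0; i < pages.length; i++) {
        var seenSoFar = pages.slice(0, i + 1); // values sharing this event's credit
        seenSoFar.forEach(function (p) {
          credit[p] += 1 / seenSoFar.length;
        });
      }
      return credit;
    }
    console.log(linearCredit(["Page 1", "Page 2", "Page 3", "Page 4"]));
    // → { "Page 1": 2.083, "Page 2": 1.083, "Page 3": 0.583, "Page 4": 0.25 }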

What happens when multiple values appear in the same flow?

Let’s say the user does something like this, where they hit one value a couple page views in a row (Page B in this example), or they hit a value 2 separate times (Page A in this example):

              Page A      Page B      Page B (again)    Page C      Page D      Page A (again)    Conversion event
    prop1     "Page A"    "Page B"    "Page B"          "Page C"    "Page D"    "Page A"
    eVar1     "Page A"    "Page B"    "Page B"          "Page C"    "Page D"    "Page A"
    events    "event1"    "event1"    "event1"          "event1"    "event1"    "event1"          "event2"

For the prop, it’s pretty straightforward. This will look like 6 event1s, where Page A gets credit for all 6, and Page D gets credit for just 2 (its own, and the Page A view that came afterwards):

For the eVar, it gets a little more complicated (I added in a calculated metric so you can see the decimals). Page A (accessed twice at separate times) got double credit for the conversion (which I might have predicted), but Page B (accessed twice in a row) ALSO gets double credit for the conversion (which I didn’t predict, probably because I’m too used to thinking in terms of the CVP plugin):

Caveats

A couple things to be aware of:

  • Settings for participation and allocation don’t apply retroactively- you can’t apply them to existing data. If you want to start using them, you need to change your settings, and you’ll see them applied to future data. However, changing these settings can mess with how your existing data gets reported, so be careful.
  • Analysis Workspace does some unexpected behavior for both participation and allocation. I’ll have a followup post on that.

Conclusion

Both participation and linear allocation aren’t used often, but they can uniquely solve some reporting requirements and can provide a lot of insight, if you know how to read the data. I hope my experimentation and results here help make it clearer how you might be able to use and interpret data from these settings. Let me know if you have other use cases for using these settings, and how it has worked out for you!


Quick poll: What should I tackle next?

Each new year, I tend to dive into some new side project (this is how the Beacon Parser and the PocketSDR came about). I have quite a few things I want to tackle right now, and one main thing I’m slowly plugging away at, but in the meantime, I’m wondering what to prioritize. So, a poll:

Any other ideas (or desired improvements to existing tools)? Let me know in the comments.

New tool: PocketSDR Mobile App for Adobe Analytics

I mentioned in my previous post that one of the reasons I’m going “independent” is to have more time to work on products and pet projects. One of those ongoing projects of mine has been a mobile app you can use to keep your Adobe Analytics SDR/Variable Map easily accessible on your phone.
   
Get it on Google Play
The first release of this actually went out in August 2016, but I didn’t let anyone know because I felt it was still too “beta” and I wanted to clean it up before making it more publicly known. A year and a half later (and various framework upgrades that required redoing the whole thing… each time learning and applying those learnings), I got it to a point where I don’t feel ashamed to share it, though of course there is always room for more improvement.

To use the app, you will need your Adobe Analytics Web Services API key. And since no one wants to enter their 32-digit API Key into their mobile device, this newest version of the app allows you to enter your API key on the web (meaning you can copy-and-paste on your desktop machine) then get a link that will allow your mobile device to open the app with those credentials already entered. I highly recommend using that API Shortcut tool before diving into the app.

I created the app for a few reasons:

  • As with the beacon parser, the main reason was because I wished a tool like this existed and figured if I was going to make it for myself, I may as well let other people use it too.
  • I have a web development background, and wanted to learn more about developing for mobile apps. I’ll admit on this front, I cheated a bit: rather than learning multiple native app languages (like Swift or C++), I used the Ionic Framework, which let me program the app using Angular (which fits my JS/HTML background better than native languages), then use Cordova to turn it into a mobile app. Still, I did get to learn a lot about mobile development in general, analytics options within mobile, and the release cycle for mobile development (I can’t just save a file and FTP it to my server), not to mention Angular 1/2 and TypeScript.
  • I needed a situation in which I could test out analytics tracking in various Single-Page App scenarios (yay Angular).
  • Because at heart, I am a developer. While I enjoy helping clients sort out governance and documentation issues, sometimes I just want to retreat to my basement and do some coding, for that straightforward validation of seeing your code work in real-time. It’s good to keep those skills alive.

All in all, I learned so much. And I’ve already used the app quite a bit to keep track of my clients’ Variable Maps (“what did we use event40 for? Oh yeah!”). However, I’m not a professional mobile developer, and this project was done entirely in my evenings/PTO as a learning exercise that happened to create a useable product. So please be forgiving of anything in the app that is less-than-ideal; there is a reason I’m not charging for the app at all. I will continue working on improvements, particularly with an eye for performance (I’m looking at you, Android…) and I’m already aware of potential aesthetic issues on iPhone X. Please let me know of any other feedback or suggestions- I’d love to hear what you think!

P.S. Before anyone asks why I didn’t use the Adobe API OAuth2 Authentication, I thought about it, and may yet move to using that, but have concerns about how that works for marketing cloud AND non-marketing cloud logins. That, and the API documentation is… lacking… so I decided for now to stick with what I know. If anyone has experience with OAuth2 authentication and wants to discuss, please reach out. 

P.P.S. A special thanks to my beta testers!

Exciting News: Self-Employment!

Bilbo Going on an Adventure

I’ve finally made the leap, and am now consulting as my own independent entity. I’ve worked at many wonderful consulting agencies over the years and happily still have a good relationship with each of them, but for some time now I’ve wanted to move more and more into building products. Unfortunately, thus far no one has wanted to hire me as a junior Product Manager or Developer for anywhere near the same salary I’ve been getting as a Principal Consultant, so in order to pursue my product dreams, I needed to reduce my commitment to consulting and find a more flexible arrangement.
I will continue consulting, because I want to stay informed and have current practical experience with implementation (plus I’ve got to keep paying my bills). But without an agency as a “go between”, I can work fewer billable hours and have more time to work on products and projects. Don’t get me wrong: agencies as a “go between” provide a lot of value, and I won’t pretend not to be daunted by marketing, sales contracts, benefits and taxes. But thus far, it’s been a great growing experience for me. And I’m lucky to have a very supportive network as I branch into the unknown.

So now I have a chance to work on some other projects, like fixing up/modernizing the beacon parser, and other projects I’ll post about shortly (stay tuned!). I’ll also continue working with Cognetik on a few exciting initiatives they have going on, so you will still see me on their blog occasionally. And there are still some other agencies I’m eager to work with, if it doesn’t interfere with my product dreams, so this may or may not last long.

I already have a good amount of independent work to keep me busy for the next few months, so this post isn’t me necessarily soliciting for more work (unless you happen to have the PERFECT project for me, in which case, let’s talk!) But if you want to talk about products and opportunities, please reach out! I’m now at jenn@digitalDataTactics.com.

Adobe DTM Launch: Improvements for Single Page Apps

For those following the new release of Adobe’s DTM, known as Launch, I have a new blog post up at the Cognetik blog, cross-posted below:

It’s finally here! Adobe released the newest version of DTM, known as “Launch”. There are already some great resources out there going over some of the new features (presumably including plenty of “Launchey Launch” puns), which include:

  • Extensions/Integrations
  • Better Environment Controls/Publishing Flow
  • New, Streamlined Interface

But there is one thing I’ve been far more excited about than any other: Single Page App compatibility. I’ve mentioned on my personal blog some of the problems the old DTM has had with Single Page Apps:

  • Page Load Rules (PLRs) can’t fire later than DOMready
  • Event-Based Rules (EBRs) and Direct Call Rules (DCRs) can’t “stack” (unlike PLRs, there’s a 1:1 ratio between rules and analytics beacons, so you can’t have one rule set your global variables, another set section-specific variables, and another set page-specific variables, and have them all wrap into a single beacon)
  • It can be difficult to fire s.clearVars at the right place (and impossible without some interesting workarounds)
  • Firing a “Virtual Page Load” EBR at the right time (after your data layer has updated, for instance) can be tricky.

So much of this is solved with the release of DTM Launch.

  • You can have one rule that fires EITHER on domReady OR on a trigger (Event-based or Direct Call).
  • You have a way to fire clearVars.
  • You can add conditions/exclusions to Direct Call rules

There are other changes coming that will improve things even further, but for now, these changes are pretty significant for Single Page apps.

Multiple Triggers on a Single Rule

If I have a Single Page App, I’ll want to track when the user first views a page, the same as for a “traditional” non-App page. So if I’m setting EBRs or DCRs for my “Virtual Page Views”, I’d need to account for this “Traditional Page Load” page view for the user’s initial entry to my app.
In the past, I’d either have a Page Load Rule do this (if I could be sure my Event-Based Rules wouldn’t also run when the page first loaded), or I could do all my tracking with Event-Based Rules, and I’d have to suppress that initial page view beacon. I may end up with an identical set of rules- one for when my page truly loads, and one for “Virtual Page Views”.

Now, I can do this in a single rule:

Where my “Core- Page Bottom” event fires when the page first loads (like an old Page Load Rule):

…and another “Page Name Changed” event that fires when my “page name” Data Element changes (like an old Event-Based Rule):
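For context, here’s a hypothetical sketch of the Single Page App side of this (the data layer structure is made up): the routing code updates the value my “page name” Data Element reads, and the Data Element Changed event fires the rule.

    // Hypothetical SPA route-change handler:
    window.digitalData = window.digitalData || { page: {} };
    function onRouteChange(newPageName) {
      // The Launch Data Element points at digitalData.page.pageName;
      // when this value changes, the "Data Element Changed" event fires.
      digitalData.page.pageName = newPageName;
    }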

No more need to keep separate sets of rules for Page Load Rules and Virtual page views!

Clearing variables with s.clearVars()

Anyone who has worked on a Single Page App, or on any Adobe Analytics implementation with multiple s.t() beacons on a single DOM, has felt the pain of variables carrying over from beacon to beacon. Once an “s” variable (like s.prop1) exists on the page, it will hang around and be picked up by any subsequent page view beacon on that page.

                             Page 1     Page 2            Page 3           Page 4
    s.pageName               Landing    Search Results    PDP > Red Wug    Product List
    s.events                 (blank)    event14           prodView         prodView
    s.eVar1 (search term)    (blank)    Red Wug           Red Wug          Red Wug

My pageName variable is fine because I’m overwriting it on each page, but my Search Term eVar value is hanging around past my Search Results page! And on pages where I don’t write a new events string, the most recent event hangs around!

In the old DTM, I had a few options for solving this. I could do some bizarre things to daisy-chain DCRs to make sure I could get the right order of setting variables, firing beacons, then clearing variables. Or, I could use a hack in the “Custom Code” conditions of an Event-Based Rule, to ensure s.clearVars would run before I started setting beacons. Or, more recently, I could use s.registerPostTrackCallback to run the s.clearVars function after the s_code detected an s.t function was called.
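For reference, that last workaround looked something like this (a sketch; registerPostTrackCallback hands the callback the beacon’s request URL):

    // AppMeasurement runs registered callbacks after each s.t()/s.tl()
    // request is formed, which made it a convenient place to clear
    // variables so they don't leak into the next beacon:
    s.registerPostTrackCallback(function (requestUrl) {
      s.clearVars(); // wipes props, eVars, and events
    });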

Now, it’s as simple as specifying that my rule should set my variables, then send the beacon, then clear my variables:

Directly in the rule- no extra rules, no custom code, no workarounds!

Rule Conditions on ALL Rule Types (including Direct Call)

If I were using Direct Call Rules for my SPA, in the past, I’d have to account for Direct Call Rules having a 1:1 relationship with their trigger. If I had some logic I needed to fire on Search Results pages, and other logic to fire on Purchase Confirmation pages, I could have my developers fire a different “_satellite.track” function on every page:
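For example (the rule names here are hypothetical), each page template would get its own call, each mapped 1:1 to its own Direct Call Rule:

    _satellite.track("search results");         // fires the Search Results DCR
    _satellite.track("purchase confirmation");  // fires the Purchase Confirmation DCR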

Then in each of those rules, I’d maintain all my global variables as well as any logic specific to that beacon. This could be difficult to maintain and introduces extra work and many possible points of failure for developers.

Or, I could have my developers fire a global _satellite.track(“page view”) on every page, and in that one rule, maintain a ridiculous amount of custom code like this:
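Something like this hedged sketch (the data layer, variable slots, and page types are all hypothetical):

    // One global "page view" DCR carrying every page type's logic as custom code:
    s.server = document.location.hostname;
    s.pageName = digitalData.page.pageName;
    switch (digitalData.page.type) {
      case "search results":
        s.prop10 = s.eVar10 = digitalData.search.term;
        s.events = "event10";
        break;
      case "purchase confirmation":
        s.products = digitalData.cart.productString;
        s.purchaseID = digitalData.cart.orderId;
        s.events = "purchase";
        break;
      // ...and a case for every other tracked page type, all living
      // outside the DTM interface as raw custom code
    }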

This would take me entirely out of the DTM interface, and make some very code-heavy rules (not ideal for end-user page performance, or for DTM user experience — here’s hoping your developer leaves nice script comments!)

Now, I can still have my developers fire a single _satellite.track(“page view”) (or similar) everywhere, and set up a myriad of rules in Launch that all use that same “page view” trigger, each with its own condition. Directly in the interface, I can set different variables when that trigger fires on my Search Results page versus when it fires on my Purchase Confirmation page:

I’d love to say all my SPA woes were solved with this release, but to show I haven’t entirely drunk the Kool-aid, I will admit some of my most wished-for features (and extensions) aren’t in this first release of Launch. I know they’re coming, though- future releases of Launch will add additional features that will make implementing on a Single Page App even simpler, but for now, it still feels like Christmas came early this year.