Differences between DTM and Launch to be Aware of


(cross-posted from the 33 Sticks blog)

There’s a lot of talk about how Adobe Launch is backwards-compatible- that, aside from a few _satellite methods that no longer work (and were probably never officially supported to begin with), anything you had in DTM should still work in Launch. But not EVERYTHING in DTM is still going to work in Launch, and some things in Launch may catch you off guard. Here are some things you should be aware of:

Far fewer things happen automatically. For instance, Adobe Analytics no longer automatically fires a beacon on page load (which I view as a wonderful thing, but you still need to be aware of it). You need to set it up in a rule, along with things like loading Target or firing mboxes.

 The following _satellite methods (among others, but these are the most common) are no longer supported (or, in some cases, may never have been supported but now simply won’t work).

  • _satellite.getQueryParam/_satellite.getQueryParamCaseInsensitive
  • _satellite.notify (this still technically works, but you should migrate to _satellite.logger)
  • _satellite.URI
  • _satellite.cleanText
  • _satellite.setCookie (which is now _satellite.cookie.set) and _satellite.readCookie (which is now _satellite.cookie.get)
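
If you have legacy custom code that still calls the retired cookie helpers, a small compatibility shim can tide you over while you migrate. This is a minimal sketch under stated assumptions: the addDtmCompatShims wrapper is my own, not an Adobe API, and it simply forwards to Launch’s _satellite.cookie.set/get (which accept a js-cookie-style options object):

```javascript
// Minimal compatibility sketch (the wrapper is NOT an Adobe API): patches
// a Launch `_satellite` object so legacy code that calls the retired DTM
// cookie helpers keeps working via the new _satellite.cookie.set/get.
function addDtmCompatShims(satellite) {
  satellite.setCookie = satellite.setCookie || function (name, value, days) {
    // Launch's cookie helper takes a js-cookie-style options object;
    // `expires` is a number of days.
    satellite.cookie.set(name, value, { expires: days });
  };
  satellite.readCookie = satellite.readCookie || function (name) {
    return satellite.cookie.get(name);
  };
  return satellite;
}
```

The real fix, of course, is to migrate the calls themselves; a shim like this just buys time while you hunt them all down.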

 There is some interface functionality in DTM that is not yet in Launch:

  • There is no “notes” functionality currently (though I hear that is coming soon)
  • It’s not easy to do a revision comparison (diff compare) currently (though again, I hear that is in the works).

 Launch still has console debugging, but it no longer alerts you to the events it “SATELLITE DETECTED” (which I used a lot to troubleshoot bubbling issues)- it merely tells you what rules are firing, etc.

 Some tools like Tagtician or Disruptive Advertising’s DTM Debugger are not yet fully Launch-compatible. (Tagtician supports Launch but is working on improving how it handles it; I don’t know if the DTM Debugger has any plans to become Launch-compatible).

 The Adobe Analytics extension does not support multiple Adobe instances, nor can you have multiple Adobe Analytics extensions installed. (Multi-suite tagging is still ok).

 The Google Analytics extension does not support multiple GA instances.

 Some things have been renamed in a way that may throw you off- for instance, you can still easily have a Rule condition be based on a Data Element value- it’s just named “Value Comparison” now.

 While Launch gives you much more control over the order things happen in, be aware that while actions within a rule will START in the specified sequence, they may not COMPLETE in sequence: Action 1 will start, then Action 2 will start whether Action 1 is finished or not. This is particularly significant if the actions are just code (for instance, I had my first action try to pull information from an API, and my second action then use that info to fire a pixel… but the pixel kept firing before the API had done its thing). I hear that users may eventually get more control over this, but for now this is how it is.
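
Until finer-grained control arrives, one workaround is to keep dependent steps inside a single custom code action and chain them explicitly. Here is a sketch, where fetchOfferId and firePixel are hypothetical stand-ins for the API call and pixel from my example:

```javascript
// Workaround sketch: rather than splitting dependent work across two rule
// actions (which START in order but don't wait for each other to finish),
// chain both steps inside one custom-code action with a Promise.
// `fetchOfferId` and `firePixel` are hypothetical stand-ins.
function runDependentSteps(fetchOfferId, firePixel) {
  return fetchOfferId().then(function (offerId) {
    // Runs only after the API call resolves, so the pixel
    // always has the data it needs.
    return firePixel(offerId);
  });
}
```
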

 Adapters can be confusing (fortunately Jimalytics clears it up nicely on his blog). These days, Adobe automatically creates a “Managed by Adobe” adapter, and that single adapter should work for multiple environments.

None of these are necessarily a reason to not upgrade- especially since Adobe now has a plan for sunsetting DTM. But hopefully you won’t be caught unaware by any of these items. Has anything else surprised you about Launch? Let us know!

How to self-host a Launch Library using the download option


(Cross-posted from the 33 Sticks Blog)

As mentioned in my series on migrating from DTM to Launch, DTM had a few deployment options:

  • An Adobe/Akamai-hosted library (ie, your embed code starts with “//assets.adobedtm.com”)
  • An FTP self-hosted library (DTM would push changes through FTP to a location on your own servers)
  • A downloaded self-hosting option (you would manually download after changes and put onto your servers).

Technically, all three of these options also exist in Launch, though the approach is slightly different. Since I ended up having to get some clarification from Adobe on how to replicate these methods in Launch, I figured I’d document my findings here. When creating an adapter, you have the option of Managed by Adobe or SFTP:

If you select SFTP, it’s slightly different from in DTM, but the effect is the same.

How To Use the “Download” Method

If you want to go the download route, you still can, but it’s a bit hidden, so I’ll walk through it. Choose “Managed by Adobe” here, but then when setting up the corresponding environment, choose “Create Archive” and specify where the file will live on your servers (this is important because each file within the library package needs to know how to reference other files within the library package):

(You can even encrypt the file if you’d like extra security, so that a password would be required to open/view the archive).

Then, once you’ve built the library (and you MUST build it AFTER you’ve set it to “create archive”, or there won’t be anything to download), when viewing your environments click on the “install” icon:

This should give you a popup where you have the ability to “Download Latest Archive”:

This should download a .zip to your browser, the contents of which you can now put on your server. Be aware that the folder(s) within this zip may change names between builds (the “BL1f0491fb5eb14ad3b60996dd31aedaa6” folder in my image below, for example, had been “BL92309a949e564f269ce6719b1136910f” in a previous build), so if you are merely pasting one build over another, you may want to clean out the old subfolders afterwards to keep the overall folder clean.

Hopefully this helps fill some of the documentation gaps out there. Please let me know if you have any additional insight or questions!

Presenting at Observepoint Virtual Summit

(Cross-posted from the 33 Sticks blog)

Join us for Observepoint’s Virtual Analytics Summit on October 25th! I’ll have the opportunity to speak about measuring and improving not just your data quality, but also the value you get OUT of the data: your data ecosystem, your processes, and your team’s roles.

In our industry, there is (deservedly) a lot of attention given to the health and quality of our data. Yet many organizations aren’t getting value out of their data, not (just) because the data is unhealthy, but because the org doesn’t have the right processes, people or overall mindset to be truly data-driven. Many orgs are still REactive about their data, rather than PROactive- the product team announces a new site feature, and the analytics team has to squeeze in some tracking at the last moment. Or the analytics team is so busy managing pixels and dashboard requests that they don’t actually get to dive in and gather insight to inform business decisions.

This became even more apparent with the roll out of GDPR (General Data Protection Regulation) last Spring- many companies were not (and perhaps are still not) ready to comply by the May 25th roll out. Our industry’s ability to be proactive and get in front of such initiatives isn’t a data quality issue (though certainly, rolling it out will be easier if your solution is well-documented and reliable)- it’s a matter of ownership, support, governance, and priority.

My presentation will walk through specific questions you can ask yourself to measure where there are opportunities for improvement in your ecosystem, with your processes, and with your roles/resources/ownership, as well as specific recommendations for next steps you can take. It will end with a Q&A session- I’d love to hear from you! Sign up for free on the event site, and check out the other amazing speakers and topics for the day.


Cross-post from 33 sticks: Direct Call Rules in Launch have a new power: passing additional info in _satellite.track

(FYI, this is a cross-post from 33 Sticks’ blog. )

Adobe’s Dynamic Tag Manager has always given developers a chance to define exactly when a rule was called, by firing _satellite.track("insert rule name here"). This is called a Direct Call Rule (or DCR). They didn’t always get a ton of product love- after all, Event Based Rules don’t require work from developers and have so many more options- but many DTM users used them heavily because of the control they provided and how incredibly straightforward they were.

From my view, they historically had a few major downsides:

  1. Multiple DCRs couldn’t “stack” to form a single Adobe Analytics beacon, meaning you couldn’t have one DCR set your global variables and another set more user-action-specific variables.
  2. You couldn’t apply additional conditions (e.g. “don’t fire on the page with this URL”)
  3. There was no good way to clear out your variables so they wouldn’t persist from beacon to beacon
  4. You couldn’t pass additional information specifically scoped for the Direct Call Rule. For example, if you fired _satellite.track("add to cart"), you had to make sure your overall data layer/data elements were already set up properly to show WHICH product was added to cart.

I’ve talked about how happy I am that Launch solved the first three points (here and here) but I’ve finally had a reason to try out how Launch handles #4.

You can now pass extra parameters into your _satellite.track function, like this:

_satellite.track("add to cart",{name:"wug",price:"12.99",color:"red"})

Then, when you set up a rule that fires off that direct call:

You can access the information on those parameters like you would access a data element, by referencing %event.detail.yourObjectHere%:

Or, if needed, in your custom code for that rule by just accessing event.detail:
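
As a rough illustration (this is my own sketch, not code Launch generates), a custom code action might map that payload onto Analytics variables. Here, `tracker` stands in for the `s` object, and the eVar and products-string choices are assumptions:

```javascript
// Illustrative sketch only: maps the event.detail payload from
// _satellite.track("add to cart", {...}) onto Analytics variables.
// `tracker` stands in for the AppMeasurement `s` object; the eVar number
// and the products-string slots (category;name;quantity;total) are
// assumptions, not anything Launch dictates.
function mapDetailToTracker(detail, tracker) {
  tracker.products = ";" + detail.name + ";1;" + detail.price;
  tracker.eVar5 = detail.color; // hypothetical eVar for product color
  return tracker;
}
```
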

You could even have a multi-leveled object:

_satellite.track("add to cart",{product:{name:"wug",price:"12.99",color:"red"},location:"cart recommendations"})

In which case you could reference %event.detail.product.name% or %event.detail.location%.

That’s all there is to it! Go ahead, fire this off in your console, and see our rule at work:

_satellite.track("add to cart",{name:"wug",price:"12.99",color:"red"})

 I’ve seen this work in DTM recently too, though I’m under the impression it may not be fully supported there. Either way, this great enhancement can simplify data layers and Launch implementations, and removes the need for a lot of previous workarounds.

Cross-post from 33 Sticks: Setting up Adobe Analytics for GDPR

I recently posted on the 33 Sticks blog; I figured I’d copy the post here for posterity’s sake ;). 

There is so much documentation out there for Adobe Analytics and GDPR, it’s hard to see how it all fits together (though I do feel like Adobe’s documentation on the GDPR workflow is a good place to start). Note, I am NOT claiming to be an expert on this- I’ll defer to Adobe staff for their expertise. And I am NOT offering advice on what/how to regulate- I’ll defer to your legal/privacy team for that. But since I just had to muddle through all this, and learned a lot in the process, I figured I’d share my learnings and hopefully help others who are also muddling through.

I’ve found that in general, when folks are talking about changes in Adobe Analytics to account for GDPR, they’re talking about one of three things:

  1. Obfuscating/removing User IP addresses
  2. Adobe Data Retention Settings
  3. Client Opt-out

Obfuscating/Removing IP Addresses

This is pretty straightforward, though the documentation is a bit tricky to find. This is simply a setting you can set in the Admin Console of Adobe Analytics within General Account Settings for each Report Suite:

Adobe’s General Settings Documentation has good info on these settings. To me, the important take-aways here are:

  • Replace the last octet of IP address with 0 is basically like taking the street number off of my house’s address- you may still be able to know my general location, but you no longer have the specifics. This change applies BEFORE data is processed, meaning it WILL affect Adobe’s ability to do Bot/IP Filtering, might affect VISTA rules, and will make it so Adobe’s Geo-segmentation will have less info to work with and will therefore be at least a little less accurate.
  • IP Obfuscation affects what analysts/admins can view of the IP address, like in Data Warehouse. You can choose to leave the IP address as-is, to obfuscate it so it becomes a unique string that can’t be used to identify the user, or to replace it with “x.x.x.x” (which is the default option for EMEA suites going forward). The obfuscation or deletion happens further along in data processing, after VISTA rules and Bot/IP filtering.

Adobe Retention Settings

After May 25, 2018, Adobe may start deleting data older than 25 months, unless you specifically work with your Adobe Account reps to extend this to up to 37 months (at a cost). Unlike Google Analytics (which will keep standard reports but just delete user/event data), Adobe truly is just deleting all data older than your retention window. When thinking about this, I’d encourage you to consider:

  1. the rareness of a user who hasn’t reset their cookies/changed devices/changed browsers in over 2 years
  2. if your site and/or implementation hasn’t significantly changed in 2+ years, then we may have bigger issues than data retention

Basically, if you’re heavily using data that is over two years old, I’m fairly certain that you’re already not looking at data that could be compared as apples-to-apples with your current site/implementation.

You can view your current data retention by going to the Data Governance interface mentioned later in this post (note, my Report Suites say anywhere from 37 months to 121 months, even though I have definitely not worked to extend it beyond 25 months- I suspect that since I have not explicitly extended it, I can’t count on it staying this way):

Client Opt-Out

This is definitely the most involved piece of GDPR compliance. Again, Adobe’s documentation on the GDPR Workflow has some good information, but here is my take on what you need to do (assuming you are already on the Experience Cloud):

Label what data needs to be “governed”

Here, on a per-Report-Suite basis, I can go through all my dimensions and metrics and flag what things should be affected by data governance. Many of my dimensions and metrics don’t NEED to be governed- for instance, browser type can probably just be left alone (Disclaimer: seriously, talk to your legal team about what to govern). Other things, like geo-location, Adobe may have automatically already applied appropriate labels to, which you just need to review/confirm:

But my own organization’s policies may dictate that I be even more stringent and also label things like US States, which Adobe didn’t auto-apply a label to. The more likely scenario is that I need to pop open the subtle drop-down menu that says “Standard Dimensions” and go to my custom Events and Dimensions so I can find my eVar that captures User ID and label it so Adobe knows how to govern it:

The labels are, unfortunately, not super straight-forward, but basically, these are your options for each dimension/metric:

Adobe will use these labels to decide what to do when it receives a user access/deletion request from you.

Set Up Your Privacy Portal for Capturing Adobe ID Requests

Before Adobe can “govern” anything, you need to give users a way of opting out of tracking. This means setting up a Privacy Portal on your site, and using it as a means of collecting information about who is requesting to access their data or opt out. Adobe has provided some tools to help find out about the WHO and WHAT, but then it’s up to your Data Regulator (whoever in your org is assigned to do this stuff) to pass that information along to Adobe.

1. The User Visits the Privacy Portal

adobePrivacy.js (or the Adobe Experience Cloud Privacy Launch extension) can put all the tracking identifiers we have for the current user into a JSON object.

Our user might request to merely view what data is being kept on him, in which case, he’ll have to wait- adobePrivacy.js can show us his IDs, but not much more than that. But I could at least show him the identifiers if I want. He may request to delete all past data (and/or get a copy of what was deleted). For that, I need to take that JSON object from adobePrivacy.js and pass it along to whatever mechanisms my Org has in place to coordinate data governance requests with the Adobe GDPR API.
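
To make that hand-off concrete, here’s a hedged sketch of wrapping the identity array from adobePrivacy.js into the per-user shape the GDPR API’s “users” array expects. The helper function and its jobKey parameter are my own illustration, not part of any Adobe library:

```javascript
// Hypothetical helper (not an Adobe API): packages the identity array
// adobePrivacy.js returns into the per-user object the GDPR API's
// "users" array expects.
function packagePortalRequest(jobKey, requestedActions, identities) {
  return {
    key: jobKey,              // your own ticket/reference ID, e.g. "GDPR-1234"
    action: requestedActions, // e.g. ["access"] or ["access", "delete"]
    userIDs: identities       // the identity objects from adobePrivacy.js
  };
}
```
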

For example-driven learners like me, I’ve put together an extremely unattractive example page showing how to use adobePrivacy.js.
This is what the “retrieve” response might look like:

  [
    {
      "company": "adobe",
      "namespace": "visitorId",
      "type": "analytics",
      "name": "s_fid",
      "description": "Fallback Visitor ID",
      "value": "64F04470FAKE04E9-1DADD8FAKE65B7C2"
    },
    {
      "company": "adobe",
      "namespace": "CORE",
      "namespaceId": 0,
      "type": "standard",
      "name": "AAM UUID",
      "description": "Adobe Audience Manager UUID",
      "value": "610212449467061254000504ALSOFAKE"
    },
    {
      "company": "adobe",
      "namespace": "ECID",
      "namespaceId": 4,
      "type": "standard",
      "name": "Experience Cloud ID",
      "description": "This is the ID generated by Visitor and set in 1st party cookie.",
      "value": "6080944537973STILLFAKE359908301249"
    }
  ]

2. I Submit the Request Through the GDPR API/API Portal

I can use either the Privacy UI Portal (which I can get to from my Adobe Experience Cloud Admin Console) or the GDPR API (after I’ve set up an adobe.io integration- see Appendix on this post).

Here, I can take the JSON object I got from my portal, batch it up with other users’ info (if desired), and let Adobe know who has made an access/delete request. Requests take 1-2 weeks. For access requests, you get a CSV that returns the status of your requests.

I happen to use Postman for my request, which is a handy UI for API requests. This is what my request might look like:

POST API request to https://platform.adobe.io/data/privacy/gdpr/

x-gw-ims-org-id : DCF779195968NOTREAL@AdobeOrg
x-api-key: 5a7105dNotARealAPIKeyc735355
Authorization: Bearer eyJ4NXUiOiJpbXNfbmExLWtleS0xLmNlciIsImFsZyI6IlJTMjU2In0.eyJpZCI6IjE1Mjc2MTI3MjQwNTdfYjRkNjg4YTUtOThhMi00MzM2LWIwNjgtNDkwYjYzZThiMTIThisIsntARealTokenI6IjVhNzEwNWQ0YmNiMjQwZjQ4NDBmZmNmYTBjNzM1MzU1IiwidXNlcl9pZCI6IkQ2MjgzNjJDNUFGQzU1REQwQTQ5NUMxMEB0ZWNoYWNjdC5hZG9iZS5jb20iLCJ0eXBlIjoiYWNjZXNzX3Rva2VuIiwiYXMiOiJpbXMtbmExIiwiZmciOiJTTzZOSUY1UEZMTjdDSEFPQUFBQUFBQUFRND09PT09PSIsIm1vaSI6IjU5MzFhZmM5IiwiYyI6IlZFTE5iN1JHcEhhN0h5dkNYSi9SNFE9PSIsImV4cGlyZXNfaW4iOiI4NjQwMDAwMCIsInNjb3BlIjoib3BlbmlkLEFkb2JlSUQscmVhZF9vcmdhbml6YXRpb25zLGFkZGl0aW9uYWxfaW5mby5wcm9qZWN0ZWRFakeFakeFake0Q29udGV4dCIsImNyZWF0ZWRfYXQiOiIxNTI3NjEyNzI0MDU3In0.WBPyKnis4BN1sAmFFSCM1Lazg51z2rnuaniZYPcATOSscfVOB-6L-yWvo1kTjfxxMVvzBLLr9H6pNr2ZzA8PzUDbcYjzzjRvmSqVEII3vW0KFTmG5cO5fmi8j0e662WXg0cp4hUhOhr0MvGa5vRPXBKr7NmtaU0d5_bsKs_5AJBfDsUCnJ5ZcGnK_8DFKb9VIqmxFdLl_dQzKl2dMaEqsK-98cUTT32Th0nC5rQ96-N8TsuYD2fmqzSCiOCQRhXQeQ1U97UvlYOobgKTAF41WDt3gsa786ouV668YZN9-J3tVaejGosEUcHYTKpRKmpKS_jwElfA0ptNV3PCS-aBNg
Content-Type: application/json

Body (JSON):

 {
   "companyContexts": [
     { "namespace": "imsOrgID", "value": "DCF77919596885950A495D3E@AdobeOrg" },
     { "namespace": "analytics" }
   ],
   "users": [
     {
       "key": "GDPR-1234",
       "action": ["access", "delete"],
       "userIDs": [
         {
           "company": "adobe",
           "namespace": "visitorId",
           "type": "analytics",
           "name": "s_fid",
           "description": "Fallback Visitor ID",
           "value": "64F04470FAKE04E9-1DADD8FAKE65B7C2"
         },
         {
           "company": "adobe",
           "namespace": "CORE",
           "namespaceId": 0,
           "type": "standard",
           "name": "AAM UUID",
           "description": "Adobe Audience Manager UUID",
           "value": "610212449467061254000504ALSOFAKE"
         },
         {
           "company": "adobe",
           "namespace": "ECID",
           "namespaceId": 4,
           "type": "standard",
           "name": "Experience Cloud ID",
           "description": "This is the ID generated by Visitor and set in 1st party cookie.",
           "value": "6080944537973STILLFAKE359908301249"
         }
       ]
     }
   ],
   "expandIds": true
 }
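
If you end up scripting these submissions, the request body can be assembled programmatically. A sketch (the helper function is my own convenience wrapper, not part of Adobe's API; only the resulting JSON shape matters):

```javascript
// Sketch of building the GDPR API request body described above. The
// wrapper function is my own, not part of Adobe's API; the output just
// needs to match the documented JSON shape.
function buildGdprRequestBody(imsOrgId, users) {
  return {
    companyContexts: [{ namespace: "imsOrgID", value: imsOrgId }],
    users: users, // array of { key, action, userIDs } objects
    expandIds: true
  };
}
```
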

3. Adobe Acts Based on Data Governance Labels

Adobe sees a request to access/delete the data for ECID 6080944537973STILLFAKE359908301249 and sees what data we have for that user. Let’s look at three dimensions and their settings for an example:

  • If we have data for that user in the Domains dimension, Adobe will see that that data has a data governance label of “ACC-PERSON” which, according to the tooltip, means it “will never be returned for a GDPR access request, unless an ID-PERSON label is applied on a variable in this report suite”. Since I am keeping track of an ID for this user in one of my eVars, the user’s access request will show what Adobe knows their domain to be.
  • Entry Page doesn’t have any data governance labels applied, so the Entry Page data for this user is left alone.
  • Entry Page Original has both a “DEL-DEVICE” and a “DEL-PERSON” label on it, meaning Entry Page Original data for this user will be anonymized.

Next Steps

I’ve submitted a few user access/deletion requests so I can see how it affects the data and what the access report looks like, so I’ll have a follow up post in a few weeks with my findings.

Appendix I: Passing along my own Identifications for Users

If I have an eVar (or prop) that I use to identify users (for example, capturing a hashed user ID), then in my data governance labels, I would check the “ID-PERSON” radio button.

Then I need to specify which NAMESPACE I’m going to keep that value in for my API requests. Basically, my API JSON objects already have the IDs that Adobe sets and knows about:

 {
   "company": "adobe",
   "namespace": "ECID",
   "namespaceId": 4,
   "type": "standard",
   "name": "Experience Cloud ID",
   "description": "This is the ID generated by Visitor and set in 1st party cookie.",
   "value": "6080944537973STILLFAKE359908301249"
 }

So now in my API requests I can add in the IDs that I have for that user:

 {
   "namespace": "myuserid",
   "value": "malReynolds1234",
   "type": "analytics",
   "isDeletedClientSide": false
 }

Then Adobe’s Data Governance tools can make the connection that IDs sent to the “myuserid” namespace in my API requests correspond to the IDs in my custom dimension that I’m labelling as “ID-PERSON”.

Appendix II: Setting Yourself Up for the API

So, that all seems simple enough, right (ha!)? For me, one of the trickier parts of getting this all set up was setting myself up to use the GDPR API through an Adobe.io integration. I had an advantage because I’ve used a similar integration for Adobe Launch Extensions, but even then for the GDPR API I had to have at least one support ticket (first through Adobe Client Care, then through the adobe.io support team- turns out the ever-evolving documentation didn’t have the right endpoint for me to use yet, but that has since been fixed.)

Pulling largely from the Adobe.io Experience Cloud and GDPR whitepaper, here are the steps I took:

    1. You will need to generate a public and private key. I find the easiest way to do this is to open up a Terminal (aka Command Prompt), navigate to a sensible folder (eg, “cd analytics/gdpr”) and type in the following:
      openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout private.key -out certificate_pub.crt

      It will prompt you to fill in some information about yourself and your org- complete the prompts, and you should now have two files in your folder: “certificate_pub.crt” and “private.key”. You’ll use these in a moment.

    2. If you don’t already have one, you’ll need to create an adobe.io account (with the same email you use for the experience cloud). Sign in to the adobe.io console.
    3. Create a new integration. On the second screen, select “Access an API”. On the third screen, select the service “GDPR API”.
    4. On the final screen, give it a name (like “GDPR API for Acme, Inc”) and description. Take the “certificate_pub.crt” you created in step 1 and upload it to the “Public keys certificates” field. Click “Create Integration” then “Continue to Integration Details”.
    5. On the Integration Details screen, note your Organization ID (eg “DCF7791959688FAKEID495D3E@AdobeOrg”)- this should match your Experience Cloud Org ID for your company. You’ll need this for the “x-gw-ims-org-id” field in your API Request Headers.
    6. Also on the Integration Details Screen, note your API Key (Client ID) (eg, “765f21b62606FAKEapiKEYb3e656048a910e”). You’ll need this for the “x-api-key” field in your API Request Headers.
    7. On the Integration Details screen, click the “JWT” tab. It will have generated a JWT that you can basically ignore. Open the “private.key” file you created in step 1 in a text editor, copy the contents (including the “-----BEGIN PRIVATE KEY-----” and “-----END PRIVATE KEY-----” lines) and paste into the “Paste Private Key” field.
    8. Copy the “Sample CURL Command” value and paste it into your Terminal/Command Prompt and hit enter. This should return something like this:

      The access_token value in the response is your API Authorization Token for the next 24 hours. After that, you need to repeat steps 7 and 8 to generate a new temporary token.

An entirely too honest/frank look at lessons learned from independent consulting

I’ve been so happy the last 6 weeks or so, working at 33 Sticks. Now that the dust has settled, I want to document some of the lessons I learned from my mere 5 months of independent consulting– it’s been a very enlightening experience, even though I’ve been a salaried-but-hourly-billable analytics implementation consultant for 10 of the last 12 years.

Here are a few other things being an independent consultant (taking primarily short-term work) has taught me:

  • The medical benefits system in the US is absolutely awful if you’re self-employed. Our only option was the exchange markets (aka obamacare)- only two insurance providers were available and one would require ditching all of our current healthcare providers. It ended up being about $1600/month to insure my relatively healthy family of 4, and that was a fairly mediocre plan. This doesn’t include the extra money/hassle we had to go through for our medications.
  • Setting up an LLC was really easy. Setting up a business bank account so I could sign checks made out to my LLC took a bit more effort, but it wasn’t bad (though it did catch me off guard- I should have known that that would be needed).
  • I haven’t had to do self-employment taxes yet, but I chose a weird year to start, what with Trump changing the tax plan (the IRS took a while to get their “how much income to withhold” calculator working for the new tax plan).
  • There are a lot of free/cheap tools for single-person companies out there- I use Asana (free), Everhour (free), Zoho Invoicing (it’s free to a point, and I preferred it to Everhour’s invoicing options), and Google Business ($5/month- warning, the Google Business sync utility for Google Drive is even worse than the one for personal Google Drive accounts).
  • There is such a gap of implementation expertise in the digital analytics industry, there is no shortage of work out there to do. Work wasn’t hard to find. Finding the RIGHT work is the harder part- so many organizations are so short-handed that they look to outside consultants to fill some of those gaps, but it can be really hard to provide value in some of those situations. If you’re after a paycheck, there is plenty of that to go around… but if you’re fulfillment-needy like me, and need to know you are making a difference and providing value, you have to be a bit pickier about what work you take on.
  • Becoming truly profitable, and having the type of projects I want to be doing, would take time. Companies  looking for a full digital transformation are far less likely to come to a single independent consultant (though for many companies, a digital transformation is needed before the data could be really valuable).
  • Financially, it pays to remove the middle man, but not as much as you’d think. I was working in a wide range of rates, depending on the project, but $160-$225/hr seems a fairly normal rate for folks with my background. Of course, that doesn’t count what I spent on administration, branding/marketing, paperwork, etc… not to mention the lack of benefits (medical/dental/vision, time off, 401k). In the end, to keep the same income I was used to, I needed to do about two thirds the billable work (and had to deal with the unpredictable flow of money).
  • Sales/procurement processes are always slow. It doesn’t matter if the client is eager to start next Monday, and you’re ready to start next Monday- the client’s org will slow things down by at least 2 weeks- and even that is only if the client has put a fire under them.
  • Payment comes slowly. If a contract is “net-45” (ie, the client has 45 days to pay after being invoiced), it really means “the check will hopefully be in the mail by the 45th day”. I didn’t get my first check until 2.5 months into working, and I will continue getting checks until probably June for work wrapping up early in April.
  • Planning vacations or major future expenses is really hard. My husband and I are not exactly financial risk-takers, and since we never knew what checks would come in when, or when projects would start/end, it was very difficult for us to commit to a vacation a few months out.
  • Scoping projects and forecasting is hard. I’ve never been good at scoping. At Adobe, I’d be asked for my opinion on how long something would take to do, and I’d say “uh, 20 hours?” Then I’d see the final estimate that went to the client was for 120 hours, not 20. Turns out, though, I really am fairly efficient (heaven knows I’ve done this long enough), and rarely came even close to the amount of time the client was prepared to pay me for. The hourly billing model penalizes efficient work, and isn’t tied to value provided. On paper, I had 50+ hours of work I could do each week. In practice, unless I fudged the numbers (I didn’t), I was able to fill all of my client’s needs and then some, in maybe half of the expected time.
    • This confirms something the whole industry should keep in mind: you might pay more for senior/principal consultants, but odds are they will get through work much faster than their less-experienced peers, so you may save on hours billed. That is, if you are stuck on that pesky hourly billing model (see my thoughts on that model on the 33 sticks blog).
  • Even with that added flexibility, there are still not enough hours in the day.  I didn’t get even close to having the time to do all the productive things I wanted to do.
  • When you switch from salaried to hourly, and you are in charge of your own schedule, you start to see opportunity cost everywhere. “I slept 7 hours last night?! If I had been working instead, I could have made $1470!!!” I’ve always had to do weekly timesheets and keep up my utilization rates (I have two awards on my shelf for being one of the most utilized consultants in Adobe consulting), but it had never made such a tangible difference to my family’s well-being.
  • I’d miss being around my peers. I had my clients, sure, but if I accomplished something I was proud of, I’d rush downstairs to tell the only people around to hear about it: my family. They’ve long since learned to not ask what I was excited about, they just say “yay, you did the thing!” It’s not quite the same as sharing with a peer. Thank heavens for twitter and #measure slack, so I can still bounce ideas off of peers and interact with humans who aren’t related to me.
  • Independence is hard for the anxiety-ridden (and I do have plenty of anxiety). I like to think I am a fairly independent/low-maintenance employee, and hope my previous employers would agree. But having absolutely no oversight was different. There was no one to tell me that the thing I was focusing on was indeed the best use of my time; no one to tell me that my work was stellar, satisfactory, or still needed improvement; no one to justify things to if it didn’t go the way I hoped.


I had two main reasons for going independent:

  1. Freeing myself up so that if/when 33 Sticks was ready, I’d be available. Seriously, we’ve been trying to make this happen for years, and the timing was just never right. I wanted to make sure I didn’t miss a chance again.
  2. Having the flexibility/time to work on product ideas.

On both fronts, I’d say: mission accomplished! Clearly, the 33 Sticks thing is happening. And while I haven’t released anything new on the product side since December, I was able to learn more server-side skills, which let me prototype a few new product ideas. So progress has been made, even if I don’t have anything to publicly show for it.

But mostly, it was a very eye-opening experience: it’s nice to know now that it is an option for me, but that it probably won’t ever be my ideal working scenario. I’m very glad I had this short window of a new experience.

Why I’m so excited about joining 33 Sticks

(Cross-posted from the 33 Sticks blog)

33 Sticks formed shortly after I had to part ways with Hila and Jason about 5 years ago. Since the beginning, I’ve followed their story and cheered them on, excited about what they were accomplishing and hoping I’d get to be a part of it someday. Unfortunately the timing never lined up- they’d finally be ready to add someone like me to the team, but I’d have just started a commitment elsewhere. In October when I went independent, a large part of that decision was that I wanted to be free and ready when 33 Sticks was ready, and I’m thrilled that things finally lined up just in time for Summit.

I’ve either been employed or done contract work for a dozen different agencies since 2006. After this much time as an implementation consultant, I’ll admit I’m experiencing some burnout. Some folks have already heard me swear off consulting- it can just be so hard to really provide value. So why is 33 Sticks an exception?

The people

I’ve worked with Hila, Jason, and Jon before and know how awesome they are, and I can already see that I have much to learn from Jim Driscoll. There isn’t a member of the team that isn’t a principal-level consultant with years of experience with all different levels of projects. There is no offshore team we’ve committed to delegate work to. Every single member of this team is the type of person to go above and beyond to see clients succeed, yet they all have a rich life outside of work too. It’s a rare and incredible thing, to join a team where you already know and respect each of your coworkers, and genuinely enjoy spending time with each of them.

The model

33 Sticks contracts aren’t based on hours billed, but rather on value provided. This is a difficult model to get to work- you have to really trust the consultants and the clients to manage scope and be on the same page. It probably wouldn’t work at larger agencies, nor would it work for staff augmentation projects. It only works if the consultants can really build a relationship with the client, and have the experience to focus and drive engagements towards whatever will provide the most value.

In recent years, as I’ve been more exclusively on large enterprise projects, I’ve seen the consulting industry struggle more and more with keeping a cohesive vision for a project. You may have a dozen consultants spread between optimization, implementation, analysis, project management… then on the client-side, you may also have over a dozen folks on different parts of the project. It can feel like there are a lot of people in the car but no one is driving. With the 33 Sticks model, we can work with clients to get that project-wide focus and build a cohesive data ecosystem. You can’t truly consult and provide strategic guidance if you are just taking orders from whoever signed the contract. 33 Sticks can partner with our clients and use the experiences we have from touching hundreds of projects over the years to offer unique guidance, helping focus the engagement on what will provide the most long-term value.

The goal

I feel like Jason and Hila’s goals for 33 Sticks wouldn’t work for everyone, but they align well with my own. We aren’t going to take over the world. The goal is not to sell a lot of contracts, grow a lot of staff, influence a lot of projects, and build up wealth. There is no “exit strategy”. Instead, the goal is to do things that provide value, and do those things well. Which isn’t to say there isn’t a financial goal, but even that is much more focused on quality of life: having flexibility not only in how we spend our non-work hours, but also in being able to do the type of work we want. For me, that means continuing to work remotely from Atlanta with a flexible schedule, and also having time to keep working on the product ideas and documentation I’m passionate about.

I so appreciate all the well-wishes and congratulations- hopefully after reading this, folks can fully understand why I am so excited about this opportunity. And while we’re not looking to take over the world, I do hope I can help 33 Sticks spread their value even further.


New industry tool: Adobe Configuration Export

An industry friend and former coworker, Gene Jones, made me aware of an awesome new tool he’s created: a tool that exports your Report Suite info into an Excel file. It can compare the variable settings of multiple report suites in one tab, and then creates a tab with a deeper look at all the settings for each report suite.

This is similar to the very handy Observepoint SDR Builder– I’ll freely admit I’m likely to use both in the future. Both (free) tools show you your settings and allow for report suite comparison. The Observepoint SDR Builder uses a Google Sheets extension and involves a little more setup (partly because, if you’re an Observepoint customer, you can expand its functionality), but it allows you to manage your settings directly from the Google Sheet (communicating those changes back to the Adobe Admin Console).

But sometimes all you want is a simple export of current settings in a simple, local view, in which case the Adobe Configuration Export tool is very straightforward and simple to use.

And, it’s open source– the community can add to it and make use of it for whatever situations they dream up. I’m excited to see what features get added in the future (I see a “Grade Your Config” option that intrigues me). Nice work, Gene!

Adobe Launch’s Rule Ordering is a big deal for Single Page Apps

In November, I posted about some of the ways that Launch will make it easier to implement on Single Page Apps (SPAs), but I hinted that a few things were still lacking.
In mid-January, the Launch team announced a feature I’ve been eagerly awaiting: the ability to order your rules. With this ability, we finally have a clean and easy way to implement Adobe Analytics on a Single Page App.

The historical problem

As I mentioned in my previous post, one of the key problems we’ve seen in the past was that Event-Based Rules (EBRs) and Direct Call Rules (DCRs) can’t “stack”. Let me explain what I mean by that.

Not a single page app? Rule Stacking rocks!

For example, let’s say I have an internal search “null results” page, where the beacon that fires should include:

  • Global Variables, like “s.server should always be set to document.hostname”
  • Variables specific to the e-commerce/product side of my site with a common data layer structure (pageName should always be set to %Content ID: Page Name%)
  • Search Results variables (like my props/eVars for Search Term and Number of Search Results, and a custom event for Internal Searches)
  • Search Results when a filter is applied (like a listVar for Filter Applied and an event for User applied Search Filter)
  • Null Results Variables (another event for Null Internal Searches and a bit of logic to rewrite my Number of Search Results variable from “0” to “zero” (because searching in the reports for “0” would show me 10, 20, 30… whereas “zero” could easily show me my null results))

With a non-SPA, when a new page loads, DTM would run through all of my page load rules and see which had conditions that were matched by the current page. It would then set the variables from those rules, then AFTER all the rules were checked and variables were set, DTM would send the beacon, happily combining variables from potentially many rules.

The variables from all five of those rules would combine into a single beacon:

If you have a Page Load Rule-based implementation, this allows you to define your rules by their scope, and can really use the power of DTM to only apply code/logic when needed.
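As a conceptual sketch (NOT DTM’s actual internals — the rule names and variables here are hypothetical), the stacking behavior looks something like this:

```javascript
// Conceptual sketch of page-load-rule "stacking" (not DTM's real engine;
// rule names and variables are invented for illustration). Every matching
// rule contributes variables to the shared tracker object, and one beacon
// fires only after every rule has run.
const s = {}; // stand-in for the Adobe Analytics tracker object

const pageLoadRules = [
  { name: 'Global',         applies: () => true, vars: { server: 'www.example.com' } },
  { name: 'Search Results', applies: () => true, vars: { prop1: 'shoes', events: 'event1' } },
  { name: 'Null Results',   applies: () => true, vars: { prop2: 'zero', events: 'event1,event2' } },
];

// Check every rule's conditions and set variables first...
for (const rule of pageLoadRules) {
  if (rule.applies()) Object.assign(s, rule.vars);
}

// ...then send ONE beacon carrying the combined variable set
function beaconPayload(tracker) {
  return Object.keys(tracker).sort().join(',');
}
console.log(beaconPayload(s)); // "events,prop1,prop2,server"
```

The key point is that variable-setting and beacon-sending are decoupled: many rules write, one beacon reads.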

Single Page App? Not so much.

However, if I were in a Single Page App, I’d either be using a Direct Call Rule or an Event-Based Rule to determine a new page was viewed and fire a beacon. DCRs and EBRs have a 1:1 ratio with beacons fired- if a rule’s conditions were met, it would fire a beacon. So I would need to figure out a way to have my global variables fire on every beacon, and set site-section-specific and user-action-specific variables, for every user action tracked. This would either mean having a lot of DCRs and EBRs for all the possible combos of variables (meaning a lot of repeat effort in setting rules, and repeated code weight in the DTM library), or a single massive rule with a lot of custom code to figure out which user-action-specific variables to set:

Or leaving the Adobe Analytics tool interface altogether, and doing odd things in Third Party Tag blocks. I’ve seen it done, and it makes sad pandas sad.
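To make the pain concrete, here’s a hypothetical sketch of what that single massive rule tends to look like (the page types, variables, and function name are all invented for illustration):

```javascript
// Hypothetical custom code from a single catch-all Direct Call Rule
// (illustrative only). Because each DCR fires exactly one beacon, the
// "global" variables AND every per-action branch have to live together.
function onVirtualPageView(s, pageType, hostname) {
  // global variable, repeated here because no other rule will set it
  s.server = hostname;

  // per-section logic that would otherwise live in separate rules
  if (pageType === 'search') {
    s.prop1 = 'search term';
    s.events = 'event1';
  } else if (pageType === 'nullResults') {
    s.prop1 = 'search term';
    s.prop2 = 'zero';
    s.events = 'event1,event2';
  }
  // ...a branch for every other beacon-worthy user action on the site...

  return s; // the rule then fires its one-and-only beacon with these variables
}
```

Every new tracked interaction means another branch in this one rule, which is exactly the maintenance problem Launch’s approach removes.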

The Answer: Launch

Launch does two important things that solve this:

  1. Rules that set Adobe Analytics Variables do not necessarily have to fire a beacon. I can tell my rule to just set variables, to fire a beacon, or to clear variables, or any combination of those options.
  2. I can now order my rules to be sure that the rule that fires my beacon goes AFTER all the rules that set my variables.

So I set up my 5 rules, same as before. All of my rules have differing conditions, and use two similar triggers: one set to fire on Page Bottom (if the user just navigated to my site or refreshes a page, loading a fresh new DOM) and one on Data Element Changed (for Single Page App “virtual page views”, looking at when the Page Name is updated in the Data Layer).

UPDATE: I realize now that you probably wouldn’t want to combine “Page Bottom” and “Data Element Changed” this way, because odds are, it’s going to count your initial setting of the pageName data element as a change, and then double-fire on page load. Either way, it’s less than ideal to use “data element changed” as a trigger because it’s not as reliable. But since this post is already written and has images to go with it, I’ll leave it, and we can pretend that for some reason you wouldn’t be updating your pageName data element when the page initially loads. 

When I create those triggers, I can assign a number for that trigger’s Order:

One rule, my global rule, has those triggers set to fire at “50” (the default number, right in the middle of the recommended 1-100 range). The rule with this trigger not only sets my global variables, it also fires my beacon and then clears my variables:

I give most of my other rules an Order number of “25” (again, fairly arbitrary, but it gives me flexibility to have other rules fire before or after as needed). One rule, my “Internal Search: Null Results” rule, is set to Order number “30”, because I want it to come AFTER the “Internal Search: Search Results” rule, since it needs to overwrite my Number of Search Results variable from “0” (which it got from the data layer) to “zero”.
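A toy simulation of that ordering, using the same Order numbers (illustrative only — Launch’s real rule engine is more involved, and the rule logic here is invented):

```javascript
// Toy simulation of Launch rule ordering (not Launch's actual engine).
// Rules sharing a trigger run in ascending Order: 25 sets variables,
// 30 overwrites one of them, and 50 (the global rule) fires the beacon last.
const s = {};
const log = [];

const rules = [
  { name: 'Global: beacon + clearVars',      order: 50,
    run() { s.server = 'www.example.com'; log.push('beacon:' + s.prop2); } },
  { name: 'Internal Search: Search Results', order: 25,
    run() { s.prop2 = '0'; s.events = 'event1'; } },
  { name: 'Internal Search: Null Results',   order: 30,
    run() { if (s.prop2 === '0') s.prop2 = 'zero'; } }, // needs the Order-25 value
];

// Execute in ascending Order, regardless of authoring sequence
rules.slice().sort((a, b) => a.order - b.order).forEach(r => r.run());

console.log(log); // [ 'beacon:zero' ] — the beacon saw the overwritten value
```

Because the beacon-firing rule runs last, it picks up everything the lower-Order rules wrote.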

This gives me a chance to set all the variables in my custom rules, and have my beacon and clearVars fire at the end in my global rule (the rule’s Order number is in the black circles):

I of course will need to be very careful about using my Order numbers consistently- I’m already thinking about how to fit this into existing documentation, like my SDR.


This doesn’t just impact Single Page Apps- even a traditional Page Load Rule implementation sometimes needs to make sure one rule fires after another, perhaps to overwrite the variables of another, or to check a variable another rule set (maybe I’m hard coding s.channel in one rule, and based on that value, want to fire another rule). I can even think of cases where this would be helpful for third party tags. This is a really powerful feature that should give a lot more control and flexibility to your tag management implementation.

Let me know if you think of new advantages, use cases, or potential “gotchas” for this feature!

Followup Post: Allocation in Analysis Workspace

I recently posted about some of the implications and use cases of using Linear Allocation (on eVars) and participation (on props/events), and in my research I thought I had encountered a bug in Analysis Workspace. After all, for this flow:

Page A → Page B → Page C → Page D → Newsletter Signup event (s.tl)

Beacon                            prop value    eVar value
Page A                            “Page A”      “Page A”
Page B                            “Page B”      “Page B”
Page C                            “Page C”      “Page C”
Page D                            “Page D”      “Page D”
Newsletter Signup event (s.tl)    (not set)     (“Page D” persists)

I saw this in Reports and Analytics (so far, so good):

But then in Analysis Workspace for that prop, trying to recreate the same report, I saw this, where the props were only getting credited for events that happened on their beacon (none got credit for the newsletter signup):

Basically, I lost that participation magic.

Similarly, for the eVar, I saw this report in Reports and Analytics:

And in Workspace, it behaved exactly like a “Most Recent” eVar:

Again, it lost that linear magic.

Calculated Metrics to the Rescue

With the help of some industry friends (thanks, Jim Kultgen at Kohler and Seth Burke at Adobe) I learned that this is not a bug, necessarily- it’s the future! Analysis Workspace has a different way of getting at that data (one that doesn’t require changing the backend settings for your variables and metrics).
In Analysis Workspace reports, allocation can be decided by a Calculated Metric, instead of the variable’s settings. In the calculated metric builder, you can specify an allocation by clicking the gear box next to the metric in the Calculated Metric Definition:

A Note On “Default” Allocation here

On further testing, in Analysis Workspace it seems that eVars with a back-end setting of either “Most Recent” or “Linear” allocation are treated the same: both will act like “Most Recent” with a normal metric brought in, and both will act like “Linear” when you bring in a calculated metric where you specified Linear Allocation. One might say that if you use Analysis Workspace exclusively, you no longer need to set an eVar to “Linear”.

“Default” does still seem to defer to the eVar settings when it comes to Most Recent or Original (just not Linear). So in an eVar report where the eVar’s backend setting is “Original”, whether I used my “normal” Newsletter Signups event (column 2), or my Calculated one with Linear Allocation (column 3), credit went to the first page:

So, the Calculated Metric allocation did NOT overwrite my eVar setting of “Original”.

So how do I replicate my Linear eVar report?

To get back that Linear Allocation magic, I would create a new Calculated Metric, but I would specify “Linear Allocation” for it in the Calculated Metric Definitions. Then I can see that linear metric applied to that eVar (the original metric in blue, the new calculated one with linear allocation in purple):

Note that it’s 40-20-20-20, rather than 25-25-25-25. I’ll admit, this isn’t what I expected and makes me want to do more testing. I suspect that it’s looking at my FIVE beacons (four page views, one success event) and giving that Page D double credit- one for its page view beacon, and one for the success event beacon (even though it wasn’t set on that beacon, it WAS still persisting). So it isn’t perfectly replicating my R&A version of the report, but it is helping me spread credit out between my four values.
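If that suspicion is right, the arithmetic is easy to check. This is a worked sketch of my assumption, not confirmed Workspace behavior:

```javascript
// Worked arithmetic for the 40-20-20-20 hypothesis (my assumption about
// what Workspace is doing, not documented behavior): linear allocation
// splits the metric evenly across all FIVE beacons, and "Page D" is the
// eVar value on both its own page view and the persisting s.tl beacon.
const beacons = ['Page A', 'Page B', 'Page C', 'Page D', 'Page D' /* s.tl */];

const credit = {};
for (const value of beacons) {
  credit[value] = (credit[value] || 0) + 1 / beacons.length; // 1/5 per beacon
}

console.log(credit);
// { 'Page A': 0.2, 'Page B': 0.2, 'Page C': 0.2, 'Page D': 0.4 }
```

Five equal shares of credit, two of which land on “Page D”, reproduces the 40-20-20-20 split exactly.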

And my participation prop?

Similarly, with the prop, when I bring in the new “Linear Allocation” calculated metric I just set up for my eVar (in blue), I now see it behave like participation for my Newsletter Signup metric, unlike the original non-calculated metric (in green):

…but those Page View numbers look just like linear allocation in an eVar would (2.08, 1.08, .58, .25), not the nice clean numbers (4, 3, 2, 1) I’d get for a prop with participation. At this point, I still don’t have my Content Velocity prop report, but I’m getting closer.

So how do I get my Content Velocity?

Analysis Workspace has a “Page Velocity” Calculated metric built into its Content Consumption template, which reports the same data as my Content Velocity (participation-enabled) prop did in Reports & Analytics.

If I want to recreate this calculated metric for myself, I use the formula “Page Views (with Visit Participation)/Page Views”:

Though my friend Jim Kultgen suggested a metric he prefers:

((Page Views 'Visit Participation')/(Visits))-1

This shows you how a page contributed to later page views, discounting how it contributed to itself (because obviously it did that much- every page does), and looking at visits to that page (so repeat content views don’t count for much).
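Using the single A→B→C→D visit from earlier, the two formulas work out like this (a worked sketch; the counts assume one visit with one view of each page, and the helper names are mine):

```javascript
// Worked example of the two velocity formulas for the A→B→C→D visit.
// With visit participation, Page A gets credit for 4 page views (its own
// plus everything after it), Page B for 3, and so on.
const pages = [
  { name: 'Page A', participationPV: 4, pageViews: 1, visits: 1 },
  { name: 'Page B', participationPV: 3, pageViews: 1, visits: 1 },
  { name: 'Page C', participationPV: 2, pageViews: 1, visits: 1 },
  { name: 'Page D', participationPV: 1, pageViews: 1, visits: 1 },
];

// Template-style Page Velocity: participation page views / page views
const pageVelocity = p => p.participationPV / p.pageViews;

// Jim's variant: (participation page views / visits) - 1, which discounts
// the page's credit for its own view
const velocityMinusSelf = p => p.participationPV / p.visits - 1;

console.log(pages.map(p => [p.name, pageVelocity(p), velocityMinusSelf(p)]));
// [['Page A', 4, 3], ['Page B', 3, 2], ['Page C', 2, 1], ['Page D', 1, 0]]
```

Note how the last page in the visit scores 0 under Jim’s variant: it contributed to nothing beyond itself.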

These two calculated metrics would show in an AW report like this:


If I use Analysis Workspace exclusively, I may no longer need to enable participation on metrics or props- I could just build a Calculated Metric off of existing metrics, and change their allocation accordingly, and that would work the same with either my eVars or my Props.

Knowing a few of these quirks and implications, I can see a future with simpler variable maps (no more need for multiple eVars receiving the same values but with different allocation settings) and the ability to change allocation without tweaking the original data set (my “Newsletter Signups” metric retains its original reporting abilities, AND I can build as many Calculated Metrics off of it as I want). I’m excited to see how Adobe will keep building more power/flexibility into Workspace!