Cross-post from 33 Sticks: Setting up Adobe Analytics for GDPR

I recently posted on the 33 Sticks blog; I figured I’d copy the post here for posterity’s sake ;). 

There is so much documentation out there for Adobe Analytics and GDPR, it’s hard to see how it all fits together (though I do feel like Adobe’s documentation on the GDPR workflow is a good place to start). Note, I am NOT claiming to be an expert on this- I’ll defer to Adobe staff for their expertise. And I am NOT offering advice on what/how to regulate- I’ll defer to your legal/privacy team for that. But since I just had to muddle through all this, and learned a lot in the process, I figured I’d share my learnings and hopefully help others who are also muddling through.

I’ve found that in general, when folks are talking about changes in Adobe Analytics to account for GDPR, they’re talking about one of three things:

  1. Obfuscating/removing User IP addresses
  2. Adobe Data Retention Settings
  3. Client Opt-out

Obfuscating/Removing IP Addresses

This is pretty straightforward, though the documentation is a bit tricky to find. This is simply a setting you can set in the Admin Console of Adobe Analytics within General Account Settings for each Report Suite:

Adobe’s General Settings Documentation has good info on these settings. To me, the important take-aways here are:

  • Replace the last octet of IP address with 0 is basically like taking the street number off of my house’s address- you may still be able to know my general location, but you no longer have the specifics. This change applies BEFORE data is processed, meaning it WILL affect Adobe’s ability to do Bot/IP Filtering, might affect VISTA rules, and will make it so Adobe’s Geo-segmentation will have less info to work with and will therefore be at least a little less accurate.
  • IP Obfuscation affects what analysts/admins can view of the IP address, like in Data Warehouse. You can choose to leave the IP address as-is, to obfuscate it so it becomes a unique string that can’t be used to identify the user, or to replace it with “x.x.x.x” (which is the default option for EMEA suites going forward). The obfuscation or deletion happens further along in data processing, after VISTA rules and Bot/IP filtering.
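For intuition, the last-octet setting behaves like this little sketch (my own illustration of the effect; it is not Adobe’s actual code, and Adobe applies it server-side before processing):

```javascript
// Illustrative only: what "replace the last octet of IP address with 0" does
// to an IPv4 address. Adobe applies this before data processing.
function zeroLastOctet(ip) {
  var parts = ip.split(".");
  if (parts.length !== 4) return ip; // leave non-IPv4 values untouched
  parts[3] = "0";
  return parts.join(".");
}

// Geo lookups can still see the network, but not the specific host:
zeroLastOctet("203.0.113.42"); // "203.0.113.0"
```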

Adobe Retention Settings

After May 25, 2018, Adobe may start deleting data older than 25 months, unless you specifically work with your Adobe Account reps to extend this to up to 37 months (at a cost). Unlike Google Analytics (which will keep standard reports but just delete user/event data), Adobe truly is just deleting all data older than your retention window. When thinking about this, I’d encourage you to consider:

  1. the rareness of a user who hasn’t reset their cookies/changed devices/changed browsers in over 2 years
  2. if your site and/or implementation hasn’t significantly changed in 2+ years, you may have bigger issues than data retention

Basically, if you’re heavily using data that is over two years old, I’m fairly certain that you’re already not looking at data that could be compared as apples-to-apples with your current site/implementation.

You can view your current data retention by going to the Data Governance interface mentioned later in this post (note, my Report Suites say anywhere from 37 months to 121 months, even though I have definitely not worked to extend it beyond 25 months- I suspect that since I have not explicitly extended it, I can’t count on it staying this way):

Client Opt-Out

This is definitely the most involved piece of GDPR compliance. Again, Adobe’s documentation on the GDPR Workflow has some good information, but here is my take on what you need to do (assuming you are already on the Experience Cloud):

Label what data needs to be “governed”

Here, on a per-Report-Suite basis, I can go through all my dimensions and metrics and flag what things should be affected by data governance. Many of my dimensions and metrics don’t NEED to be governed- for instance, browser type can probably just be left alone (Disclaimer: seriously, talk to your legal team about what to govern). Other things, like geo-location, Adobe may have automatically already applied appropriate labels to, which you just need to review/confirm:


But my own organization’s policies may dictate that I be even more stringent and also label things like US States, which Adobe didn’t auto-apply a label to. The more likely scenario is that I need to pop open the subtle drop-down menu that says “Standard Dimensions” and go to my custom Events and Dimensions so I can find my eVar that captures User ID and label it so Adobe knows how to govern it:

The labels are, unfortunately, not super straightforward, but basically, these are your options for each dimension/metric:

Adobe will use these labels to decide what to do when it receives a request from you about a user access/deletion.

Set Up Your Privacy Portal for Capturing Adobe ID Requests

Before Adobe can “govern” anything, you need to give users a way of opting out of tracking. This means setting up a Privacy Portal on your site, and using it as a means of collecting information about who is requesting to access their data or opt out. Adobe has provided some tools to help find out about the WHO and WHAT, but then it’s up to your Data Regulator (whoever in your org is assigned to do this stuff) to pass that information along to Adobe.

1. The User Visits the Privacy Portal

adobePrivacy.js (or the Adobe Experience Cloud Privacy Launch extension) can put all the tracking identifiers we have for the current user into a JSON object.

Our user might request to merely view what data is being kept on him, in which case, he’ll have to wait- adobePrivacy.js can show us his IDs, but not much more than that. But I could at least show him the identifiers if I want. He may request to delete all past data (and/or get a copy of what was deleted). For that, I need to take that JSON object from adobePrivacy.js and pass it along to whatever mechanisms my Org has in place to organize data governance requests with the Adobe GDPR API.

For example-driven learners like me, I have an extremely unattractive example page showing how to use adobePrivacy.js.
This is what the “retrieve” response might look like:

[
 {
  "company": "adobe",
  "namespace": "visitorId",
  "type": "analytics",
  "name": "s_fid",
  "description": "Fallback Visitor ID",
  "value": "64F04470FAKE04E9-1DADD8FAKE65B7C2"
 },
 {
  "company": "adobe",
  "namespace": "CORE",
  "namespaceId": 0,
  "type": "standard",
  "name": "AAM UUID",
  "description": "Adobe Audience Manager UUID",
  "value": "610212449467061254000504ALSOFAKE"
 },
 {
  "company": "adobe",
  "namespace": "ECID",
  "namespaceId": 4,
  "type": "standard",
  "name": "Experience Cloud ID",
  "description": "This is the ID generated by Visitor and set in 1st party cookie.",
  "value": "6080944537973STILLFAKE359908301249"
 }
]

2. I Submit the Request Through the GDPR API/API Portal

I can use either the GDPR UI Portal (which I can get to from my Adobe Experience Cloud Admin Console) or the GDPR API (after I’ve set up an adobe.io integration- see Appendix on this post).

Here, I can take the JSON object I got from my portal (shown to the right in blue), batch it up with other users’ info (if desired), and let Adobe know who has made an access/delete request. Requests take 1-2 weeks. For access requests, you get a CSV that returns the status of your requests.
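To glue the two pieces together, here’s a hedged sketch of wrapping the identity array from adobePrivacy.js into the request body shown below (`buildGdprRequest` is my own helper, not part of any Adobe SDK; the field names mirror the sample payloads in this post):

```javascript
// Hypothetical helper: wrap the identity array from adobePrivacy.js into a
// GDPR API request body. Field names mirror the sample JSON in this post.
function buildGdprRequest(imsOrgId, analyticsCompany, ticketId, actions, userIDs) {
  return {
    companyContexts: [
      { namespace: "imsOrgID", value: imsOrgId },
      { namespace: "analytics", value: analyticsCompany }
    ],
    users: [
      { key: ticketId, action: actions, userIDs: userIDs }
    ],
    expandIds: true
  };
}

// Example: one user who asked for both access and deletion.
var body = buildGdprRequest(
  "DCF77919596885950A495D3E@AdobeOrg",
  "33stickssandbox",
  "GDPR-1234",
  ["access", "delete"],
  [{ company: "adobe", namespace: "ECID", namespaceId: 4, type: "standard",
     name: "Experience Cloud ID", value: "6080944537973STILLFAKE359908301249" }]
);
```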

I happen to use Postman for my request, which is a handy UI for API requests. This is what my request might look like:

POST API request to https://platform.adobe.io/data/privacy/gdpr/
Headers:

x-gw-ims-org-id : DCF779195968NOTREAL@AdobeOrg
x-api-key: 5a7105dNotARealAPIKeyc735355
Authorization: Bearer eyJ4NXUiOiJpbXNfbmExLWtleS0xLmNlciIsImFsZyI6IlJTMjU2In0.eyJpZCI6IjE1Mjc2MTI3MjQwNTdfYjRkNjg4YTUtOThhMi00MzM2LWIwNjgtNDkwYjYzZThiMTIThisIsntARealTokenI6IjVhNzEwNWQ0YmNiMjQwZjQ4NDBmZmNmYTBjNzM1MzU1IiwidXNlcl9pZCI6IkQ2MjgzNjJDNUFGQzU1REQwQTQ5NUMxMEB0ZWNoYWNjdC5hZG9iZS5jb20iLCJ0eXBlIjoiYWNjZXNzX3Rva2VuIiwiYXMiOiJpbXMtbmExIiwiZmciOiJTTzZOSUY1UEZMTjdDSEFPQUFBQUFBQUFRND09PT09PSIsIm1vaSI6IjU5MzFhZmM5IiwiYyI6IlZFTE5iN1JHcEhhN0h5dkNYSi9SNFE9PSIsImV4cGlyZXNfaW4iOiI4NjQwMDAwMCIsInNjb3BlIjoib3BlbmlkLEFkb2JlSUQscmVhZF9vcmdhbml6YXRpb25zLGFkZGl0aW9uYWxfaW5mby5wcm9qZWN0ZWRFakeFakeFake0Q29udGV4dCIsImNyZWF0ZWRfYXQiOiIxNTI3NjEyNzI0MDU3In0.WBPyKnis4BN1sAmFFSCM1Lazg51z2rnuaniZYPcATOSscfVOB-6L-yWvo1kTjfxxMVvzBLLr9H6pNr2ZzA8PzUDbcYjzzjRvmSqVEII3vW0KFTmG5cO5fmi8j0e662WXg0cp4hUhOhr0MvGa5vRPXBKr7NmtaU0d5_bsKs_5AJBfDsUCnJ5ZcGnK_8DFKb9VIqmxFdLl_dQzKl2dMaEqsK-98cUTT32Th0nC5rQ96-N8TsuYD2fmqzSCiOCQRhXQeQ1U97UvlYOobgKTAF41WDt3gsa786ouV668YZN9-J3tVaejGosEUcHYTKpRKmpKS_jwElfA0ptNV3PCS-aBNg
Content-Type: application/json

Body (JSON):

{
 "companyContexts": [
  {
   "namespace": "imsOrgID",
   "value": "DCF77919596885950A495D3E@AdobeOrg"
  },
  {
   "namespace": "analytics",
   "value":"33stickssandbox"
  }
 ],
 "users": [
  {
   "key": "GDPR-1234",
   "action": ["access","delete"],
   "userIDs": [
    {
     "company": "adobe",
     "namespace": "visitorId",
     "type": "analytics",
     "name": "s_fid",
     "description": "Fallback Visitor ID",
     "value": "64F04470FAKE04E9-1DADD8FAKE65B7C2"
    },
    {
     "company": "adobe",
     "namespace": "CORE",
     "namespaceId": 0,
     "type": "standard",
     "name": "AAM UUID",
     "description": "Adobe Audience Manager UUID",
     "value": "610212449467061254000504ALSOFAKE"
    },
    {
     "company": "adobe",
     "namespace": "ECID",
     "namespaceId": 4,
     "type": "standard",
     "name": "Experience Cloud ID",
     "description": "This is the ID generated by Visitor and set in 1st party cookie.",
     "value": "6080944537973STILLFAKE359908301249"
    }
   ]
  }
 ],
 "expandIds": true
}

3. Adobe Acts Based on Data Governance Labels

Adobe sees a request to access/delete the data for the IDs submitted above and sees what data we have for that user. Let’s look at three dimensions and their settings for an example:

  • If we have data for that user in the Domains dimension, it will see that that data has a data governance label of “ACC-PERSON” which, according to the tooltip, means it “will never be returned for a GDPR access request, unless an ID-PERSON label is applied on a variable in this report suite”. I am keeping track of an ID for this user in one of my eVars, so the user’s access request will show what Adobe knows their domain to be.
  • Entry Page doesn’t have any data governance labels applied, so the Entry Page data for this user is left alone.
  • Entry Page Original has both a “DEL-DEVICE” and a “DEL-PERSON” label on it, meaning Entry Page Original data for this user will be anonymized.

Next Steps

I’ve submitted a few user access/deletion requests so I can see how it affects the data and what the access report looks like, so I’ll have a follow up post in a few weeks with my findings.

Appendix I: Passing along my own Identifications for Users

If I have an eVar (or prop) that I use to identify users (for example, capturing a hashed user ID), then in my data governance labels, I would check the “ID-PERSON” radio button.

Then I need to specify which NAMESPACE I’m going to keep that value in for my API requests. Basically, my API JSON objects already have the IDs that Adobe sets and knows about:

{
 "company": "adobe",
 "namespace": "ECID",
 "namespaceId": 0,
 "type": "standard",
 "name": "Experience Cloud ID",
 "description": "This is the ID generated by Visitor and set in 1st party cookie.",
 "value": "6080944537973STILLFAKE359908301249"
}

So now in my API requests I can add in the IDs that I have for that user:

{
 "namespace": "myuserid",
 "value": "malReynolds1234",
 "type": "analytics",
 "isDeletedClientSide": false
}

Then Adobe’s Data Governance tools can make the connection that IDs sent to the “myuserid” namespace in my API requests correspond to the IDs in my custom dimension that I’m labelling as “ID-PERSON”.
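A minimal sketch of appending that custom ID to the userIDs array before submitting the request (the `myuserid` namespace is the example from above; `addCustomUserId` is my own helper, not an Adobe API):

```javascript
// Hypothetical helper: append a custom-namespace ID (the eVar labelled
// ID-PERSON in data governance) to the IDs adobePrivacy.js already collected.
function addCustomUserId(userIDs, namespace, value) {
  return userIDs.concat([{
    namespace: namespace,
    value: value,
    type: "analytics",
    isDeletedClientSide: false
  }]);
}

var ids = addCustomUserId(
  [{ company: "adobe", namespace: "ECID", value: "6080944537973STILLFAKE359908301249" }],
  "myuserid",
  "malReynolds1234"
);
```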

Appendix II: Setting Yourself Up for the API

So, that all seems simple enough, right (ha!)? For me, one of the trickier parts of getting this all set up was setting myself up to use the GDPR API through an Adobe.io integration. I had an advantage because I’ve used a similar integration for Adobe Launch Extensions, but even then for the GDPR API I had to have at least one support ticket (first through Adobe Client Care, then through the adobe.io support team- turns out the ever-evolving documentation didn’t have the right endpoint for me to use yet, but that has since been fixed.)

Pulling largely from the Adobe.io Experience Cloud and GDPR whitepaper, here are the steps I took:

    1. You will need to generate a public and private key. I find the easiest way to do this is to open up a Terminal (aka Command Prompt), navigate to a sensible folder (eg, “cd analytics/gdpr”) and type in the following:
      openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout private.key -out certificate_pub.crt

      It will prompt you to fill in some information about yourself and your org- complete the prompts, and you should now have two files in your folder: “certificate_pub.crt” and “private.key”. You’ll use these in a moment.

    2. If you don’t already have one, you’ll need to create an adobe.io account (with the same email you use for the experience cloud). Sign in to the adobe.io console.
    3. Create a new integration. On the second screen, select “Access an API”. On the third screen, select the service “GDPR API”.
    4. On the final screen, give it a name (like “GDPR API for Acme, Inc”) and description. Take the “certificate_pub.crt” you created in step 1 and upload it to the “Public keys certificates” field. Click “Create Integration” then “Continue to Integration Details”.
    5. On the Integration Details screen, note your Organization ID (eg “DCF7791959688FAKEID495D3E@AdobeOrg”)- this should match your Experience Cloud Org ID for your company. You’ll need this for the “x-gw-ims-org-id” field in your API Request Headers.
    6. Also on the Integration Details Screen, note your API Key (Client ID) (eg, “765f21b62606FAKEapiKEYb3e656048a910e”). You’ll need this for the “x-api-key” field in your API Request Headers.
    7. On the Integration Details screen, click the “JWT” tab. It will have generated a JWT that you can basically ignore. Open the “private.key” file you created in step 1 in a text editor, copy the contents (including the “-----BEGIN PRIVATE KEY-----” and “-----END PRIVATE KEY-----” lines) and paste into the “Paste Private Key” field.
    8. Copy the “Sample CURL Command” value and paste it into your Terminal/Command Prompt and hit enter. This should return something like this:
      {
       "Token_type":"bearer",
       "access_token":"eyJ4NXUiOiJpbXNfbmExLWtleS0xLmNlciIsImFsZyI6IlJTMjU2In0.eyJpZCI6IjE1Mjc2MTkzNjUzMzBfODYzOGM5NTYtOTM4My00ZTk5LTg0OTYtYz-hmYTM0OGQ5NjQyX3VlMSIsImNsaWVudF9pZCI6Ijc2NWYyMWI2MjYwNjQyNTlhMmIzZTY1NjA0OGE5MTBFAKETOKENDONOTCOPYME0N0I1NUIwRDlEQjQwQTQ5NUUyQ0B0ZWNoY-WNjdC5hZG9iZS5jb20iLCJ0eXBlIjoiYWNjZXNzX3Rva2VuIiwiYXMiOiJpbXMtbmExIiwiZmciOiJTTzZVRUY1UEhMTjcySEFPQUFBQUFBQUFZRT09PT09PSIsIm1vaSI6IjZh-NDBlMTg4IiwiYyI6IkVVVVgwOVdML1VKbE9pY2Y2Tk5NOTAREALTOKENNfaW4iOiI4NjQwMDAwMCIsInNjb3BlIjoib3BlbmlkLEFkb2JlSUQscmVhZF9vcmdhbml6YXRpb25zL-GFkZGl0aW9uYWxfaW5mby5wcm9qZWN0ZWRQcm9kdWN0Q29udGV4dCIsImNyZWF0ZWRfYXQiOiIxNTI3NjE5MzY1MzMwIn0.a4sUwZjuJyU_g3STYAnK5uQrDLj2AOeRlj3GmTuY-MeK5MrWnXFg3MTdLxgz1cbdkJiV42sAxjoWUtsTfANa1wnIIPimHpVvgypJBJ4VcaQk7h0iio1asPxmeUq3NUrVM7WjnVwqwc6fHlou2OFGbkiL_OulM7D4Yj-kzI68GAV0wJbi-D38rWGlI_nPpq_ICR_0WU3w4l4KPfqk3B6gkaFDedVY6fLpqTQLfad6NQI7BujC1ljsV1RuQnaQ6o59WR6d20IRNVF0N9P2j2SnGasjayQ9uoDSuDp4r-N1I40w6ExOBeGzRGLg-KFxFkTgOhqE1XnqKJfBzuJ9QWKHg",
       "expires_in":86399993
      }

      The access_token value is your API Authorization Token for the next 24 hours. After that, you need to repeat steps 7 and 8 to generate a new temporary token.
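Since the token only lasts about a day (note “expires_in” is in milliseconds in the sample above), a tiny sketch of deciding whether a cached token is still usable (my own helper; the safety margin is an arbitrary choice of mine):

```javascript
// Hypothetical helper: decide whether a cached access token needs refreshing.
// expiresInMs comes from the "expires_in" field (milliseconds in the sample).
function tokenIsStale(issuedAtMs, expiresInMs, nowMs, marginMs) {
  // Refresh a little early (marginMs) so requests in flight don't fail mid-call.
  return nowMs >= issuedAtMs + expiresInMs - marginMs;
}

var issued = 0;       // pretend the token was minted at t=0
var dayMs = 86400000; // ~24h, matching the sample expires_in
tokenIsStale(issued, dayMs, dayMs - 1, 0);     // false: still valid
tokenIsStale(issued, dayMs, dayMs - 1, 60000); // true: inside the safety margin
```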

An entirely too honest/frank look at lessons learned from independent consulting

I’ve been so happy the last 6 weeks or so, working at 33 Sticks. Now that the dust has settled, I want to document some of the lessons I learned from my mere 5 months of independent consulting– it’s been a very enlightening experience, even though I’ve been a salaried-but-hourly-billable analytics implementation consultant for 10 of the last 12 years.

Here are a few other things being an independent consultant (taking primarily short-term work) has taught me:

  • The medical benefits system in the US is absolutely awful if you’re self-employed. Our only option was the exchange markets (aka obamacare)- only two insurance providers were available and one would require ditching all of our current healthcare providers. It ended up being about $1600/month to insure my relatively healthy family of 4, and that was a fairly mediocre plan. This doesn’t include the extra money/hassle we had to go through for our medications.
  • Setting up an LLC was really easy. Setting up a business bank account so I could sign checks made out to my LLC took a bit more effort, but it wasn’t bad (though it did catch me off guard- I should have known that that would be needed).
  • I haven’t had to do self-employment taxes yet, but I chose a weird year to start, what with Trump changing the tax plan (the IRS took a while to get their “how much income to withhold” calculator working for the new tax plan).
  • There are a lot of free/cheap tools for single-person companies out there- I use Asana (free), Everhour (free), Zoho Invoicing (it’s free to a point, and I preferred it to Everhour’s invoicing options), and Google Business ($5/month- warning, the google business sync utility for Google Drive is even worse than the one for personal Google Drive accounts).
  • There is such a gap of implementation expertise in the digital analytics industry, there is no shortage of work out there to do. Work wasn’t hard to find. Finding the RIGHT work is the harder part- so many organizations are so short-handed that they look to outside consultants to fill some of those gaps, but it can be really hard to provide value in some of those situations. If you’re after a paycheck, there is plenty of that to go around… but if you’re fulfillment-needy like me, and need to know you are making a difference and providing value, you have to be a bit pickier about what work you take on.
  • Becoming truly profitable, and having the type of projects I want to be doing, would take time. Companies looking for a full digital transformation are far less likely to come to a single independent consultant (though for many companies, a digital transformation is needed before the data could be really valuable).
  • Financially, it pays to remove the middle man, but not as much as you’d think. I was working in a wide range of rates, depending on the project, but $160-$225/hr seems a fairly normal rate for folks with my background. Of course, that doesn’t count what I spent on administration, branding/marketing, paperwork, etc… not to mention the lack of benefits (medical/dental/vision, time off, 401k). In the end, to keep the same income I was used to, I needed to do about two thirds the billable work (and had to deal with the unpredictable flow of money).
  • Sales/procurement processes are always slow. It doesn’t matter if the client is eager to start next Monday, and you’re ready to start next Monday- the client’s org will slow things down by at least 2 weeks- and even that is only if the client has put a fire under them.
  • Payment comes slowly. If a contract is “net-45” (ie, the client has 45 days to pay after being invoiced), it really means “the check will hopefully be in the mail by the 45th day”. I didn’t get my first check until 2.5 months into working, and I will continue getting checks until probably June for work wrapping up early in April.
  • Planning vacations or major future expenses is really hard. My husband and I are not exactly financial risk-takers, and since we never knew what checks would come in when, or when projects would start/end, it was very difficult for us to commit to a vacation a few months out.
  • Scoping projects and forecasting is hard. I’ve never been good at scoping. At Adobe, I’d be asked for my opinion on how long something would take to do, and I’d say “uh, 20 hours?” Then I’d see the final estimate that went to the client was for 120 hours, not 20. Turns out, though, I really am fairly efficient (heaven knows I’ve done this long enough), and rarely came even close to the amount of time the client was prepared to pay me for. The hourly billing model penalizes efficient work, and isn’t tied to value provided. On paper, I had 50+ hours of work I could do each week. In practice, unless I fudged the numbers (I didn’t), I was able to fill all of my client’s needs and then some, in maybe half of the expected time.
    • This confirms something the whole industry should keep in mind: you might pay more for senior/principal consultants, but odds are they will get through work much faster than their less-experienced peers, so you may save on hours billed. That is, if you are stuck on that pesky hourly billing model (see my thoughts on that model on the 33 sticks blog).
  • Even with that added flexibility, there are still not enough hours in the day.  I didn’t get even close to having the time to do all the productive things I wanted to do.
  • When you switch from salaried to hourly, and you are in charge of your own schedule, you start to see opportunity cost everywhere. “I slept 7 hours last night?! If I had been working, I could have made $1470 instead!!!” I’ve always had to do weekly timesheets and keep up my utilization rates (I have two awards on my shelf for being one of the most utilized consultants in Adobe consulting), but it had never made such a tangible difference to my family’s well-being.
  • I’d miss being around my peers. I had my clients, sure, but if I accomplished something I was proud of, I’d rush downstairs to tell the only people around to hear about it: my family. They’ve long since learned not to ask what I was excited about; they just say “yay, you did the thing!” It’s not quite the same as sharing with a peer. Thank heavens for twitter and #measure slack, so I can still bounce ideas off of peers and interact with humans who aren’t related to me.
  • Independence is hard for the anxiety-ridden (and I do have plenty of anxiety). I like to think I am a fairly independent/low-maintenance employee, and hope my previous employers would agree. But having absolutely no oversight was different. There was no one to tell me that the thing I was focusing on was indeed the best use of my time; no one to tell me that my work was stellar, satisfactory or still needed improvement; no one to justify things to if it didn’t go the way I hoped.

Conclusion

I had two main reasons for going independent:

  1. Freeing myself up so that if/when 33 Sticks was ready, I’d be available. Seriously, we’ve been trying to make this happen for years, and the timing was just never right. I wanted to make sure I didn’t miss a chance again.
  2. Having the flexibility/time to work on product ideas.

On both fronts, I’d say: mission accomplished! Clearly, the 33 Sticks thing is happening. And while on the product side, I haven’t released anything new since December, I was able to learn more server-side skills, so I could prototype out a few new product ideas. So progress has been made, even if I don’t have anything to publicly show for it.

But mostly, it was a very eye-opening experience: it’s nice to know now that it is an option for me, but that it probably won’t ever be my ideal working scenario. I’m very glad I had this short window of a new experience.

Why I’m so excited about joining 33 Sticks

(Cross posted from the 33 sticks blog)

33 Sticks formed shortly after I had to part ways with Hila and Jason about 5 years ago. Since the beginning, I’ve followed their story and cheered them on, excited about what they were accomplishing and hoping I’d get to be a part of it someday. Unfortunately the timing never lined up- they’d finally be ready to add someone like me to the team, but I’d have just started a commitment elsewhere. In October when I went independent, a large part of that decision was that I wanted to be free and ready when 33 Sticks was ready, and I’m thrilled that things finally lined up just in time for Summit.

I’ve either been employed or done contract work for a dozen different agencies since 2006. After this much time as an implementation consultant, I’ll admit I’m experiencing some burnout. Some folks have already heard me swear off consulting- it can just be so hard to really provide value. So why is 33 Sticks an exception?

The people

I’ve worked with Hila, Jason, and Jon before and know how awesome they are, and I can already see that I have much to learn from Jim Driscoll. There isn’t a member of the team that isn’t a principal-level consultant with years of experience with all different levels of projects. There is no offshore team we’ve committed to delegate work to. Every single member of this team is the type of person to go over the top to see clients succeed, yet they all have a rich life outside of work too. It’s a rare and incredible thing, to join a team where you already know and respect each of your coworkers, and genuinely enjoy spending time with each of them.

The model

33 Sticks contracts aren’t based on hours billed, but rather on value provided. This is a difficult model to get to work- you have to really trust the consultants and the clients to manage scope and be on the same page. It probably wouldn’t work at larger agencies, nor would it work for staff augmentation projects. It only works if the consultants can really build a relationship with the client, and have the experience to focus and drive engagements towards whatever will provide the most value.

In recent years, as I’ve been more exclusively on large enterprise projects, I’ve seen the consulting industry struggle more and more with keeping a cohesive vision for a project. You may have a dozen consultants spread between optimization, implementation, analysis, project management… then on the client-side, you may also have over a dozen folks on different parts of the project. It can feel like there are a lot of people in the car but no one is driving. With the 33 Sticks model, we can work with clients to get that project-wide focus and build a cohesive data ecosystem. You can’t truly consult and provide strategic guidance if you are just taking orders from whoever signed the contract. 33 Sticks can partner with our clients and use the experiences we have from touching hundreds of projects over the years to offer unique guidance, helping focus the engagement on what will provide the most long-term value.

The goal

I feel like Jason and Hila’s goals for 33 Sticks wouldn’t work for everyone, but they align with my own goals well. We aren’t going to take over the world. The goal is not to sell a lot of contracts, grow a lot of staff, influence a lot of projects, and build up wealth. There is no “exit strategy”. Instead, the goal is to do things that provide value, and do those things well. Which isn’t to say there isn’t a financial goal, but even that is much more focused on quality of life, having flexibility not only with how we spend our non-work hours, but also being able to do the type of work we want. For me, that means continuing to work remotely from Atlanta with a flexible schedule, and also have time to keep working on the product ideas and documentation I’m passionate about.

I so appreciate all the well-wishes and congratulations- hopefully after reading this, folks can fully understand why I am so excited about this opportunity. And while we’re not looking to take over the world, I do hope I can help 33 Sticks spread their value even further.


New industry tool: Adobe Configuration Export

An industry friend and former coworker, Gene Jones, made me aware of an awesome new tool he’s created- a tool that exports your Report Suite info into an excel file. It can compare the variable settings of multiple report suites in one tab, then creates a tab with a deeper look at all the settings for each report suite.

This is similar to the very handy Observepoint SDR Builder– I’ll freely admit I’m likely to use both in the future. Both (free) tools show you your settings and allow for report suite comparison. The Observepoint SDR Builder uses a google sheet extension and has a little more setup involved (partially because if you’re an Observepoint customer you can expand its functionality) but it can allow you to manage your settings directly from the google sheet (communicating those changes back to the Adobe Admin Console).

But sometimes all you want is a simple export of current settings in a simple, local view, in which case the Adobe Configuration Export tool is very straightforward and simple to use.

And, it’s open source– the community can add to it and make use of it for whatever situations they dream up. I’m excited to see what features get added in the future (I see a “Grade Your Config” option that intrigues me). Nice work, Gene!

Adobe Launch’s Rule Ordering is a big deal for Single Page Apps

In November, I posted about some of the ways that Launch will make it easier to implement on Single Page Apps (SPAs), but I hinted that a few things were still lacking.
In mid-January, the Launch team announced a feature I’ve been eagerly awaiting: the ability to order your rules. With this ability, we finally have a clean and easy way to implement Adobe Analytics on a Single Page App.

The historical problem

As I mentioned in my previous post, one of the key problems we’ve seen in the past was that Event-Based Rules (EBRs) and Direct Call Rules (DCRs) can’t “stack”. Let me explain what I mean by that.

Not a single page app? Rule Stacking rocks!

For example, let’s say I have an internal search “null results” page, where the beacon that fires should include:

  • Global Variables, like “s.server should always be set to document.hostname”
  • Variables specific to the e-commerce/product side of my site with a common data layer structure (pageName should always be set to %Content ID: Page Name%)
  • Search Results variables (like my props/eVars for Search Term and Number of Search Results, and a custom event for Internal Searches)
  • Search Results when a filter is applied (like a listVar for Filter Applied and an event for User applied Search Filter)
  • Null Results Variables (another event for Null Internal Searches and a bit of logic to rewrite my Number of Search Results variable from “0” to “zero”, because searching in the reports for “0” would show me 10, 20, 30… whereas “zero” could easily show me my null results)

With a non-SPA, when a new page loads, DTM would run through all of my page load rules and see which had conditions that were matched by the current page. It would then set the variables from those rules, then AFTER all the rules were checked and variables were set, DTM would send the beacon, happily combining variables from potentially many rules.

All of those rules’ variables would combine into this beacon:

If you have a Page Load Rule-based implementation, this allows you to define your rules by their scope, and can really use the power of DTM to only apply code/logic when needed.
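Conceptually, that page-load “rule stacking” is just a merge of each matching rule’s variables into one beacon, something like this sketch (a plain object merge with made-up variable values; DTM’s internals are of course more involved):

```javascript
// Conceptual sketch of page-load "rule stacking": each matching rule
// contributes some variables, and one beacon carries the merged result.
function mergeRuleVariables(ruleVariableSets) {
  return ruleVariableSets.reduce(function (beacon, vars) {
    return Object.assign(beacon, vars); // later rules can overwrite earlier ones
  }, {});
}

var beacon = mergeRuleVariables([
  { server: "example.com" },                           // global rule (hypothetical values)
  { pageName: "search:null results" },                 // e-commerce section rule
  { eVar5: "blue widgets", events: "event10" },        // search results rule
  { eVar6: "zero" }                                    // null results rule
]);
```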

Single Page App? Not so much.

However, if I were in a Single Page App, I’d either be using a Direct Call Rule or an Event-Based Rule to determine a new page was viewed and fire a beacon. DCRs and EBRs have a 1:1 ratio with beacons fired- if a rule’s conditions were met, it would fire a beacon. So I would need to figure out a way to have my global variables fire on every beacon, and set site-section-specific and user-action-specific variables, for every user action tracked. This would either mean having a lot of DCRs and EBRs for all the possible combos of variables (meaning a lot of repeat effort in setting rules, and repeated code weight in the DTM library), or a single massive rule with a lot of custom code to figure out which user-action-specific variables to set:

Or leaving the Adobe Analytics tool interface altogether, and doing odd things in Third Party Tag blocks. I’ve seen it done, and it makes sad pandas sad.

The Answer: Launch

Launch does two important things that solve this:

  1. Rules that set Adobe Analytics Variables do not necessarily have to fire a beacon. I can tell my rule to just set variables, to fire a beacon, or to clear variables, or any combination of those options.
  2. I can now order my rules to be sure that the rule that fires my beacon goes AFTER all the rules that set my variables.

So I set up my 5 rules, same as before. All of my rules have differing conditions, and use two similar triggers: one set to fire on Page Bottom (if the user just navigated to my site or refreshes a page, loading a fresh new DOM) and one on Data Element Changed (for Single Page App “virtual page views”, looking at when the Page Name is updated in the Data Layer).

When I create those triggers, I can assign a number for that trigger’s Order:


One rule, my global rule, has those triggers set to fire at “50” (the default number, right in the middle of the recommended range of 1-100). The rule with this trigger not only sets my global variables, it also fires my beacon and then clears my variables:

I give most of my other rules an Order number of “25” (fairly arbitrary, but it leaves me flexibility to have other rules fire before or after as needed). One rule, my “Internal Search: Null Results” rule, gets the Order number “30”, because I want it to come AFTER the “Internal Search: Search Results” rule, since it needs to overwrite my Number of Search Results variable from “0” (which it got from the data layer) to “zero”.

This gives me a chance to set all the variables in my custom rules, and have my beacon and clearVars fire at the end in my global rule (the rule’s Order number is in the black circles):

I of course will need to be very careful about using my Order numbers consistently- I’m already thinking about how to fit this into existing documentation, like my SDR.

Conclusion

This doesn’t just impact Single Page Apps- even a traditional Page Load Rule implementation sometimes needs to make sure one rule fires after another, perhaps to overwrite the variables of another, or to check a variable another rule set (maybe I’m hard coding s.channel in one rule, and based on that value, want to fire another rule). I can even think of cases where this would be helpful for third party tags. This is a really powerful feature that should give a lot more control and flexibility to your tag management implementation.

Let me know if you think of new advantages, use cases, or potential “gotchas” for this feature!

Followup Post: Allocation in Analysis Workspace

I recently posted about some of the implications and use cases of using Linear Allocation (on eVars) and participation (props/events). In my research, I thought I had encountered a bug in Analysis Workspace. After all, for this flow:

         Page A     Page B     Page C     Page D     Newsletter Signup event (s.tl)
prop1    "Page A"   "Page B"   "Page C"   "Page D"   ""
eVar1    "Page A"   "Page B"   "Page C"   "Page D"   ""
events   "event1"   "event1"   "event1"   "event1"   "event2"

I saw this in Reports and Analytics (so far, so good):

But then in Analysis Workspace for that prop, trying to recreate the same report, I saw this, where the props were only getting credited for events that happened on their beacon (none got credit for the newsletter signup):

Basically, I lost that participation magic.

Similarly, for the eVar, I saw this report in Reports and Analytics:

And in Workspace, it behaved exactly like a “Most Recent” eVar:

Again, it lost that linear magic.

Calculated Metrics to the Rescue

With the help of some industry friends (thanks, Jim Kultgen at Kohler and Seth Burke at Adobe) I learned that this is not a bug, necessarily- it’s the future! Analysis Workspace has a different way of getting at that data (one that doesn’t require changing the backend settings for your variables and metrics).
In Analysis Workspace reports, allocation can be decided by a Calculated Metric, instead of the variable’s settings. In the calculated metric builder, you can specify an allocation by clicking the gear box next to the metric in the Calculated Metric Definition:

A Note On “Default” Allocation here

On further testing, in Analysis Workspace, it seems that eVars with the back-end settings of either “Most Recent” or “Linear” allocation are treated the same: both will act like “Most Recent” with a normal metric brought in, and both will act like “Linear” when you bring in a calculated metric where you specified Linear Allocation. One might say that if you use Analysis Workspace exclusively, you no longer need to ever set an eVar to “Linear”.

“Default” does still seem to defer to the eVar settings when it comes to Most Recent or Original (just not Linear). So in an eVar report where the eVar’s backend setting is “Original”, whether I used my “normal” Newsletter Signups event (column 2), or my Calculated one with Linear Allocation (column 3), credit went to the first page:

So, the Calculated Metric allocation did NOT overwrite my eVar setting of “Original”.

So how do I replicate my Linear eVar report?

To get back that Linear Allocation magic, I would create a new Calculated Metric, specifying “Linear Allocation” for it in the Calculated Metric Definitions. Then I can see that linear metric applied to that eVar (the original metric in blue, the new calculated one with linear allocation in purple):

Note that it’s 40-20-20-20, rather than 25-25-25-25. I’ll admit, this isn’t what I expected and makes me want to do more testing. I suspect that it’s looking at my FIVE beacons (four page views, one success event) and giving that Page D double credit- one for its page view beacon, and one for the success event beacon (even though it wasn’t set on that beacon, it WAS still persisting). So it isn’t perfectly replicating my R&A version of the report, but it is helping me spread credit out between my four values.

And my participation prop?

Similarly, with the prop, when I bring in the new “Linear Allocation” calculated metric I just set up for my eVar (in blue), I now see it behave like participation for my Newsletter Signup metric, unlike the original non-calculated metric (in green):

…but those Page View numbers look just like linear allocation in an eVar would (2.08, 1.08, .58, .25), not the nice clean numbers (4, 3, 2, 1) I’d get for a prop with participation. At this point, I still don’t have my Content Velocity prop report, but I’m getting closer.

So how do I get my Content Velocity?

Analysis Workspace has a “Page Velocity” Calculated metric built into its Content Consumption template, which reports the same data as my Content Velocity (participation-enabled) prop did in Reports & Analytics.

 If I want to recreate this calculated metric for myself, I use the formula “Page Views (with Visit Participation)/Page Views”:

Though my friend Jim Kultgen suggested a metric he prefers:

((Page Views 'Visit Participation')/(Visits))-1

This shows you how a page contributed to later page views, discounting how it contributed to itself (because obviously it did that much- every page does), and looking at visits to that page (so repeat content views don’t count for much).
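Using the four-page visit from earlier (participation Page Views of 4, 3, 2, 1; each page viewed once within a single visit), the two metrics work out like this. The numbers are assumptions carried over from that example, not real report data:

```javascript
// Page-level inputs from the hypothetical four-page visit.
// participationPV = "Page Views with Visit Participation"
const pages = [
  { name: "Page 1", participationPV: 4, pageViews: 1, visits: 1 },
  { name: "Page 2", participationPV: 3, pageViews: 1, visits: 1 },
  { name: "Page 3", participationPV: 2, pageViews: 1, visits: 1 },
  { name: "Page 4", participationPV: 1, pageViews: 1, visits: 1 },
];

// "Page Velocity" from the Content Consumption template:
const pageVelocity = pages.map(p => p.participationPV / p.pageViews); // 4, 3, 2, 1

// Jim's variant, discounting each page's contribution to itself:
const downstream = pages.map(p => p.participationPV / p.visits - 1);  // 3, 2, 1, 0
```

So Jim's version reads as “downstream page views driven per visit to this page”: the last page in the flow drops to 0 because it drove nothing after itself.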

These two calculated metrics would show in an AW report like this:

Conclusion

If I use Analysis Workspace exclusively, I may no longer need to enable participation on metrics or props- I could just build a Calculated Metric off of existing metrics, and change their allocation accordingly, and that would work the same with either my eVars or my Props.

Knowing a few of these quirks and implications, I can see a future with simpler variable maps (no more need for multiple eVars receiving the same values but with different allocation settings) and the ability to change allocation without tweaking the original data set (my “Newsletter Signups” metric retains its original reporting abilities, AND I can build as many Calculated Metrics off of it as I want). I’m excited to see how Adobe will keep building more power/flexibility into Workspace!

Participation and Linear Allocation in Adobe Analytics- behavior I expected, and some I did not

Despite clearly remembering learning about it my first week on the job at Omniture in 2006, I realized recently that I did not have a lot of confidence in what participation and linear allocation would do in certain situations in Adobe Analytics. So I put a good amount of effort into testing it to confirm my theories, and I figured I’d pass along what I discovered.

First, the Basics: eVar Allocation

You may already know this part, so feel free to skip this section if you do. Allocation is a setting for Conversion Variables (eVars) in Adobe Analytics, with three options:

Let’s take a simple example to show how this affects things. Let’s say a user visits my site with this flow:

Page A             Page B             Page C             Page D             Form Submit- Signup
s.eVar5="Page A"   s.eVar5="Page B"   s.eVar5="Page C"   s.eVar5="Page D"   s.events="event1"

Most Recent (Last)

Most eVars have the “defaultiest” allocation of “Most Recent (Last)”, meaning in an event1 report broken down by eVar5, “Page D” would get full credit for the event1 that happened, since it was the last value we saw before event1. So far, pretty simple.

Original Value (First)

But maybe I want to know which LANDING page contributed the most to my event1s (there are other ways of doing this, but for the sake of my example, I’m gonna stick with using allocation). In that case, I might have the allocation for that eVar set to “Original Value (First)” so then “Page A” would get full credit for this event1, since it was the first value we saw for that variable. If my eVar is set to expire on visit, then it’s still nice and straightforward. If it’s set to never expire, then the first value we ever saw for that user will always get credit for any of that user’s metrics. If it’s set to expire in two weeks, then we’ll see the first value that was passed within the last two weeks.

This setting is frequently used for Marketing Campaigns (it’s not uncommon to see s.campaign be used for “Most Recent Campaign in the last 30 days” and then another eVar capture the exact same values, but be set to “Original Campaign in the last 30 days”).

Linear Allocation

If I’m feeling a bit more egalitarian, and want to know ALL the values for an eVar that contributed to success events, I would choose linear allocation. In this scenario, all four values would split the credit for the one event, so they’d each get one fourth of the metric:

(Though it may not actually look like this in the report- by default it would round down to 0. But I’ll talk about decimals later on).
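The three allocation models boil down to simple arithmetic, which can be sketched like this. This is an illustrative model of the credit math described above, not Adobe's actual processing code:

```javascript
// Illustrative model: how each allocation setting splits credit for ONE
// success event across the eVar values seen before it.
function allocateCredit(values, allocation) {
  const credit = {};
  values.forEach(v => { credit[v] = 0; });
  if (allocation === "last") {
    credit[values[values.length - 1]] = 1;                     // Most Recent (Last)
  } else if (allocation === "first") {
    credit[values[0]] = 1;                                     // Original Value (First)
  } else if (allocation === "linear") {
    values.forEach(v => { credit[v] += 1 / values.length; });  // Linear
  }
  return credit;
}

const flow = ["Page A", "Page B", "Page C", "Page D"];
allocateCredit(flow, "last");   // Page D gets the full event
allocateCredit(flow, "first");  // Page A gets the full event
allocateCredit(flow, "linear"); // each page gets 0.25
```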

So, that’s allocation.

Then what is participation?

Participation is a setting you can apply to a prop, so that if you bring a Participation-enabled metric into the prop’s report, you can see which values were set at some point before that event took place. Repeat: to see participation you must have a prop that is set to “Display Participation Metrics”:

And the metric you want to see needs to have participation enabled (without this, in the older Reports and Analytics interface, that event won’t be able to be brought into the prop report):

Unlike linear allocation for an eVar, participation for a prop means all the values for that prop get full credit for an event that happened. So, given this flow:

Page A             Page B             Page C             Page D             Form Submit- Signup
s.prop1="Page A"   s.prop1="Page B"   s.prop1="Page C"   s.prop1="Page D"   s.events="event1"

You would see a report like this, because each value participated in the single instance of that event:
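The contrast with linear allocation can be sketched the same way: under participation, every prop value set before the event gets full credit rather than a share (again, an illustrative model of the behavior described, not Adobe's processing code):

```javascript
// Participation model: each prop value that participated gets FULL credit
// for the event, with no splitting.
function participationCredit(values) {
  const credit = {};
  values.forEach(v => { credit[v] = (credit[v] || 0) + 1; });
  return credit;
}

// All four pages each earn 1 full credit for the single event1:
participationCredit(["Page A", "Page B", "Page C", "Page D"]);
```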

New Learnings (for me): Content Velocity

One thing these settings can be used for is measuring content velocity: that is, how much a certain value contributed to more content views later on. For instance, if I have a content site, and I want to know how much one piece of content tends to lead to the reading of MORE content, I might use a participation-metric-enabled prop with a participation-enabled Page View custom event, or I might use an eVar with linear allocation against a Page View custom event (whether or not the event has participation enabled doesn’t matter for the eVar). For my test, I did both:

Page A              Page B              Page C              Page D
s.prop1="Page 1"    s.prop1="Page 2"    s.prop1="Page 3"    s.prop1="Page 4"
s.eVar1="Page 1"    s.eVar1="Page 2"    s.eVar1="Page 3"    s.eVar1="Page 4"
s.events="event1"   s.events="event1"   s.events="event1"   s.events="event1"

The prop

The prop version of this report would show me that Page 1 contributed to 4 views (its own, and 3 more “downstream”). Whereas Page 2 contributed to 3 (its own, and two more downstream), etc…

The eVar

Alternatively, the eVar would show me something pretty odd:

Those weird numbers don’t make sense on this small scale (how could 0 get 6.3%?), because it is rounding, and not showing me decimals. If I want to see the decimals, I can create a really simple calculated metric that brings in my custom Page View event (event1) and tells it to show decimals:

The report then makes a little more sense and shows us where the rounded numbers came from (and how Page 4, with “0” Page Views, got 6.3% of the credit), but may still seem mysterious:

Those are some odd numbers, right? Here’s the math:

 Value Credit Why?   Explanation
Page 1 2.08 1+0.5+0.33+0.25 It got full credit for its own view, then half the credit (shared with page 2) for the event on Page 2, then a third of the credit (shared with Page 2 and Page 3) on Page 3…
Page 2 1.08 0.5+0.33+0.25 It only got half credit for the event that took place on its page (shared with Page 1), then a third of the credit (shared with Page 1 and Page 3) on Page 3, etc…
Page 3 0.58 .33+.25 It only gets a third of the credit that took place on its page, and a quarter of the credit for the fourth page.
Page 4 0.25 0.25 The event that happened on this page is shared with all four pages.
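The table’s math can be reproduced directly: with the Page View event firing on every page, the event on page j is split linearly among the j values seen so far, so page i earns 1/j for every page j ≥ i. A quick sketch:

```javascript
// Linear-allocation credit for page i = sum of 1/j over pages j >= i,
// because the event on page j is shared among pages 1..j.
function velocityCredits(pageCount) {
  const credits = [];
  for (let i = 1; i <= pageCount; i++) {
    let credit = 0;
    for (let j = i; j <= pageCount; j++) credit += 1 / j;
    credits.push(credit);
  }
  return credits;
}

velocityCredits(4).map(c => c.toFixed(2)); // "2.08", "1.08", "0.58", "0.25"
```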

Crazy, right? I’m not going to tell you which approach an analyst should prefer, but as always, you should ask the question: “What will you DO with this information?”

What happens when multiple values appear in the same flow?

Let’s say the user does something like this, where they hit one value a couple page views in a row (Page B in this example), or they hit a value 2 separate times (Page A in this example):

         Page A     Page B     Page B (again)   Page C     Page D     Page A (again)   Conversion event
prop1    "Page A"   "Page B"   "Page B"         "Page C"   "Page D"   "Page A"
eVar1    "Page A"   "Page B"   "Page B"         "Page C"   "Page D"   "Page A"
events   "event1"   "event1"   "event1"         "event1"   "event1"   "event1"         "event2"

For the prop, it’s pretty straightforward. This will look like 6 event1s, where Page A gets credit for all 6, and Page D gets credit for just 2 (itself, and the Page A that came afterwards):

For the eVar, it gets a little more complicated (I added in a calculated metric so you can see the decimals). Page A (accessed twice at separate times) got double credit for the conversion (which I might have predicted), but Page B (accessed twice in a row) ALSO gets double credit for the conversion (which I didn’t predict, probably because I’m too used to thinking in terms of the CVP plugin):
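That observed behavior, where every time a value is set counts as its own instance and the instances split the conversion’s credit, can be sketched like this (my model of what I observed in testing, not documented Adobe behavior):

```javascript
// Credit is split across INSTANCES, so a value set twice earns double:
function linearCreditByInstance(instances) {
  const credit = {};
  instances.forEach(v => {
    credit[v] = (credit[v] || 0) + 1 / instances.length;
  });
  return credit;
}

// Page A and Page B each get 2/6 of the conversion; Page C and D get 1/6:
linearCreditByInstance(["Page A", "Page B", "Page B", "Page C", "Page D", "Page A"]);
```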

Caveats

A couple things to be aware of:

  • Settings for participation and allocation don’t apply retroactively- you can’t apply them to existing data. If you want to start using them, change your settings and you’ll see them applied to future data. However, changing these settings can affect how existing reports read, so be careful.
  • Analysis Workspace shows some unexpected behavior for both participation and allocation. I’ll have a followup post on that.

Conclusion

Both participation and linear allocation aren’t used often, but they can uniquely solve some reporting requirements and can provide a lot of insight, if you know how to read the data. I hope my experimentation and results here help make it clearer how you might be able to use and interpret data from these settings. Let me know if you have other use cases for using these settings, and how it has worked out for you!

Quick poll: What should I tackle next?

Each new year, I tend to dive into some new side project (this is how the Beacon Parser and the PocketSDR came about). I have quite a few things I want to tackle right now, and one main thing I’m slowly plugging away at, but in the meantime, I’m wondering what to prioritize. So, a poll:

Any other ideas (or desired improvements to existing tools)? Let me know in the comments.

New tool: PocketSDR Mobile App for Adobe Analytics

I mentioned in my previous post that one of the reasons I’m going “independent” is to have more time to work on products and pet projects. One of those ongoing projects of mine has been a mobile app you can use to keep your Adobe Analytics SDR/Variable Map easily accessible on your phone.
   
Get it on Google Play
The first release of this actually went out in August 2016, but I didn’t let anyone know because I felt it was still too “beta” and I wanted to clean it up before making it more publicly known. A year and a half later (and various framework upgrades that required redoing the whole thing… each time learning and applying those learnings), I got it to a point where I don’t feel ashamed to share it, though of course there is always room for more improvement.

To use the app, you will need your Adobe Analytics Web Services API key. And since no one wants to enter their 32-digit API Key into their mobile device, this newest version of the app allows you to enter your API key on the web (meaning you can copy-and-paste on your desktop machine) then get a link that will allow your mobile device to open the app with those credentials already entered. I highly recommend using that API Shortcut tool before diving into the app.

I created the app for a few reasons:

  • As with the beacon parser, the main reason was because I wished a tool like this existed and figured if I was going to make it for myself, I may as well let other people use it too.
  • I have a web development background, and wanted to learn more about developing for Mobile Apps. I’ll admit on this front, I cheated a bit: rather than learning multiple native app languages (like Swift or C++), I used the Ionic Framework, which let me program the app using Angular (which fits with my JS/HTML background better than native languages), then use Cordova to turn it into a Mobile App. Still, I did get to learn a lot about mobile development in general, analytics options within mobile, and the release cycle for mobile development (I can’t just save a file and FTP it to my server), not to mention Angular 1/2 and typescript.
  • I needed a situation in which I could test out analytics tracking in various Single-Page App scenarios (yay Angular).
  • Because at heart, I am a developer. While I enjoy helping clients sort out governance and documentation issues, sometimes I just want to retreat to my basement and do some coding, for that straightforward validation of seeing your code work in real-time. It’s good to keep those skills alive.

All in all, I learned so much. And I’ve already used the app quite a bit to keep track of my clients’ Variable Maps (“what did we use event40 for? Oh yeah!”) However, I’m not a professional mobile developer, and this project was done entirely in my evenings/PTO as a learning exercise that happened to create a usable product. So please be forgiving of anything in the app that is less-than-ideal; there is a reason I’m not charging for the app at all. I will continue working on improvements, particularly with an eye for performance (I’m looking at you, Android….) and I’m already aware of potential aesthetic issues on iPhone X. Please let me know of any other feedback or suggestions- I’d love to hear what you think!

P.S. Before anyone asks why I didn’t use the Adobe API OAuth2 Authentication, I thought about it, and may yet move to using that, but have concerns about how that works for marketing cloud AND non-marketing cloud logins. That, and the API documentation is… lacking… so I decided for now to stick with what I know. If anyone has experience with OAuth2 authentication and wants to discuss, please reach out. 

P.P.S. A special thanks to my beta testers!

Exciting News: Self-Employment!

Bilbo Going on an Adventure

I’ve finally made the leap, and am now consulting as my own independent entity. I’ve worked at many wonderful consulting agencies over the years and happily still have a good relationship with each of them, but for some time now I’ve wanted to move more and more into building products. Unfortunately, thus far no one has wanted to hire me as a junior Product Manager or Developer for anywhere near the same salary I’ve been getting as a Principal Consultant, so in order to pursue my product dreams, I needed to reduce my commitment to consulting and find a more flexible arrangement.
I will continue consulting, because I want to stay informed and have current practical experience with implementation (plus I’ve got to keep paying my bills). But without an agency as a “go between”, I can work fewer billable hours and have more time to work on products and projects. Don’t get me wrong: agencies as a “go between” provide a lot of value, and I won’t pretend not to be daunted by marketing, sales contracts, benefits, and taxes. But thus far, it’s been a great growing experience for me. And I’m lucky to have a very supportive network as I branch into the unknown.

So now I have a chance to work on some other projects, like fixing up/modernizing the beacon parser, and other projects I’ll post about shortly (stay tuned!) I’ll also continue working with Cognetik for a few exciting initiatives they have going on, so you will see me on their blog still occasionally. And there are some other agencies I’m eager to work with still if it doesn’t interfere with my product dreams, so this may or may not last long.

I already have a good amount of independent work to keep me busy for the next few months, so this post isn’t me necessarily soliciting for more work (unless you happen to have the PERFECT project for me, in which case, let’s talk!) But if you want to talk about products and opportunities, please reach out! I’m now at jenn@digitalDataTactics.com.