My Daily Ritual

The thing I am asked about most often is some variant of “How are you able to do everything that you do?”… It’s usually accompanied by things like: “You have so many interests”, “You’re married with kids, how do you have the time?” or “Do you ever have down time? I just want to watch Netflix when I get home.” I never really know how to respond to this… it really is just the way I live my life and has been for a long time. After talking to people a bit about this and enduring constant quizzing, it seems that it might come down to my strict adherence to a daily ritual. I call this a ‘ritual’ because it really is something that I’ve built up over decades with an explicit outcome in mind… to live the life that I live. It’s not a routine (a sequence of actions regularly followed; a fixed program), and it’s not set in stone. I’m constantly iterating on this ritual to make it better for me. That’s also part of the key… this is FOR ME… it’s been iterated on for most of my life. It’s been adjusted to fit years of medical tests and customized for what I know about my genetic makeup. Every part of it has been vetted and tweaked to make it overall positive for my biochemistry. This ‘ritual’ likely won’t work for you… in fact, it will be a horrible thing for many people, but maybe by documenting it, there is something in here that you will find useful. Maybe you will be inspired to start on the journey of creating your own. At the very least, you will get to see how things change over time, because I plan on continuing to update this post as my process evolves.

This is a LONG post. Everything documented here is the current state of my practice, which arose from years of iteration: collecting data about myself in great detail and experimenting with things to improve various aspects of my life. I’m always experimenting, and this post WILL NOT document experiments. There were many failures and I don’t discuss those here. This is only for things that have become part of my permanent ritual. If you want to know about my latest experiments, ask me about them the next time you see me. At any given time, there’s usually only one thing that I’m experimenting with… this makes it easier to identify positive or negative correlations and eliminates additional variables that could skew the results for whatever hypothesis I’m testing.

How do I collect and analyze this data? I’ve used tons of things over the years, but at this point it’s essentially custom software that uses the Google Fit platform as central storage. I use several commercial apps and hardware for data collection and all but one integrates with Google Fit. This makes for an easy integration point since the additional software that I write just needs to be able to use the Google Fit API to enter or consume data. For many years, I manually analyzed everything. Over the last few years and with the advancements in Machine Learning, I’ve been slowly building software to help with my analysis. Everything that has become a part of my ritual arose out of a desire to make a positive change to some monitored data point that I felt a need to improve. I won’t really dive into the details about specific data points for every single thing in this post, but if you’re curious about anything specific, feel free to ask.
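To give a sense of what that integration point looks like, here’s a minimal Node.js sketch (my own illustration, not my actual software) that pulls a daily step total from the Google Fit REST API. It assumes you already have an OAuth access token with the fitness.activity.read scope, and the data type shown is just one example of what can be read or written this way.

// Minimal sketch: aggregate step counts for the last 24 hours via the
// Google Fit REST API. Token handling is assumed to happen elsewhere.
const fetch = require('node-fetch');

async function stepsLast24h(accessToken) {
  const end = Date.now();
  const start = end - 24 * 60 * 60 * 1000;
  const res = await fetch('https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      aggregateBy: [{ dataTypeName: 'com.google.step_count.delta' }],
      bucketByTime: { durationMillis: end - start },
      startTimeMillis: start,
      endTimeMillis: end,
    }),
  });
  const data = await res.json();
  let steps = 0;
  // Sum the integer values from every bucket/dataset/point returned.
  for (const bucket of data.bucket || []) {
    for (const ds of bucket.dataset || []) {
      for (const point of ds.point || []) {
        steps += (point.value && point.value[0].intVal) || 0;
      }
    }
  }
  return steps;
}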

The Morning Ritual

I tend to wake up at about the same time every day. I don’t use an alarm and try to never schedule anything so early that I would need one. I have a skylight in my bedroom that is useful for slowly nudging me awake as the sun comes up. Embracing my own circadian rhythm has been very beneficial for me. Getting good quality sleep is also critical for me. Sleep experimentation was probably one of the very first things I played around with in order to increase my productivity. I followed a polyphasic sleep schedule for years, but no longer do that since it’s not really compatible with having a family or a traditional job. It was likely useful in training myself to make the most out of the sleep that I get. This practice taught me how to fall asleep fast, get into REM sleep quickly and spend more time in deep (delta wave) sleep.

The first thing I do upon waking is the same thing I do right before going to sleep: I lie in bed for a few minutes mindfully breathing. This gets the day started right by allowing me to reflect on what I’m going to do. This morning breathing takes on different forms (meant to energize me for the day), unlike my nightly version, which always follows the same pattern and purpose (to get me in the right state for sleep).

My sleep quality dictates how the rest of my day progresses. Most of the time my sleep quality is high; occasionally things go awry, and I have ritual adjustments for when that happens. I won’t really go into the specifics of the adjustments since it is a pretty rare occurrence… I do so many things to make sure that my sleep quality is always rock solid. I’ve used many products to monitor sleep quality over the years, but my current choice is by far the best, least intrusive method for me. I use the Oura app to check the details of my sleep quality right after completing my morning breathing routine.

I get out of bed and drink a glass of water with my morning supplements.

The main goal here is to increase blood flow, enhance my immune system, reduce inflammation and stimulate the production of BDNF.

I take measurements with a eufy Bluetooth smart scale. The one I use measures weight, BMI, and mass for body fat, muscle, and bone. It also tracks percentages for everything, including visceral fat. The app has its own trend tracking, but I ultimately settled on this model because it integrates with Google Fit.

Next I run through a quick yoga routine. This changes daily and is focused on increasing flexibility. The daily variance is mainly to focus on areas where I may be having issues or feel that I need improvement. The constant here is that there are certain ‘whole body’ flexibility enhancing postures that I do no matter what. This also serves as a warm up to my actual workout.

I’ve tried many workouts over the years and have decided that the best for me must include the following:

  • require minimal equipment; I travel a good bit and don’t want a lack of access to specialized equipment to become an easy excuse to skip a workout
  • deliver a full body workout in a minimal amount of time
  • use a full range of natural motion to minimize injuries

Due to this, I’ve created a High Intensity Interval Training, body-weight focused workout that I do Monday, Tuesday, Thursday and Friday. It only takes 20 minutes… I focus on lower body Monday/Thursday and upper body Tuesday/Friday. This gives me ample time to rest muscle groups before the next time around. In order to prevent this routine from becoming ‘routine’ (allowing me to avoid the plateau effect), I cycle the exercises weekly on an 8-week schedule. Each day consists of ~8 different exercises that are done for 30 seconds each, with a 10–15 second rest period between exercises, and then the whole set of exercises is repeated 3 times. This does a good job of getting my heart rate up and is way more effective for me than any other routine that I’ve tried so far. If my Oura ring shows a high readiness score indicating I’m up for a challenge, I’ll repeat this workout in the afternoon or early evening.

You might notice that my morning ritual doesn’t include breakfast. I used to be a big advocate of ‘grazing’, but over the last year I’ve become a complete advocate of Intermittent Fasting (IF). I follow a strict 18:6 protocol every day except for Saturday and Sunday (anything goes on the weekend). Occasionally, I’ll alter one day a week to 16:8 to accommodate any meetings or events that I have scheduled. I chose the 18:6 protocol because research has shown it to have a more profound impact on autophagy. I would love an effective way to measure this, since many of the most recent enhancements to my ritual are aimed at increasing autophagy. The IF area of my routine is where I’m currently doing some of the most experimentation (e.g. does the timing of the fast matter? does what I consume when breaking the fast matter? is there a decrease in effect when this becomes routine? what can I do differently on cheat days?) and I expect more updates to occur here over the next few months.

I make a giant pot of tea that I sip on throughout the morning. This is often a Darjeeling/Ceylon black tea blend, but I’ve been adding more and more green tea as part of an investigation into whether green tea has added benefits above and beyond black tea. If I need an extra boost, I’ll make a cup of espresso as well.

My work day

At this point, my work day begins… I’ll do a quick scan of email and some dashboards that I have to see if there are any immediate fires that need to be put out. Usually there is nothing, but I find it great to get these out of the way ASAP. Notice that I don’t spend any time on non-essential email, social media, political news, etc. That can wait for another time since the mornings are for Getting Things Done (GTD).

Getting Things Done

I read this book when it first came out, and nothing has been more beneficial to my productivity than what arose out of reading it. I started with a paper-based system as described in the original book, but quickly developed my own iteration using electronic tools. I’ve migrated this system across different toolchains at least three times, but continue to use the same basic principles with some added enhancements of my own.

The rest of my morning is focused on completing two objectives: one personal objective and one ‘work’ objective. I decide what these are the day before I start working on them (more about this later). They meet the ‘next action’ criteria from GTD… that means I know exactly what needs to be done; there is no investigating and there are no unknowns at the time I decide to work on them, just a set of straightforward steps to actually get that objective done that require some uninterrupted time. Most of the time these are easy; sometimes they take longer or ‘unknown unknowns’ are discovered. If I finish early, I’ll dig into some email at this point (always time boxed) or review other objectives that are ready to be worked on and pick one of those. During this time, I try to remain focused on my task except for one allowable interruption…

The Importance of Movement

Another great feature of the Oura ring is that it will alert you if it detects that you haven’t moved enough over time. I’ve always felt that moving while working was extremely important. I’ve used standing desks for more than a decade, and a few months ago I also purchased a FluidStance. The FluidStance is a balance board that you can stand on at your desk and, based on what I’ve seen, it is way more effective at increasing your activity/calorie burn than just standing alone. I’ll alternate between using it and standing flat on a mat throughout the day, and my Oura ring never alerts me to get moving while doing that. Occasionally, though, I will sit while working, and I’ve developed a few quick routines to run through on Oura ring activity alerts that are designed to get my heart rate to ~80 percent of my max for 3 to 5 minutes.

The Mid Day Transition

By the time midday approaches, I’m almost always done with my two major objectives for the day. I mark the transition by taking a few minutes to stimulate my brain differently by learning another language. I use Duolingo for this daily practice. You can find and follow me there by searching for my name. I’ll do another quick email check-in and then update/review my GTD lists. The goal here is to get any pending problems front of mind for the next part of my day.

Another basic thing that I’ve been doing for a very long time is a ‘lunch time’ walk. This started out mostly as a way to get some movement during the day and to get outside of the office on nice days. These are great reasons, but I’ve evolved this into an informal mindful walking practice. I get outside no matter the weather and walk for at least 20 minutes. I’ve built an infinite labyrinth trail at my house that I walk for this purpose. I focus on the changes that occur to the trail day by day and let my subconscious churn on problems and the front-of-mind items from my GTD list that I’ve recently reviewed. Some of my best ideas arise out of this practice or immediately after… plus I get another 20 minutes of exercise in during the day!

“Lunch”

Now it’s time for my lunch… this is normally around 2PM unless I’m meeting someone for a more traditional lunch time meeting. I don’t have extremely strict rules regarding what I eat… just a balanced meal that minimizes processed foods and sugars. I tend to keep it low-carb since I like to save my carbs for beer 😁 I do have a ritual for how I break my intermittent fast though.

I break my fast by drinking an Apple Cider Vinegar (ACV) cocktail. This is simply one tablespoon of ACV (with the mother) in a full glass of water. I do this for several reasons, but it started for the same reason I started IF… I have a history of diabetes in my family and both of these practices have been shown to minimize insulin spikes and resistance. Further research and analysis has also shown evidence supporting an increase in gut health leading to enhancements in nutrient extraction for the food I’m about to eat. Additionally, ACV has been shown to support an alkalizing effect on the body. This prevents leaching of calcium from your bones, has been shown to support your immune system and is generally beneficial for many endogenous processes within your body. The morning breathing techniques that I use are also designed to maximize this alkalizing effect.

After consuming this drink, I’ll eat a handful of raw almonds. Good fiber, high in magnesium (more about this later), and it generally starts to make me feel full and helps prevent overeating during my ‘feeding window’. The only other daily thing here is adding some high-C8 (caprylic acid) content MCT oil to my meal. This can be mixed into just about anything, and makes a decent salad/sandwich dressing just by itself. This is done, again, to decrease blood glucose levels and has the nice side effect of increasing blood ketone levels, which gives me a mental boost for the afternoon. I’ll go through some of my less pressing emails while eating lunch and prep for making the remainder of the day productive.

Time to Learn

Afternoon is all about learning and idea generation… most of the time I focus on getting more items in my GTD lists to the ‘next action’ state. This might involve investigating alternative approaches or digging into unknowns, but often requires learning something new. I started a basic practice that became my afternoon routine after reading about the 5-hour rule. I’m pretty sure I first heard about this through an interview with Warren Buffett. I did start out struggling to find my 5 hours a week to do this, but with practice and dedication, it eventually became more like the ’25 hour rule’ that it is for me now. This approach to learning, coupled with GTD, has really allowed me to supercharge my productivity over the years. I don’t have a ton of rules for how this occurs, but here are a few:

  • First priority is always to get a backlog of items related to an Objective with high near-term ROI into the ‘next action’ state. I never want to spend any time in my mornings doing this.
  • At least once a week, I force myself to come up with one ‘new business’ Objective. This can be a new approach to lead generation, new source of revenue, or a new investment strategy. The time to do this is often spread throughout the week, but at the end of the week, I should always have a new Objective in this class of work that is mostly ready to be worked on. This serves to constantly get me thinking outside of the box with regards to diversifying revenue streams in order to insulate my lifestyle from any unforeseen circumstances that can jeopardize any one existing source of income.
  • Any remaining time I spend reading… I currently use Pocket to keep track of anything that I’d like to read that isn’t a physical book or stored in Google Play Books.

During this time, I still pay attention to my activity levels the same way that I do during the morning and follow a similar routine for increasing my activity levels. The number one underlying goal for this time is to…

Prep for tomorrow

I never want to wake up questioning what is most important for me to do in the morning. It’s a waste of time when I’m in the best state for working on the really tough problems. This uncertainty often leads to poor sleep, since I’ll ruminate on all of the things that I could possibly work on, trying to weigh the pros and cons of each. Because of this, I want to end my work day by figuring this out. I review all of my high-priority objectives and pick the ‘next action’ tasks that have the highest ROI, for at least one personal and one work related item. Barring any emergency that occurs overnight, these will be the things that I focus on most in the morning. This eliminates any procrastination-related churn in my mornings and sets me up for a good night’s sleep with a defined set of items for my subconscious to ruminate on.

I’ll take another walk to lower insulin-like growth factor a few minutes before eating dinner. Dinner, like lunch, is balanced from a macro-nutrient perspective and minimizes processed foods, but otherwise anything is fair game.

After Dinner

After eating dinner, my ritual is much more fluid. This is time for friends and family. Hanging out, conversation and fun. There’s no real focus on working out since I’ve almost always met my goals during the day. I’m not thinking about tomorrow because I’ve already figured out exactly what I’m going to do (and I’m confident that it’s something that I can get done). The only real thing that I do at this point is pay attention to the finish line of my feeding window. As this time approaches, if I feel any indication that good quality sleep may be a problem (e.g. muscle soreness from working out, or anything else weighing on my mind), I’ll eat two tablespoons of raw almond butter. This is a magnesium bomb and, taken at the right time, increases Gamma-Aminobutyric Acid (GABA). GABA is effective at promoting relaxation (i.e. better sleep) and the magnesium also promotes muscle recovery.

Sometimes work bleeds over into the evening and when that does occur, I want to do everything to minimize any detrimental impact to my sleep quality. I use wellness settings on all of my electronic devices to minimize interruptions, dim brightness and alter color hues after a certain time. If I spend any time in front of a screen, I use blue light blocking glasses. I go to bed when I’m ready to sleep. I do my bedtime breathing exercise and start the whole process again when I wake up.

Conclusion

So there it is… the daily ritual post. I’ll update it as things evolve. I’m more than happy to answer any questions about why I do things the way that I do. I held off on going into the many reasons why things have evolved the way that they have to keep this readable, but I assure you there is a method behind all of my madness… and I’m more than happy to discuss it if you really want to hear it! I could write just as much about why I DON’T do certain things, or the experimentation involved in arriving at my conclusions, so if you’re curious about either of those things inquire as well. Most importantly, if you decide to go down this path for yourself, I’d love to talk through your process and share some of the things that I’ve found.

Oura ring review

I’m fanatical about tech gadgets, but even more so for wearables and things that reliably fulfill my needs as a “Quantified Selfer”. Good quality sleep data has always been elusive. Many devices that I’ve tried were so intrusive as to ruin any chance of actually getting good sleep. Others just did a terrible job of reliably collecting the data that I wanted. I backed a Kickstarter for the Hello Sense and this was one of the first devices that really generated useful data. Not only did it track my sleep activity, but the base unit also collected data about my bedroom light levels and air quality. Sadly, the company went bust and the device ultimately became unusable after the cloud servers were shut down.

Another Kickstarter project caught my eye… the Oura ring… having been burned by so many crowd funded tech gadgets in the past, I initially held off on backing the project, but I kept a close eye on its progress and saw many great reviews on the original ring from people I trusted. When Oura announced a gen 2, I was all over it and jumped right in to purchase one as soon as I could.

I’ve had my Oura ring for a few months now and I feel totally qualified to review all aspects of it now that it’s experienced pretty much everything I can throw at it…. I am a HUGE fan of this thing! There isn’t much that I can complain about and I feel that it is worth every penny.

The Oura ring system consists of the ring, a mobile app, and the Oura Cloud… a web-based equivalent of the mobile app that allows you to dig a bit deeper into the data, plus an API that you can use to write apps for the Oura Cloud or pull the data collected by your ring into other systems.
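If you want to pull your own data out, the API side is simple. Here’s a minimal Node.js sketch, assuming a personal access token and the v1 REST endpoints that are current as I write this; the fields shown are just the ones I care about.

// Minimal sketch: fetch sleep summaries from the Oura Cloud API for a date
// range using a personal access token generated in the Oura Cloud UI.
const fetch = require('node-fetch');

async function ouraSleep(token, start, end) {
  const url = `https://api.ouraring.com/v1/sleep?start=${start}&end=${end}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  if (!res.ok) throw new Error(`Oura API error: ${res.status}`);
  const { sleep } = await res.json();
  // Each night includes stage durations (in seconds) and an overall sleep score.
  return sleep.map(night => ({
    date: night.summary_date,
    score: night.score,
    deepSeconds: night.deep,
    remSeconds: night.rem,
  }));
}

// Example: ouraSleep(process.env.OURA_TOKEN, '2019-01-01', '2019-01-07')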

The ring looks like… a ring… much more so than the first generation… it doesn’t make you the focus of a room like wearing Google Glass did 😏 This is a pretty amazing feat considering all of the sensors that it packs and the fact that you can go days without needing to charge the battery. It’s waterproof and fairly resilient… I’ve definitely pushed mine to some limits that I probably shouldn’t have and it’s survived. The ring connects to the app on your phone via Bluetooth, and you can put it in radio silent mode and still have it collect data for quite some time before needing to sync it.

The sleep tracking of the device is rock solid. I’ve done tons of things to wreak havoc with my sleep in order to test the ring’s ability to detect it. Every morning after destroying my sleep in the name of science, I’d check the app. It would basically tell me, “Dude, go back to bed, you need it”. There really was no fooling its sleep detection.

I bought the Oura Ring mainly to track sleep time and sleep quality (as measured by the amount of time spent in the different stages of sleep), but the ring is so much more than ‘just’ a sleep tracker. The Oura app is divided into four sections: Readiness, Sleep, Activity and a Dashboard that surfaces summary information from the other three. The Sleep section tracks a few additional items above and beyond what I bought the ring for. These include a resting heart rate trend and sleep latency.

The Oura Ring is also an activity tracker. I’ve been wearing various activity trackers since the first versions were commercially available. I’ve never really been a fan of wearing anything around my wrist since they always seem to get in the way, but I’ve always overlooked that in order to get the activity data. The Oura app has recommendations for how much activity you should be getting (this changes daily based on your ‘Readiness’, which I’ll discuss later). It also tracks your progress toward your daily goal and the intensity of the activity that you do. You can also turn on notifications in the app to remind you to get up and move on a regular basis. For activity that gets your heart pumping, the ring does a pretty good job of tracking. I’ve noticed that it doesn’t always do the best job of tracking activity that is less vigorous. The app has the ability to manually input this type of activity. This is one area where I wish the Oura app would improve. I already track all of my activity in Google Fit, and I would love it if the Oura app could just tie into that ecosystem to get this data instead of requiring me to enter it in two different places. Most of the activity I want to track tends to get picked up by the ring, but there are certain activities (e.g. impact martial arts) where I remove the ring and need to manually track the activity. I like the fact that I can get near real time feedback about my activity intensity. This has allowed me to develop a routine that I can do frequently throughout the day that gets me into a high intensity level of activity very quickly (this is a must for any practitioner of High Intensity Interval Training).

The ‘Readiness’ section of the app really pulls together information from the other two sections to give you a general idea of how much you should push yourself on any given day. It takes into account how well you’ve been sleeping and how active you’ve been and combines that with trends regarding your HRV, body temperature and respiratory rate in order to suggest either ‘pushing your activity to new levels’ or just ‘taking it easy’ on any given day. I’ve found this great for figuring out the best times of day for me to work out and also which supplements seem to help me recover faster. It’s also pretty effective at giving me a heads up when I might be coming down with something and gives me an extra verification point to rest instead of pushing through it.

So there you have it… my Oura ring review. It’s an awesome piece of hardware. Besides the lack of support for Google Fit (bi-directional support would be awesome!) my only other real complaint is that I wish it came in half sizes… that would make it even less obtrusive than it already is! If anyone is interested in getting an Oura ring, let me know, I have a few discount codes that I can provide.

Shopify Webhooks driving AWeber

This post is a solution to a problem I had with the AWeber Shopify integration. To get the most out of this post, check out the original problem here.

…the continuation…

Since I was spending 30+ minutes every day manually solving this problem, it was important that I had an MVP solution quickly. I took a step back to think about my immediate needs and the future direction that I would like to take this solution and came up with the following constraints:

  • Need to get something basic up and running quickly that can be easily iterated upon
  • Everything needs to be deployable to the Google Cloud Platform (and not cost a fortune to run)
  • The solution should be something that I can eventually monetize. This means a clean, UI-based integration in the Shopify ecosystem (i.e. support for Node, React, Next) and the need to be able to handle many Shopify stores and scale appropriately.
  • Anything built must be easy to fit into the multichannel lead generation vision of Threddies. Eventually this would need to become the way that all leads get added to my email service provider without using any direct integrations.

Shopify Webhooks

I did a little digging and realized that I could solve just about every variant of the core problem if I were notified any time a customer was created or updated in Shopify. Conveniently enough, Shopify provides webhooks for both of these cases (in addition to many more). Webhooks are great for creating quick integrations and very easy to handle using Google Cloud Functions.

I prototyped the ‘create customer’ webhook and had something up and running in no time for my test store. I also started to think more about how I could quickly iterate on webhook-based integrations in the future. The most simplistic integration using webhooks doesn’t require authentication, but it does require verifying that the data sent is actually from the expected Shopify store and not just anyone on the internet. This is done using the X-Shopify-Hmac-Sha256 header. When you receive the webhook data, you need to verify the data in the body by generating this value (using a private key) and comparing it with what Shopify sends (a minimal verification sketch in Node follows the list below). There are two different ways to do this, and it depends on how you integrate with Shopify. The preferred approach is to develop a Shopify app, which has its own key that you can use to verify every authorized store that is using your app. The drawback of this approach is that you need a full-blown Shopify app that implements the store authorization flow and requires some UI work. Since I’m not a React expert, I opted to take the second approach and avoid the UI by having each Shopify store owner that was going to be using this integration provide their store’s key to me. You can get this key by going to your store’s settings page and registering a webhook in the ‘Notifications’ area. This key is used to verify the integrity of all webhook data sent. Things I learned from this step:

  • Cloud Functions would likely not be the final way of deploying this since it did not provide a way to surface a UI in a customer’s Shopify store.
  • The Cloud Function for each webhook is going to have a lot of repeated boilerplate for verifying the integrity of the data sent and handling responses/errors. I would also need a more centralized storage location for keeping Shopify-store-specific data so that it wouldn’t need to be duplicated in every Cloud Function endpoint.
  • Shopify webhooks require a timely response, so putting any heavy lifting in the cloud function is not going to happen. Take too long to respond, and Shopify will deregister your interest in the webhook data. This started to get me thinking about how to recover from this scenario.
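For anyone who wants to see that verification step concretely, here’s the minimal sketch I promised above (Node.js; the function and environment variable names are mine, not Shopify’s):

// Minimal sketch: verify that a webhook payload really came from Shopify by
// recomputing the HMAC over the raw request body and comparing it against
// the X-Shopify-Hmac-Sha256 header. The secret is the key from the store's
// Notifications settings (or your app's shared secret).
const crypto = require('crypto');

function verifyShopifyWebhook(rawBody, hmacHeader, secret) {
  const digest = crypto
    .createHmac('sha256', secret)
    .update(rawBody, 'utf8')
    .digest('base64');
  const a = Buffer.from(digest);
  const b = Buffer.from(hmacHeader || '');
  // timingSafeEqual avoids leaking information through comparison timing.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}

// Inside an HTTP Cloud Function (which exposes the raw body as req.rawBody):
// if (!verifyShopifyWebhook(req.rawBody, req.get('X-Shopify-Hmac-Sha256'),
//     process.env.SHOPIFY_WEBHOOK_SECRET)) {
//   return res.status(401).send('invalid signature');
// }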

Hookup to AWeber

I was getting data from Shopify and verifying the integrity of the data, but at this point nothing was happening with it. In order to get the data into AWeber, I had to again obtain some information from each admin of the Shopify stores that I was integrating with. At a bare minimum, I needed an AWeber account id and a list id to add all subscribers to. I also needed my customer to authorize my app to interact with their AWeber account. More requirements for UI, but I still wanted to put that off and focus on solving the original problem. I also didn’t want to force my users to go into AWeber in order to add my integration. I found a great NodeJs wrapper library around AWeber’s API that did everything I wanted it to do. Using this, you can enter your AWeber integration information and use it to generate an auth URL that you can send to your customer. They can then use that URL to grant your integration the necessary permissions to their account. They send back a verifier code after authorization that you can use to get all of the tokens needed for your integration to access that account. This information doesn’t change unless the user removes your integration, so it works perfectly until I actually set up the full-blown authorization path in my Shopify App.
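To make that flow a little more concrete, here’s a rough sketch of the OAuth 1.0a dance using the generic ‘oauth’ npm package rather than the wrapper library I actually used. The AWeber endpoint URLs and the out-of-band (‘oob’) callback are my assumptions about their API, so treat this as illustrative only and check their developer docs.

// Rough sketch of AWeber's OAuth 1.0a authorization flow. Endpoint URLs are
// assumptions; the wrapper library I used hides most of this plumbing.
const { OAuth } = require('oauth');

const client = new OAuth(
  'https://auth.aweber.com/1.0/oauth/request_token',
  'https://auth.aweber.com/1.0/oauth/access_token',
  process.env.AWEBER_CONSUMER_KEY,
  process.env.AWEBER_CONSUMER_SECRET,
  '1.0A',
  'oob', // out-of-band: the store owner pastes the verifier back to you
  'HMAC-SHA1'
);

// Step 1: get a request token and build the URL you send to the store owner.
client.getOAuthRequestToken((err, reqToken, reqTokenSecret) => {
  if (err) throw err;
  console.log(`Authorize here: https://auth.aweber.com/1.0/oauth/authorize?oauth_token=${reqToken}`);

  // Step 2 (later): exchange the verifier they send back for long-lived tokens.
  const verifier = process.env.AWEBER_VERIFIER; // pasted back by the customer
  client.getOAuthAccessToken(reqToken, reqTokenSecret, verifier, (err2, accessToken, accessSecret) => {
    if (err2) throw err2;
    // Store these per customer; they stay valid until the customer removes the integration.
    console.log({ accessToken, accessSecret });
  });
});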

There were two AWeber issues that I discovered at this point: there is no way to add the subscriber’s location without an originating IP address for geolocation. This means that sending within the subscriber’s local time window is not something that will work for subscribers added this way. Also, AWeber integrations are confirmed opt-in by default; AWeber would not turn this off for everyone using my integration, so I had to turn it off for each account/list I wanted to use it with and tell my customers to do the same. This was necessary since my integration already leverages Shopify’s confirmed opt-in and I didn’t want to confuse my subscribers by making them do it again. The Shopify webhook payload already includes a flag for ‘accepts_marketing’ and I verify that this flag is true before attempting to add any information to AWeber. Things learned at this point:

  • I really need that UI!
  • The create customer/update customer flows look very similar from a Cloud Function perspective, so there needs to be some consolidation.
  • The AWeber deauth path needs to be handled. This can’t be done in a cloud function since it’s so far removed from the user capable of fixing this problem. For now, just alert on the error when it occurs and add it to the list of issues to handle later. This is another item that indicates the need for a systems health check in the customer’s Shopify store (and a way to recover all subscribers added between the time when a failure occurs and the Shopify store owner resolves the problem)

Ready for Production

At this point, things were working well enough that I felt confident allowing this integration to start doing my job for me. Before moving everything to production, I refactored everything to eliminate the obvious problems that I saw at this point.

Instead of Cloud Functions being the primary entry point, I created a NodeJs app to do this instead. This allowed me to set up all of the webhook routing inside this app and put all of the webhook verification and ‘health check’ code there as well. If there was a problem, I could fail fast without fear of Shopify deregistering the webhook.

This also provided a place where I could add all of the UI code for the integration and the intelligence for recovering from failures. This app can also morph into a frontend that is capable of handling and routing any future webhook integration that I want to create.
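A stripped-down sketch of what that front door looks like (Express; the route shape and the handleCustomerEvent placeholder are mine): verify first, acknowledge immediately, then hand the real work off.

// Minimal sketch of the webhook entry point: one Express app, raw-body
// capture for HMAC verification, and an immediate 200 so Shopify never
// times out and deregisters the webhook. verifyShopifyWebhook() is the
// helper from earlier; handleCustomerEvent() stands in for the real work.
const express = require('express');
const app = express();

app.use(express.json({
  verify: (req, res, buf) => { req.rawBody = buf; } // keep raw bytes for HMAC
}));

app.post('/webhooks/shopify/:topic', (req, res) => {
  const hmac = req.get('X-Shopify-Hmac-Sha256');
  if (!verifyShopifyWebhook(req.rawBody, hmac, process.env.SHOPIFY_WEBHOOK_SECRET)) {
    return res.status(401).send('invalid signature');
  }
  res.status(200).send('ok');                      // acknowledge right away
  handleCustomerEvent(req.params.topic, req.body); // heavy lifting happens elsewhere
});

app.listen(process.env.PORT || 8080);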

I then deployed the NodeJS app to Google App Engine. At first it wasn’t working and the errors indicated that the Next.js build step wasn’t occurring on deployment. I solved this by adding a custom build step that is automatically run on deployment by Google Cloud Build. You can do this by adding the script ‘gcp-build’ to your package.json. All of this gets deployed to an App Engine standard environment using automatic scaling. So far with 4 Shopify stores using this integration, the entire platform stays under GCP’s daily usage quotas and costs nothing to run!
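For anyone hitting the same deployment issue, the relevant bits look roughly like this in my setup (the start script and runtime are mine; adjust for yours). In package.json:

"scripts": {
  "gcp-build": "next build",
  "start": "node server.js"
}

And app.yaml is nothing exotic: a Node.js standard runtime (e.g. runtime: nodejs10) with no manual scaling settings, so App Engine scales automatically. Cloud Build runs the gcp-build script during deployment, which is where the Next.js build now happens.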

Next Steps

Obviously, if anyone else shows interest in this solution, the most immediate need is to make the UX better by rolling out a nice Shopify Admin UI, but there are a few other next steps that I’m currently working on.

  • There is still too much heavy lifting in the webhook handler. I’m working on pulling just enough out of the payload to know what ultimately needs to be done and then publishing this as a Shopify-agnostic event to Google Pub/Sub, where it will ultimately be processed by a Google Cloud Function (a minimal publishing sketch follows this list). This will allow me to acknowledge Shopify’s webhook much more quickly and sets up the necessary infrastructure to start removing my other AWeber integrations in favor of directly publishing the necessary information as an event into this platform.
  • The event driven architecture opens up many additional possibilities… it allows me to do more analysis on the data before it gets into my email service provider. This allows me to better tag and identify the sources of this data. The AWeber Etsy integration, for instance, doesn’t provide any capability for tagging or otherwise identifying these subscribers. Eventually, this will be the place where I can plug in the ML project that I have been working on that correlates behavior and surfaces insights across sales channels.
  • Turn this into a full blown marketing channel app for Shopify. I’ve always dreamt about a day where I can do my email marketing from Shopify the same way that I run Google Shopping or Facebook Ads campaigns. This platform provides the foundation for doing that. I’m really excited about the possibilities!
  • If enough interest exists, turn this into an actual product. Reach out if you’re a Shopify and AWeber customer that is already experiencing the original problem I solved, or have ideas for how this can become perfect for something that you’re trying to do.
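Here’s the minimal publishing sketch mentioned in the first bullet, using the @google-cloud/pubsub client; the topic name and the event shape are placeholders of my own.

// Minimal sketch: turn a verified Shopify webhook into a Shopify-agnostic
// event on a Pub/Sub topic, so the HTTP handler can acknowledge immediately
// and a Cloud Function subscribed to the topic does the AWeber work later.
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();

async function publishCustomerEvent(topicName, shop, payload) {
  const event = {
    source: 'shopify',
    shop,                                 // which store this came from
    type: 'customer.updated',             // placeholder event type
    email: payload.email,
    acceptsMarketing: payload.accepts_marketing,
    receivedAt: new Date().toISOString(),
  };
  // Pub/Sub messages are just bytes, so JSON-encode the event.
  return pubsub.topic(topicName).publish(Buffer.from(JSON.stringify(event)));
}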

Ghost in the (Google Cloud) Shell

One of the perks of being a technologist who is not tied to a traditional 9 to 5 is that you have immense freedom in terms of where you can complete your ‘work’ from. I’ve always toyed with the digital nomad lifestyle… but it’s kind of ridiculous when you need to lug around an insane amount of equipment in order to effectively complete your tasks. I have several computers, each optimized for specific types of work or tied to specific clients. This always required me to think ahead before traveling about which project I was going to work on while away (there was a time when I would take everything with me, but traveling with kids has definitely made me want to pack as minimally as possible).

Development and image processing require horsepower, and even the best laptops for doing this are big and heavy… and expensive… so much so that it’s something to think about if you travel to a country like India, Russia or China, where in today’s political climate the likelihood of your hardware getting confiscated is higher than ever. Having this happen while traveling was what ultimately drove me to become a Chromebook advocate. Losing the hardware is one thing, but losing the data contained on the device is even worse. Chromebooks solved the data problem… you could powerwash the device and then restore it back to its former state at any point in time from the cloud. Worst case scenario, you lose a reasonably inexpensive piece of hardware, but your data is intact. Unfortunately, Chrome OS hardware hasn’t historically been the best option for development… especially if you want to maintain the security offered by the powerwash technique that I mentioned.

The desire to be able to travel anywhere, any time, at a moment’s notice and feel confident that I can deal with anything that comes up while I’m gone using just my Chromebook ultimately drove me to experiment with setting up containerized development environments. I wanted something divorced from the hardware that I could easily get up and running and know that everything is set up the way I need it to be. This was great, but I still needed someplace where I could access these containers from anywhere. I eventually became more and more of a fan of the Google Cloud Platform (GCP)… the container-centric approach to everything and the fact that the price was right ultimately led me to migrate all of my cloud infrastructure to GCP. It wasn’t long before my containerized development environments followed… and then I discovered Google Cloud Shell.

Google Cloud Shell takes this whole idea a step further. It gives me a 5GB persistent space accessible from any browser. I don’t even need the Chromebook any more. Everything that I store in my home directory stays there across sessions. Even better, it’s directly connected to all of my projects in GCP. I’ve been doing almost all of my recent development using Google Cloud Shell and the integrated Orion Editor exclusively… and I LOVE it! For web based development and microservices, it’s absolutely great. Especially if you’re ultimately deploying to GCP. The only time I’ve gone back to my ‘development’ laptop has been to do Android development as I haven’t really found a good solution for running things like Android Studio or emulators using this approach.

But I want to develop for ‘free’

Ok, I can hear a bunch of you thinking that you don’t want to be forced to develop on GCP (and potentially incur costs) before you’re ready to deploy to production. Guess what? ngrok works great in Google Cloud Shell… you can expose your local dev environment securely anywhere on the web without deploying your project to GCP. What about localhost? ngrok exposes debug information on 127.0.0.1, so there’s no way to access that from Google Cloud Shell, right? Wrong… with GCS, you can open a ‘web preview’ on any port just by clicking on the icon within GCS, and you can map this to expose ngrok’s debug interface.
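Concretely, the dance looks something like this: run your app as usual in one Cloud Shell tab (say it listens on port 3000), then start the tunnel in another tab:

./ngrok http 3000

ngrok’s inspection UI listens on 127.0.0.1:4040 by default, so click the Web Preview icon in Cloud Shell, choose ‘Change port’, enter 4040, and the debug interface opens right in your browser. The ports here are just the usual defaults; substitute whatever your app and ngrok are actually using.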

Onward to Production

Google Cloud Shell obviously has all of the Google Cloud SDK integrated by default, so when you’re ready to go to production, it’s a piece of cake. GCS even knows which Cloud Project you’re working on (and reminds you of that fact in the terminal). Turn off ngrok, push to your cloud environment and update your systems to point to the production version!
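For an App Engine target, that amounts to little more than running this from the project directory:

gcloud app deploy

Cloud Shell is already authenticated and already pointed at the right project, so there’s nothing else to configure.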

Conclusion

Is Google Cloud Shell the absolutely flawless solution to every development need that a digital nomad has? Definitely not, but it’s pretty damn good. I haven’t found a good way to do Android development using it. It’s absolutely fantastic for doing Node development though… especially if you’re ultimately deploying to GCP. Google Cloud Shell does have a usage limit of 60 hours per week, so if you’re burning the candle at both ends, you’ll want to remember to shut it down when you do take a break so that you don’t hit that limit. Give it a shot for yourself and let me know what you think.

Annoying Time Sink of the Month…

Every day there is an annoying thing that I MUST do.  It annoys me that I need to, but as part of running a professional business it must be done.  I discovered the issue about a month ago and have been doing this job manually every day since… all the while thinking about a solution to automating this task out of my life for good.

The Background

Threddies, being both an E-Commerce and Brick and Mortar business, has two distinct buyer’s journeys that start out in very different ways.  In one case, a prospective lead finds us online and is ultimately driven to our website (this journey becomes much more complicated when you consider that a user’s journey could start from Google, Amazon, Etsy, eBay, etc.).  In the other, someone stops by our shop in person… talks to us about the local happenings… and if we’re lucky, makes a purchase. Our end goal is to turn visitors into customers, and even better REPEAT customers. Most of the time, people do not become a customer on the first visit, so our goal is to make them a potential customer, by getting them to sign up for our email list, or at the very least, joining us on their preferred social media platform.  A lot has been written about this Omnichannel retail problem and it was something that we felt we had a fairly good handle on.

We recently have been consolidating our technology stack and introducing processes to make things consistent across all of the sales/leads ‘channels’ that we support.  One area of integration that occurred (and ultimately became the root of my current problem) was our choice to move the brick and mortar store to use Shopify POS instead of Square’s offering.  This had many benefits: reduction in payment processing fees, combined CRM and inventory systems, a common user interface for our employees to use across all channels, etc.

The Vexing Problem

The problem that I didn’t see coming involves the collection of emails in our email marketing platform.  We use AWeber, and AWeber has both a Square and a Shopify integration. We were actively using the Shopify integration to collect emails from our website and the Square integration to collect emails from B&M purchases (we also use the Etsy integration, but the issues with that are a subject for another blog post).  In both cases, we use separate web based sign up forms or the AWeber Atom mobile app (appropriately tagged) to collect email addresses from those that don’t ultimately make a purchase. If you are a Shopify and AWeber user, you should be aware that AWeber’s integration does not support collecting emails that are entered using Shopify’s Newsletter functionality; this was something that I discovered early last year and I have built an acceptable workaround for it (I can document this for anyone interested).

After removing Square from the store, I quickly noticed that no emails were being collected from B&M purchases.  Initially, I thought it was a sync issue and that they would eventually show up, but they never did. I did some digging and testing and it seems that the AWeber/Shopify integration was only collecting emails from customers who made a purchase from the website, no emails would ever be added from Shopify POS.  My daily chore of adding emails had begun… even worse, since I was constantly watching what was going on with email adds, I started to notice other issues.

We don’t require users to provide an email address in order to make a purchase; providing a valid phone number will allow you to make a purchase as well.  We do require an email address, however, in order to create a Threddies account, which gives the customer access to some additional features that they wouldn’t get otherwise.  I noticed over time that there were many customers that made a purchase without an email and then ended up creating an account with an email at a later date. NONE OF THESE EMAILS WERE BEING CAPTURED!  This was a fairly large problem that required some explaining to our customers when I finally added them all manually to our email list. I did some additional testing and discovered that this same use case occurs when a customer updates their email address from their account: AWeber never gets the updated email address. This was a particular issue since many of our customers originate from channels that obfuscate their real email addresses (Amazon, eBay, etc.), but then they ultimately warm to us and provide their real email address after making repeat purchases.

Solutions

I immediately started looking for solutions since manually doing this every day was a nightmare. The AWeber provided integration is free with an AWeber account, but there are several paid solutions in the Shopify app store.  All of the third party integrations suffered from various issues… either they used polling on a regular interval to collect email addresses rather than reacting to events occurring in the Shopify ecosystem, or they came with heavy-handed tag syncing.  None of them specifically guaranteed the Shopify POS or Newsletter functionality that I desired. Since this problem was/is on my mind every day, I started thinking about the ideal state that I would like to have… tailored email marketing automation that is triggered by the channel that the user originated from.  This is important because the Amazon/eBay/Etsys of the world tend to be very restrictive regarding the content of the email that you can send to their users. Due to this, many of the emails sent to customers originating from these channels tend to be sent manually (often through systems that don’t have guaranteed high deliverability like AWeber) rather than automated, which takes a crazy amount of time.  I also want intelligence behind tag syncing between Shopify and AWeber. It was clear that none of the existing integrations could meet these needs. This is the problem without an automated solution… for now… Stay tuned!

If you made it this far, and are interested in the thrilling conclusion… I wrote about my solution here.

I’m retiring…

…from the rat race at least 😉 I’m extremely excited to announce that effective 11/09/2018 I will be a full-time employee of Threddies! Many of you are likely already aware that I’ve been working part time at my day job since April in preparation for this, and I know, based on the questions that I’ve received, that it will save a lot of time for everyone if I document the basics.

A brief history lesson

We’ve been building Threddies for well over a decade and have learned a lot along the way. The tech stack has evolved and been replaced several times. We’ve overcome many of the difficulties of building a new technology-based small business in the Upper Bucks and Montgomery County area of Pennsylvania. We’ve made mistakes and come up with better ways of doing things, but in that time, we’ve grown what was once a tiny side project into a serious e-commerce and brick and mortar business. In doing this, we’ve realized that we have a lot to offer others who want to do something similar in our region.

Threddies has never really been a major focus of mine since I’ve always had a full-time ‘day’ job and a freelance consulting business throughout most of its lifetime. I’ve spent the last few months working on a transition plan and laying out how to grow Threddies while also continuing to stay current with the latest advancements in technology.

So what exactly will you be doing?

As of COB on 11/09, I will manage the day-to-day operations of Threddies. My consulting business will still exist, but will have a major change of focus. We will be taking on fewer ‘implementation’ based contracts and instead focus on more ‘strategy’ based ones. I want to make Upper Bucks and Montgomery counties a great place to start a small business, especially one that is technology based. I want to help others succeed in building out awesome companies that make our local area a better place to live. I want to help fill the void in relevant technology and entrepreneurship related education in our public school system. If you’re passionate about joining me in this, check out the ‘How can I help?’ section below.

I’m currently in the process of taking things that have worked for Threddies and my consulting business and building solutions around them that others can use. You’ll see more of me at local meetups discussing the latest initiatives and working to better understand pain points that we can offer additional solutions for.

I’m one of your existing clients, what does this mean for me?

First, I’m pretty confident that this is not news for you since I worked with every one of my existing clients to come up with a personalized plan for what happens after this change 😉 All existing contracts will be completed at the same level of quality as if this change never occurred. For many, I won’t be taking on new work, but I hope to continue working with you on new strategic initiatives.

I was hoping to become a client, what now?

Reach out! I’m interested in helping, especially if you’re local. I might say ‘no’ to taking on your project directly, but I can refer you to someone who is more than capable. I’m also really interested in hearing what you have planned and continuing to build my network. I may have some expertise to help you and you might be doing something that’s too awesome for me to pass up being involved!

What about all of the fun, one-off experiments?

I believe that the worst thing you can do in business or technology is become stagnant and stop learning. So the ‘experiments’ will definitely continue. I fully believe that I will have even more time and resources to invest in conducting (and hopefully documenting) some really crazy experiments. In fact, the majority of ‘implementation’ based consulting projects that will be continuing after 11/9 fall into this category!

How can I help?

Drop me a line and let’s chat. I’m looking for people passionate about technology and small business in the Upper Bucks and Montgomery area. Let’s bounce ideas off of each other and do great things! If you are starting or growing a business in the area and want to chat or are looking for financing or technological expertise, I want to hear from you!

How to backup and restore inMyCellar database

 

Probably the most frequent question I hear from inMyCellar users is: “HELP!! I got a brand new shiny phone, installed inMyCellar and my cellar is now empty! How do I get it back?”. I hear your concern, and it pains me that I haven’t had enough free time to complete the uber cloud-based backup solution of your dreams (believe me, it IS coming), but I figure it’s about time that I finally document the approved and tested way of doing it, in a way that leaves little to the imagination regarding how to make it happen.

Currently, your entire ‘cellar’ is stored in an on-device SQLite database, and luckily Google provides an easy way to back that up and restore it on the same device or on another device.

Install adb

First, you need to install the Android Debug Bridge (adb) on a computer that you can connect your Android device to.  ADB is a tool put out by Google for Android developers.  You can read all about the great things it can do and download it here.

Turn into a developer

Now that you have ADB ready to go, the next step is to make sure that ADB can ‘talk’ to your Android device. This involves turning on ‘Developer Options’ on your Android device and setting it up for ‘USB Debugging’. Don’t worry, you can easily turn these settings off with one click when you’re done, but there are a ton of cool things you can do with them, so you might want to keep them enabled. Turning on developer options and USB debugging is described in detail here.

You can verify everything is working by typing the following at a terminal on your computer:

adb devices

If you see a device identifier listed and no complaints about not being authorized, you’re good to go on to the next step. If you encounter any issues, refer back to the prior pages to make sure you followed all of the steps and follow any troubleshooting steps for your computer’s operating system.

Back that thing up

The next step is to create a backup of your inMyCellar data.  Make sure the device that has your current inMyCellar data is plugged into your computer and accessible via ADB.  Create a directory on your computer called inMyCellar, navigate to that directory in a terminal, and type:

adb backup -f ./data.ab -noapk com.transmutex.inmycellar

Hit ‘enter’ and you should get a prompt on your Android device asking if you would like to back up your inMyCellar data. You can set a password at this stage to encrypt your data, but you can skip this step since inMyCellar does not store anything sensitive.

When the backup is complete, you should have a file on your computer in your inMyCellar directory named data.ab. This is a backup of all of your inMyCellar data. Now it’s time to put that data on your new Android device.

Restore inMyCellar data

At this time, you can unplug your old Android device and plug in your new device. Make sure you’ve installed a fresh copy of inMyCellar from the Play Store on your new device. You will then need to make sure that this device is set up for debugging and accessible via adb, so follow the same steps listed above that you did for your old Android device. Once it’s accessible, run the following command from a terminal in your inMyCellar directory:

adb restore ./data.ab

Once again, you will be prompted on your device, asking if you want to restore the data.  If you set up a password when originally backing up your data, you will need to enter it here.  When this process is complete, you should be able to open up inMyCellar and see all of your data on your new device!  Feel free to reach out with any questions or concerns.

Hey Google, talk to the Beer Judge Exam Trainer!

It’s finally here… my first app for the Google Assistant has been approved. You can get all of the details here, but the basic gist is that it’s a straightforward helper for studying for the BJCP exam. It’s available on Google Home, Android 6.0+ phones (soon to be 5.0+), TVs and iOS 9.0+ phones. I basically used this as an example app to become familiar with the process in order to build a more sophisticated app, but if people find this useful I will enhance it. Just say, “Ok Google, Talk to Beer Judge Exam Trainer” on your Google Assistant equipped device. As always, drop me a line with any feedback.

Who’s the Phoneme?

Ever since attending Google I/O (one of the best conferences I’ve ever attended… seriously Google, please have me back next year!) earlier this year, I felt I needed to build a dedicated app for the Google Assistant. I kicked around a ton of ideas and at first felt daunted by all of the things that I didn’t know that were fundamental for even making something simple in the space: my programming language(s) of choice were not in the ecosystem. I knew nothing about Conversational UI. TensorFlow, DialogFlow and all of the Machine Learning (ML) terminology were new to me and the SDKs were changing rapidly. So I did the same thing I did in the early days of Android and just jumped in and started learning.

I had a job (which seems like eons ago) in which I worked on many things that became the precursors to modern-day AI and ML concepts.  So a lot of that was learning the new names for concepts with which I was already familiar.  Libraries to facilitate AI had come a long way as well, and there were off-the-shelf pieces for things that entire companies had arisen around back in the day.

Even after getting a pretty good grasp on the basics, I held off for a bit on actually building something, secretly hoping that Google would roll out Kotlin support to all of their server side infrastructure. 😉 That never happened, but I’d also done a fair amount of Node.js and general JavaScript development over the years and finally just decided to dive in there. This is pretty much required to use the backend components of their Firebase infrastructure for Android as well, so it’s not lost time in continuing to be familiar. I was pleasantly surprised to see that Speech Synthesis Markup Language (SSML) was pervasive in the voice assistant space. I had followed the early W3C recommendation pretty closely as part of a (way before its time) Augmented Reality Gaming Engine that I had worked on as a side project. This is where the title of the post comes in…

I dabbled in a lot of odd things in college… so many things that, in the years since I learned them, have come to seem so far afield of my current chosen career as to be laughable. One of my big obsessions (and it still is) was language and its origins. Why is it that ancient Sanskrit is so similar in some ways to Classic Mayan? I could discuss this stuff forever, but for the purposes of this digression, my point is that I’ve taken a bunch of linguistics and language courses. One course in particular was immensely useful in creating a primarily voice-centric assistant app. In this course, I learned about phonemes. Phonemes are a convenient way of representing how words should be pronounced. It’s probably pretty obvious how this would be valuable when dealing with highly technical subject matter using just voice, but think about the wide range of pronunciations for English words with very similar spellings and you’ll get the idea. Phonemes are often represented using symbols from the International Phonetic Alphabet (IPA). I chuckled to myself about the irony that I was building a beer-related voice assistant using the IPA, but this was the secret sauce to making my app sound like a beer judging expert and not some rube reading unfamiliar words out of a homebrewing manual.
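
To make that concrete, the standard W3C SSML phoneme element is the usual way to spell out a pronunciation in IPA right inside the response. The beer style and transcription below are just an illustration, not lifted from the app:

// Somewhere inside an intent handler – wrap the tricky word in a phoneme tag
conv.ask(`<speak>
  A <phoneme alphabet="ipa" ph="kœlʃ">Kölsch</phoneme> should be served fresh and cold.
</speak>`);

Without an override like this, the synthesizer is left to guess at the vowel; with it, the word comes out the way a judge would actually say it.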

I ended up building something for the Assistant that I’m proud of and learning a bunch along the way. The app is currently under review with Google; otherwise, I’d be telling you all to go out and try it. I’ll leave that for a future post.

Using the Physical Web to Drive Subscriber Growth and Engagement

Threddies store front

NOTE: As of December 6, 2018, Google will be discontinuing Nearby Notifications on Android, making much of the information in this post no longer relevant.  Read more here.

Threddies recently made the transition from existing only online to having a brick-and-mortar (B&M) boutique shop. The Threddies shop specializes in items that are noticeably distinct from the online offerings, and this posed a challenge for email marketing efforts. What was the best way to approach introducing the new store to existing customers? How could we ease new customers of the boutique into the existing Threddies email campaign? Would existing online customers (who are spread around the world) even care about the physical location?

Establishing the baseline

Many of the questions regarding how to actually structure the effort using our email marketing toolkit were quickly answered since AWeber was in the process of rolling out its segmenting on tags feature. We knew we didn’t want to maintain separate lists for online customers and those who frequented the shop. We also wanted to be able to easily keep all of our customers up to date about sales that were happening in the online store or new items from the shop that we intended to also sell in the online store. What was very clear was that we did not want to bombard online customers with information that would only really be relevant to people who could actually visit our physical location.

We sent one email to all customers informing them of the plan to open the B&M storefront and directed anyone who was interested in more information to use an alternate signup form that we created with a tag that denoted their interest. This would add new subscribers with the appropriate tag, but would also update existing subscribers to have the tag that we were going to key off of in order to send email with information specific to the boutique. We now had the ability to send targeted emails to those who actually cared about the physical store. That was great for existing customers, but it wasn’t really the best way to get visitors to the store signed up to our list.

Growing the local customer base

If someone made a purchase in the store, they would automatically be added to our marketing efforts, but we noticed in the early days that a lot of people were stopping in and just browsing as we were tweaking the shop layout and the products that were for sale. We wanted to be sure that anyone who stopped in early on and wasn’t ‘converted’ would still have an incentive to come back as the concept evolved. Setting up a device running AWeber’s Atom with the appropriate tag pre-filled was an option, but the shop is small and that would take up space we could use for more merchandise. Early indications were also that people weren’t going to be very proactive about signing up and would need to be instructed to do so. Wouldn’t it be great if there was a way to inform any visitor to the shop that we have a way for them to register for more information, without someone directing them to a tablet in the corner of the store? Wouldn’t it be great if they could do all this from the device they already have in their pocket?

Enter the Physical Web

I had been playing around with beacons and thought using them to solve this dilemma would be a good experiment.  A beacon is a Bluetooth Low Energy device, capable of broadcasting information.  Since it uses Bluetooth, the range of that broadcast is limited to a fairly small area and the beacon broadcast power can be tweaked to adjust that range.  There are two main competing beacon standards: iBeacon (favored by Apple) and Eddystone (an open standard developed by Google).  Android and iOS devices can use both standards, but iBeacon requires that an app be written to specifically interact with the beacon.  Eddystone has a URL format which has become the cornerstone of the Physical Web.  Android devices can interact natively with Eddystone-URL and iOS devices can do the same using the Chrome browser.  I already had some Estimote beacons which support both iBeacon and Eddystone, so I configured one to broadcast the Eddystone-URL format.

Setting up the beacon

Estimote provides an Android app and Web UI to configure their beacons, but any beacon that supports Eddystone will have a similar, manufacturer-specific configuration process. I’ll outline the basic steps without explicitly discussing the exact process required by my beacon manufacturer.

The first thing you’ll want to do is make sure the beacon is broadcasting Eddystone packets and, more specifically, the Eddystone-URL format. This format broadcasts a URL that holds the content you want to share, so you will need to configure the beacon with that URL. I used the URL for my web signup form that was configured to tag subscribers as being interested in the physical storefront. The URL, once encoded, cannot be bigger than 17 bytes, so you’ll likely need to use a URL shortener to accomplish this. I like https://goo.gl/ as it provides a nice dashboard with some metrics about your shortened URLs. Once everything is configured and your beacon is properly broadcasting the Eddystone-URL format, you should see a ‘nearby’ notification on any reasonably recent Android device.
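
If you’re wondering why a shortened link fits so comfortably in that limit, the Eddystone-URL frame replaces the URL scheme (and a handful of common domain endings) with single-byte codes. Here’s a rough JavaScript sketch of that encoding, based on my reading of the Eddystone-URL spec; treat it as an illustration rather than something you’d ship:

// Rough sketch of Eddystone-URL encoding (illustrative, not exhaustive)
const SCHEMES = ['http://www.', 'https://www.', 'http://', 'https://'];
const EXPANSIONS = ['.com/', '.org/', '.edu/', '.net/', '.info/', '.biz/', '.gov/',
                    '.com', '.org', '.edu', '.net', '.info', '.biz', '.gov'];

function encodeEddystoneUrl(url) {
  // The scheme collapses to a single prefix byte; match the longest scheme first
  const match = SCHEMES
    .map((scheme, index) => ({ scheme, index }))
    .filter(({ scheme }) => url.startsWith(scheme))
    .sort((a, b) => b.scheme.length - a.scheme.length)[0];
  if (!match) throw new Error('URL must start with http:// or https://');

  const bytes = [match.index];
  let rest = url.slice(match.scheme.length);
  while (rest.length > 0) {
    // Common endings like '.com/' also collapse to one byte each
    const expansion = EXPANSIONS.findIndex((e) => rest.startsWith(e));
    if (expansion >= 0) {
      bytes.push(expansion);
      rest = rest.slice(EXPANSIONS[expansion].length);
    } else {
      bytes.push(rest.charCodeAt(0));
      rest = rest.slice(1);
    }
  }
  return bytes;
}

// 'https://goo.gl/abc123' → 1 scheme prefix byte + 13 encoded bytes, comfortably under the limit
console.log(encodeEddystoneUrl('https://goo.gl/abc123').length);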

When someone clicks on this notification, they will be directed to your signup form.

Now you can place your beacon in the location where you would like to broadcast your Physical Web presence and adjust its broadcasting power accordingly. Prepare to answer questions, as you’ll likely have someone who sees your ‘nearby’ notification for the first time and is curious about what exactly is happening.

Beacon of Hope

I’m still analyzing data regarding the effectiveness of this approach. It’s not perfect because not every device supports it without additional configuration, but adopting a standards-based approach like this allows others using Eddystone beacons and the Physical Web to aid in familiarizing customers with this technology and help drive adoption.

Moving forward, I’ll likely use this in conjunction with a dedicated Threddies app, allowing use of more beacon functionality while providing additional opportunities for engagement. If you have any questions about using beacons, or just want to talk about the Physical Web in general, feel free to reach out; I’d love to hear from you.