Can’t Quoit for Summer

I can’t remember many summertime outings as a kid where the game of Quoits wasn’t nearby.  It was such a naturally expected feature of any outdoor event that I found it shocking that no one seemed to even know the game when I moved to the Philadelphia area.  I love to play the game, and it was one of the first things I set up as soon as I moved into my current house.

The origins of quoits

What is Quoits?

Many of my local friends had never played quoits before I roped them into playing, but just about everyone has played something similar.  The basic gist of the game is that you throw a round disc with a hole in the middle and try to get it to land on (or as close as possible to) a metal peg in the ground.

A friend of mine from Kentucky and a handful of others that grew up around Pennsylvania Dutch areas were already familiar with quoits.  I always thought of quoits as a “PA Dutch” thing. I’ve seen and purchased sets in places traditionally known for large Mennonite populations. I was definitely surprised when I learned that historically it had been a predominantly English game.

Wikipedia describes several different varieties of quoits, but in my mind the only real version is the Traditional American 4 lb.  If you’re not throwing a piece of heavy metal at another piece of metal, I’m not interested. I’ve played some of the others, but nothing compares to the variant that I’ve played all my life.  For this game, you need proper pits that are more or less a permanent fixture wherever you want to play.

The Pitch

So I hear you… a bunch of people hanging around with beers in one hand and throwing 4 lb metal discs at pins stuck in the ground doesn’t sound like something you want in the same backyard that your kids and their friends play in.  I didn’t either, so that became a prime motivator for me in this project.  What made this problem even worse is that I wanted the ability to keep the game going until after the sun went down.  This meant that the pit location had to get closer to the house or I had to run supplemental lighting.

I opted to go closer to the house since I already have some pretty powerful flood lights in a more or less perfect spot.  This also solved some other minor concerns of mine, like being within earshot of my outdoor speakers and having more options for setting down your beverage of choice while you’re throwing. I also built a spot under my deck for storing the quoits between games. Speaking of the deck, it also offers a great spot for spectators when the quoit matches get interesting.

This really underscored the need for a good solution to my number one concern.  Anyone who’s tripped over one at night can tell you all about the dangers of open pits and exposed pins.  Plus who wants to mow grass around either of them?

measuring twice before I cut once

Recessed Covered Pits

I found a design for constructing recessed covered pits and decided to give it a try.  The design pretty much met all of my requirements, and it looked like a fun little project as well.  Unfortunately, the site that held the original plans no longer exists. The basic structure consists of a 36 inch inside square made from 2 x 6s, topped with a sheet of plywood mounted on a square made from 2 x 4s that is just under 36 inches outside, with two reinforcing pieces to prevent the plywood from sagging. I used wood meant for the outdoors, but also painted each box for added protection and a more interesting look.
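If you want to sanity check the cuts before heading to the saw, the frame math works out like this. This is a rough sketch assuming simple butt joints and the standard 1.5 inch actual thickness of dimensional lumber; reading “just under 36 inches outside” as 35.5 inches is my assumption:

```python
# Rough cut-list sketch for one pit box, assuming simple butt joints.
# A nominal "2 x 6" is actually 1.5 in thick, which drives the overlap math.
BOARD_THICKNESS = 1.5  # inches

def frame_cut_list(inside: float) -> list:
    """Board lengths for a square butt-jointed frame with the given inside dimension."""
    # Two boards span the inside dimension; the other two cap them on both ends.
    capped = inside + 2 * BOARD_THICKNESS
    return [inside, inside, capped, capped]

pit_walls = frame_cut_list(36.0)  # the 2 x 6 pit frame, 36 in inside
# "Just under 36 inches outside" is taken here as 35.5 in (my assumption).
lid_frame = frame_cut_list(35.5 - 2 * BOARD_THICKNESS)
```

For the pit walls this gives two 36 inch boards and two 39 inch boards per box.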

finished assembled quoit box

The most difficult part of this project was digging the holes in my shale-filled front yard to put the pit boxes in! After getting the boxes in place, I filled each with about 9 yards of baseball infield clay to get the hob at regulation height. Overall I ended up with a great set of recessed quoit pits that keep the clay pristine and let me drive my lawn mower right over top!

Google Shopping Actions with Shopify

At Google Marketing Live on May 14th, Google announced a radical revamp of their Google Express shopping service. Even before this announcement, I was working with Google to get Threddies approved and set up using Google Shopping Actions. Google Shopping Actions (GSA) is a mechanism for exposing your products for purchase to the Google Assistant via Google Merchant Center. I had some very specific needs in doing this that required some additional work beyond the basic setup. In this article, I’ll walk through what I did and my reasons for doing so.

Hey Google, Buy some Threddies Scrunchies

Threddies has used Google Shopping Ads to promote many of our products across Google’s ecosystem since we started selling online. We currently use the Google Shopping Marketing campaign capability that is built into Shopify to do this. This setup exposes a product feed to Google Merchant Center using Google’s Content API. This allows us to keep our advertised product data (price, image, description, shipping, etc.) in sync with our website. The Content API is great! It gives prospective customers the best experience by providing up to date information. This eliminates any surprises when they ultimately click through to our website to make a purchase. A simple way to set up GSA would be to just expose this feed directly and call it a day.
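For context, a product record pushed through the Content API looks roughly like the sketch below. The field names follow Google’s Content API product resource, but the values are made up:

```python
# A minimal sketch of the kind of product record the Content API keeps in sync.
# Field names follow the Content API product resource; values are illustrative.
product = {
    "offerId": "scrunchie-001",
    "title": "Threddies Scrunchie",
    "description": "Classic fabric scrunchie.",
    "link": "https://example.com/products/scrunchie-001",
    "imageLink": "https://example.com/images/scrunchie-001.jpg",
    "contentLanguage": "en",
    "targetCountry": "US",
    "channel": "online",
    "availability": "in stock",
    "price": {"value": "4.99", "currency": "USD"},
}
```

Whenever the website changes, the integration re-sends this record, which is what keeps ads and site in lockstep.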

Unfortunately, doing this isn’t what I would consider the best case scenario for a few reasons:

  • What if I want to sell products that I don’t advertise or only a subset of those that I do?
  • What if the new Google Shopping imposes constraints on data that Google Shopping Ads does not? (spoiler: it does, and I don’t like the idea of modifying my website content in order to meet these restrictions)
  • Ideally, I’d like to keep promoting our website products completely separated from selling the products on our new ‘Google channel’.

In addition, selling on Google’s platform was going to be more costly than selling directly to our customers via our website. Google charges a ‘commission’ for each product sold. Google Shopping also prefers a straightforward shipping policy to make its ‘universal cart’ more appealing to consumers. Participating in the new Google Shopping also requires us to support a more liberal return policy than we currently allow. Based on this, the ‘feed’ used to drive GSA would need to allow us to provide a different pricing model. One that directly matched our website would not work.
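To make the numbers concrete, here’s a back-of-envelope sketch of the kind of price adjustment a separate feed enables. The 12% commission rate is a made-up example for illustration, not Google’s actual rate:

```python
# Back-of-envelope: what to charge on GSA so the commission doesn't eat the
# margin. The 12% commission rate here is a made-up example.
def gsa_price(website_price: float, commission_rate: float = 0.12) -> float:
    """Price that nets the same amount after commission as selling at website_price."""
    return round(website_price / (1 - commission_rate), 2)

gsa_price(8.80)  # an $8.80 website item needs to list around $10 on GSA
```

This is exactly the kind of per-channel pricing you can’t express if GSA just mirrors the website feed.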

Feeding the Beast

Google Merchant Center does support ‘supplementing’ feeds, and I initially planned to implement GSA using a supplemental feed. Ultimately, I decided to set up a completely separate feed for GSA. I did this mainly out of my desire to have no impact or dependency on the Content API feed used for Ads.

I set up a brand new GSA-specific feed using Google Sheets. In this feed, I added all of the products that I initially wanted to sell in the new Google Shopping experience. My initial product list pretty closely matches the products that we are selling via Amazon. We did this in order to provide an alternative to dropshippers of our products who were already on Google Express. This is a problem that we are familiar with from our other sales channels. People dropshipping our products from Amazon tend to provide a less than ideal customer experience for many reasons. We try to discourage this practice wherever possible.

Taming the Beast

The first attempt at getting this to work was a mess. Initially I had multiple versions of the same products showing up (at different price points) in both Google Shopping and Ads. This would cause us to pay to advertise products that would be sold via Google Shopping (not desirable!). It also surfaced products that I was advertising from my website in my Google Shopping store (at a deep discount because they were using the website pricing!). I fixed both issues by creating a supplemental feed for Google Ads. In this feed, I use the ‘excluded_destination’ attribute to prevent products from ever showing up in Google Shopping.
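The supplemental feed itself is tiny: it just pairs each Ads product id with the destination to exclude. Here’s a sketch of how the rows could be generated; the id format and the exact destination name reflect my assumptions about the setup described here:

```python
# Sketch of supplemental feed rows that keep Ads-feed products out of
# Shopping Actions. Ids and the destination name are illustrative assumptions.
ads_product_ids = ["online:en:US:scrunchie-001", "online:en:US:headband-002"]

supplemental_rows = [
    {"id": pid, "excluded_destination": "Shopping Actions"}
    for pid in ads_product_ids
]
```

Each row matches a product in the primary Ads feed by id and layers the exclusion on top of it.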

Things were starting to look better, but I noticed an issue with products that were in both the Google Shopping Actions and the Google Ads feed. Both Google Shopping and Ads would prefer the data that was in the Google Shopping Actions feed. This resulted in some of my highest converting products being advertised with the Google Shopping data. These ads also brought users to my Google Shopping store rather than my website. I made two tweaks to my Google Shopping Actions feed to correct this. First, I created distinct ‘ids’ for every product in the feed to prevent overlap with the ids that were provided via the Content API feed. Second, I used the ‘included_destination’ attribute to specify that the products in this feed should only be surfaced in Google Shopping Actions.
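Putting those two tweaks together, each row in the GSA feed gets an id that can’t collide with the Content API ids plus an explicit destination. A sketch; the ‘gsa-’ prefix is just a hypothetical scheme for keeping the ids distinct:

```python
def gsa_feed_row(content_api_id: str, title: str, price: str) -> dict:
    """Build a GSA-only feed row whose id can't collide with the Content API id."""
    return {
        "id": f"gsa-{content_api_id}",               # hypothetical prefix scheme
        "title": title,
        "price": price,
        "included_destination": "Shopping Actions",  # surface here and nowhere else
    }

row = gsa_feed_row("scrunchie-001", "Threddies Scrunchie", "5.99 USD")
```

With distinct ids, Merchant Center treats the two feeds as entirely separate products instead of letting one override the other.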

Getting things in Ship(ping) Shape

One final note if you’re using a similar setup (Shopify with the Google Campaign Marketing app): I noticed that my Google Shopping shipping policies (set up in Google Merchant Center) appeared to get blown away every few hours.

This was maddening, and it took me a bit to figure out what was going on. It turns out that Shopify’s Google Shopping marketing integration was doing it! This was easy to fix after I understood what was happening. In your Shopify admin, navigate to Apps > Google Shopping > Merchant Center Account and, in the Shipping settings section, choose to manage them manually in the Import method drop down. To be extra safe, first modify the policies imported from Shopify so that Google Shopping Actions never uses them in Google Merchant Center, then create new Google Shopping Actions-only shipping policies after saving your import method settings in Shopify.

Unless you want to be an early adopter, I would recommend waiting a bit… a better Shopify integration is coming. If you have been accepted into the Google Shopping Actions program and have a similar setup, hopefully this helps! Try it out and let me know what you think! If you want to use the Google Assistant to buy your next hair accessories: “Hey Google, I’d like to buy some Threddies Scrunchies.” Or check out all of our products on Google Express.

Best Tips for Making Your Home Green

Making your home green doesn’t mean starting from scratch. Existing homes can be made more sustainable with some simple changes, one room at a time. A sustainable home needn’t mean less comfort or convenience either; green homes should be comfortable spaces that support the health of you and your family. As well as health benefits, green homes are cheaper to run since they make the most of all resources, and the sale value of a green home has been found to be around 7% higher than regular housing.

The heart of your green home

The kitchen is often referred to as the heart of a home; it’s also the room that uses the most energy.

Energy consumption in your kitchen can be reduced with a few simple adjustments.

The refrigerator is the third highest energy consumer among all the appliances in your house.  Its consumption can be reduced by keeping it on a warmer setting, letting food cool before refrigeration, defrosting food in the refrigerator, and deciding what you want before opening the door. Make sure your refrigerator has at least 3 inches of space all around it for proper air flow and that all door seals are working. If it’s time to replace it, go for an ENERGY STAR rated model and continue with your energy efficient practices to save even more.

Dishwashers use around 30 kWh per month and around 5 gallons of water per wash – if it’s an energy efficient one. Switching to washing dishes by hand will save energy and around 1-2 gallons of water. If you just can’t give up the dishwasher, make sure it is full before turning it on, don’t pre-rinse dishes unless they’re particularly dirty, let dishes air dry, and turn it completely off when the cycle is done.

You can further reduce water consumption in the kitchen by fitting a flow valve or aerator to your faucets. This will cut the flow by half without affecting pressure.

Sleeping green

Your bedroom should be a haven of comfort and rest.  Invest in quality bed linens, duvets and blankets that will last for years and keep you warm in winter so you can switch the heating off at night.

As it takes around 2,839 gallons of water to make a cotton bed sheet, it’s better to look for chemical-free, organic cotton, linen, wool, and hemp fabrics. They’re better for you as they won’t leach toxins into your home. You can even get 100% organic mattresses that will give you a better and healthier night’s sleep; plus, they’re fully biodegradable and recyclable, unlike a conventional mattress.

Keeping green in the bathroom

Keeping yourself clean and your bathroom green is fairly simple. Switch your shower head for a low-flow model. Installation is fairly simple, and a good quality model will still deliver good pressure while using around 3-4 gallons less water per minute.

Baths use around three times the amount of water as showers, so when you do take a bath, scoop the water out for use in your garden, or create your own greywater system for flushing toilets and washing clothes.

Updating your toilet to improve efficiency and include a dual flush will save water further. If your toilet is leaking it could be losing 200 gallons each day, so get it fixed.

Small adjustments throughout an existing house (including insulation, solar panels, and water recycling systems) will further green your home. If you are considering a new build, think smaller and consider the configuration of rooms to improve energy efficiency while making the most of the environment you’ve chosen for the build.


3D Printing Perfection

My 10 year old kids have finally caught the Gloomhaven bug. For anyone who hasn’t played this game (or doesn’t play with 10 year olds), let’s just say keeping things on the level and organized during a scenario and across a campaign can pose a challenge. I started looking around for things I could 3D print to ease this burden. I found a few existing models for player dashboards and box organization that looked promising. I started making some rough prototypes and quickly realized that I wanted everything to be as dimensionally accurate as possible in order to make sure that everything fit perfectly.

I will describe my steps in terms of my printer… a heavily customized Printrbot Simple Metal. Printrbot is sadly no longer in business, but the reasonably open nature of this printer has allowed it to live on. Even though I’ll be describing everything in terms of this printer, I’ll summarize with high level steps that should be applicable to any printer/slicer combination.

I started off printing lots of test cubes of known dimensions. These print fast, and I measure each one with a digital caliper. If the dimensions aren’t near perfect, it’s time to make some adjustments and print again. Assuming your printer isn’t completely out of whack, this model will mostly help you perfect your hot end and heated bed temperatures for the filament that you are using. The goal here is to perfect the balance of adhesion to the build surface without making the part difficult to remove or squashing things in the Z-axis.
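If you want to track your cube measurements the same way, the bookkeeping is trivial. The 20 mm nominal size and the sample caliper readings below are illustrative:

```python
# Simple bookkeeping for calibration cube measurements. The 20 mm nominal
# size and the caliper readings below are illustrative values.
def dimension_error(nominal: float, measured: float) -> float:
    """Signed error in mm between a caliper reading and the nominal dimension."""
    return round(measured - nominal, 3)

readings = {"X": 20.05, "Y": 19.98, "Z": 19.60}
errors = {axis: dimension_error(20.0, m) for axis, m in readings.items()}
```

A log of these errors over successive prints makes it obvious whether a temperature or offset change helped.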

No matter what I did, I couldn’t really get my Z height to be perfect. It was always smaller than it should be, and I kept adjusting my Z offset using the G-code M212 ZXXX (where XXX was the Z probe offset), tweaking it over and over again in 0.1 mm increments. No matter what value I used, I could not get this perfect before reaching an offset that negatively impacted first layer adhesion. I noticed that at a certain Z offset, where there was no possibility of the hotend smashing the first layer, the discrepancy in the Z height always remained the same. I decided to print a taller model to see if the issue with Z dimensional accuracy was a problem with every layer (I should see a larger discrepancy with a larger model) or purely a first layer problem (the Z height discrepancy should be about the same no matter the model height).

I moved to printing a pyramid calibration print that was taller and would also allow me to test stacking dimensional accuracy in the X/Y axes. The first print proved that the Z height dimensional problems were absolutely a first layer issue.
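That short-vs-tall reasoning can be written down as a simple decision rule. The tolerance and the sample numbers here are illustrative, but they capture why the taller print pointed at the first layer:

```python
def diagnose_z_error(shortfall_by_height: dict, tol: float = 0.05) -> str:
    """Classify Z undersize from prints of different heights.

    shortfall_by_height maps nominal model height (mm) to how many mm short
    the print measured. A roughly constant shortfall implicates the first
    layer; a shortfall that grows with height implicates every layer.
    """
    shortfalls = list(shortfall_by_height.values())
    if max(shortfalls) - min(shortfalls) <= tol:
        return "first layer"
    return "every layer"

# Illustrative numbers: a 20 mm cube and an 80 mm pyramid both ~0.4 mm short.
verdict = diagnose_z_error({20.0: 0.40, 80.0: 0.42})
```

If the error scaled with the number of layers instead, the 80 mm print would have come up roughly four times as short.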

I recently switched my preferred slicer from Simplify3D back to Cura because Simplify3D’s overaggressive license protection scheme kept locking me out of the software, requiring me to wait on their customer support (sometimes for days) to unlock my legitimate copy so that I could use it again. When this initially happened, I was in a hurry to continue printing a project I was working on, and I breezed through the Cura setup, mostly accepting the defaults for my printer. Up until this point, extreme accuracy in the Z dimension wasn’t a high priority, and I didn’t really notice any issues until diving into this project. I did some digging in Cura and noticed that a default setting was to extrude at about 80% flow on the first layer. I changed this to 100%, and after adjusting some hotend and heated bed temperatures on a few more cubes, my Z height was perfect!

By now, I had tons of little cubes in all sorts of colors littering my printing area. They might print fast, but they’re fairly useless. I decided to look for a test print that had at least some utility. I found a Jenga block that took about 20 minutes to print and decided to go for it. I figured that since my printer was fairly dialed in now, I could use this print when making subtle slicer changes or swapping filament to see how overall printing would be impacted. My kids saw a few of these in different colors and wanted a ‘giant’ Jenga set, so I scaled up the model and went to work.

Within a few minutes of printing a Jenga block that pushed the limits of my build area in the X direction, I noticed some obvious warping in the corners. How could this be when everything was dialed in? I initially blamed the cooling fan; I figured it might be cooling the PLA too fast, which was killing edge adhesion. I modified my slicer settings to turn the cooling fan off completely for the first two layers and then gradually ramp up to 100% over the next 5 layers. This seemed to fix the warping for the Jenga blocks.

I finally got back to what originally led me down this path… printing inserts to organize Gloomhaven. Unfortunately, I couldn’t print everything in a single piece since the largest inserts needed a 285 mm print surface in one direction and my frankenbot maxed out at 250 mm in all directions 🙁 I contemplated putting everything on hold and figuring out how to enlarge my print surface, but then recalled what that process was like when I upgraded from my original 150 mm axes to my current 250 mm. It took forever to iron out the kinks. I decided to just split up the models so they would print within my constraints and added tabs so they could be glued together (for most of my PLA based projects I use E6000 craft adhesive).

Even with the split models, some of these pieces still pushed me almost to the limits of my build area in at least one direction. I continued to notice warping and, in some cases, failed adhesion at the extreme ranges of both the X and Y axes. This was perplexing at first, but after doing some research with several additional test prints, it appeared that I had an issue with my bed not being level. This was crazy since one of the reasons I favored the Printrbot early on was its auto bed leveling capability, which was supposed to correct for minor imperfections in the bed’s levelness by modifying the G-code sent to the printer to compensate.

The Printrbot uses an inductive (metal-sensing) probe to determine where the bed is. Before each print, it probes the bed in 3 places in order to get an idea of how it needs to compensate for any tilt in the bed. I started by taking everything apart and recalibrating this sensor, thinking that was the problem. Of course, after doing this, I pretty much had to walk back through everything I had just done to dial in my printer… a few days later, I was back in the same place, still having issues in the extreme X/Y regions of my build area.

I started reading more about the auto bed leveling procedure and remembered a former co-worker (who did not have an auto bed leveling feature) telling me about Simplify3D’s bed leveling wizard. I decided to give this a shot and record the results. This process basically moves the print head around to many points in the X and Y axes at a Z of 0 and probes the actual Z offset. It immediately became obvious that the problem was related to the 3-point probing that my auto bed level procedure used. Everything within those points was fine, but several areas outside them were noticeably off in Z offset compared to the probed points. The points that get probed are baked into the firmware, so I started digging around to see how I could change that.
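The geometry here is worth spelling out: three probe points define a plane exactly, so any bed shape that isn’t planar is invisible to 3-point probing. A quick sketch with made-up probe heights shows how a drooping corner slips through:

```python
# Three probe points define a plane exactly, so 3-point leveling can't see
# any bed shape that isn't planar. All coordinates below are made up (mm).
def plane_from_points(p1, p2, p3):
    """Return (a, b, c) with z = a*x + b*y + c passing through the 3 points."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    a = ((z2 - z1) * (y3 - y1) - (z3 - z1) * (y2 - y1)) / det
    b = ((x2 - x1) * (z3 - z1) - (x3 - x1) * (z2 - z1)) / det
    return a, b, z1 - a * x1 - b * y1

# Three probe points near the middle of the bed all read perfectly flat...
a, b, c = plane_from_points((50, 50, 0.0), (200, 50, 0.0), (125, 200, 0.0))

def residual(x, y, actual_z):
    """How far the real bed deviates from the plane the firmware assumes."""
    return actual_z - (a * x + b * y + c)

# ...so a corner that droops 0.3 mm is completely invisible to the compensation.
corner_error = residual(0, 0, -0.3)
```

The firmware faithfully compensates for the plane it measured; the droop at the corner simply isn’t in its model of the bed.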

I still use the original Printrbot firmware, which is no longer maintained and is based on a fairly old version of the Marlin firmware. This version does not really support much modification in terms of how the auto leveling procedure works. While becoming knowledgeable about the Marlin firmware, I discovered that newer versions of Marlin support a much more robust bed leveling procedure that arose out of the needs of SLA printers but was then modified for FDM printers. This procedure, called skewness compensation, allows you to define a ‘bed skew’ matrix that is applied before the auto bed level procedure is run. This sounded like the exact solution that I needed. Unfortunately, there isn’t an official Printrbot firmware that supports this… but I found a project to get the Printrbot Simple series running on a modern Marlin based firmware.

My initial investigation suggested that using this firmware is still experimental and I may sacrifice other behaviors that haven’t been implemented yet in order to get bed skew compensation. Being that I have a bit of a print backlog and didn’t want this printer to be out of commission (especially since it prints things that are smaller and centered on the build plate perfectly), I decided to hold off on a firmware upgrade for now, but I am actively watching this project and plan to upgrade at some point.

I needed another solution, so I spent tons of filament and time tweaking the Z offset and generated G-code manually in order to print at the extremes, but this resulted in a return of Z dimensional accuracy issues when printing items that don’t expand beyond 75 mm from the bed center in the X or Y axis. I treat custom G-code and firmware changes like a software project and keep meticulous records about what I changed and why, so I used this changelog to define an ideal ‘profile’ for printing things within a 75 mm square centered on the build plate and another ‘profile’ for printing outside of this square. This got me close enough for just about every use case I was immediately working on. I printed for a few days, swapping out firmware profiles depending on whether the model I was printing crossed this imaginary boundary or not.
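The boundary check itself is simple enough to automate. A sketch; the profile names, the bed origin at a corner, and reading the boundary as 75 mm from center are all my assumptions:

```python
# Sketch of automating the profile choice. Bed origin at a corner, the
# profile names, and the 75 mm-from-center boundary are assumptions.
BED_CENTER = (125.0, 125.0)  # mm, for a 250 mm square bed
LIMIT = 75.0                 # mm from center in X and Y

def profile_for(bbox_min, bbox_max):
    """Pick a profile from the model's (x, y) footprint on the bed."""
    cx, cy = BED_CENTER
    inside = (abs(bbox_min[0] - cx) <= LIMIT and abs(bbox_max[0] - cx) <= LIMIT
              and abs(bbox_min[1] - cy) <= LIMIT and abs(bbox_max[1] - cy) <= LIMIT)
    return "center" if inside else "edge"
```

Feeding in the sliced model’s footprint tells you which profile to flash before starting the print.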

All the time I was doing this, I still had this nagging item in the back of my head… how do people who don’t have auto bed leveling deal with this issue? This led me down the path of an area I forgot all about from my early 3D printing days using homemade RepRaps… hardware bed leveling. To be honest, I probably blocked this out of my mind rather than forgot about it, because it was so nightmarish in the early days and was one of the reasons I insisted on a printer with an auto bed leveling feature when I upgraded. I ultimately found someone who added hardware bed leveling to their Printrbot Simple Metal. I had to mess around a bit to get this working with my hardware modifications, but it ultimately allowed me to dial in the printer reasonably well for the remaining part of this project.

Indoor Gardening Setup

I’ve always been a fan of gardening… it probably has something to do with spending all that time out in the sun with my great grandmother digging in the dirt as a kid and enjoying the great things that came from it when it was ultimately time for harvest. I lost touch with this joy for a bit in my 20s, but there was nothing like the mind-numbing contrast of the cubicled office to make me want to get back outside and get my hands dirty. After buying my first home with some property, doing some real gardening was high on my list.

One problem living in the part of Pennsylvania that I do is that the outdoor growing season doesn’t last all year long 😢 I started doing a bunch of container gardening just so I could bring things like peppers and herbs inside over the winter. This was mainly in order to get a jump on the next season… assuming they got enough sunlight, I didn’t forget to water them or it didn’t get too cold where I was keeping them. Some of my failures here made having a dedicated indoor space for gardening a high priority when looking for my current home.

Seed Starting

My indoor “gardening space” started out as just a small shelf in a closet in my laundry room. The early intention was to set it up as a staging area for starting seeds and growing transplants indoors so that they could be planted outside as soon as conditions allowed. My laundry room was perfect for this since it was by far the most humid room in the house and also the warmest due to its placement right next to my furnace, both conditions being ideal for starting most seeds.

I started with a pretty simple germination station and a supplemental heating mat since many of the things I wanted to get a head start on require warm germination temperatures. I also use peat pellets as my growing medium. There are cheaper ways to do this, but these are very effective in my experience, not all that messy and they help with adding some much needed organic material on a regular basis to my shale and clay rich soil. They also help with transplanting the plants which I’ll get to later. You put the peat pellets in a few days before adding seeds and mist them down every day until they expand a bit. At this point, you can add your seeds and continue to mist them as needed, making sure that you don’t make conditions so wet that mold starts growing on the pellets. In a few days, you should have some sprouts which you can then transplant.

Most seeds don’t require light to germinate, so this basic setup works great as long as you are on the ball about getting germinated seeds out of the station before the sprouts start to require light. Since this requires transplanting, which can take some time, I eventually added a small LED setup. This helps in three distinct ways. It buys me more time before I NEED to transplant. It gives me the ability to work with seeds that do require light to germinate and it also allows the sprouts to become much hardier before transplanting since they can use the light to continue growing. I don’t have the best finesse when transplanting sprouts, so any help I can get in having seedlings that can take some abuse during transplanting is always helpful.

My current seed germination setup towards the end of germination round with only the ‘stragglers’ remaining.

Switches and Outlets

Some of you might be wondering about the tech involved at this point. There’s already one light and a heating pad involved, neither of which you’d really want to run 24/7. Suffice it to say that like everything, I started out small just using power strips and manually turning things on and off. Eventually, I moved to using timers and then automated, programmable outlets/switches since the manual management became annoying and unreliable. It’s really amazing how much you can do with these COTS products and things like IFTTT and the Google Assistant. I still use much of this basic hardware, but have supplemented with some custom hardware based on the raspberry pi and software that I wrote using Android Things and Actions on Google. If people are interested, I can document this in another post. It’s another thing that I hope to make available to others at some point after working out most of the kinks and documenting it more thoroughly, so let me know if you’re interested!
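To give a flavor of what those programmable outlets are doing, here’s a minimal sketch of the schedule logic. The device names and hours are illustrative, not my actual configuration:

```python
# Minimal sketch of the schedule logic behind the automated outlets.
# Device names and hours are illustrative, not an actual configuration.
SCHEDULES = {
    "grow_light": (6, 22),  # on from 06:00 until 22:00
    "heat_mat": (8, 20),
}

def should_be_on(device: str, hour: int) -> bool:
    """True if the device's outlet should be powered at the given hour (0-23)."""
    start, end = SCHEDULES[device]
    return start <= hour < end
```

Whether this runs on a COTS smart plug’s built-in scheduler or on a Raspberry Pi toggling a relay, the core of it is just this kind of time-window check.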


No matter how you start your seeds, eventually you’re going to need to transplant them. You could attempt to take them right from the seed starting area to the outdoors, but if you’re not doing this under the utmost growing conditions, you’re likely not going to have the best of luck. This means you need some capability to handle this phase indoors as well. Assuming your intention is to ultimately put these plants outside in a garden, this phase differs from the germination stage in a few notable areas:

  • You will need a space for growing plants.
  • You will need light; ideally adjustable to accommodate your growing plants.
  • You will need an effective strategy for watering around all of these electrical systems that prevents over/under watering.
  • You will need actual soil for the plants to put down a root system.
  • You need ways of strengthening your plants so that they don’t become too weak to survive outside.
Some recent transplants on an elevated platform getting them closer to the light

Space is the Place

Even when I carved out that initial shelf in my laundry room closet to start my seeds, I knew that eventually I wanted to take over the entire closet. The first shelf started about 4 feet above the ground which gave me some serious growing space underneath. This height was also perfect for installing an adjustable fluorescent grow light system. In order to maximize the effect of the lights, I first covered all of the surfaces below this shelf with aluminum foil to reflect all light back at the plants. I chose a fluorescent system since I wanted it to be reasonably economical and didn’t need the added heat from the more energy consuming lights. At the time that I installed this, LEDs weren’t really viable due to their cost and questions regarding their effectiveness for growing plants. This latter concern has been addressed with newer models and I’ve since supplemented the base install with programmable LED arrays that allow me to tune the light wavelengths in order to optimize it for my plants and goals. Blue wavelengths encourage growth while reds encourage flowering/fruit production. You can see in many of the photos that the light is either skewed to red or blue or a mix depending on what I’m trying to accomplish.

Electricity and Water don’t mix

Obviously, after adding a few lights, heating elements and other controls, thinking about how to route power to everything becomes a concern… Especially when you factor in the need to water everything on a regular basis and deal with the inevitable situation where the water spills or goes someplace unintentional. It didn’t take me long to build catch basins beneath every spot where I place my plants in containers: I just found the largest plastic containers with lids available and use the lids as the basins. Their depth is enough to keep any overwatering contained. This has the added benefit of allowing you to water your plants ‘from the roots’ if you use containers that have holes in the bottom (which I would definitely recommend to prevent both under and over watering). These lids also allow you to route the power along the outskirts. No matter what, you definitely want to use GFCI outlets EVERYWHERE. I still do most of my watering by hand, mostly because I spend a bunch of time inspecting anything I’m growing on a regular basis anyway, but I’ve been experimenting with automating the watering in various ways.

Put roots down

The main goal of the ‘indoor transplant’ stage is to create plants that are hardy enough to put outside. One of the most fundamental things at this point is to provide everything the transplants need to create a healthy root system. This starts with using the peat pods mentioned earlier. They allow you to easily move the sprouts into a secondary container without disturbing any of the roots that have started at this point.

Choice of container is the next step. I already mentioned that having a container with holes in the bottom and watering from the bottom encourages healthy root growth by forcing roots to grow deeper in order to find water, but the size of the container also matters. Think about your timeline for moving the plants outdoors and the growth rate of your plants and adjust accordingly. If you’re moving them outdoors within a few days or a week, you can get by with a small container, but if it’s a plant that’s destined to stay in a container, or spend weeks inside first, you’ll want something much bigger. You can do multiple rounds of transplanting, but I like to think ahead about this and reduce the number of times I need to move the plants.

No matter what container you decide on, you’ll need to fill it with good soil. Fill the container about 3/4 of the way, then take your sprout in its peat pod, tear the webbing on one side to make it easier for the roots to push through, and place it in the center of the container. Add more soil around the plant and then water it deeply. Transplant complete!

Strong Plants

At this point, you’ve created a safe environment that reasonably approximates the conditions your plants will face outdoors. One thing you’re still missing is the stress caused by weather conditions and by inquisitive insects and animals. You can prep your plants for this by adding an adjustable, oscillating fan to the mix. I like to avoid blowing air directly on my plants and opt to have the fan face a wall, letting the breeze ricochet back onto the plants.

Get Outside

What I’ve described here is a pretty effective way to get a jump on your growing season. Before putting your plants outside permanently, you’ll want to put them out during the day for a few days to ‘harden’ them off. This is another area where having a system of trays makes things easier! Using this method of seed starting, I’ve been able to harvest weeks ahead of my neighbors when the weather cooperates, and I’ve been able to make that even better using techniques to create micro-climates outdoors (definitely another post).

Grow Inside

Pruned multi-year pepper plants that are fruiting/flowering indoors!

If you have enough space, it’s also very easy to tweak this setup to create a year-round indoor growing environment. I do this mainly with peppers, greens and herbs. I’ll plant in permanent containers, move them outside when the weather is right for the plants, and then, when it starts to get cold, prune the plants back and bring them indoors. I can then tweak the environment either to keep the plants mostly dormant until the next growing season and put them back outside again, or to continue flowering and fruiting while inside. I have some pepper plants that are several years old at this point!

Interested in indoor gardening? Have you built something similar? What’s holding you back? I’d like to hear more from you!

My Daily Ritual

The thing I am asked about most often is some variant of “How are you able to do everything that you do?” It’s usually accompanied by things like: “You have so many interests”, “You’re married with kids, how do you have the time?” or “Do you ever have down time? I just want to watch Netflix when I get home.” I never really know how to respond to this… it really is just the way I live my life and has been for a long time. After talking to people a bit about this and enduring constant quizzing, it seems that it might come down to my strict adherence to a daily ritual. I call this a ‘ritual’ because it really is something that I’ve built up over decades with an explicit outcome in mind… to live the life that I live. It’s not a routine (a sequence of actions regularly followed; a fixed program), and it’s not set in stone. I’m constantly iterating on this ritual to make it better for me. That’s also part of the key… this is FOR ME. It’s been iterated on for most of my life, adjusted to fit years of medical tests, and customized for what I know about my genetic makeup. Every part of it has been vetted and tweaked to make it overall positive for my biochemistry. This ‘ritual’ likely won’t work for you… in fact, it will be a horrible thing for many people, but maybe by documenting it, there is something in here that you will find useful. Maybe you will be inspired to start on the journey of creating your own. At the very least, you will get to see how things change over time, because I plan on continuing to update this post as my process evolves.

This is a LONG post. Everything documented here is the current state of my practice that arose from years of iteration from collecting data about myself in great detail and experimenting with things to improve various aspects of my life. I’m always experimenting and this post WILL NOT document experiments. There were many failures and I don’t discuss those here. This is only for things that have become part of my permanent ritual. If you want to know about my latest experiments, ask me about them the next time you see me. At any given time, there’s usually only one thing that I’m experimenting with… this makes it easier to identify positive or negative correlations and eliminate additional variables that could be skewing results regarding my experiment hypothesis.

How do I collect and analyze this data? I’ve used tons of things over the years, but at this point it’s essentially custom software that uses the Google Fit platform as central storage. I use several commercial apps and hardware for data collection and all but one integrates with Google Fit. This makes for an easy integration point since the additional software that I write just needs to be able to use the Google Fit API to enter or consume data. For many years, I manually analyzed everything. Over the last few years and with the advancements in Machine Learning, I’ve been slowly building software to help with my analysis. Everything that has become a part of my ritual arose out of a desire to make a positive change to some monitored data point that I felt a need to improve. I won’t really dive into the details about specific data points for every single thing in this post, but if you’re curious about anything specific, feel free to ask.
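To give a flavor of what that custom software does, here is a minimal sketch (a toy, not my actual pipeline, which is far more involved) of the kind of transform I run over data pulled from the Google Fit REST API. The dataset shape, points with nanosecond timestamps and `intVal` values, matches how the API returns step counts as I understand it; the authenticated fetch itself is omitted here.

```javascript
// Sum the step count from a Google Fit dataset response
// (the JSON returned by users.dataSources.datasets.get).
function totalSteps(dataset) {
  return (dataset.point || []).reduce(
    (sum, p) => sum + p.value.reduce((s, v) => s + (v.intVal || 0), 0),
    0
  );
}

// Google Fit timestamps are nanoseconds since the epoch, as strings;
// convert one to a regular JS Date for trend charting.
function nanosToDate(nanos) {
  return new Date(Number(BigInt(nanos) / 1000000n));
}
```

Once everything lands in Google Fit, all of the analysis code only has to understand one data shape, which is exactly why I use it as the central store.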

The Morning Ritual

I tend to wake up about the same time every day. I don’t use an alarm and try to never schedule anything so early that I would need one. I have a skylight in my bedroom that is useful for slowly nudging me to wake up as the sun comes up. Embracing my own personal Circadian Rhythms has been very beneficial for me. Getting good quality sleep is also critical to me. Sleep experimentation was probably one of the very first things I played around with in order to increase my productivity. I followed a polyphasic sleep schedule for years, but no longer do that since it’s not really compatible with having a family or a traditional job. It was likely useful in training myself to make the most out of the sleep that I get. This practice taught me how to fall asleep fast, get into a state of REM sleep quickly and spend more time in deep (delta wave) sleep.

The first thing I do upon waking is the same thing I do right before going to sleep: I lie in bed for a few minutes, mindfully breathing. This gets the day started right by allowing me to reflect on what I’m going to do. The morning breathing takes on different forms (meant to energize me for the day), unlike my nightly version, which always follows the same pattern and purpose (getting me in the right state for sleep).

Many people underestimate the importance of breathing ‘correctly’. Hurried modern life and other stressors have an extremely negative impact on how people breathe, and most don’t even realize it. If you aren’t currently aware of how you breathe on a regular basis and aren’t prioritizing doing something about it, you will be amazed at how quickly doing so can change the way you feel. There are many breathing techniques you can use, suited to many different goals.

My sleep quality dictates how the rest of my day progresses. Most of the time my sleep quality is high; occasionally things go awry, and I have ritual adjustments for when this happens. I won’t really go into the specifics of the adjustments since it’s a pretty rare occurrence… I do so many things to make sure that my sleep quality stays rock solid. I’ve used many products to monitor sleep quality over the years, but my current choice is by far the best and least intrusive method for me. I use the Oura app to check the details of my sleep quality right after completing my morning breathing routine.

I get out of bed and drink a glass of water to rehydrate. It also helps with getting consistent body related measurements.

I take measurements with a eufy Bluetooth smart scale. The one I use measures weight, BMI, and mass for body fat, muscle, and bone, and tracks percentages for everything, including visceral fat. The app has its own trend tracking, but I ultimately settled on this model because it integrates with Google Fit.

Next I run through a quick yoga routine. This changes daily and is focused on increasing flexibility. The daily variance is mainly to focus on areas where I may be having issues or feel that I need improvement. The constant here is that there are certain ‘whole body’ flexibility enhancing postures that I do no matter what.

You might notice that my morning ritual doesn’t include breakfast. I used to be a big advocate of ‘grazing’, but over the last year I’ve become a complete advocate of Intermittent Fasting (IF). I follow a strict 18:6 protocol every day except Saturday and Sunday (anything goes on the weekend). Occasionally, I’ll alter one day a week to 16:8 to accommodate any meetings or events that I have scheduled. I chose the 18:6 protocol because research has shown it to have a more profound impact on autophagy. I would love an effective way to measure this, since many of the most recent enhancements to my ritual are aimed at increasing autophagy. The IF area of my routine is where I’m currently doing the most experimentation (e.g. does the timing of the fast matter? does what I consume when breaking the fast matter? is there a decrease in effect when this becomes routine? what can I do differently on cheat days?), and I expect more updates to occur here over the next few months.

I make a giant pot of tea that I sip on throughout the morning. This is often a Darjeeling/Ceylon black tea blend, but I’ve been adding more and more green tea as part of an investigation into green tea having added benefits above and beyond black tea. If I need an extra boost, I’ll make a cup of espresso as well.

My work day

At this point, my work day begins… I’ll do a quick scan of email and some dashboards that I have to see if there are any immediate fires that need to be put out. Usually there is nothing, but I find it great to get these out of the way ASAP. Notice that I don’t spend any time on non-essential email, social media, political news, etc. That can wait for another time since the mornings are for Getting Things Done (GTD).

Getting Things Done

I read this book when it first came out, and nothing has been more beneficial to my productivity than what arose out of reading it. I started with a paper-based system as described in the original book, but quickly developed my own iteration using electronic tools. I’ve moved this system across different toolchains at least 3 major times, but continue to use the same basic principles with some added enhancements of my own.

The rest of my morning consists of complete focus on completing two objectives: one personal objective and one ‘work’ objective. I decide what these are the day before I start working on them (more about this later). They meet the ‘next action’ criteria from GTD… that means I know exactly what needs to be done; there is no investigating and there are no unknowns at the time I decide to work on them, just a set of straightforward steps that require some uninterrupted time to complete. Most of the time these are easy; sometimes they take longer or ‘unknown unknowns’ are discovered. If I finish early, I’ll dig into some email at this point (always time-boxed) or review other objectives that are ready to be worked on and pick one of those. During this time, I try to remain focused on my task except for one allowable interruption…

The Importance of Movement

Another great feature of the Oura ring is that it will alert you if it detects that you haven’t moved enough over time. I’ve always felt that moving while working is extremely important. I’ve used standing desks for more than a decade, and a few months ago I also purchased a FluidStance, a balance board that you can stand on at your desk. Based on what I’ve seen, it is far more effective at increasing your activity/calorie burn than just standing alone. I alternate between using it and standing flat on a mat throughout the day, and my Oura ring never alerts me to get moving while doing that. Occasionally, though, I will sit while working, and I’ve developed a few quick routines to run through on Oura ring activity alerts that are designed to get my heart rate to ~80 percent of my max for 3 to 5 minutes.

The Mid Day Transition

By the time midday approaches, I’m almost always done with my two major objectives for the day. I mark the transition by taking a few minutes to stimulate my brain differently by learning another language. I use Duolingo for this daily practice; you can find and follow me there by searching for my name. I’ll do another quick email check-in and then update and review my GTD lists. The goal here is to get any pending problems front of mind for the next part of my day.

Another basic thing that I’ve been doing for a very long time is a ‘lunch time’ walk. This started out mostly as a way to get some movement during the day and to get outside of the office on nice days. These are great reasons, but I’ve evolved this into an informal mindful walking practice. I get outside no matter the weather and walk for at least 20 minutes. I’ve built an infinite labyrinth trail at my house that I walk for this purpose. I focus on the changes that occur to the trail day by day and let my subconscious churn on problems and the upfront items from my GTD list that I’ve recently reviewed. Some of my best ideas arise out of this practice or immediately after… plus I get another 20 minutes of exercise in during the day!


Even though I’m a huge fan of Incidental Activity as the majority of my exercise, at least 4 days a week I run through a short but vigorous, dedicated workout routine. I’ve tried many workouts over the years and have decided that the best one for me must:

  • Require minimal equipment; I travel a good bit and don’t want lack of access to specialized equipment to become an easy excuse to skip a workout.
  • Provide a full body workout in a minimal amount of time.
  • Use a full range of natural motion to minimize injuries.
  • Emphasize Functional Strength training.

Due to this, I’ve created a High Intensity Interval Training, body-weight focused workout that I do Monday, Tuesday, Thursday, and Friday. It only takes 20 minutes… I focus on the lower body Monday/Thursday and the upper body Tuesday/Friday, which gives me ample time to rest muscle groups before the next time around. In order to prevent this routine from becoming ‘routine’ (allowing me to avoid the plateau effect), I cycle the exercises weekly on an 8-week schedule. Each day consists of ~8 different exercises done for 30 seconds each, with a 10-15 second rest period between exercises, and then the whole set of exercises is repeated 3 times. This does a good job of getting my heart rate up and is far more effective for me than any other routine I’ve tried so far. If my Oura ring shows a high readiness score, indicating I’m up for a challenge, I’ll repeat this workout in the afternoon or early evening.
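For the curious, the interval structure described above is easy to express in code. This little sketch (the exercise names are placeholders, and I treat 15 seconds as the rest length even though I vary between 10 and 15) generates the timer segments for one session:

```javascript
// Build a HIIT timer schedule: every exercise runs for `workSec`
// seconds, with `restSec` of rest between segments, over `rounds`
// passes through the whole exercise list.
function buildHiitSchedule(exercises, rounds = 3, workSec = 30, restSec = 15) {
  const segments = [];
  for (let round = 1; round <= rounds; round++) {
    exercises.forEach((name, i) => {
      segments.push({ round, name, phase: 'work', seconds: workSec });
      // No trailing rest after the final exercise of the final round.
      const last = round === rounds && i === exercises.length - 1;
      if (!last) segments.push({ round, name, phase: 'rest', seconds: restSec });
    });
  }
  return segments;
}

// Total session length in seconds.
const totalSeconds = (schedule) => schedule.reduce((s, seg) => s + seg.seconds, 0);
```

With 8 exercises at 30 seconds of work and 15 of rest across 3 rounds, the math works out to just under 18 minutes, which is where the “it only takes 20 minutes” claim comes from.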

I used to do this workout early in the morning, but have moved the bulk of my strength training to immediately before I break my intermittent fast. The reason is a number of studies showing interesting things that happen to the AMPK and mTOR pathways while strength training in a fasted state. I could talk about this all day, but the basic gist is that training while fasted and then immediately breaking that fast with the right type of meal has been shown to have positive impacts on muscle preservation while fasting, as well as on fat loss and insulin sensitivity. When I first read these studies it sounded too good to be true, but I’ve verified the results in my own testing.


Now it’s time for my lunch… this is normally around 2PM unless I’m meeting someone for a more traditional lunch time meeting. I don’t have extremely strict rules regarding what I eat… just a balanced meal that minimizes processed foods and sugars. I tend to keep it low-carb since I like to save my carbs for beer 😁 I do have a ritual for how I break my intermittent fast though.

I break my fast by drinking an Apple Cider Vinegar (ACV) cocktail. This is simply one tablespoon of ACV (with the mother) in a full glass of water. I do this for several reasons, but it started for the same reason I started IF… I have a history of diabetes in my family and both of these practices have been shown to minimize insulin spikes and resistance. Further research and analysis has also shown evidence supporting an increase in gut health leading to enhancements in nutrient extraction for the food I’m about to eat. Additionally, ACV has been shown to support an alkalizing effect on the body. This prevents leaching of calcium from your bones, has been shown to support your immune system and is generally beneficial for many endogenous processes within your body. The morning breathing techniques that I use are also designed to maximize this alkalizing effect.

After consuming this drink, I’ll eat a handful of raw almonds. They’re good fiber, high in magnesium (more about this later), and they start to make me feel full, which helps prevent overeating during my ‘feeding window’.

I’ll then wash down my supplements with another glass of water. I’m always experimenting with new things based on the data that I’m tracking and areas that I want to improve, but the current required items include:

The main goal here is to increase blood flow, enhance my immune system, reduce inflammation and stimulate the production of BDNF.

The only other daily thing here is adding some high-C8 (Caprylic Acid) content MCT oil to my meal. It can be mixed into just about anything, and makes a decent salad/sandwich dressing just by itself. Again, this is done to decrease blood glucose levels, and it has the nice side effect of increasing blood ketone levels, which gives me a mental boost for the afternoon. I’ll go through some of my less pressing emails while eating lunch and prep for making the remainder of the day productive.

Time to Learn

Afternoon is all about learning and idea generation… most of the time I focus on getting more items in my GTD lists to the ‘next action’ state. This might involve investigating alternative approaches or digging into unknowns, but it often requires learning something new. I started a basic practice that became my afternoon routine after reading about the 5 hour rule; I’m pretty sure I first heard about it through an interview with Warren Buffett. I initially struggled to find my 5 hours a week, but with practice and dedication it eventually became more like the ’25 hour rule’ that it is for me now. This approach to learning, coupled with GTD, has really allowed me to supercharge my productivity over the years. I don’t have a ton of rules for how this occurs, but here are a few:

  • First priority is always to get a backlog of items, related to an Objective with high near-term ROI, to the ‘next action’ state. I never want to spend any of my morning time doing this.
  • At least once a week, I force myself to come up with one ‘new business’ Objective. This can be a new approach to lead generation, new source of revenue, or a new investment strategy. The time to do this is often spread throughout the week, but at the end of the week, I should always have a new Objective in this class of work that is mostly ready to be worked on. This serves to constantly get me thinking outside of the box with regards to diversifying revenue streams in order to insulate my lifestyle from any unforeseen circumstances that can jeopardize any one existing source of income.
  • Any remaining time I spend reading… I currently use Pocket to keep track of anything that I’d like to read that isn’t a physical book or stored in Google Play Books.

During this time, I still pay attention to my activity levels the same way that I do during the morning and follow a similar routine for increasing my activity levels. The number one underlying goal for this time is to…

Prep for tomorrow

I never want to wake up questioning what is most important for me to do in the morning. That’s a waste of time when I’m in the best state for working on the really tough problems, and the uncertainty often leads to poor sleep, since I’ll ruminate on all of the things I could possibly work on, trying to weigh the pros and cons of each. Because of this, I want to end my work day by figuring it out. I review all of my high-priority objectives and pick the ‘next action’ tasks with the highest ROI: at least one personal and one work-related item. Barring any emergency that occurs overnight, these will be the things I focus on most in the morning. This eliminates any procrastination-related churn in my mornings and sets me up for a good night’s sleep with a defined set of items for my subconscious to ruminate on.

A few minutes before eating dinner, I’ll take another walk to lower insulin-like growth factor. Dinner, like lunch, is balanced from a macro-nutrient perspective and minimizes processed foods, but otherwise anything is fair game.

After Dinner

After eating dinner, my ritual is much more fluid. This is time for friends and family. Hanging out, conversation and fun. There’s no real focus on working out since I’ve almost always met my goals during the day. I’m not thinking about tomorrow because I’ve already figured out exactly what I’m going to do (and I’m confident that it’s something that I can get done). The only real thing that I do at this point is pay attention to the finish line of my feeding window. As this time approaches, if I feel any indication that good quality sleep may be a problem, (e.g. muscle soreness from working out, anything else weighing on my mind) I’ll eat two tablespoons of raw almond butter. This is a magnesium bomb, and done at the right time, increases Gamma-Aminobutyric Acid (GABA). GABA is effective at promoting relaxation (i.e. better sleep) and the magnesium also promotes muscle recovery.

Sometimes work bleeds over into the evening and when that does occur, I want to do everything to minimize any detrimental impact to my sleep quality. I use wellness settings on all of my electronic devices to minimize interruptions, dim brightness and alter color hues after a certain time. If I spend any time in front of a screen, I use blue light blocking glasses. I go to bed when I’m ready to sleep. I do my bedtime breathing exercise and start the whole process again when I wake up.


So there it is… the daily ritual post. I’ll update it as things evolve. I’m more than happy to answer any questions about why I do things the way that I do. I held off on going into the many reasons why things have evolved the way that they have to keep this readable, but I assure you there is a method behind all of my madness… and I’m more than happy to discuss it if you really want to hear it! I could write just as much about why I DON’T do certain things, or the experimentation involved in arriving at my conclusions, so if you’re curious about either of those things inquire as well. Most importantly, if you decide to go down this path for yourself, I’d love to talk through your process and share some of the things that I’ve found.

Oura ring review

Oura ring review from a long time Quantified Selfer and former Hello Sense user.

my oura ring

I’m fanatical about tech gadgets, but even more so for wearables and things that reliably fulfill my needs as a “Quantified Selfer“. Good quality sleep data has always been elusive. Many devices that I’ve tried were so intrusive as to ruin any chance of actually getting good sleep. Others just did a terrible job of reliably collecting the data that I wanted. I backed a Kickstarter for the Hello Sense and this was one of the first devices that really generated useful data. Not only did it track my sleep activity, but the base unit also collected data about my bedroom light levels and air quality. Sadly, the company went bust and the device ultimately became unusable after the cloud servers were shut down.

Another Kickstarter project caught my eye… the Oura ring. Having been burned by so many crowdfunded tech gadgets in the past, I initially held off on backing the project, but I kept a close eye on its progress and saw many great reviews of the original ring from people I trusted. When Oura announced a gen 2, I was all over it and purchased one as soon as I could.

I’ve had my Oura ring for a few months now and I feel totally qualified to review all aspects of it now that it’s experienced pretty much everything I can throw at it…. I am a HUGE fan of this thing! There isn’t much that I can complain about and I feel that it is worth every penny.

The Oura ring system consists of the ring, a mobile app, and the Oura Cloud: a web-based equivalent of the mobile app that lets you dig a bit deeper into the data, plus an API that you can use to write apps for the Oura Cloud or pull the data collected by your ring into other systems.
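As a tiny example of what that API enables: the sleep endpoints return summaries as JSON with stage durations in seconds (at least as I read the v1 docs; treat the endpoint and field names here as my understanding rather than gospel), so deriving your own metrics, like the share of a night spent in deep + REM sleep, is trivial:

```javascript
// Derive "restorative share" (deep + REM as a fraction of total
// sleep) from one night's summary object. Field names follow my
// reading of the Oura v1 sleep payload: rem, deep, and total are
// all durations in seconds.
function restorativeShare(night) {
  if (!night.total) return 0;
  return (night.rem + night.deep) / night.total;
}

// Fetching the raw data would look roughly like:
//   GET https://api.ouraring.com/v1/sleep?start=2020-01-01&end=2020-01-07
//   Authorization: Bearer <personal-access-token>
```

This is exactly the kind of transform I feed into my larger quantified-self tooling once the data is pulled down.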

The ring looks like… a ring… much more so than the first generation… it doesn’t make you the focus of a room like wearing Google Glass did 😏 This is a pretty amazing feat considering all of the sensors that it packs and the fact that you can go days without needing to charge the battery. It’s waterproof and fairly resilient… I’ve definitely pushed mine to some limits that I probably shouldn’t have and it’s survived. The ring connects to the app on your phone via bluetooth and you can put it in radio silent mode and still have it collect data for quite some time before needing to sync it.

The sleep tracking of the device is rock solid. I’ve done tons of things to wreak havoc with my sleep in order to test the ring’s ability to detect it. Every morning after destroying my sleep in the name of science, I’d check the app. It would basically tell me, “Dude, go back to bed, you need it”. There really was no fooling its sleep detection.

I bought the Oura Ring mainly to track sleep time and sleep quality (as measured by the amount of time spent in the different stages of sleep), but the ring is so much more than ‘just’ a sleep tracker. The Oura app is divided into four sections: Readiness, Sleep, Activity and a Dashboard that surfaces summary information from the other three. The Sleep section tracks a few additional items above and beyond what I bought the ring for. These include a resting heart rate trend and sleep latency.

The Oura Ring is also an activity tracker. I’ve been wearing various activity trackers since the first versions were commercially available. I’ve never really been a fan of wearing anything around my wrist, since they always seem to get in the way, but I’ve always overlooked that in order to get the activity data. The Oura app has recommendations for how much activity you should be getting (this changes daily based on your ‘Readiness’, which I’ll discuss later). It also tracks your progress toward your daily goal and the intensity of the activity that you do, and you can turn on notifications in the app to remind you to get up and move on a regular basis.

For activity that gets your heart pumping, the ring does a pretty good job of tracking. I’ve noticed that it doesn’t always do the best job with less vigorous activity; the app does let you manually input this type of activity. This is one area where I wish the Oura app would improve: I already track all of my activity in Google Fit, and I would love it if the Oura app could just tie into that ecosystem instead of requiring me to enter the data in two different places. Most of the activity I want to track tends to get picked up by the ring, but there are certain activities (e.g. impact martial arts) where I remove the ring and need to manually track the activity. I do like the fact that I can get near real-time feedback about my activity intensity. This has allowed me to develop a routine that I can do frequently throughout the day that gets me into a high intensity level of activity very quickly (a must for any practitioner of High Intensity Interval Training).

The ‘Readiness’ section of the app pulls together information from the other sections to give you a general idea of how much you should push yourself on any given day. It takes into account how well you’ve been sleeping and how active you’ve been, and combines that with trends in your HRV, body temperature and respiratory rate to suggest either ‘pushing your activity to new levels’ or just ‘taking it easy’ on any given day. I’ve found this great for figuring out the best times of day for me to work out and which supplements seem to help me recover faster. It’s also pretty effective at giving me a heads up when I might be coming down with something, providing an extra verification point to rest instead of pushing through it.

So there you have it… my Oura ring review. It’s an awesome piece of hardware. Besides the lack of support for Google Fit (bi-directional support would be awesome!), my only other real complaint is that I wish it came in half sizes… that would make it even less obtrusive than it already is! If anyone is interested in getting an Oura ring, let me know; I have a few discount codes that I can provide.

Shopify Webhooks driving AWeber

This post is a solution to a problem I had with the AWeber Shopify integration. To get the most out of this post, check out the original problem here.

…the continuation…

Since I was spending 30+ minutes every day manually solving this problem, it was important that I had an MVP solution quickly. I took a step back to think about my immediate needs and the future direction I would like to take this solution, and came up with the following constraints:

  • Need to get something basic up and running quickly that can be easily iterated upon
  • Everything needs to be deployable to the Google Cloud Platform (and not cost a fortune to run)
  • The solution should be something that I can eventually monetize. This means a clean, UI based integration in the Shopify ecosystem (i.e. support for Node, React, Next) and the need to be able to handle many Shopify stores and scale appropriately.
  • Anything built must be easy to fit into the multichannel lead generation vision of Threddies. Eventually this would need to become the way that all leads get added to my email service provider without using any direct integrations.

Shopify Webhooks

I did a little digging and realized that I could solve just about every variant of the core problem if I were notified any time a customer was created or updated in Shopify. Conveniently enough, Shopify provides webhooks for both of these cases (in addition to many more). Webhooks are great for creating quick integrations and very easy to handle using Google Cloud Functions.

I prototyped the ‘create customer’ webhook and had something up and running for my test store in no time. I also started to think more about how to quickly iterate on webhook-based integrations in the future. The simplest integration using webhooks doesn’t require authentication, but it does require verifying that the data sent actually came from the expected Shopify store and not just anyone on the internet. This is done using the X-Shopify-Hmac-Sha256 header. When you receive the webhook data, you generate this value yourself from the request body (using a private key) and compare it with what Shopify sent. There are two different ways to do this, depending on how you integrate with Shopify. The preferred approach is to develop a Shopify app, which has its own key that you can use to verify every authorized store that is using your app. The drawback of this approach is that you need a full-blown Shopify app that implements the store authorization flow and requires some UI work. Since I’m not a React expert, I opted for the second approach and avoided the UI by having each Shopify store owner using this integration provide their store’s key to me. You can get this key from your store’s settings page by registering a webhook in the ‘Notifications’ area; it is used to verify the integrity of all webhook data sent. Things I learned from this step:

  • Cloud Functions would likely not be the final way of deploying this since it did not provide a way to surface a UI in a customer’s Shopify store.
  • The Cloud Function for each webhook is going to have a lot of repeated boilerplate for verifying the integrity of the data sent and handling responses/errors. I would also need a more centralized storage location for Shopify-store-specific data so that it wouldn’t need to be duplicated in every Cloud Function.
  • Shopify webhooks require a timely response, so any heavy lifting can’t live in the Cloud Function. Take too long to respond, and Shopify will deregister your interest in the webhook data. This got me thinking about how to recover from that scenario.

Hookup to AWeber

I was getting data from Shopify and verifying its integrity, but at this point nothing was happening with it. To get the data into AWeber, I again had to obtain some information from each admin of the Shopify stores I was integrating with: at a bare minimum, an AWeber account id and the id of the list to add all subscribers to. I also needed my customer to authorize my app to interact with their AWeber account. More requirements for UI, but I still wanted to put that off and focus on solving the original problem. I also didn’t want to force my users to go into AWeber in order to add my integration. I found a great NodeJs wrapper library around AWeber’s API that did everything I wanted it to do. Using it, you enter your AWeber integration information and generate an auth URL that you send to your customer. They use that URL to grant your integration the necessary permissions to their account, then send back a verifier code that you exchange for all of the tokens your integration needs to access that account. This information doesn’t change unless the user removes your integration, so it works perfectly until I actually set up the full-blown authorization path in my Shopify app.

There were two AWeber issues that I discovered at this point. First, there is no way to set a subscriber’s location without an originating IP address for geolocation, which means sending within the subscriber’s local time window won’t work for subscribers added this way. Second, AWeber integrations are confirmed opt-in by default; AWeber would not turn this off globally for everyone using my integration, so I had to turn it off for each account/list I wanted to use it with and tell my customers to do the same. This was necessary since my integration already leverages Shopify’s confirmed opt-in and I didn’t want to confuse my subscribers by making them do it again. The Shopify webhook payload already includes an ‘accepts_marketing’ flag, and I verify that this flag is true before attempting to add any information to AWeber. Things learned at this point:

  • I really need that UI!
  • The create customer/update customer flows look very similar from a Cloud Function perspective, so there needs to be some consolidation.
  • The AWeber deauth path needs to be handled. This can’t be done in a Cloud Function since it’s too far removed from the user capable of fixing the problem. For now, just alert on the error when it occurs and add it to the list of issues to handle later. This is another item that points to the need for a system health check in the customer’s Shopify store (and a way to recover all subscribers added between the time a failure occurs and the time the store owner resolves it).
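
A minimal sketch of that opt-in gate, assuming the standard Shopify customer payload fields (`accepts_marketing`, `email`, `first_name`, `last_name`); the `shopify-pos` tag and the function name are my own convention, not anything the integration requires:

```javascript
// Map a Shopify customer webhook payload to an AWeber subscriber record,
// skipping anyone who hasn't opted in to marketing or has no email.
function toAweberSubscriber(customer) {
  if (!customer.accepts_marketing || !customer.email) {
    return null; // nothing to add -- respect the opt-in flag
  }
  return {
    email: customer.email,
    // AWeber stores a single name field, so join the Shopify name parts.
    name: [customer.first_name, customer.last_name].filter(Boolean).join(' '),
    tags: ['shopify-pos'], // my own tagging convention for B&M leads
  };
}
```

Returning `null` (rather than throwing) keeps the webhook handler happy: a customer who hasn’t opted in is a normal case, not an error.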

Ready for Production

At this point, things were working well enough that I felt confident allowing this integration to start doing my job for me. Before moving everything to production, I refactored everything to eliminate the obvious problems that I saw at this point.

Instead of Cloud Functions being the primary entry point, I created a NodeJs app to serve that role. This let me set up all of the webhook routing inside this app and move all of the webhook verification and ‘health check’ code into it. If there was a problem, I could fail fast without fear of Shopify deregistering the webhook.

This also provided a place where I could add all of the UI code for the integration and the intelligence for recovering from failures. This app can also morph into a frontend that is capable of handling and routing any future webhook integration that I want to create.

I then deployed the NodeJs app to Google App Engine. At first it wasn’t working, and the errors indicated that the Next.js build step wasn’t running on deployment. I solved this by adding a custom build step that Google Cloud Build runs automatically on deployment: add a ‘gcp-build’ script to your package.json. All of this gets deployed to an App Engine standard environment using automatic scaling. So far, with 4 Shopify stores using this integration, the entire platform stays under GCP’s daily usage quotas and costs nothing to run!
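
For reference, the relevant part of package.json looks something like this (assuming a stock Next.js setup; App Engine’s Node runtime has Cloud Build run the ‘gcp-build’ script before starting the app):

```json
{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "gcp-build": "next build"
  }
}
```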

Next Steps

Obviously, if anyone else shows interest in this solution, the most immediate need is to make the UX better by rolling out a nice Shopify Admin UI, but there are a few other next steps that I’m currently working on.

  • There is still too much heavy lifting in the webhook handler. I’m working on just pulling enough out of the payload to know what ultimately needs to be done and then publishing this as a Shopify-agnostic event to Google Pub/Sub where it will ultimately be processed by a Google Cloud Function. This will allow me to let Shopify know that the webhook has been processed much more quickly and sets up the necessary infrastructure to start removing my other AWeber integrations in favor of directly publishing the necessary information as an event into this platform.
  • The event-driven architecture opens up many additional possibilities… it allows me to do more analysis on the data before it gets into my email service provider, so I can better tag and identify the sources of this data. The AWeber Etsy integration, for instance, doesn’t provide any capability for tagging or otherwise identifying these subscribers. Eventually, this will be the place where I can plug in the ML project I have been working on that correlates behavior and surfaces insights across sales channels.
  • Turn this into a full-blown marketing channel app for Shopify. I’ve always dreamt about a day where I can do my email marketing from Shopify the same way that I run Google Shopping or Facebook Ads campaigns. This platform provides the foundation for doing that. I’m really excited about the possibilities!
  • If enough interest exists, turn this into an actual product. Reach out if you’re a Shopify and AWeber customer that is already experiencing the original problem I solved, or have ideas for how this can become perfect for something that you’re trying to do.
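
The extraction step from the first bullet looks roughly like this; the event schema (`lead.created`/`lead.updated`, `source`, `lead`) is purely my own working assumption, not a standard:

```javascript
// Reduce a Shopify webhook payload to the minimal, source-agnostic event
// that gets published to Pub/Sub. Everything downstream (tagging, analysis,
// the AWeber push) only ever sees this shape, never raw Shopify data.
function toLeadEvent(topic, shop, customer) {
  return {
    type: topic === 'customers/create' ? 'lead.created' : 'lead.updated',
    source: `shopify:${shop}`,
    occurredAt: new Date().toISOString(),
    lead: {
      email: customer.email,
      name: [customer.first_name, customer.last_name].filter(Boolean).join(' '),
      optedIn: Boolean(customer.accepts_marketing),
    },
  };
}

// Publishing then becomes a one-liner with the @google-cloud/pubsub client:
//   await pubsub.topic('lead-events').publishMessage({ json: event });
// and the webhook handler can ack Shopify as soon as the publish resolves.
```

Because the event is Shopify-agnostic, an Etsy or eBay integration can publish the same shape into the same topic, which is what lets the other AWeber integrations eventually be retired.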

Ghost in the (Google Cloud) Shell

One of the perks of being a technologist who is not tied to a traditional 9 to 5 is that you have immense freedom in where you can complete your ‘work’ from. I’ve always toyed with the digital nomad lifestyle… but it’s kind of ridiculous when you need to lug around an insane amount of equipment to complete your tasks effectively. I have several computers, each optimized for a specific type of work or tied to a specific client. This always required me to think ahead before traveling about which project I was going to work on while away (there was a time when I would take everything with me, but traveling with kids has definitely made me want to pack as minimally as possible).

Development and image processing require horsepower, and even the best laptops for the job are big and heavy… and expensive… so much so that it’s worth thinking twice before traveling to a country like India, Russia or China, where in today’s political climate the likelihood of your hardware getting confiscated is higher than ever. That risk was what ultimately drove me to become a Chromebook advocate. Losing the hardware is one thing, but losing the data contained on the device is even worse. Chromebooks solved the data problem: you can powerwash the device and then restore it back to its former state at any point in time from the cloud. Worst case scenario, you lose a reasonably inexpensive piece of hardware, but your data is intact. Unfortunately, Chrome OS hardware hasn’t historically been the best option for development… especially if you want to maintain the security offered by the powerwash technique I just mentioned.

The desire to be able to travel anywhere, any time, at a moment’s notice and feel confident that I can deal with anything that comes up while I’m gone using just my Chromebook ultimately drove me to experiment with containerized development environments. I wanted something divorced from the hardware that I could easily get up and running, knowing that everything is set up the way I need it to be. This was great, but I still needed someplace where I could access these containers from anywhere. I became more and more of a fan of the Google Cloud Platform (GCP)… the container-centric approach to everything, and the fact that the price was right, ultimately led me to migrate all of my cloud infrastructure to GCP. It wasn’t long before my containerized development environments followed… and then I discovered Google Cloud Shell.

Google Cloud Shell takes this whole idea a step further. It gives me 5GB of persistent space accessible from any browser; I don’t even need the Chromebook any more. Everything that I store in my home directory stays there across sessions. Even better, it’s directly connected to all of my projects in GCP. I’ve been doing almost all of my recent development exclusively in Google Cloud Shell and the integrated Orion Editor… and I LOVE it! For web-based development and microservices, it’s absolutely great, especially if you’re ultimately deploying to GCP. The only time I’ve gone back to my ‘development’ laptop has been for Android development, as I haven’t really found a good way to run things like Android Studio or emulators with this approach.

But I want to develop for ‘free’

Ok, I can hear a bunch of you thinking that you don’t want to be forced to develop on GCP (and potentially incur costs) before you’re ready to deploy to production. Guess what? ngrok works great in Google Cloud Shell… you can securely expose your local dev environment anywhere on the web without deploying your project to GCP. What about localhost? ngrok exposes its debug information on localhost, so there’s no way to access that from Google Cloud Shell, right? Wrong… Cloud Shell can serve a ‘web preview’ from any port just by clicking an icon, and you can map this to expose ngrok’s debug interface.

Onward to Production

Google Cloud Shell obviously has all of the Google Cloud SDK integrated by default, so when you’re ready to go to production, it’s a piece of cake. GCS even knows which Cloud Project you’re working on (and reminds you of that fact in the terminal). Turn off ngrok, push to your cloud environment and update your systems to point to the production version!


Is Google Cloud Shell the absolutely flawless solution to every development need that a digital nomad has? Definitely not, but it’s pretty damn good. I haven’t found a good way to do Android development using it. It’s absolutely fantastic for Node development though… especially if you’re ultimately deploying to GCP. Google Cloud Shell does have a usage limit of 60 hours per week, so if you’re burning the candle at both ends, remember to shut it down when you do take a break so that you don’t hit that limit. Give it a shot for yourself and let me know what you think.

Annoying Time Sink of the Month…

Every day there is an annoying thing that I MUST do.  It annoys me that I need to, but as part of running a professional business it must be done.  I discovered the issue about a month ago and have been doing this job manually every day since… all the while thinking about a solution to automating this task out of my life for good.

The Background

Threddies, being both an E-Commerce and Brick and Mortar business, has two distinct buyer journeys that start out in very different ways.  In one case, a prospective lead finds us online and is ultimately driven to our website (this journey becomes much more complicated when you consider that it could start from Google, Amazon, Etsy, eBay, etc.).  In the other, someone stops by our shop in person… talks to us about the local happenings… and, if we’re lucky, makes a purchase. Our end goal is to turn visitors into customers, and even better, REPEAT customers. Most of the time, people do not become a customer on the first visit, so our goal is to make them a potential customer by getting them to sign up for our email list or, at the very least, join us on their preferred social media platform.  A lot has been written about this omnichannel retail problem, and it was something that we felt we had a fairly good handle on.

We recently have been consolidating our technology stack and introducing processes to make things consistent across all of the sales/leads ‘channels’ that we support.  One area of integration that occurred (and ultimately became the root of my current problem) was our choice to move the brick and mortar store to Shopify POS instead of Square’s offering.  This had many benefits: reduced payment processing fees, combined CRM and inventory systems, a common user interface for our employees to use across all channels, etc.

The Vexing Problem

The problem that I didn’t see coming involves the collection of emails in our email marketing platform.  We use AWeber, and AWeber has both a Square and a Shopify integration. We were actively using the Shopify integration to collect emails from our website and the Square integration to collect emails from B&M purchases (we also use the Etsy integration, but the issues with that are a subject for another blog post).  In both cases, we use separate web-based sign-up forms or the AWeber Atom mobile app (appropriately tagged) to collect email addresses from those that don’t ultimately make a purchase. If you are a Shopify and AWeber user, you should be aware that AWeber’s integration does not support collecting emails entered using Shopify’s Newsletter functionality; this was something that I discovered early last year and have built an acceptable workaround for (I can document this for anyone interested).

After removing Square from the store, I quickly noticed that no emails were being collected from B&M purchases.  Initially, I thought it was a sync issue and that they would eventually show up, but they never did. I did some digging and testing, and it seems that the AWeber/Shopify integration was only collecting emails from customers who made a purchase from the website; no emails would ever be added from Shopify POS.  My daily chore of adding emails had begun… even worse, since I was constantly watching what was going on with email adds, I started to notice other issues.

We don’t require users to provide an email address in order to make a purchase; providing a valid phone number works as well.  We do require an email address, however, in order to create a Threddies account, which gives the customer access to some additional features that they wouldn’t get otherwise.  I noticed over time that there were many customers that made a purchase without an email and then ended up creating an account with an email at a later date. NONE OF THESE EMAILS WERE BEING CAPTURED!  This was a fairly large problem that required some explaining to our customers when I finally added them all manually to our email list. I did some additional testing and discovered that the same thing occurs when a customer updates their email address from their account: AWeber never gets the updated email address. This was a particular issue since many of our customers originate from channels that obfuscate their real email addresses (Amazon, eBay, etc.), but then they ultimately warm to us and provide their real email address after making repeat purchases.


I immediately started looking for solutions, since doing this manually every day was a nightmare. The AWeber-provided integration is free with an AWeber account, and there are several paid solutions in the Shopify app store.  All of the third-party integrations suffered from various issues… either they polled on a regular interval to collect email addresses rather than reacting to events occurring in the Shopify ecosystem, or they came with heavy-handed tag syncing.  None of them specifically guaranteed the Shopify POS or Newsletter functionality that I desired. Since this problem was/is on my mind every day, I started thinking about the ideal state that I would like to have… tailored email marketing automation that is triggered by the channel that the user originated from.  This is important because the Amazon/eBay/Etsys of the world tend to be very restrictive regarding the content of the email that you can send to their users. Because of this, many of the emails sent to customers originating from these channels are sent manually (often through systems that don’t have guaranteed high deliverability like AWeber) rather than automated, which takes a crazy amount of time.  I also want intelligence behind tag syncing between Shopify and AWeber. It was clear that none of the existing integrations could meet these needs. This is the problem without an automated solution… for now… Stay tuned!

If you made it this far, and are interested in the thrilling conclusion… I wrote about my solution here.