Email marketing is a game, or at least it should be. The sentiment is captured in the now-classic movie "Moneyball," which follows a sub-par baseball team that used strategy to improve its winning percentage. It is a great lesson for marketers everywhere. In the movie, the A's general manager, Billy Beane, applied data-driven principles to baseball and moved the team from a projected worst-place finish to the playoffs, changing the way the game is played forever.
The guiding principle for Billy Beane was that the collective value of the whole team outweighed the value of a single, premier player. Instead of paying a gazillion dollars to lure a power hitter who could do everything, he found value in a number of small wins. He recruited nine players whose collective value was greater than that of any individual star. This is a great metaphor for email marketing.
All email marketers are looking for that big win: the unicorn technology that drives conversion "home runs." The problem is that those unicorn technologies don't exist. Many show initial results, but then disappear quickly. Email marketing success is about racking up a bunch of small wins.
The game starts with a specific goal, such as “I want email revenue to increase by X%.”
Revenue is the last stage in the funnel, so play the game. Identify the choke point at the top of the funnel, and tackle it. Ask hard questions. For example, if you are having trouble with opens:
- Did the message arrive at the right time?
- Is the observable messaging relevant (subject line + preheader)?
- Have subscribers lost trust in our brand?
- Are you emailing too much or too little?
- Is your audience composition strategic?
- Are you emailing subscribers that engage?
- What does your acquisition strategy look like?
- Are you following list-hygiene best-practices?
- Do you have a new subscriber series?
- Do subscribers ascribe value to your emails?
There are many more questions that can be asked. Remember, as a game, the more questions that you can ask and answer, the more levers you can pull to impact outcomes.
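The checklist above is really a funnel diagnosis. As a minimal sketch (the stage names and counts below are illustrative, not data from this article), finding the choke point can be as simple as computing each stage-to-stage pass-through rate and flagging the lowest one:

```python
def choke_point(funnel):
    """Return (from_stage, to_stage, rate) for the transition with the
    lowest pass-through rate, i.e. the biggest drop-off in the funnel."""
    worst = None
    for (name_a, count_a), (name_b, count_b) in zip(funnel, funnel[1:]):
        rate = count_b / count_a
        if worst is None or rate < worst[2]:
            worst = (name_a, name_b, rate)
    return worst

# Hypothetical campaign counts for illustration only.
funnel = [
    ("delivered", 100_000),
    ("opened", 18_000),
    ("clicked", 2_100),
    ("purchased", 250),
]
stage_from, stage_to, rate = choke_point(funnel)
print(f"Biggest drop-off: {stage_from} -> {stage_to} ({rate:.1%})")
```

With these example numbers, the open-to-click transition is the weakest link, so that is where the questions should start.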
At my company, we have experienced this firsthand. While working with one customer, we lost sight of the game. The customer stated at the beginning that they wanted to increase open rates. We were so focused on creating a fair test that we lost track of the bigger game. Without giving away the specifics of this test, this was the mistake we allowed ourselves (and the customer) to make:
We tested STO (Send Time Optimization) on subscribers who had no engagement history in our data pool, the data we use to predict future opens and clicks. For any subscriber in that group, we had zero chance of predicting when they would open an email. The only purpose these subscribers served in this test was to drag down the open rate.
Let's say that the average open rate for this customer was 10%. By using STO for the entire audience, we saw a nominal increase: the open rate beat the control by 5%. A five percent higher open rate is good, but that is not the whole story. We were sending to a large number of unengaged subscribers who had a minuscule chance of engaging.
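To see why the unengaged segment mattered so much, consider a back-of-the-envelope blend (all segment sizes and rates here are assumptions for illustration, not the customer's actual data): the measured open rate is a weighted average, so even a strong STO lift among engaged subscribers gets diluted by a large block of subscribers who almost never open.

```python
# Assumed segment sizes and open rates (illustrative, not real data).
engaged, engaged_rate = 40_000, 0.22
unengaged, unengaged_rate = 60_000, 0.02
total = engaged + unengaged

# Blended open rate before any optimization: a weighted average.
blended = (engaged * engaged_rate + unengaged * unengaged_rate) / total

# Suppose STO lifts opens among the engaged by 12%; the unengaged barely move.
blended_after = (engaged * engaged_rate * 1.12 + unengaged * unengaged_rate) / total

# The measured lift is diluted well below the 12% lift in the engaged segment.
relative_lift = blended_after / blended - 1
print(f"Blended: {blended:.1%} -> {blended_after:.2%} (lift {relative_lift:.1%})")
```

The design point: testing STO on an audience it cannot help does not just waste sends, it understates the result of the test itself.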
We had lost sight of the game. Thankfully, this test is not the end of the story.
The audience was purged of non-engaging subscribers and the send window was re-evaluated (the send window is the period of time that subscribers are sent content through STO). The initial window was established as a full 24 hours. The results were analyzed from the first 24-hour send. Eighty percent of the activity was taking place within an 8-hour period. Everything else outside of that window was just dragging the overall performance down.
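That window analysis can be sketched as a small search problem (the hourly counts below are invented for illustration): given opens bucketed by hour across the full 24-hour window, find the shortest contiguous block of hours that captures, say, 80% of the activity.

```python
def best_window(opens_by_hour, coverage=0.80):
    """Shortest contiguous run of hours capturing at least `coverage`
    of total opens. Returns (start_hour, length_in_hours).
    Simplification: windows that wrap past midnight are not considered."""
    total = sum(opens_by_hour)
    n = len(opens_by_hour)
    for length in range(1, n + 1):
        for start in range(n - length + 1):
            if sum(opens_by_hour[start:start + length]) >= coverage * total:
                return start, length
    return 0, n

# Invented hourly open counts for a 24-hour send window.
opens = [5, 3, 2, 2, 1, 1, 2, 4, 30, 45, 60, 55,
         50, 40, 35, 30, 20, 8, 5, 4, 3, 2, 2, 1]
start, length = best_window(opens)
print(f"{length}-hour window starting at hour {start}")
```

With these example counts, an 8-hour block captures 80% of opens, mirroring the finding described above: everything outside that block was dragging performance down.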
The questions asked during testing were:
- Was there anything wrong with our measurement techniques? Yes, we were sending to the unengaged audience for both the Control and Test groups.
- Did we distill the test to our pure, underlying goal? No, our send window was too big. The audience we tested included subscribers who were unlikely to engage during the selected timeframe. We needed to test the right window, and the right audience, for which AudiencePoint could have the greatest impact.
- Were there adjustments that could be made? Yes. We purged the unengaged subscribers and reduced the send window to the engaged hours.
The send window was reduced to 8 hours, where 80% of the activity was taking place. This small adjustment drove a substantial increase in revenue, and we could directly attribute the positive outcome to the timing change.
As Elizabeth Jacobi, who runs the email marketing firm MochaBear Marketing, stated, "It is not surprising that they had to reduce the send window to achieve the maximum results. The little things add up. Reducing or enlarging your send window, updating your creative or adjusting your engagement strategy can all lead to improved performance. Take the incremental gains, then repeat." (Case Study: Increasing Restaurant Revenue by 97% by Connecting Email and POS)
The rules of the game change depending on who is playing it and what the target outcome is. In our case, this game was not over yet; there was still more to do. It was time to roll this test out to the whole audience and work STO into the customer's full email program.
Determining the best target and the way to win the email marketing game isn't always easy, and you need to keep trying various tactics to maximize the win. But win you can, and the win can be big.
So, play the game of email marketing, and play to win. There are a lot of subscribers out there that you can turn into customers, and there are a lot of customers willing to spend more.
by Paul Shriner, Chief Evangelist, Co-Founder, AudiencePoint