Audiences are key to everything marketers do, right? You can have the best content, timing, sequencing, and channel strategy in the world, but if you're not talking with the right people, none of it matters. This post outlines how you can improve audience segmentation through message experimentation, and how you can carry that learning into marketing strategies beyond just email.


Audience understanding is often hard-won. Intuition from personal experience, contact attribute requirements, and input from your team all play a part in defining and refining audience segments. And often, once those segmentation definitions are established, they don't change very much.


One thing we've discovered running large-scale campaigns for different customers is that, with the right open attitude and some effective reporting, you can learn a lot from the results of testing different versions of messaging. Frankly, it's something we hadn't expected to see with Motiva AI, but we're glad we did. Take a simple canvas campaign. The use case is the following:


  1. You have a segment for a given campaign. Think about a fairly generic campaign you might run over a larger audience, or one where you're just collecting new contacts automatically and flowing them into your pipeline.
  2. You run a Motiva AI experiment step with a few variations of your message.
  3. Motiva tracks response patterns from your populations on each message variation and gathers data as your campaign runs.
  4. You use the Who Responded report to see what Motiva is discovering about your audience. Here's where it gets interesting. More often than not, we're finding that within a more generic audience, there are actionable subpopulations responding to different versions of your messaging. Here's an example where we're looking at Company Size and Industry attribute data laid on top of response data across three versions of a message in our campaign.

    Motiva lets you project whatever data you have on contacts against message response results. Here we're seeing that there are potentially at least two distinct subgroups - an Electronics industry group and a Charitable Organizations group - responding to different messages at higher rates. Note that, unlike what most marketers look for in message testing, we didn't arrive at a single clear "winner" for the larger population. That tells us that either our messages were equally good, too similar to make a difference, or we have subpopulations responding in different ways.
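If you're curious what that kind of projection looks like mechanically, here's a rough sketch in Python/pandas. To be clear, this is not Motiva's API: the Who Responded report does all of this for you inside Eloqua, and the campaign_sends.csv export and its column names below are hypothetical stand-ins.

```python
# Illustrative sketch only: Motiva's Who Responded report does this inside
# Eloqua. Assumes a hypothetical per-contact export with columns:
# contact_id, industry, company_size, variant, responded (0/1).
import pandas as pd

sends = pd.read_csv("campaign_sends.csv")  # hypothetical export

# Response rate for each (industry, message variant) pair
rates = (
    sends.groupby(["industry", "variant"])["responded"]
         .mean()
         .unstack("variant")  # rows: industry, columns: message variant
)
print(rates.round(3))

# Overall rates per variant, for comparison
print(sends.groupby("variant")["responded"].mean().round(3))
```

A flat overall ranking paired with sharply divergent rows in the first table is exactly the "subpopulations, not one winner" signal described above.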


Now, maybe in this example you realize you don't really care about the Charitable Organizations contacts because they aren't your ideal target customer. Yes, Version A performed better than Version C overall, but that win came from an audience that isn't your focus! You just learned that you could lift performance by tweaking your content to better fit the groups you DO care about. Maybe this is an opportunity to filter and re-segment. Maybe it's also an opportunity to use data to drive a discussion with the creative team about doing more of what works for the group(s) you're interested in. The point is that you can both accelerate how you target and adapt to your audiences, and improve the performance of your campaign.
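To make the filter-and-re-segment idea concrete, here's a minimal sketch continuing the same hypothetical export from above. Again, this is illustrative logic rather than anything Motiva-specific: drop the subgroup you don't care about and re-check which message actually wins for your target audience.

```python
# Continuing the hypothetical campaign_sends.csv export from above.
import pandas as pd

sends = pd.read_csv("campaign_sends.csv")

# Winner across the whole audience
overall = sends.groupby("variant")["responded"].mean()
print("Overall winner:", overall.idxmax())  # e.g. Version A

# Drop the non-target subgroup and re-evaluate
target = sends[sends["industry"] != "Charitable Organizations"]
refined = target.groupby("variant")["responded"].mean()
print("Winner within target audience:", refined.idxmax())  # may differ
```

The interesting case is when the two winners disagree: the message that looked best overall was being carried by contacts you never intended to pursue.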


You could imagine getting into a campaign creation pattern where you're constantly testing and refining those audience definitions not just on a per-campaign basis, but across your team. And as your team improves its audience intelligence, you can deploy that learning across your entire marketing strategy.


To make this concrete: a large Eloqua customer ran a 12-way message test for an onboarding campaign triggered from an external download. Contacts flowed into their campaign, hit a Motiva step, and Motiva executed the 12-way test as contacts arrived. We found a few different versions that the larger audience responded to (no single best message), but when the customer reviewed the campaign's Who Responded report, they realized they could use those results to target audiences much more precisely than they had been, and they corrected some assumptions along the way. They used these insights to improve and tighten up messaging, and then carried several of the subpopulation definitions they discovered into their PPC and SEO strategies. Team learning FTW!
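If you want to sanity-check a 'no single best message' result like that one on your own exported data, a quick independence test is one back-of-envelope option. The sketch below assumes the same hypothetical campaign_sends.csv used earlier; Motiva handles the statistics of the live test itself, so this is purely for offline analysis.

```python
# Back-of-envelope check on a hypothetical export: is response rate
# independent of message variant across the full audience?
import pandas as pd
from scipy.stats import chi2_contingency

sends = pd.read_csv("campaign_sends.csv")

# Contingency table: variant x (responded vs. did not respond)
table = pd.crosstab(sends["variant"], sends["responded"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")

# A large p-value over the full audience, alongside clear differences within
# subgroups (see the earlier cross-tab), matches the pattern described above:
# subpopulations responding differently rather than one overall winner.
```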