
Artificial intelligence (AI) is touted by many as “the” solution that will radically transform marketing automation. Yet specifics about how we actually move toward that more productive future are often absent.

 

In the next three posts, we’ll describe several significant steps forward and the associated machine learning capabilities that can bring them to life.

 

To understand where you can apply AI, consider how you define a typical email marketing campaign. In its simplest form, you define a population, an email message, and a delivery date and time. In a multi-step campaign, you specify sequences of messages, often with time delays and conditional logic governing the delivery of each message. In other words, you take your best guess at which messaging experience will best resonate with the population.

 

Easy, right? Hardly.

 

The classic first step in removing the guesswork is an A/B/N test to identify the most compelling message out of two or more options. The approach is simple in principle: randomly select a fraction of the population to serve as the test segment, then randomly allocate the test segment's contacts to cohorts of equal size. Each cohort receives one of the messages under consideration, and the option that produces the strongest response on a specified performance criterion, such as unique open or click-through rate, is deemed the winner.
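The procedure above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and the `metric` callback are assumptions for exposition, not any platform's actual API:

```python
import random

def run_abn_test(population, messages, test_fraction, metric):
    """Classic A/B/N test: split a random test segment into equal
    cohorts, send one message per cohort, and pick the winner by
    a chosen performance criterion (e.g. unique click-through rate)."""
    test_size = int(len(population) * test_fraction)
    test_segment = random.sample(population, test_size)

    # Allocate test-segment contacts to cohorts of equal proportions.
    cohorts = [test_segment[i::len(messages)] for i in range(len(messages))]

    # `metric` collects (or here, simulates) each cohort's response rate.
    rates = [metric(message, cohort) for message, cohort in zip(messages, cohorts)]

    # The option with the strongest response is deemed the winner.
    best = max(range(len(messages)), key=lambda i: rates[i])
    return messages[best], rates
```

Note that the winner is chosen purely by observed rate; the sketch deliberately ignores the confidence questions discussed next.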

 

Questions immediately arise about how to apply the test. How large should the test segment be? How do you calculate a measure of confidence in the result? When should you reject the result, and what should you do after a rejection? Instead of burdening platform users with these details, machine learning can step in to deliver far better outcomes without the headaches.
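To make the confidence question concrete, a standard two-proportion z-test can score an observed difference in response rates. This is textbook statistics offered for illustration, not Motiva's internal test:

```python
import math

def z_test_two_proportions(clicks_a, n_a, clicks_b, n_b):
    """Return the z-statistic for the difference between two observed
    response rates, using the pooled standard error."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# With 5,000 contacts per cohort, a 2.0% vs. 2.6% click-through
# difference yields z of roughly 2.0, borderline at 95% confidence.
z = z_test_two_proportions(100, 5000, 130, 5000)
```

The example shows why fixed test-segment sizing is hard: a difference that small needs thousands of contacts per cohort before the evidence is even borderline.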

 

The Motiva AI platform does this today by applying a machine learning procedure for incremental testing that addresses two issues with A/B/N testing. The first is sample size: prior to conducting an A/B/N test, it's impossible to know how many test contacts we'll need to discern the best option with high confidence, because that depends on the margin between the best option and its closest competitor.

 

As depicted below, we address this by spreading campaign execution over time and avoiding the need to define a test segment. In essence, everyone is in the pool. Random subsets of the population get treated at a regular frequency over a specified number of days. As evidence builds, the platform decides whether further testing is required or sufficient evidence exists to commit to the current best option.

 

Figure: A representative Motiva campaign searching for the best message option in terms of potential lead rate, where a potential lead is defined as a click-through without a corresponding unsubscribe.

 

A more significant weakness we address is A/B/N testing's inefficiency. Because contacts are allocated to message options in equal proportions, a majority of the test segment receives an inferior option whenever more than two options are tested, and the fraction of suboptimal allocations grows with the number of options: with N options, (N - 1)/N of the test segment receives something other than the best message (75% when N = 4).

 

Not good!

 

Motiva AI delivers greater efficiency and a better user experience by varying the allocation of contacts to message options based on the available evidence. As responses arrive during the campaign, the learning algorithm updates the daily allocations to send only the most viable options weighted relative to their expected performance. This results in higher overall response rates in Motiva AI-run campaigns without the need for human intervention.
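One standard technique for evidence-weighted allocation is Thompson sampling over Beta posteriors. The sketch below illustrates the general idea and is an assumption for exposition, not a description of Motiva AI's proprietary algorithm:

```python
import random

def thompson_allocation(evidence, batch_size):
    """Allocate the next batch across options in proportion to how often
    each option wins a draw from its Beta posterior over response rate."""
    wins = {opt: 0 for opt in evidence}
    for _ in range(batch_size):
        # Sample a plausible response rate for each option from its
        # posterior: Beta(1 + responses, 1 + non-responses).
        draws = {
            opt: random.betavariate(1 + e["responses"],
                                    1 + e["sent"] - e["responses"])
            for opt, e in evidence.items()
        }
        wins[max(draws, key=draws.get)] += 1
    return wins  # contacts to send each option in the next batch
```

Options with little evidence have wide posteriors and keep receiving some sends, so exploration continues automatically until the data clearly separates the contenders.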

 

While our work has benefited from the latest research in multi-armed bandit learning, we’ve had to tackle several nuances associated with this scenario. Two in particular are batch experimentation and delayed responses. Most classic multi-armed bandit algorithms assume immediate feedback after a single experiment. In our context, cohorts of audience contacts are treated simultaneously with unknown response delays. The Motiva AI platform integrates all available evidence in a principled manner and explores the most compelling options throughout the campaign with efficiency and ease.
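One common way to cope with delayed responses in batched experiments is to score only sends whose response window has matured, so contacts who simply haven't responded yet aren't counted as failures. The sketch below illustrates that idea under an assumed log format; it is not the platform's actual mechanics:

```python
def matured_evidence(send_log, now, window):
    """Build per-option (sent, responses) counts from a log of
    (option, send_time, responded) entries, counting only sends whose
    response window has closed by `now`."""
    evidence = {}
    for opt, send_time, responded in send_log:
        if now - send_time < window:
            continue  # still within the response window: skip for now
        e = evidence.setdefault(opt, {"sent": 0, "responses": 0})
        e["sent"] += 1
        e["responses"] += responded
    return evidence
```

The matured counts can then feed whatever allocation rule the campaign uses, keeping the posterior unbiased by slow responders.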

 

In this post, we’ve discussed a first step toward a more supportive, effective experience for marketing and service professionals leveraging AI. In this scenario, we assume no contact-specific context is available, so the objective is to identify the single message option that performs best for the population as a whole. In our next post, we’ll discuss incremental testing that aims to discover the best mapping of message options to contacts by utilizing contact-level context and response history.
