
10 Posts authored by: David Gutelius

Motiva AI's intelligent touchpoint frequency management for Eloqua

One of the key variables in campaign design is how often you send emails. Send too much email too often, and bad things follow.

  • Unsubscribes go up
  • IP, domain, and spam reputation take damage
  • Risk to your brand increases, and customer trust erodes


Not good! But how much is too much email? And for whom?


We're releasing a new smart Frequency Management feature in Motiva AI along with some advanced analytics that help you:


1) Control sending at a global level,

2) Override it on a per-campaign basis when you have to, and

3) Understand response behaviors in your population so you can see where your tradeoffs lie in email send frequency.


To use it, just set a limit for emails in a single week, and Motiva AI will take care of the rest. Whatever you set here will control all Motiva AI intelligent canvas steps. So if you have a stoplight program or other limits in place for existing Eloqua email steps, they won't be affected.
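Under the hood, a weekly cap like this comes down to checking each contact's trailing send history before every touch. Here's a minimal, hypothetical sketch of that bookkeeping - the class and method names are ours, not Motiva AI's:

```python
from collections import defaultdict
from datetime import datetime, timedelta

class FrequencyCap:
    """Hypothetical sketch of a per-contact weekly send cap.
    Names and structure are illustrative assumptions."""

    def __init__(self, max_per_week):
        self.max_per_week = max_per_week
        self.history = defaultdict(list)   # contact_id -> send timestamps

    def can_send(self, contact_id, now):
        # Count sends inside the trailing 7-day window.
        cutoff = now - timedelta(days=7)
        recent = [t for t in self.history[contact_id] if t > cutoff]
        return len(recent) < self.max_per_week

    def record_send(self, contact_id, now):
        self.history[contact_id].append(now)

# With a cap of 2/week, a third touch inside the window is held back.
cap = FrequencyCap(max_per_week=2)
monday = datetime(2019, 4, 1, 9, 0)
cap.record_send("contact-123", monday)
cap.record_send("contact-123", monday + timedelta(days=1))
blocked = not cap.can_send("contact-123", monday + timedelta(days=2))
```

Once the window slides past the earlier sends, the same contact becomes eligible again - which is why a rolling check beats a fixed calendar-week counter.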


[Screenshot]

You will also be able to tell Motiva in what priority order to evaluate and execute active campaigns. Drag to sort campaigns into descending order of priority as needed.


[Screenshot]


Motiva AI can even suggest a reasonable frequency setting based on your contacts' behavior across all campaigns.


Finally, we're noodling on some Motiva Intelligent Analytics to help you understand behavior and frequency tradeoffs from your contacts' history:


[Screenshot]


This frequency management feature sets the stage for further Motiva AI learning in the future - building on our work in personalized messaging optimization, segments and audiences, send time, and more. Have a go for yourself here:

Motiva AI Per-Contact Send Time Optimization (STO) for Oracle Eloqua
Happy to announce we've released a new version of Motiva AI that automatically discovers the optimal send time for every contact in your contact database. Previously, we covered Segment-level STO, which is still available when you're optimizing message variations and doing multivariate testing on the Eloqua canvas. This is a new custom action service and a drop-in replacement for any Eloqua Email step.


Using Motiva's Send Time AI is super easy.


  1. Swap in the Motiva Send Time AI action service for an existing Eloqua Email step on your canvas.
  2. Pick an email asset and any optional configuration settings.
  3. Save and activate.


[Screenshot]


Motiva AI will run the step for seven days. It works by combining historical activity with live machine learning exploration to learn when to send to each contact. The more you use it, the more Motiva AI learns, and the better your response rates get.
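The post doesn't spell out the algorithm, but you can get an intuition for the explore-then-exploit dynamic with a toy epsilon-greedy learner over 24 hourly slots for a single contact. Everything here is an illustrative assumption, not Motiva AI's actual model:

```python
import random

class SendTimeLearner:
    """Toy epsilon-greedy learner over 24 hourly send slots for one contact.
    Illustrative only - the underlying Motiva AI algorithm is not disclosed."""

    def __init__(self, epsilon=0.15, seed=None):
        self.epsilon = epsilon
        self.opens = [0] * 24    # opens observed per hour slot
        self.sends = [1] * 24    # sends per slot (starts at 1 as a weak prior)
        self.rng = random.Random(seed)

    def choose_hour(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(24)                 # explore a random slot
        rates = [o / s for o, s in zip(self.opens, self.sends)]
        return max(range(24), key=rates.__getitem__)      # exploit the best so far

    def record(self, hour, opened):
        self.sends[hour] += 1
        if opened:
            self.opens[hour] += 1
```

Run against a simulated contact who only opens morning emails, the learner drifts toward that slot - the same "more usage, more learning" behavior described above, in miniature.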


Also: when you use the Motiva Send Time AI step, you no longer need to worry about time zones. The goal is to discover the optimal send time policy regardless of where a contact sits in the world. No more complex, TZ-specific campaign pathways to manage.


In early testing we're seeing clear benefit in every campaign we run. More info is here. You can install the Motiva app here.

Send Time Optimization (STO) Insights with Motiva AI and Eloqua: best and worst times and email delay distributions
We've released two new intelligent insights reports for Eloqua Motiva AI campaigns that help you easily target the best times and understand audience behavioral engagement.


Motiva AI's previous STO intelligence has offered a 24x7 grid description of relative response rates. These are available for each Motiva AI email step on any campaign.


There is now an additional summary chart inserted at the top of every Motiva AI STO insights report that looks like this:


[Screenshot]


We wanted to make it easier to interpret the fuller grid report (still available) and give you a quick sense of your top three strongest- and weakest-performing slots at a glance.


We've also added a brand new intelligent report for Delay Distribution. Delay Distribution describes the lag between the hour in which you send an email and the time it takes for your target population to open or click on that email. Rather than a simple summary, we want to provide you with a fuller picture of what's happening on every Motiva email across the send time restrictions you designate. Here's how that looks:


[Screenshot]


The inspiration for this came from our users. We realized that the behavioral dynamics of Motiva newsletter optimization campaigns differed from onboarding and nurture campaigns, and again from event-driven campaigns. A few patterns and open questions emerged:



  • Different segments can show sometimes dramatically different response patterns for the same or similar campaign types
  • Seasonality can matter, depending on industry vertical
  • Seniority and role can correlate with responsiveness - beyond overall rate, time-to-response differs
  • Language optimization around calls to action can help drive shorter time-to-response and higher engagement
  • Does send time even matter, and if so, how much?
  • And more


These new features are part of our push towards understanding customer audiences better and giving you the insights you need to take action and improve outreach. Try it out; tell us what you learn. We appreciate your feedback.

In our third and final installment examining how artificial intelligence (AI) can transform marketing automation and audience engagement, we’ll discuss the power that machine learning offers marketers in driving highly adaptive, personalized messaging experiences when there are hundreds or thousands of different options.


In our previous posts, we’ve been discussing ways to test and optimize message variations, both in the context-free scenario where we have no customer data and the context-rich scenario where we have lots of customer attribute and behavior data.


Now we want to turn our attention to scenarios where we have a set of messaging experience options so expansive that it’s impossible to explicitly test all the possibilities. Think of this as a form of figuring out the Next Best Action to take for a given audience.


To make this concrete, let’s consider a lead-generation campaign that consists of a series of email messages, where each message presents a small number of products in a particular order. Here we’ll assume we have no past history for these new potential customers. The goal is to learn the message sequence that yields the highest average number of customer conversions for new contacts.


So why is this hard? Consider the volume of options.


Let’s say that the maximum number of emails we can send to a given contact is three, and that in each email sent, we present three products in a particular order. This means that in any email message series, nine products are presented in total in a specified order.



(Above: One possible product ordering of nine products from three different product categories)


How many possible orderings are there if there are nine products in total available to present? Lots! There are 9 factorial or 362,880 options. We clearly can’t test them all!
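You can verify that count in a couple of lines:

```python
import math
from itertools import islice, permutations

# Nine distinct products arranged across a three-email series: 9! orderings.
assert math.factorial(9) == 362_880

# Exhaustive testing is hopeless; even just materializing the first few
# permutations hints at the scale.
first_few = list(islice(permutations(range(9)), 3))
```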


So what do we do?


The trick is to exploit the fact that the performances of these product orderings are related.


If two product orderings differ only slightly in a single email in the series, we should expect that their performance should be similar on average. If we add data about product categories as well, we have additional similarity structure that we can use in our models.



(Above: Similar product orderings that should in turn have similar average performance)


Using machine learning, our goal would be to intelligently sample the space of possible product orderings by modeling the correlation in performance between the available options. Here we are pulling ourselves out of a tricky situation — one that would otherwise be impossible to address if left to humans alone!
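As a toy illustration of that idea, here's one crude way to "borrow strength" across orderings: score an untested ordering by a similarity-weighted average of the conversion rates observed for tested orderings. The per-position similarity measure and the numbers below are our own stand-ins, not Motiva AI's model:

```python
def position_similarity(a, b):
    """Fraction of slots on which two product orderings agree - a crude
    stand-in for a real similarity kernel over orderings."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def estimate_value(candidate, observed):
    """Similarity-weighted average of observed conversion rates.
    `observed` maps tested orderings (tuples) to measured rates."""
    weights = {o: position_similarity(candidate, o) for o in observed}
    total = sum(weights.values())
    if total == 0:
        return None   # no usable information about this candidate
    return sum(w * observed[o] for o, w in weights.items()) / total

# Made-up conversion rates for three tested orderings of nine products.
observed = {
    (0, 1, 2, 3, 4, 5, 6, 7, 8): 0.12,
    (0, 1, 2, 3, 4, 5, 6, 8, 7): 0.11,
    (8, 7, 6, 5, 4, 3, 2, 1, 0): 0.03,
}
# An untested ordering close to the first two scores near their rates,
# without ever having been sent.
est = estimate_value((0, 1, 2, 3, 4, 5, 7, 6, 8), observed)
```

A real system would use a richer kernel (product categories, position effects) and fold the estimates into its sampling strategy, but the principle is the same: nearby orderings inform each other, so you never need to test all 362,880.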


With rich models able to capture the underlying dependencies in complex messaging experiences, the possibilities for optimization and automation expand dramatically. Marketers can ask deeper questions about experience design and ultimately deliver more value to current and future customers.


Examples like these motivate our thinking at Motiva AI around the need for humans and machines to work as a team, with each bringing their unique advantages to the task at hand. We’re looking forward to delivering these next generation capabilities to our customers so that they may serve their customers in whole new ways.




For more information on our product offerings, please visit our homepage or contact us. Give our Eloqua plug-in a try today!


In this series, we’re discussing ways in which artificial intelligence (AI) can transform marketing automation, allowing us to remove the guesswork, deliver a better customer experience, and yield improved ROI.


As we outlined previously, a core challenge arises in the definition of a marketing automation campaign. Today, a marketer must define a target population along with the message or sequence of messages to send. At Motiva AI, our long-term goal is to replace this guesswork with machine learning that uncovers the best mappings between messaging experiences and customers over time, removing the need to manually define the relevant population. This would allow marketers to focus their energy on crafting compelling content and learning what most resonates with their customers.


In our last post, we described a first step toward this goal that discovers the most compelling message option to send to a given population. Motiva AI applies a novel multi-armed bandit learning approach to achieve this end through adaptive experimentation over time. In that scenario, we assumed no knowledge about the individual customers.


In this post, we want to push the idea further. What if we could use all available customer data?


At a high level, leveraging customer data in a deeper way creates the opportunity to uncover meaningful populations that exhibit shared content preferences. This opens up new ways to understand customers, both individually and in larger segments, as well as tailor personalized messaging experiences.


So how do we accomplish that with machine learning?


In essence, the answer involves learning what relationships exist, if any, between customer attribute and behavior data and their responses to the available message options in an ongoing campaign.


Let’s take a specific example.


Imagine we have customer data for job role and industry. Let’s say we also have online behavior data that highlights their digital pathways through a website, including webpage dwell times and whitepaper downloads.


An adaptive marketing campaign would send regular batches of messages to customers and listen for responses as we described before. The primary difference now is that we’ll use the responses to learn a model that predicts a customer’s message preference based on the available attribute and behavior data. This might uncover significant relationships, for example, between the customer’s role, industry, and prior product preferences and the current message options promoting similar products. As those relationships grow stronger over the course of the ongoing campaign, the model would yield increasingly beneficial predictions for customers bearing similar attributes.


In machine learning terms, the change in the learning objective represents a shift from multi-armed bandit to contextual bandit learning. Here we’re bringing all available context into the learning task to support true customer-level, adaptive personalization of the experience.
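One simple way to realize a contextual bandit - purely a sketch, not Motiva AI's production approach - is to keep one online logistic model per message option, predict each option's response probability from a contact's feature vector, and mostly send the highest-scoring option while occasionally exploring:

```python
import math
import random

class ContextualMessagePicker:
    """Toy contextual bandit: one online logistic model per message option
    predicts response probability from a contact's feature vector.
    Illustrative only - not Motiva AI's production model."""

    def __init__(self, n_messages, n_features, epsilon=0.2, lr=0.1, seed=0):
        self.epsilon = epsilon          # exploration rate
        self.lr = lr                    # SGD learning rate
        self.rng = random.Random(seed)
        self.weights = [[0.0] * n_features for _ in range(n_messages)]

    def _predict(self, m, x):
        z = sum(w * xi for w, xi in zip(self.weights[m], x))
        return 1.0 / (1.0 + math.exp(-z))

    def choose(self, x):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.weights))       # explore
        return max(range(len(self.weights)),
                   key=lambda m: self._predict(m, x))          # exploit

    def update(self, m, x, responded):
        # One stochastic-gradient step on the log loss for option m.
        err = (1.0 if responded else 0.0) - self._predict(m, x)
        self.weights[m] = [w + self.lr * err * xi
                           for w, xi in zip(self.weights[m], x)]
```

After enough rounds, the learned weights implicitly describe which population prefers which message - exactly the "learned definitions of underlying populations" idea discussed below.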


By learning models that predict the likelihood of message engagement based on customer attribute and behavior data, we are automatically learning definitions of the underlying populations that are most likely interested in the associated message content. Coupled with an evolving model of message content similarity, the game changes. The possibility of continuous learning across campaigns without human intervention is within reach.


This is a great example of how bringing machine intelligence to a human team can create significant impact. It allows marketers the option over time to let AI adapt to emergent customer preferences dynamically — without manually having to guess at what will work with broadly defined populations. Customers get the highly tailored experience that they expect and respond to, which in turn helps increase marketing impact and ROI. In our next post, we’ll push these ideas even further into testing a more complex range of messaging experiences that could never be accomplished by humans alone.


Give our Eloqua plug-in a try today!

Artificial intelligence (AI) is touted by many as “the” solution to radically transform marketing automation. Yet specifics are often absent about how we move toward a more productive future.


In the next three posts, we’ll describe several significant steps forward and the associated machine learning capabilities that can bring them to life.


To understand where you can apply AI, consider how you define a typical email marketing campaign. In its simplest form, you define a population, email message, and the date and time of delivery. In a multi-step campaign, you specify sequences of messages often with time delays and conditional logic governing the delivery of each message. In other words, you take your best guess at what messaging experience will best resonate with the population.


Easy, right? Hardly.


The classic first step in removing the guesswork is conducting an A/B/N test to identify the most compelling message out of two or more possible options. The approach seems simple in principle: randomly select a fraction of the population to serve as the test segment, and then randomly allocate the test segment contacts to cohorts of equal proportions. Each cohort receives one of the messages under consideration. The message option that produces the strongest response on a specified performance criterion, such as unique open or click-through rate, is deemed the winner.
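That equal-allocation procedure can be sketched in a few lines. The `send` callback is a hypothetical stand-in for real delivery and response tracking:

```python
import random

def abn_test(population, messages, test_fraction, send, seed=0):
    """Classic A/B/N split as described above: carve out a test segment,
    divide it evenly across message options, and declare the winner by
    observed response rate. `send(contact, message)` returns True on a
    response; it's a hypothetical stand-in for delivery + tracking."""
    rng = random.Random(seed)
    segment = rng.sample(population, int(len(population) * test_fraction))
    cohort_size = len(segment) // len(messages)

    rates = {}
    for i, msg in enumerate(messages):
        cohort = segment[i * cohort_size:(i + 1) * cohort_size]
        rates[msg] = sum(send(c, msg) for c in cohort) / cohort_size
    return max(rates, key=rates.get), rates
```

Notice what the function leaves open - how big `test_fraction` should be, and how much confidence the "winner" deserves. Those are exactly the questions raised next.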


Immediately questions arise about how to apply the test. How large should the test segment be? How do you calculate a measure of confidence in the result? When should you reject the result? What should you do after a rejection? Instead of burdening platform users with these details, machine learning can step in to deliver far better outcomes without the headaches.


The Motiva AI platform does this today by applying a machine learning procedure for incremental testing to remove two issues with A/B/N testing. Prior to conducting an A/B/N test, it’s impossible to know how many test contacts we’ll need to discern the best option with high confidence. That depends on the magnitude of the difference in performance between the best option and its closest competitor.


As depicted below, we address this by spreading campaign execution over time and avoiding the need to define a test segment. In essence, everyone is in the pool. Random subsets of the population get treated at a regular frequency over a specified number of days. As evidence builds, the platform decides whether further testing is required or sufficient evidence exists to commit to the current best option.
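One common way to make that keep-testing-vs-commit decision - used here only as an illustration, not a description of Motiva's internals - is to keep a Beta posterior over each option's response rate and estimate, by sampling, the probability that each option is the best:

```python
import random

def prob_best(posteriors, trials=2000, seed=0):
    """Monte-Carlo estimate of each option's probability of being best,
    given Beta(successes + 1, failures + 1) posteriors over response rates."""
    rng = random.Random(seed)
    wins = [0] * len(posteriors)
    for _ in range(trials):
        draws = [rng.betavariate(a, b) for a, b in posteriors]
        wins[draws.index(max(draws))] += 1
    return [w / trials for w in wins]

# Made-up evidence after a few daily waves: (responses, non-responses) per option.
evidence = [(30, 270), (55, 245), (28, 272)]
p = prob_best([(s + 1, f + 1) for s, f in evidence])

# Commit once one option clears a confidence threshold; otherwise run another wave.
decision = "commit" if max(p) > 0.95 else "keep testing"
```

The closer the top two options are in true performance, the longer the evidence takes to separate them - which is why the required sample size can't be known up front.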


(Above: A representative Motiva campaign searching for the best message option in terms of potential lead rate, where a potential lead is defined as a click-through without a corresponding unsubscribe.)


A more significant weakness we address is A/B/N testing's inefficiency. Because contacts are allocated equally across message options, a majority of the test segment receives an inferior option whenever more than two options are tested. Furthermore, as the number of options grows, the fraction of suboptimal message allocations increases.


Not good!


Motiva AI delivers greater efficiency and a better user experience by varying the allocation of contacts to message options based on the available evidence. As responses arrive during the campaign, the learning algorithm updates the daily allocations to send only the most viable options weighted relative to their expected performance. This results in higher overall response rates in Motiva AI-run campaigns without the need for human intervention.


While our work has benefited from the latest research in multi-armed bandit learning, we’ve had to tackle several nuances associated with this scenario. Two in particular are batch experimentation and delayed responses. Most classic multi-armed bandit algorithms assume immediate feedback after a single experiment. In our context, cohorts of audience contacts are treated simultaneously with unknown response delays. The Motiva AI platform integrates all available evidence in a principled manner and explores the most compelling options throughout the campaign with efficiency and ease.


In this post, we’ve discussed a first step in moving toward a more supportive, effective experience for marketing and service professionals leveraging AI. In this scenario, we assume no contact-specific context is available; so the objective is to identify a single message option that performs best for the population as a whole. In our next post, we’ll discuss incremental testing that aims to discover the best mapping of available message options to contacts by utilizing contact-level context and response history.

Send Time Optimization (STO) with Motiva AI and Eloqua

One useful tool in the marketing toolbox is paying close attention to campaign response data in order to improve send times and maximize performance. Done well, you can dramatically increase campaign performance by using that response data to configure time slots to optimally map to audience preferences. This post is about using Motiva AI Cloud for Eloqua to automatically determine those optimal send times within a campaign.


Send Time Optimization (STO) works whenever you run a Motiva AI Email Optimizer step on the canvas. As a refresher, the Motiva EO is functionally like Eloqua's Send Email step, except that it accepts any number of message variations and automatically finds and invests in the best candidates to maximize response rates. It's multivariate, adaptive message optimization backed by machine learning. You can read more about that here or in our Getting Started Guide.

[Screenshot]


Once you've configured the Motiva AI Email Optimizer, you can choose to restrict send times (just like the Eloqua Send Email step), or you can leave it wide open - it's your choice. Like this:

[Screenshot]


As Motiva runs its messaging experiments over a given audience and period of time, it also gathers response data (opens, clicks, etc.) from that step in your campaign. Over the course of the campaign, as long as you leave the Motiva EO running, it will gather more and more response data and automatically generate a report of audience behavioral responses over a 7x24 grid - all seven days, every hour of each day. It looks like this (click on the image to enlarge):

[Screenshot]

The report will update itself each time a new Motiva AI experiment happens - which is configurable, but most users leave it at the default of one experiment per day.


With this report you can quickly get a sense of the volume of emails you're sending in a given hour-by-day window and how people are responding. Larger circles mean you sent relatively many emails in that period, and the color shading shows response rate. You can toggle between Unique Open Rate and Potential Lead Rate (which is like CTR, but more tightly defined: unique clicks minus unsubscribes, divided by total emails successfully delivered).
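For clarity, the Potential Lead Rate definition above is just:

```python
def potential_lead_rate(unique_clicks, unsubscribes, delivered):
    """Potential Lead Rate as defined above:
    (unique clicks - unsubscribes) / emails successfully delivered."""
    if delivered == 0:
        return 0.0
    return (unique_clicks - unsubscribes) / delivered

# 120 unique clicks and 20 unsubscribes over 1,000 delivered emails -> 10%.
rate = potential_lead_rate(120, 20, 1000)
```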


tl;dr: Big circles / light color = not good; big circles / dark color = good! But there are lots of situations in between the two extremes. Look for relatively small-volume, high-response slots where shifting send times from other slots might make a difference.


Here's where it gets interesting. Take a look at this chart. Where are opportunities for send time improvement?


[Screenshot]


Or how about this one?  Note that this campaign has Motiva AI running experiments and sending waves of emails essentially 24/7.

[Screenshot]

Once you're confident in the response patterns you're seeing - usually over a couple of weeks - you can easily use this feedback to optimize send time restrictions on the canvas. It also helps you have a data-driven conversation with your colleagues and teams about what's working in your campaign and what's not. Data ftw!


Try it for yourself.

We're excited to announce General Availability for the Motiva AI Cloud app in the Oracle marketplace. Motiva AI brings fully automated campaign optimization to Eloqua and delivers better-performing campaigns through machine learning.


Automatically find and exploit the best messages with multivariate testing.

You can test subject lines, secondary subject lines, copy / body text, design elements, graphics - or all of these at once. Anything you can save in the Eloqua Asset Library can be tested and optimized for you. Motiva AI experiments automatically shift investment from lower-performing toward higher-performing messages. There's nothing you need to do except watch the awesome roll in.



Find the right send time.

Motiva AI will tell you when your best send time slots are for any given campaign. More on this here.



Understand your audience.

More often than not, Motiva AI exposes high-performing responses among subpopulations in your segment. It may be that your audience is actually three or four different sub-audiences. This can be a great way to use data to influence the campaign and/or creative design process on your team. Persona development FTW!


Share your success.

You can share your great results with the rest of your team with beautiful reports. Export data to any platform.


Lighten your workload.

Stop trying to design valid A/B tests, export data, interpret results, rerun the campaign, and so on. Motiva AI will do all that for you, and more.


Easy integration.

Five (5) minutes to set up, and you can drag and drop the Motiva AI widgets onto new or existing campaign canvases. The Email Optimizer action service simply replaces the Send Email step with a much more powerful capability: message optimization.


Have a go and let us know what you think. It's a free 30-day pilot, unlimited campaigns. We're always excited to see new use cases.

Last month I described a new fully automated machine learning plugin to support simple blast-type email campaigns in: Introducing Motiva AI: A new (and better) way to do automated A/B testing


We've now got the same machine intelligence capability for multistep campaigns. This addresses a long-standing request for Oracle to support A/B testing on the canvas, but goes a good way beyond that.


The challenge: A/B/N tests send the same number of messages for each variation to learn which option is best. This means that as the number of message variations grows, more customers receive an inferior message during the testing period. In contrast, Motiva AI experiments with any number of variations over time, incrementally learning which work best and adapting to maximize overall campaign performance. Further, A/B sample populations frequently suffer from bias that makes the results of the test questionable - at best. Sampling well is challenging, even for experienced practitioners.


But this is a task that artificial intelligence - and specifically machine learning - can solve nicely.


Motiva AI Cloud for the canvas works just like Motiva AI for simple email campaigns, only you use the Motiva AI Action Service instead of an Email Send step.  After you grant us permission to operate on your canvas, it looks like this:

[Screenshot]


It's just like any other valid cloud service, and very similar to the Email step - it's just that you're choosing multiple email assets to test over the population.  To be clear here: we are not futzing with the text of the email itself; we're simply using whatever you have stored in your Eloqua asset library. Motiva AI allows you to test any number of variables: subject lines, text, layout, graphics - whatever you can store in your asset library.  Here's what that configuration looks like with three variations of, say, a subject line we'd like to test:

[Screenshot]


Once configured, the Motiva AI step accepts contacts from e.g. a dynamic segment, and allows you to route contacts to subsequent steps - much as you'd do with an Email step. Like this:

[Screenshot]


In the above example, we actually have two totally separate optimizations running at two distinct drip-type steps. You don't have to do this. That second send step could just be a plain old single email send instead. Just like for the simple email campaign, Motiva AI works by taking in incremental audience responses (stored in the Activity DB) and improving subsequent send strategies day by day, learning which message treatments work better and shifting sends in favor of those winning treatments. With Motiva AI, you're testing and optimizing over the entire population's preferences - not some small population carve-out. More signal + machine intelligence = better campaign and funnel performance.


You have access to the same elegant reports as Motiva AI for simple campaigns.  We're constantly adding to these. See the previous post for a couple of examples.


Motiva AI Cloud for canvas is ready for real use. The result is a measurably better way to do campaign message optimization that doesn't add work for you. Our intention with Motiva AI is to begin to shift the way marketing is typically done - from undifferentiated generic blast to campaigns that listen, learn, and adapt to customers' preferences. This is only the first step.  Let me know if you'd like to join us on the journey; and see more at Motiva AI.

Ever wanted super powers?


Or at least a better way to run A/B tests that automatically connects to your entire campaign (not just a small piece of it)? Ever wished that you had an extra hand... or three?


We've created an Eloqua plugin called Motiva AI that's like a virtual assistant for campaign optimization. Give Motiva AI your campaign segment(s) and email variations, and it will define the right experiments across your population, find which messages resonate most, and shift your campaign to the best version(s). It all happens automatically.


You can test anything you can create - subject lines, language / content, format, graphics, etc. Just store each unique version in Eloqua as you'd normally do. Motiva AI takes care of the rest.


Here's how it works.


  1. Log in to Motiva AI using your Eloqua single sign-on credentials.
  2. Configure a campaign: give Motiva AI your target segments, email assets, and the time window to operate in. When you're ready to go, hit "activate".
    [Screenshot]
  3. Motiva AI will configure the optimal experiments for you and tell Eloqua which contact should get which message. As Eloqua collects response data (opens, clicks, etc.), Motiva AI learns and builds those results into its future send strategy.
  4. Each day, Motiva AI updates its strategy based on campaign learnings. You can track results as they come in.
    [Screenshot]
  5. At the end of the campaign, check out beautiful reports that tell you what worked, and for whom. These are actionable analytics, meant to help you and your larger team understand what's working and what's not for future campaigns. Here's a taste:
    [Screenshots]



There's no manual test population setup, no wondering which version really won, and no need to set up another campaign to run the "real" version. Just set it up and fire away. Motiva AI takes care of the rest. Your most effective campaign, every time.


Any current Eloqua user running campaigns can use Motiva AI. It's free to try out. More soon, including support for multistep campaigns.
