
Let's say you collect your customers' birthdays in Year-Month-Date format and you wish to isolate the month into another contact field called "Month" for month-specific custom emails. Below is a step-by-step guide to extracting just the month from the Birthday field (format: Year-Month-Date) using the Contact Washing Machine and RegEx. Please be mindful to test the CWM configuration in a sandbox first.

 

  1. Create a contact field in your instance, named "MONTH" for example.
  2. Drag and drop the Contact Washing Machine (CWM) app from the action bar onto a new campaign.
  3. Double-click the CWM app and click the pencil icon.
  4. Enter the following in the Configuration window: a Step Name; the Source Field set to your "Birthday" (format: Year-Month-Date) contact field; "RegEx Extract" as the Action Item, with the pattern "[-.\/](\d+)[-.\/]"; and "Month" as the Destination Field.

 

 

  5. Hit Save.

  6. Your configuration is saved; now test it via the hourglass icon at the top of the CWM configuration window. Add email addresses via the "+" sign and hit "Run Test". In the snapshot below, the input value is the Birthday 2015(Year)-04(Month)-01(Date) and the output value is just the month: "04".
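If you want to sanity-check the pattern from step 4 outside Eloqua first, here is a minimal JavaScript check (the test value is hypothetical):

// Quick check of the CWM pattern against a sample birthday
var birthday = "2015-04-01"; // Year-Month-Date
var match = birthday.match(/[-.\/](\d+)[-.\/]/);
console.log(match ? match[1] : "no match"); // "04"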

Hope this will be helpful. Do let me know in case of any questions.

 

** Need more information on the Contact Washing Machine app? The Oracle Help Centre doc below covers what the Contact Washing Machine app is, how to download and install it, and its actions: https://docs.oracle.com/cloud/latest/marketingcs_gs/OMCAA/Help/Apps/ContactWashingMachine/Tasks/InstallingContactWashingMachine.htm?Highlight=contact%20washing%20machine

The purpose of this article is to provide a simple-to-follow guide for adding a Datepicker widget to your Eloqua landing pages.

 

The end result will look something like this:

I'm aiming for this guide to be as simple as possible, as I know that some people are not familiar with HTML at all. Hopefully this step-by-step guide makes it easy & worry-free.

 

 

So we make our basic form in Eloqua with an email field & a custom text field for the date:

(Make sure you also set the HTML name of your date field to ‘date’ (all lower case))

We view the form's HTML by going to Actions > View Form HTML & copying all of it to the clipboard.

 

We now create a new HTML Landing page, which will look like this:

<!DOCTYPE html>

<html>

  <head>  </head>

  <body>   

<!-- form HTML goes here -->

  </body>

</html>

 

Paste the whole of your form code in between the body tags (as specified above).

 

You can now view your Landing page & it will just have the basic form on a white page.

For the sake of appearance, we're now going to tidy it up to make it a bit neater.

On your Landing page scroll up in the code to find the closing </style> tag.

Just before it we add the following.

div{ width:300px; margin:0px auto;}

input{ border-radius:5px}

This sets the width to a more reasonable size & centralises the form.

The second line just softens the corners of the input boxes (it may add a shadow as well, depending on your browser).

 

Now we get to the meat of the tutorial: adding the jQuery UI script to the page to get the plugin working.

 

So essentially we need two script files added: the main jQuery JavaScript library & the jQuery UI script that handles the datepicker widget.

We'll also need to add the jQuery UI CSS file to handle the graphical element - but worry not, I'm here to guide you through it....

 

So there are two ways you can do this.

  1. Use the Google CDN versions that are available here: https://developers.google.com/speed/libraries/
  2. You can retrieve the raw files from http://jqueryui.com/ and https://jquery.com/ and you can host these files in your own instance’s file storage for personal use.

The first solution is much simpler & serves up the files pretty swiftly; however, you are of course relying on Google for the files, and in some situations you might prefer to host the files yourself so that you are totally self-reliant.

I personally have had no issues with the Google-served versions, so that is the solution I will step through here.

From the Google developer libraries you retrieve the jQuery .js file and the two jQuery UI files - the .js & the .css. They will look similar to the below.

 

jQuery

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>

jQuery UI

<script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js"></script>

 

<link rel="stylesheet" href="https://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/themes/smoothness/jquery-ui.css">

 

So now you have these files, you're going to insert them into your HTML, right before the closing </head> tag. Using the three CDN snippets from above, your head section should end up looking something like this:
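<head>
  <!-- jQuery core library -->
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
  <!-- jQuery UI script (provides the datepicker widget) -->
  <script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js"></script>
  <!-- jQuery UI stylesheet (smoothness theme) -->
  <link rel="stylesheet" href="https://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/themes/smoothness/jquery-ui.css">
</head>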

 

Now we’re nearly set.

 

Remember when I said to set the HTML name of your date field to 'date'? That just makes it a bit easier to locate, and now you need to find it. The easiest way to do this is to perform a CTRL+F search for name="date".

This will seek out the <input> field in question. It will have an attribute like id="field1" or something similar, & we are going to alter it to say id="datepicker".

 

Your revised input should now say something similar to
<input id="datepicker" name="date" type="text" value="" class="field-size-top-large" />

 

OK - last step.

So back in your HTML, again, just before the closing </head> tag you’re going to add this snippet:

 

<script>

  // Once the DOM is ready, attach the jQuery UI datepicker
  // widget to the input with id="datepicker"
  $( function() {

    $( "#datepicker" ).datepicker();

  } );

</script>

 

And that's it! Save your file and go and try your new datepicker widget.

 

Good Luck!

In our third and final installment examining how artificial intelligence (AI) can transform marketing automation and audience engagement, we'll discuss the power that machine learning offers marketers in driving highly adaptive, personalized messaging experiences when there are hundreds or thousands of different options.

 

In our previous posts, we’ve been discussing ways to test and optimize message variations, both in the context-free scenario where we have no customer data and the context-rich scenario where we have lots of customer attribute and behavior data.

 

Now we want to turn our attention to scenarios where we have a set of messaging experience options so expansive that it’s impossible to explicitly test all the possibilities. Think of this as a form of figuring out the Next Best Action to take for a given audience.

 

To make this concrete, let's consider a lead-generation campaign that consists of a series of email messages where each message presents a small number of products in a particular order. Here we'll assume we have no past history for these new potential customers. The goal is to learn the most effective message sequence, the one that yields the most customer conversions on average for new contacts.

 

So why is this hard? Consider the volume of options.

 

Let’s say that the maximum number of emails we can send to a given contact is three, and that in each email sent, we present three products in a particular order. This means that in any email message series, nine products are presented in total in a specified order.

 

 

(Above: One possible product ordering of nine products from three different product categories)

 

How many possible orderings are there if there are nine products in total available to present? Lots! There are 9 factorial or 362,880 options. We clearly can’t test them all!
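As a quick back-of-the-envelope check:

// Number of orderings of 9 distinct products = 9!
function factorial(n) {
  return n <= 1 ? 1 : n * factorial(n - 1);
}
console.log(factorial(9)); // 362880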

 

So what do we do?

 

The trick is to exploit the fact that the performance of these product orderings is correlated.

 

If two product orderings differ only slightly in a single email in the series, we should expect that their performance should be similar on average. If we add data about product categories as well, we have additional similarity structure that we can use in our models.

 

 

(Above: Similar product orderings that should in turn have similar average performance)

 

Using machine learning, our goal would be to intelligently sample the space of possible product orderings by modeling the correlation in performance between the available options. Here we are pulling ourselves out of a tricky situation — one that would otherwise be impossible to address if left to humans alone!

 

With rich models able to capture the underlying dependencies in complex messaging experiences, the possibilities for optimization and automation expand dramatically. Marketers can ask deeper questions about experience design and ultimately deliver more value to current and future customers.

 

Examples like these motivate our thinking at Motiva AI around the need for humans and machines to work as a team, with each bringing their unique advantages to the task at hand. We’re looking forward to delivering these next generation capabilities to our customers so that they may serve their customers in whole new ways.

 


 

 

For more information on our product offerings, please visit our homepage or contact us at info@motiva.ai. Give our Eloqua plug-in a try today!

The dev team plans to fix the Excel issue in 18B, so the export-to-Excel issue should be resolved after that release.

 

We are currently on 18A, and 18B is scheduled for Friday, May 18, 2018. Please refer to the Oracle Eloqua Release Center (https://community.oracle.com/docs/DOC-895287) for more details.

With GDPR fast approaching, many customers will be wanting to tidy up orphaned CDO records in order to be as compliant as possible.

This guide aims to demonstrate how Oracle Eloqua can accomplish this for you.

To avoid the potential deletion of incorrect data, a backup of your CDO is recommended before any records are actually deleted.

 

To do this you will need 5 assets:

  • A Program Builder program
  • A Program Canvas
  • An update rule
  • A shared filter
  • Your chosen CDO*

* This solution works 'per CDO', so if you have 10 CDOs you'll need 10 of the above (with the exception of the update rule & the shared filter).

 

  Here is an overview of the solution:

(Screenshot: overview of the solution)

What this demonstrates is that you add an extra field to your CDO titled 'Unmapped Record'.

You then pass ALL records through a Program Builder program, which updates any record NOT mapped to a contact in your shared filter (your update rule will set that field to 'true').

After that, the CDO Record Services evaluate all updated records & send matching ones to the Program Canvas.

The Program Canvas deletes any unmapped records passed into it.

 

The Shared Filter:

 

Very simply, your shared filter just contains all contacts in your contact table.

The simplest way to achieve this is to filter on contacts whose email address contains '@'; however, if your instance allows blank email addresses, you should also include all records whose email address field is blank.

(Screenshot: the full shared filter)

 

The Update Rule:

 

Create an update rule (Audience > Tools > Data Tools) of entity type Custom Object, configured to set the 'Unmapped Record?' field in your custom data object to the value 'true'.

(Screenshot: the update rule)

The Program Builder Program

 

Now create a Program Builder program of entity type CDO Records (selecting your chosen CDO), & the flow needs to be as follows:

(Screenshot: the Program Builder flow)

The decision step verifies whether the record has a mapped contact within the specified filter (which should include all contacts).

 

If the record has no mapped contact, the update rule is run in Step 100, setting that record's value in the Unmapped Record field to 'true'.

The CDO service will detect that the record was modified, and because the condition is now met, the record will be pushed into your Program Canvas.

 

The Program Canvas:

 

Very simply, this receives the record passed into it, verifies that the record has no mapped contact (it should not have one), & then deletes the record accordingly.

If any records happen to sneak in that still have mapped contacts, they are excluded (I added a week-long wait step just for testing purposes).

(Screenshot: the Program Canvas flow)

 

The CDO Record Services

 

To configure your CDO record services within your CDO:

Choose Custom Object > Custom Object Record Services.

Choose 'Modified Data'.

Choose 'Add Processing Step'.

From the 'Add single Processing Step' option, choose 'Add to Step in Program' (this will route to your Program Canvas).

On the setup screen, name your service.

Choose 'Custom Object Records' from Entity Type.

In 'This Processing Step Gets Executed', set it to conditional: only when the field Unmapped Record = 'true'.

Once this is set up, remember to enable the service.

(Screenshot: enabling the CDO service)

Once this is all tied together, go to your Program Builder &, in the start step (Step 000), choose Add Members to Step & add ALL CDO records.

To do this, choose your CDO as the source & do not restrict the members.

 

Due to the performance limitations of CDO services, pushing large volumes of records through during prime business hours could affect your instance's performance.
For very large volumes of records it is recommended to pass them through Program Builder in smaller batches.

Good Luck!

Hi all

 

Having had some fun recently wrestling with gauges and their labels in Insight custom reports, I thought I would share the solution!

 

The default behaviour is to display the labels as percentages, and that might be fine for you, but if you're trying to troubleshoot or present other data, it's best to be able to show actual values. But how? It's not obvious in the properties at all. What you need to do is the following:

 

 

 

  • Edit Gauge Properties
  • Tab over to Titles and Labels
  • Click on Axis Labels
  • Stay on the Display Options tab
  • Select Axis Labels (again)
  • Choose Actual Values

 

Hey presto, your gauge is now showing real numbers!

 

 

 

Hope that helps, and saves you a little time!


In this series, we’re discussing ways in which artificial intelligence (AI) can transform marketing automation, allowing us to remove the guesswork, deliver a better customer experience, and yield improved ROI.

 

As we outlined previously, a core challenge arises in the definition of a marketing automation campaign. Today, a marketer must define a target population along with the message or sequence of messages to send. At Motiva.AI, our long-term goal is to replace this guesswork with machine learning that uncovers the best mappings between messaging experiences and customers over time, removing the need to manually define the relevant population. This would allow marketers to focus their energy on crafting compelling content and learning what most resonates with their customers.

 

In our last post, we described a first step toward this goal that discovers the most compelling message option to send to a given population. Motiva.AI applies a novel multi-armed bandit learning approach to achieve this end through adaptive experimentation over time. In that scenario, we assumed no knowledge about the individual customers.

 

In this post, we want to push the idea further. What if we could use all available customer data?

 

At a high level, leveraging customer data in a deeper way creates the opportunity to uncover meaningful populations that exhibit shared content preferences. This opens up new ways to understand customers, both individually and in larger segments, as well as tailor personalized messaging experiences.

 

So how do we accomplish that with machine learning?

 

In essence, the answer involves learning what relationships exist, if any, between customer attribute and behavior data and their responses to the available message options in an ongoing campaign.

 

Let’s take a specific example.

 

Imagine we have customer data for job role and industry. Let’s say we also have online behavior data that highlights their digital pathways through a website, including webpage dwell times and whitepaper downloads.

 

An adaptive marketing campaign would send regular batches of messages to customers and listen for responses as we described before. The primary difference now is that we'll use the responses to learn a model that predicts a customer's message preference based on the available attribute and behavior data. This might uncover significant relationships, for example, between the customer's role, industry, and prior product preferences and the current message options promoting similar products. As those relationships grow stronger over the course of the ongoing campaign, the model would yield increasingly beneficial predictions for customers bearing similar attributes.

 

In machine learning terms, the change in the learning objective represents a shift from multi-armed bandit to contextual bandit learning. Here we’re bringing all available context into the learning task to support true customer-level, adaptive personalization of the experience.

 

By learning models that predict the likelihood of message engagement based on customer attribute and behavior data, we are automatically learning definitions of the underlying populations that are most likely interested in the associated message content. Coupled with an evolving model of message content similarity, the game changes. The possibility of continuous learning across campaigns without human intervention is within reach.

 

This is a great example of how bringing machine intelligence to a human team can create significant impact. It allows marketers the option over time to let AI adapt to emergent customer preferences dynamically — without manually having to guess at what will work with broadly defined populations. Customers get the highly tailored experience that they expect and respond to, which in turn helps increase marketing impact and ROI. In our next post, we’ll push these ideas even further into testing a more complex range of messaging experiences that could never be accomplished by humans alone.

 

Give our Eloqua plug-in a try today!

Artificial intelligence (AI) is touted by many as “the” solution to radically transform marketing automation. Yet specifics are often absent about how we move toward a more productive future.

 

In the next three posts, we’ll describe several significant steps forward and the associated machine learning capabilities that can bring them to life.

 

To understand where you can apply AI, consider how you define a typical email marketing campaign. In its simplest form, you define a population, email message, and the date and time of delivery. In a multi-step campaign, you specify sequences of messages often with time delays and conditional logic governing the delivery of each message. In other words, you take your best guess at what messaging experience will best resonate with the population.

 

Easy, right? Hardly.

 

The classic first step in removing the guesswork is conducting an A/B/N test to identify the most compelling message out of two or more possible options. The approach seems simple in principle: randomly select a fraction of the population to serve as the test segment, and then randomly allocate the test segment contacts to cohorts of equal proportions. Each cohort receives one of the messages under consideration. The message option that produces the most significant response in terms of a specified performance criterion, such as unique open or click-through rate, is deemed the winner.

 

Immediately questions arise about how to apply the test. How large should the test segment be? How do you calculate a measure of confidence in the result? When should you reject the result? What should you do after a rejection? Instead of burdening platform users with these details, machine learning can step in to deliver far better outcomes without the headaches.

 

The Motiva AI platform does this today by applying a machine learning procedure for incremental testing that removes two issues with A/B/N testing. The first: prior to conducting an A/B/N test, it's impossible to know how many test contacts we'll need to discern the best option with high confidence. That depends on the magnitude of the difference in performance between the best option and its closest competitor.

 

As depicted below, we address this by spreading campaign execution over time and avoiding the need to define a test segment. In essence, everyone is in the pool. Random subsets of the population get treated at a regular frequency over a specified number of days. As evidence builds, the platform decides whether further testing is required or sufficient evidence exists to commit to the current best option.

 

(Above: A representative Motiva campaign searching for the best message option in terms of potential lead rate, where a potential lead is defined as a click-through without a corresponding unsubscribe.)

 

A more significant weakness we address is the inefficiency of A/B/N testing. Due to the equal allocation of contacts to message options, a majority of the test segment receives an inferior option when more than two options are sent. Furthermore, as the number of options grows, the fraction of suboptimal message allocations increases.

 

Not good!

 

Motiva AI delivers greater efficiency and a better user experience by varying the allocation of contacts to message options based on the available evidence. As responses arrive during the campaign, the learning algorithm updates the daily allocations to send only the most viable options weighted relative to their expected performance. This results in higher overall response rates in Motiva AI-run campaigns without the need for human intervention.

 

While our work has benefited from the latest research in multi-armed bandit learning, we’ve had to tackle several nuances associated with this scenario. Two in particular are batch experimentation and delayed responses. Most classic multi-armed bandit algorithms assume immediate feedback after a single experiment. In our context, cohorts of audience contacts are treated simultaneously with unknown response delays. The Motiva AI platform integrates all available evidence in a principled manner and explores the most compelling options throughout the campaign with efficiency and ease.

 

In this post, we’ve discussed a first step in moving toward a more supportive, effective experience for marketing and service professionals leveraging AI. In this scenario, we assume no contact-specific context is available; so the objective is to identify a single message option that performs best for the population as a whole. In our next post, we’ll discuss incremental testing that aims to discover the best mapping of available message options to contacts by utilizing contact-level context and response history.

Send Time Optimization (STO) with Motiva AI and Eloqua

One useful tool in the marketing toolbox is paying close attention to campaign response data in order to improve send times and maximize performance. Done well, you can dramatically increase campaign performance by using that response data to configure time slots to optimally map to audience preferences. This post is about using Motiva AI Cloud for Eloqua to automatically determine those optimal send times within a campaign.

 

Send Time Optimization (STO) works whenever you run a Motiva AI Email Optimizer step on the canvas. As a refresher, the Motiva EO is functionally like Eloqua's Send Email step, except that it accepts any number of message variations and automatically finds and invests in the best candidates to maximize response rates. It's multivariate, adaptive message optimization backed by machine learning. You can read more about that here or in our Getting Started Guide.


 

Once you've configured the Motiva AI Email Optimizer, you can choose to restrict send times (just like the Eloqua Send Email step), or you can leave it wide open - it's your choice. Like this:

(Screenshot: send-time settings in the Email Optimizer step)

 

As Motiva runs its messaging experiments over a given audience and period of time, it also gathers response data (opens, clicks, etc.) from that step in your campaign. Over the course of the campaign, as long as you leave the Motiva EO running, it will gather more and more response data and automatically generate a report that represents audience behavioral responses over a 7x24 grid. That's all seven days, for every hour of each day. It looks like this:

(Screenshot: the 7x24 response report)

The report will update itself each time a new Motiva AI experiment happens. The frequency is configurable, but most users leave it at the default of one experiment per day.

 

With this report you can quickly get a sense of the volume of emails you're sending in a given hour-by-day window and how people are responding. Larger-diameter circles mean that you sent relatively more emails in that period, and the color shading indicates response rate. You can toggle between Unique Open Rate and Potential Lead Rate (which is like CTR, but more tightly defined: (unique clicks - unsubscribes) / total emails successfully delivered).
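As a formula, here is a minimal sketch of that metric (the counts are hypothetical):

// Potential Lead Rate as defined above
function potentialLeadRate(uniqueClicks, unsubscribes, delivered) {
  return (uniqueClicks - unsubscribes) / delivered;
}
console.log(potentialLeadRate(120, 5, 4000)); // 0.02875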

 

tl;dr: Big circles / light color = not good; big circles / dark color = good! But there are lots of situations where we're in between the two extremes. Look for relatively small-volume / high-response slots, and for places where shifting send times from other slots might make a difference.

 

Here's where it gets interesting. Take a look at this chart. Where are opportunities for send time improvement?

 

(Screenshot: an example response report)

 

Or how about this one?  Note that this campaign has Motiva AI running experiments and sending waves of emails essentially 24/7.

(Screenshot: a response report for a campaign sending 24/7)

Once you're confident in the response patterns you're seeing - usually over a couple of weeks - you can easily use this feedback to optimize send-time restrictions on the canvas. At the same time, it helps you have a data-driven conversation with your colleagues and teams about what's working with your campaign and what's not. Data ftw!

 

Try it for yourself.

tl;dr
Firefox is slow; enter Servo.

update>about:config>layout.css.servo.enabled>true
Firefox is fast, you're welcome!

---------------------------------------------------------------------

 

Hey all,

 

When using Eloqua, optimizing for time is one of the things we all try to keep an eye on, as campaign work can be *challengingly* slow at times.

 

The main speed bump in the Eloqua experience is Firefox: not a speed demon to begin with (compared to Chrome or Safari), it renders Eloqua's visual elements in real time, which hogs system memory and slows everything down to a frustrating point.

 

With large canvases, I have seen Firefox shoot up into gigabyte territory of RAM used and 30%+ CPU usage with just two tabs open, leading to freezes and sometimes system crashes...

Oh, and I run an i7-6600U with 16GB.
WTF?


But, but, why not use Edge, which is much better at handling HTML5, or Chrome, which is much better at everything?
Because Eloqua optimizes primarily for FF, and some modules or pop-up windows do not render in other browsers. Ugh.

So Firefox it is.

 

Now, what if there were a way to get FF to show any change we make in the blink of an eye, and refresh canvas and other asset changes in actual real time?

 

...

 

Behold: Quantum CSS, for a life-altering experience.
(Experimental Mozilla team stuff, but so far so good.)


----------------------------------------------------------------------

 

If you're ready to set the Fox on actual fire, do this:

>update FF to the latest available version (menu/?/About Firefox)
>relaunch

>enter "about:config" in the address bar

>ignore the warranty warning

>search for "layout.css.servo.enabled"
>double-click to enable the setting from "false" to "true"

>relaunch

 

That's it!

Your instance of Firefox should be significantly more responsive.

 

(I also have a few other settings altered to promote speed in FF, as it's always driven me nuts, so there's always that too.)

 

Let me know how it works for you!

 

Cheers,

Morgan

 

-Credit: I learned about this via CSS-Tricks' email newsletter, an amazing resource to learn more about web things-

Since posting this article, Postman has updated its layout a little. Because of this, some screenshots are no longer accurate. However, the steps are still correct and valid. Once I have the time, I will update the screenshots to show the latest Postman version.

 

This post is written to help everyone who is interested in getting started with the REST & Bulk APIs. Basic authentication is the easiest to start with, while the OAuth2.0 access token and refresh token take more time to set up. Oracle states throughout its documentation that basic authentication should only be used for testing purposes and never for actual integrations (for security reasons). If you are building a custom integration with a back-end system, it is best to stick to the OAuth2.0 method.

 

 

Installing Postman

Before you can start with the APIs, you should install Postman. This app makes API development faster, easier, and better. The free app is used by more than 3.5 million developers and 30,000 companies worldwide. Postman is designed with the developer in mind, and packed with features and options.

 

 

Basic Authentication

For basic authentication you do not need to do much work to get things going; the main thing you need to do is encode your login details.

 

Create the Authorization value by going, for example, to www.base64decode.org and encoding your site name, username and password in the following format:

siteName + '\' + username + ':' + password

 

For example, with the site name of "TESTCOMPANY", username of "testuser", and password of "testpassword", the value would be the base-64-encoded string:

TESTCOMPANY\testuser:testpassword

 

Within Postman you put the following information in the Header of your call:

Authorization: Basic VEVTVENPTVBBTllcdGVzdHVzZXI6dGVzdHBhc3N3b3Jk
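If you prefer to build this header programmatically, here is a minimal Node.js sketch using the example credentials above:

const siteName = "TESTCOMPANY";
const username = "testuser";
const password = "testpassword";

// The backslash must be escaped in a JavaScript string literal
const raw = siteName + "\\" + username + ":" + password; // TESTCOMPANY\testuser:testpassword
const encoded = Buffer.from(raw).toString("base64");
console.log("Authorization: Basic " + encoded);
// -> Authorization: Basic VEVTVENPTVBBTllcdGVzdHVzZXI6dGVzdHBhc3N3b3Jk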

The URL you use to connect is dynamic; you will have to determine which server to connect to. To read more about it, please see this support document: REST API for Oracle Eloqua Marketing Cloud Service

 

OAuth2.0 - Request Token

With the following steps, you are going to authenticate using OAuth2.0. This is based upon the information Oracle provides within the Eloqua Developer Help Center. By following these steps you are authenticating using the authorization code grant; however, you only have to request the token, and Postman will take care of the authorization code and give you the access and refresh tokens straight away (OK, not really straight away, since you first have to give permission).

 

Step 1 – Build app

  1. Go to Setup – AppCloud Developer
  2. Click on "Create App" and fill in all required information. For Postman, the OAuth Callback URL is https://www.getpostman.com/oauth2/callback

Once saved, open the app so that you see the following screen:

 

Step 2 – Set up Postman OAuth2.0

Set the authorization type to OAuth2.0 and click on "Get New Access Token".

Fill in the required fields and click on Request Token:

    • Token name can be self-chosen
    • Auth URL = https://login.eloqua.com/auth/oauth2/authorize
    • Access Token URL = https://login.eloqua.com/auth/oauth2/token
    • Client ID = available on the screen within the AppCloud Developer tab in Eloqua (which you opened after saving the App settings)
    • Client Secret = available on the screen within the AppCloud Developer tab in Eloqua (which you opened after saving the App settings)
    • Scope can be left blank
    • Grant Type = Authorization Code
    • Request Access Token locally = checked

 

The next step is to log in to Eloqua and approve the permission request for the app that you have just created. Click on Log In, and enter your user details (make sure that this user has API rights within Eloqua). When this is successful you will see the Postman screen again, with a token created.

You are now able to use this token to request information from Eloqua. Make sure you are using the token within the header: in the screen where you can see the token, set it to be added to the header and click on Use Token. A (1) is placed behind the Headers tab, meaning it was successfully added.

If you then do the following GET on https://secure.[POD].eloqua.com/api/REST/1.0/system/user/[ID], you will get information back from Eloqua.

 

OAuth2.0 - Refresh Token

 

The Call

If the access token has expired, you should send your stored Refresh Token to login.eloqua.com/auth/oauth2/token to obtain new tokens, as in the following example:

POST https://login.eloqua.com/auth/oauth2/token

Authorization: Basic czZCaGRSa3F0Mzo3RmpmcDBaQnIxS3REUmJuZlZkbUl3

{
   "grant_type":"refresh_token",
   "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA",
   "scope":"full",
   "redirect_uri":"https://www.getpostman.com/oauth2/callback"
}

This request must authenticate using HTTP basic. Use your app's Client Id as the username and its Client Secret as the password, in the format client_id:client_secret. Encode the string with base-64 encoding and pass it as an authentication header. The system does not support passing Client Id and Client Secret parameters in the JSON body, and, unlike basic authentication elsewhere, you should not include your site name. Learn more about basic authentication with Eloqua (source here).
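For reference, here is a minimal sketch of this refresh call in Node.js (18+, with built-in fetch); the client id, client secret, and stored refresh token below are placeholder values:

const clientId = "YOUR_CLIENT_ID";
const clientSecret = "YOUR_CLIENT_SECRET";
const storedRefreshToken = "YOUR_SAVED_REFRESH_TOKEN";

async function refreshTokens() {
  // HTTP basic auth: base64 of "client_id:client_secret" (no site name here)
  const basic = Buffer.from(clientId + ":" + clientSecret).toString("base64");

  const response = await fetch("https://login.eloqua.com/auth/oauth2/token", {
    method: "POST",
    headers: {
      "Authorization": "Basic " + basic,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      grant_type: "refresh_token",
      refresh_token: storedRefreshToken,
      scope: "full",
      redirect_uri: "https://www.getpostman.com/oauth2/callback"
    })
  });

  const tokens = await response.json();
  // Save tokens.refresh_token for the next refresh; use tokens.access_token now
  return tokens;
}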

The response

If the request is successful, the response is a JSON body containing a new access token, token type, access token expiration time, and new refresh token:

 

{
   "access_token":"2YotnFZFEjr1zCsicMWpAA",
   "token_type":"bearer",
   "expires_in":3600,
   "refresh_token":"MToxLUIyZHRNTUZsazIwNmZFTy1"
}

You can keep using the access token you receive until it expires; the refresh token you get back should be saved somewhere so you can refresh the next time.

 

 

Extra information

Expiration of tokens

You do not necessarily have to request a new access token every time you want to call something. Oracle has shared information on the expiration of these tokens within their Help Center:

  • Authorization Codes expire in 60 seconds (intended for immediate use)
  • Access Tokens expire in 8 hours
  • Refresh Tokens expire in 1 year
  • Refresh Tokens will expire immediately after being used to obtain new tokens, or after 1 year if they are not used to obtain new tokens

This means you can use the access token you initially requested for up to 8 hours before you have to request a new one.
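In code, that token reuse might look something like the rough sketch below, where refreshTokens() stands in for the refresh call shown earlier:

let cached = null; // { accessToken: string, expiresAt: ms timestamp }

async function getAccessToken() {
  const now = Date.now();
  // Reuse the cached token, with a one-minute safety margin before expiry
  if (cached && now < cached.expiresAt - 60 * 1000) {
    return cached.accessToken;
  }
  const tokens = await refreshTokens(); // { access_token, expires_in, ... }
  cached = {
    accessToken: tokens.access_token,
    expiresAt: now + tokens.expires_in * 1000
  };
  return cached.accessToken;
}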

 

Bulk API

Now that you have the API up and running, you can start understanding how the different APIs work, or you can continue to the next level: adding contacts through the Bulk API. A very clear post about this can be found here.

 

Useful URLs

Basic Authentication on Eloqua Developer support
Oauth2.0 Authentication on Eloqua Developer support
Postman - API development tool
Encode your login details using Base64encode.org
Determining your Base URL

We're excited to announce General Availability for the Motiva AI Cloud app in the Oracle marketplace. Motiva AI brings fully automated campaign optimization to Eloqua and delivers better-performing campaigns through machine learning.

 

Automatically find and exploit the best messages with multivariate testing.

You can test subject lines, secondary subject lines, copy/body text, design elements, graphics - or all of these at once. Anything you can save in the Eloqua Asset Library can be tested and optimized for you. Motiva AI experiments automatically shift investment from lower-performing towards higher-performing messages for you. There's nothing you need to do except watch the awesome roll in.

 

 

Find the right send time.

Motiva AI will tell you when your best send time slots are for any given campaign. More on this here.

 

 

Understand your audience.

More often than not, Motiva AI exposes high-performing responses among subpopulations in your segment. It may be that the audience you have is actually three or four different sub-audiences. This can be a great way to use data to influence the campaign and/or creative design process on your team. Persona development FTW!

 

Share your success.

You can share your great results with the rest of your team with beautiful reports. Export data to any platform.

 

Lighten your workload.

Stop trying to design valid A/B tests, export data, interpret results, rerun the campaign, and so on. Motiva AI will do all that for you, and more.

 

Easy integration.

Five (5) minutes to set up, and you can drag and drop the Motiva AI widgets onto new or existing campaign canvases. The Email Optimizer action service simply replaces the Send Email step with a much more powerful capability: message optimization.

 

Have a go and let us know what you think. It's a free 30-day pilot, unlimited campaigns. We're always excited to see new use cases.

Hello All,

 

I was wondering if there is a way to send a set number of emails from a PET table within a program?

 

For example: using a PET table of 50,000, would we be able to send just 10,000 within a program?

 

The only way we have found is using an allocation switch; we have tried a stage gate but it doesn't seem to work.

 

Thanks.

In order to create a page tags structure in Eloqua, you must first have your website fully tracked with Eloqua scripts.

Comprehensive instructions on how to do that can be found under Assets > Website Setup > Tracking > Visitor Tracking > Visitor Tracking Package. Clicking the Generate button on the lower right will provide an archive with the scripts and detailed instructions.

 

Now, given that your website is tracked and you can find it in the Eloqua Site Map, we can proceed with the tags.

We have the following website structure which we want to include in our Eloqua web tagging page system.

In order to make our approach automated, we will start by creating the main tags: Home, Services, Industries, Insights, Contact.

The base URL in the input box should be mysite.com and the levels down should be set to 1.

 

Now we are going to create Auto Tagging rules for the second tier: Services, Industries, Insights, Contact.

There will be 4 rules.

The base URL will be different for each, for example mysite.com/industry for Industries. The tags will be created automatically for all pages under it.

 

 

Now you have your Eloqua tags all set up for a website with the above structure. The plus is that you set up Auto Tag rules, so changes within the same structure will auto-tag new pages. This is a lot better than selecting Create Page Tag and creating one tag at a time, which is a system you would have to manually support at every site-map change, either by hand or via Excel upload (which is also by hand).

 

To add a plus to reporting on tags, you can create page tag groups. I would recommend one group for each base URL within the site, like Services, Industries, Insights. Just manually add the page tags within each page tag group.

Hi,

 

The working scenario is that an email is forwarded to a contact, who clicks a link in it and arrives at a page. It can be:

- an unsubscribe page, where they click a button to unsubscribe;

- a prepopulated registration page.

In our case, either would be a problem, because the page would pre-populate with the data of the original recipient of the email. Thus the person the email was forwarded to by that office colleague could submit a form with someone else's data, or unsubscribe the forwarder.

 

In order to prevent that, we have to make some updates to the destination page. Basically, we'll check whether the visitor of the page has the same email address as the person who received the email. The Eloqua tracking script needs to be present on the page.

 

Create a Visitor Lookup in the Web Data Lookup section. Make sure the Data Lookup Type is Visitor and the data field is Cookie GUID. This will get the email address of the visitor. The script is in the Get Data Lookup Scripts section and looks like the one below.

<!-- The contactinfo container must sit outside the script tag -->
<div id="contactinfo"></div>

<SCRIPT TYPE='text/javascript' LANGUAGE='JavaScript'><!--//

// Append a row of "<b>label</b>value" to the contactinfo container
function CreateRow(label, value) {
    var p = document.createElement('p');
    var b = document.createElement('b');
    p.appendChild(b);
    b.appendChild(document.createTextNode(label));
    p.appendChild(document.createTextNode(value));
    document.getElementById('contactinfo').appendChild(p);
}

// Callback invoked by the Eloqua data lookup script once visitor data is available
function SetElqContent(){
    if (this.GetElqContentPersonalizationValue){
        // V_Email_Address is the visitor field returned by the lookup
        CreateRow('Email Address: ', GetElqContentPersonalizationValue('V_Email_Address'));
    } else {
        CreateRow('Personalization functions not found','');
    }
}

//--></SCRIPT>

 

From here you can check this visitor email address against the one coming from the email link, that is, if you populate with field merges.

 

var eloquaEmailAddress = '<span class="eloquaemail">EmailAddress</span>';

function verifyIdentity(cookieEmailAddress) {

    if (eloquaEmailAddress == cookieEmailAddress || cookieEmailAddress == undefined || cookieEmailAddress == "") {

        // KEEP identity based on Eloqua ID

    } else {

        // SET identity based on Cookie GUID: overwrite every form field
        // named "emailAddress" with the visitor's cookie-derived address
        var fields = document.getElementsByName('emailAddress');
        for (var i = 0; i < fields.length; i++) {
            fields[i].value = cookieEmailAddress;
        }

    }

};

 

If you populate using Contact Lookups, there appears to be a new option to get this solved: the Contact Lookup on a page will only return lookup data if there's a match with the visitor (we haven't tried this one yet).
