You’ve spent time developing a lead scoring model – but are you monitoring your results? In today’s business, things can change quickly: your target buyer personas, your market, your marketing campaigns, your website content, your CRM data, your sales model, etc. Is your scoring model still aligned?


In this article, I’ll describe how to get quick insight into the results of your lead scoring program using Segments in Eloqua10. For the purpose of this exercise, I’ll assume you’re using Eloqua’s best practice Co-Dynamic Lead Scoring model (for an overview of this model, read the Eloqua Grande Guide to Lead Scoring). This outputs scores such as A1, A2, B3, D4, etc., but you can easily apply this method to your own scoring model.


This article is lengthy, so here are some quick links to the topics we’ll be covering:

- Overall Lead Score Results
- Implicit and Explicit Breakdown
- Evaluating Profile Completeness and Engagement
- Auditing Your Scoring Program

Overall Lead Score Results

First, you’ll need to identify which Contact field is capturing the score. For those using the best practice model, this is named “Lead Rating - Combined”. Next, go to Contacts>Segments in the main menu, and create a New Segment.


For each value in the scoring model, create a unique Filter Criteria to identify that value, like so:

E10-MonitorScoring-01.png

From the right menu, select the “Compare Contact Fields” Filter Criteria, then select the Contact field that contains the score and enter the value, like so:

E10-MonitorScoring-02.png

Repeat this process for each of the values in your scoring model. Also add a Filter to find Contacts that have no value (blank) in this field. Give each Filter Criteria a descriptive label.


Depending on how many scoring values you have, this can be time consuming, but you only have to do it once. Don’t forget to save your Segment!
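Tip: if you want a checklist of every score value before you start clicking, a few lines of Python can enumerate the combinations. This is a minimal sketch assuming the default best practice model of four letter grades (A-D) and four number tiers (1-4); adjust the lists if your model differs.

```python
from itertools import product

# Letter = Profile (Explicit) grade, number = Engagement (Implicit) tier.
# Assumes the default 4x4 best practice model; edit the lists to match yours.
letters = ["A", "B", "C", "D"]
numbers = ["1", "2", "3", "4"]

scores = [letter + number for letter, number in product(letters, numbers)]
print(scores)  # ['A1', 'A2', ..., 'D4'] -> 16 Filter Criteria, plus one for blanks
```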


Here’s what your final Segment should look like:

E10-MonitorScoring-03.png

Right away, in this example, we can see the scoring results are out of whack: 5,020 Contacts out of the total 5,194 have a score of “D4”.

When you have a more even spread across the values, it can be difficult to spot inconsistencies like this at a glance. In this case, I suggest creating a quick chart from the values in Excel or similar, like so:

E10-MonitorScoring-035.png
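If you’d rather script the chart than build it in Excel, here’s a minimal matplotlib sketch that produces the same kind of bar chart. All of the counts below are placeholders except the D4 figure from our example; substitute the totals from your own Segment.

```python
import matplotlib.pyplot as plt

# Contacts per score value from the monitoring Segment.
# Illustrative numbers only, except D4 (5,020, from the example above).
counts = {"A1": 3, "A4": 10, "B2": 21, "C3": 48, "D1": 15, "D4": 5020}

plt.bar(list(counts.keys()), list(counts.values()))
plt.title("Contacts by Lead Rating - Combined")
plt.ylabel("Contacts")
plt.show()
```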


Implicit and Explicit Breakdown

You can use this same Segment method to further break down the Co-Dynamic scores for monitoring. Again, you’ll need to identify the Contact field containing the values for each (for those using the best practice model, these are “Lead Rating – Profile (Explicit)” and “Lead Rating – Engagement (Implicit)”). The resulting Segments from our example look like this:


Profile (Explicit) Breakdown

E10-MonitorScoring-04.png

Engagement (Implicit) Breakdown

E10-MonitorScoring-05.png


Note that in the Explicit Segment, I used Shared Filters. You may find that some of these Criteria are already set up as Shared Filters in your Eloqua install, so take a look at what you have before proceeding.


This breakdown gives a little more insight into what’s happening at the micro level: how the Profile and Engagement components each contribute to your combined scores.
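Conceptually, the two breakdowns simply tally the letter and number components of the combined score separately. Here’s a small Python sketch of that idea, using made-up score values:

```python
from collections import Counter

# Hypothetical values from the "Lead Rating - Combined" Contact field.
combined = ["A1", "B3", "D4", "D4", "C2", "D4"]

profile = Counter(score[0] for score in combined)      # Explicit letter grade
engagement = Counter(score[1:] for score in combined)  # Implicit number tier

print(profile)     # Counter({'D': 3, 'A': 1, 'B': 1, 'C': 1})
print(engagement)  # Counter({'4': 3, '1': 1, '3': 1, '2': 1})
```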


Evaluating Profile Completeness and Engagement

When lead scoring results are as strongly skewed to the lower ratings as in our example, the recommended next step is to investigate the completeness of the criteria used. In other words, how many Contacts in your database can actually meet your scoring criteria?


“Profile Completeness” is defined as the percentage of Contacts that have a value for the fields in your definition of a complete profile, in this case, those Contact fields you’re using in your lead scoring criteria. As with the previous exercises, we’re going to use Segments to assess this metric.


Create a new Segment and add a Filter Criteria for each of the Contact fields used in your scoring criteria, identifying the presence of any value in that field.


For free-form text Contact fields, your rule will look like this (click any of the following images to enlarge):

E10-MonitorScoring-06.png

An asterisk (*) is called a “wildcard” in this context, and it tells Eloqua to look for any characters in the field.


For Contact fields using a Select List, your rule will look like this:

E10-MonitorScoring-07.png

For Contact fields with a Data Type of “Numeric”, your rule might look like this:

E10-MonitorScoring-08.png

Once you’ve added all of your rules for the Contact fields, also include a rule to bring in all Contacts (you may already have a Shared Filter you can use) – you’ll need this total to calculate the percentage completeness.


Here’s what our resulting Segment shows:

E10-MonitorScoring-09.png

We can see at a glance in this case that the completeness for each field is quite low; to demonstrate further, you may want to calculate the percentages like so:


Contact Field                  Contacts with Field Value    Field Completion Rate
Department                     170                          3.3%
Job Role                       132                          2.5%
Type of Retailer               125                          2.4%
Number of Stores               16                           0.3%
Annual Revenue                 16                           0.3%
TOTAL Contacts                 5,194
Average Profile Completeness                                1.8%
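If you’d rather script this arithmetic than build a spreadsheet, here’s a small sketch using the counts from the example Segment above (5,194 is the total Contact count):

```python
# Contacts with a value in each scoring field, from the example Segment.
field_counts = {
    "Department": 170,
    "Job Role": 132,
    "Type of Retailer": 125,
    "Number of Stores": 16,
    "Annual Revenue": 16,
}
total_contacts = 5194

for field, count in field_counts.items():
    print(f"{field}: {count / total_contacts:.1%}")

average = sum(field_counts.values()) / len(field_counts) / total_contacts
print(f"Average Profile Completeness: {average:.1%}")  # 1.8%
```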


Conclusion: the scoring criteria may be accurate, but it’s difficult to assess the quality of the results due to highly incomplete profiles. Your next steps will be to develop programs to improve the completeness of your profiles: for example, gated assets requiring a form submission that captures those fields, list purchases, or manual data append projects.


For Engagement completeness, you can quickly create a Segment to assess your engagement scoring criteria by re-using the Shared Filters already in use by your lead scoring Program. Going back to our example, here are the Segment results:

E10-MonitorScoring-10.png

Again, it’s fairly easy to see that there aren’t many Contacts that meet the criteria for each of these Shared Filters. In table format with percentages, this is what we see:


Criteria                                        Contacts in Filter    Engagement Rate
Form Submission - Last 3 Days                   1                     0.0%
Form Submission - Last 7 Days                   4                     0.1%
Form Submission - Last 30 Days                  15                    0.3%
Visited High Value Web Content - Last 3 Days    1                     0.0%
Visited High Value Web Content - Last 7 Days    2                     0.2%
Visited High Value Web Content - Last 30 Days   9                     0.0%
3+ Website Visits - Last 3 Days                 1                     0.0%
3+ Website Visits - Last 7 Days                 2                     0.0%
3+ Website Visits - Last 30 Days                7                     0.1%
Email Click-through - Last 3 Days               0                     0.0%
Email Click-through - Last 7 Days               0                     0.0%
Email Click-through - Last 30 Days              0                     0.0%
High Touch Event Attendance - Last 6 Months     0                     0.0%
TOTAL Contacts                                  5,194
Average Engagement                                                    0.1%


For most new Eloqua customers, your results will look quite similar to the above because you don’t yet have activity history for your Contacts. Make sure you’re rolling out campaigns to your database (emails with links to landing pages or other web pages) and driving traffic to your website that is likely to result in an Eloqua Form submission. Not only does this drive up your overall engagement, but it also performs the critical step of turning an unknown visitor into a known visitor, i.e., a Contact.


If you’ve been using Eloqua long enough to have history against your scoring criteria, your rules may simply be too tight. Perhaps you don’t send out email campaigns as often as the Email Click-through criteria warrants, or don’t drive as many return visits as you expect. Consider loosening the time ranges or the activity quantity (e.g., 2 visits rather than 3) in your criteria.


Auditing Your Scoring Program

A common question from customers is how often they should audit the results of their scoring program. Our best practice recommendation is at least every 6 months.  As I mentioned, things can change quickly!


Periodically benchmark your results following the instructions above to identify trends – are the overall scores trending up or down? As part of your audit, you should further assess your results anecdotally by interviewing your sales team (is the scoring working/helpful?), reviewing against conversion rates (does an A1 truly move through the funnel more quickly than a C3?), and of course against any additional goals you set out in developing your scoring criteria.
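One lightweight way to benchmark over time is to snapshot your Segment counts into a CSV at each audit, then chart the history. Here’s a minimal sketch; the counts are placeholders for the totals you read out of your monitoring Segment, and the file name is just an example.

```python
import csv
from datetime import date
from pathlib import Path

# Placeholder counts -- paste in your Segment totals at each audit.
snapshot = {"A1": 3, "B2": 21, "C3": 48, "D4": 5020, "Blank": 56}

log = Path("lead_score_history.csv")  # hypothetical file name
is_new = not log.exists()

with log.open("a", newline="") as f:
    writer = csv.writer(f)
    if is_new:
        writer.writerow(["date", *snapshot.keys()])  # header on first run
    writer.writerow([date.today().isoformat(), *snapshot.values()])
```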


We'd love to hear about some of your recommendations and best practices for monitoring and auditing your lead scoring programs. What are some of the benchmarks you use?