Background:

Our previous lead scoring model, put in place five years ago, had been neglected. The company grew from a scrappy start-up into an aspiring enterprise analytics firm, and the marketing team scrambled to keep up with ever-growing sales teams and their increasingly diverse needs. The thousands of scored leads that Marketing sent to Sales over the last few years sat largely untouched in our CRM; some sales reps complained that they received too many poor-quality leads, while other teams said they didn't receive enough leads at all.

The Marketing team was hungry for change, and eager to stop the vicious cycle of manual list review and cherry-picking of "good" leads that seemed to follow every event and webinar. Our previous marketing automation platform was bursting at the seams under the volume of data we had, and it offered only limited lead scoring management and reporting capabilities. A key driver of our switch to Eloqua earlier this year was the opportunity to easily create, manage, and report on multiple lead scoring models to better match the global and product considerations of our business.

Preparing for Launch:

After running a few campaigns (along with the obligatory accidental email send to a large portion of the database), we set our sights on developing and implementing four regional lead scoring models. To learn lead scoring best practices and understand how the lead scoring module in E10 works, I took the Eloqua University Eloqua 10: Lead Scoring course and browsed the Eloqua 10 Lead Scoring Resource Center.

Defining and launching multiple scoring models required a combination of technical work for our Marketing and Sales Operations teams and a large amount of change management to help "sell" the Sales teams on a new and improved lead scoring program, and on why the results would be different this time. While our Marketing team is not yet ready to commit to a target number of quality scored leads delivered to Sales, we've defined these success metrics (also listed in the attached lead scoring model launch template) to baseline now, and eventually benchmark against, to measure our lead scoring success (a sketch of computing the first few follows the list):

  • Number of scored leads created
  • Lead status summary by lead score
  • % of leads at each score that convert into opportunities
  • % of leads at each score that close
  • Increased Sales acceptance rate
  • Decreased average number of days in the sales cycle
  • Increased revenue per deal
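
For baselining, here's a minimal sketch (in Python with pandas) of how the first few metrics might be computed from a CSV export of scored leads. The file name and the lead_score, status, has_opportunity, and is_closed_won columns are hypothetical stand-ins for whatever your CRM export actually provides:

```python
import pandas as pd

# Hypothetical export of scored leads from the CRM.
leads = pd.read_csv("scored_leads_export.csv")

# Number of scored leads created.
print("Scored leads created:", len(leads))

# Lead status summary by lead score (e.g., A1..D4).
print(leads.groupby(["lead_score", "status"]).size().unstack(fill_value=0))

# % of leads at each score that convert into opportunities.
print((leads.groupby("lead_score")["has_opportunity"].mean() * 100).round(1))

# % of leads at each score that close.
print((leads.groupby("lead_score")["is_closed_won"].mean() * 100).round(1))
```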

While the company isn't new to the concept of lead scoring, neither our Sales nor our Marketing team was ready to explore predictive lead scoring options. We agreed that developing a solid lead scoring model would be a process: at first, Marketing and Sales would work together to define the demographics and behavior of an ideal lead, launch a model built on that hypothesis, and then use scored lead data to refine the model. A toy sketch of such a model follows.
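
To make the hypothesis concrete, here's a minimal toy sketch (in Python) of what such a model looks like: a profile grade (A-D) from demographic fit and an engagement grade (1-4) from recent activity, combined Eloqua-style into ratings like "A1". Every criterion, weight, and threshold below is a hypothetical placeholder, not our actual model:

```python
def profile_grade(contact: dict) -> str:
    """Demographic fit: A (best) through D. Criteria are placeholders."""
    score = 0
    if contact.get("title_is_target"):       # e.g., Director+ in analytics
        score += 40
    if contact.get("industry_is_target"):
        score += 30
    if contact.get("company_size_ok"):
        score += 30
    return "ABCD"[min(3, (100 - score) // 25)]   # 100 -> A ... 0 -> D

def engagement_grade(contact: dict) -> int:
    """Recent activity: 1 (most engaged) through 4. Weights are placeholders."""
    score = min(100, 20 * contact.get("webinars_attended_90d", 0)
                     + 5 * contact.get("emails_clicked_90d", 0)
                     + 10 * contact.get("site_visits_30d", 0))
    return 1 + min(3, (100 - score) // 25)       # 100 -> 1 ... 0 -> 4

def lead_rating(contact: dict) -> str:
    """Combined rating, e.g. 'A1' for a strong, highly engaged lead."""
    return f"{profile_grade(contact)}{engagement_grade(contact)}"

print(lead_rating({"title_is_target": True, "industry_is_target": True,
                   "company_size_ok": True, "webinars_attended_90d": 2,
                   "emails_clicked_90d": 8, "site_visits_30d": 3}))  # -> A1
```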

I configured the initial models for each of the four regions and then activated them to score contacts in the database. There are a few options for testing lead scoring results before you start sending leads to Sales. You can create a lead scoring view at the contact level to see how an individual's activities fit the model criteria, which is great for spot-checking the scores of a few contacts but not scalable across your entire database; a rough alternative is sketched below.
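
As a purely illustrative alternative, here's a minimal sketch of a more scalable spot-check: recompute ratings with the lead_rating() toy model above and compare them against the ratings stored on an exported contact list. The export file and its field names are hypothetical:

```python
import pandas as pd

# Hypothetical contact export that includes the stored rating.
contacts = pd.read_csv("eloqua_contacts_export.csv")

# Sample a few rows rather than spot-checking the whole database by hand.
sample = contacts.sample(25, random_state=0)

# Recompute each rating with the lead_rating() sketch above and compare.
recomputed = sample.apply(lambda row: lead_rating(row.to_dict()), axis=1)
mismatches = sample[recomputed != sample["rating"]]
print(f"{len(mismatches)} of {len(sample)} sampled contacts disagree with the model")
```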

Another option for checking your model is within Insight. The "Contacts" folder contains a helpful "Lead Scoring Dashboard" matrix that shows how all contacts in the database score against a model selected via dropdown. While this is helpful if you have multiple lead scoring models for different products, it doesn't seem possible to limit the report to contacts from a specific region. To let our regional marketers analyze their model against their region's unassigned contacts, I created a contact report in Insight using the Analyzer license. To create your own, pull in the contact fields you'd like to see, add the "Contact Model Definition" attribute along with the "Profile" and "Engagement" attributes, plus whichever fields you use to distinguish an Eloqua contact from a CRM contact.
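
For anyone who would rather slice the same data outside Insight, here's a minimal sketch of that regional breakdown using an exported contact list. The region, crm_contact_id, profile, and engagement field names are hypothetical stand-ins, and an empty CRM ID is assumed to be the marker distinguishing an Eloqua-only contact from a CRM contact:

```python
import pandas as pd

contacts = pd.read_csv("eloqua_contacts_export.csv")

# Unassigned contacts from one region; an empty CRM ID is our assumed
# marker for a contact that exists in Eloqua but not yet in the CRM.
emea = contacts[(contacts["region"] == "EMEA") & (contacts["crm_contact_id"].isna())]

# Profile (A-D) vs. engagement (1-4) matrix, mirroring the dashboard.
print(pd.crosstab(emea["profile"], emea["engagement"]))
```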

To finalize our lead scoring set-up, I found the web-based training (WBT) for Eloqua 10: Program Builder Overview helpful in understanding how a shared filter can be used as a feeder that brings leads from each region into a lead scoring program based on their profile and engagement scores.
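
The routing rule that such a feeder expresses boils down to something like the sketch below; the "good lead" ratings and queue names are hypothetical, not our actual criteria:

```python
# Ratings we (hypothetically) treat as good enough to hand to Sales.
SEND_TO_SALES = {"A1", "A2", "B1", "B2"}

def route(contact: dict) -> str:
    """Decide where a scored contact goes, mimicking the feeder logic."""
    rating = contact["rating"]   # e.g., "B1" from the scoring model
    region = contact["region"]   # e.g., "EMEA"
    if rating in SEND_TO_SALES:
        return f"send to {region} Sales queue"
    return "keep in nurture"

print(route({"rating": "A1", "region": "EMEA"}))  # -> send to EMEA Sales queue
```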

Then we were ready to start rolling out the regional models. To help keep Marketing organized, I put together a simple checklist (attached below) to use before each launch, and to keep Sales informed, I created a cheat sheet (attached below) answering lead scoring FAQs.

When we've launched models, it has been critical for the Sales lead in each region to co-present with Marketing to reinforce that this is a joint Sales and Marketing initiative. We've also developed a best practice in which Sales leads review a scored lead report in our CRM with their direct reports to see how many scored leads have been contacted, how many have converted, and how many are associated with opportunities.

While there's still a lot of room to evolve our models based on the data we're collecting and feedback from Sales, our Marketing team has taken one large step forward in our journey to deliver fewer, higher-quality leads.
