We have had a lead scoring program running in program builder for several years now, built in our previous E9 instance. It runs smoothly, but the criteria are a bit lax, and sales has told us that a majority of the leads being sent over are still too early in the buying cycle. After DNB made it through the transition to E10 in December of 2013, we wanted to revisit the criteria and use the E10 lead scoring module to see how it compared, both in ease of use and reporting. We were able to do this, but (spoiler alert) this story doesn't have a complete ending just yet. However, we have gained enough insight into our database to prove our point.
Below is a report of our lead score distribution as it stands from the program builder program. We are hoping that by revisiting the criteria and recreating our model in the new module, we will reduce the number of contacts in the upper left quadrants (A1, B1, A2, B2, etc.) so that we can continue to nurture those that are still making their way through the funnel.
After careful review of our business requirements, we decided to make a few changes to our scoring criteria in order to simplify our existing program builder program. We removed criteria that checked whether the account was a current customer or had an existing agreement. We also deemed it unnecessary to have a field that checked whether the contact's role matched their title (i.e., the decision maker was a director or above). We kept the basic metrics for title and our other criteria, but did have to make small updates based on the way calculations are done in the new module; for example, titles were given a percent of the weight instead of a certain number of points. We also had to change the way we accounted for some of the titles within the model. For instance, so as not to discount any of the multiple C-suite titles appearing today, we changed "contains" "CEO, CTO, CFO..." to "matches wildcard pattern" "C?O".
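To see why the wildcard pattern is more robust than a hard-coded list, here is a minimal sketch (not Eloqua's implementation) using Python's `fnmatch`, which supports the same "?" single-character wildcard: "C?O" matches any three-letter C-suite title without us having to enumerate each one.

```python
# Sketch only: fnmatch's "?" matches exactly one character, so the
# pattern "C?O" catches CEO, CTO, CFO, CIO, CMO, etc. in one rule.
from fnmatch import fnmatch

titles = ["CEO", "CTO", "CFO", "CIO", "CMO", "Director", "VP of Sales"]
c_suite = [t for t in titles if fnmatch(t, "C?O")]
print(c_suite)  # ['CEO', 'CTO', 'CFO', 'CIO', 'CMO']
```

Any new C-suite acronym of that shape is picked up automatically, which is exactly the maintenance headache the old "contains" list created.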
We have four main segments in our business, each with different sets of criteria built into our program builder program (sales and marketing, risk, supply, and general). The biggest shift in our design for the E10 module is that we originally wanted to keep using one model that would include all four segments. However, the main difference in criteria between the segments is the weight and percent of the available score that we attribute to titles. In the new module, we are unable to assign different scores to the same title for different segments within a single model, so after consulting with our account manager, we decided it was best to build four separate models.
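The constraint above can be sketched in a few lines. This is a hypothetical illustration (the segment names come from the post, but the weights, `title_score` helper, and point values are invented, and this is not the actual E10 calculation): because the same title must earn a different share of the available score in each segment, each segment effectively needs its own scoring table, i.e., its own model.

```python
# Hypothetical per-segment title weights: the same title earns a
# different percent of the available score depending on the segment.
from fnmatch import fnmatch

SEGMENT_TITLE_WEIGHTS = {
    "sales_and_marketing": {"C?O": 1.0, "Director*": 0.7},
    "risk":                {"C?O": 1.0, "Director*": 0.5},
}

def title_score(segment, title, max_points=40):
    # Award the title's percent of the points available for this criterion.
    for pattern, pct in SEGMENT_TITLE_WEIGHTS[segment].items():
        if fnmatch(title, pattern):
            return max_points * pct
    return 0

print(title_score("sales_and_marketing", "Director"))  # 28.0
print(title_score("risk", "Director"))                 # 20.0
```

One shared model would have to pick a single weight for "Director", which is exactly why we ended up with four models instead.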
I took the "Eloqua 10: Lead Scoring" class in Eloqua University for the second time as a refresher right before I started this initiative. With the help of this class, I built all four models in the module without hesitation. The class covered all aspects of lead scoring, from best practices to configuration. Once all the requirements were agreed upon, it took me hardly any time at all to construct all four models.
Based on the Insight reporting after activation, the new scoring module and criteria have greatly decreased the number of high-scoring contacts! A1s went from 481 contacts to 71, and B1s from 705 to 176! Below is an Insight report of our main model for comparison.
But, as I said, our story doesn't have a true ending. While we were in the process of building this out, our company hired a new CMO, and our models are now being reviewed to ensure sales, marketing, and our C-suite are in alignment. Therefore, we have not set up the final integration with SFDC for these contacts to be sent to sales.
Despite not having true closure, we are optimistic that we will be able to move forward. After reviewing the contacts in the reports, they appear to be high-quality leads. We hope to test this by sending the A1s from this report to a team in sales to determine the lead quality.
So we can wait on the edge of our seats for the final conclusion…da Da DA