So you’ve decided to embark on building a state-of-the-art lead scoring program.
Most marketers seem to agree that scoring leads before sending them over to Sales for follow-up is a good idea. If nothing else, the score attached to each lead creates a useful data point reflecting the expected value of that individual's inquiry to the business, one that both parties can reference and compare. If Sales is on board with the idea, it may be time to dig in and start building a new program (or rebuilding an existing one, as the case may be).
How to begin the conversation? The general consensus between Sales and Marketing that "we really should score our leads" is where the similarities with other companies end. Having been on the front lines of dozens of lead scoring projects, I have yet to find two companies with identical scoring programs. Not only are the models often totally different; people also tend to disagree on the design process itself: Should we invest heavily in quantitative analysis of our historical lead conversion rates, to uncover statistics we can then use to design the program? Or just rely on gut feel?
There is no cookie-cutter formula, but whatever approach you decide to take, your lead scoring program should ultimately score each lead automatically based on its net potential business opportunity, according to a shared interpretation of that lead's value. The program must also fit your lead flow process (whether "waterfall" or another methodology) and be compatible with your specific Eloqua-to-CRM integration.
When I am invited to help companies broker the early stages of a lead scoring project, I like to ask a few qualifying questions that precede the formal requirements definition. If you are at this stage in your own project, here is a quick list of questions you can use to prepare for the first planning meeting with your counterparts in Sales.
These questions typically generate a good conversation and set the stage for a successful project:
1. Why is lead scoring important to Sales? What is their vision? If nothing else, this is a sanity check to make sure there is a shared vision for what scoring will achieve for both parties.
2. Which leads represent examples of the most important ones for follow-up? Define these in terms of data: customer status, company type, demographics (job titles, functions, etc.), sales cycle, affinity for certain products.
3. Should all leads be scored using the same model regardless of source and type? Should any leads be scored as automatic “A” leads?
4. Does “behavioral scoring” matter to Sales? What type of measurable activity (email opens, clicks, trade show visits etc.) should “bump up” the score for a “B” lead to an “A” lead, if any?
5. Which leads are least important? It's a very good idea to discuss what sits at the bottom of the barrel and will receive the lowest rating. Should these leads be passed to Sales at all?
6. Will Sales agree to provide feedback on lead quality? Are they willing to record feedback on each lead, such as "You said this was an (A, B, or C) lead, but it really turned out to be an (A, B, or C) lead"?
7. Are there any special routing or notification requirements for "A" leads (or for "B" and "C" leads)?
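To make the conversation concrete, the combined fit-plus-behavior model that questions 2 and 4 point toward can be sketched in a few lines. Everything here is hypothetical: the field names, point values, activity types, and letter-grade thresholds are placeholders for discussion, not a recommended model.

```python
# Hypothetical sketch of a combined lead scoring model: profile "fit"
# points plus behavioral "engagement" points roll up to a letter grade.
# All field names, point values, and thresholds are illustrative only.

FIT_POINTS = {
    ("job_title", "director"): 10,
    ("industry", "software"): 8,
    ("company_size", "1000+"): 5,
}

BEHAVIOR_POINTS = {
    "email_open": 1,
    "email_click": 3,
    "trade_show_visit": 8,
}

def grade(lead_profile, activities):
    """Return an A/B/C grade from profile fields and recorded activities."""
    fit = sum(pts for (field, value), pts in FIT_POINTS.items()
              if lead_profile.get(field) == value)
    engagement = sum(BEHAVIOR_POINTS.get(a, 0) for a in activities)
    total = fit + engagement
    if total >= 20:
        return "A"
    if total >= 10:
        return "B"
    return "C"
```

Under these made-up numbers, a director at a software company who clicked one email would score 10 + 8 + 3 = 21 and grade out as an "A"; the same click from a lead with no profile fit would stay a "C". That gap is exactly the kind of outcome to pressure-test with Sales.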
This is just a sample list, but it illustrates effective ways to shed light on the priorities and emerging requirements of the new program.
Once you have a chance to document the answers, formal requirements are typically defined. At this point your technical experts should be involved (if they aren't already) to review the proposed model, assess the state of your Eloqua data, determine whether it's feasible to score leads the way you intend, and propose alternatives that come close to the "ideal" scoring model that will serve both parties well.
Armed with your Eloqua-ready lead scoring model or "matrix", it's time to circle back with Sales and review the whole program again, pressure-testing it for any problems. Once everyone approves the revised model, you get to work building out the program, adjusting the CRM integration, testing and reviewing, and continuing to make adjustments heading up to launch.