Lead scoring is a methodology for ranking prospects based on their likelihood to convert, using a combination of demographic fit (firmographics) and behavioral engagement (activity signals).
Lead scoring assigns numerical values to prospects based on who they are and what they do. A VP of Engineering at a 500-person SaaS company who visited your pricing page scores higher than an intern at a 10-person agency who downloaded one blog post. The score determines how and when sales engages.
Scoring Models
Lead scores combine two dimensions. Fit scoring evaluates demographic match: company size, industry, job title, revenue, technology stack. Behavior scoring tracks engagement: website visits, email opens, content downloads, event attendance, product trials. A simple model might give 0-50 points for fit and 0-50 points for behavior, with leads above 70 routed to sales. More sophisticated models use machine learning to weight factors based on historical conversion data.
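The simple model described above can be sketched in a few lines. The point values, caps, and field names (company_size, visited_pricing, and so on) are illustrative assumptions, not a standard; real weights should come from your own conversion data.

```python
# Minimal rule-based lead score: 0-50 fit + 0-50 behavior, route above 70.
# All point values and field names here are hypothetical examples.

def fit_score(lead: dict) -> int:
    """Demographic fit, capped at 50 points."""
    score = 0
    if lead.get("company_size", 0) >= 100:
        score += 20
    if lead.get("industry") in {"saas", "fintech"}:
        score += 15
    if lead.get("title_seniority") in {"vp", "director", "c-level"}:
        score += 15
    return min(score, 50)

def behavior_score(lead: dict) -> int:
    """Engagement, capped at 50 points; downloads capped to limit vanity signals."""
    score = 0
    if lead.get("visited_pricing"):
        score += 25
    score += 5 * min(lead.get("content_downloads", 0), 3)
    if lead.get("started_trial"):
        score += 10
    return min(score, 50)

def route(lead: dict) -> str:
    """Leads above 70 combined points go to sales; the rest to nurture."""
    return "sales" if fit_score(lead) + behavior_score(lead) > 70 else "nurture"
```

With these weights, the VP at a 500-person SaaS company who visited the pricing page routes to sales, while the intern with one blog download stays in nurture.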
Lead Scoring Pitfalls
The #1 mistake is scoring based on vanity engagement. Someone who downloads five ebooks might be a content enthusiast, not a buyer. Conversely, someone who visits the pricing page once might be ready to purchase. CROs should validate scoring models against actual conversion data quarterly. If high-scoring leads aren't converting at higher rates than low-scoring leads, the model is broken and reps will lose trust in it fast.
Lead Scoring Tools
Most CRM platforms (HubSpot, Salesforce) include native lead scoring. Marketing automation platforms like Marketo and Pardot offer more sophisticated models. AI-powered tools like 6sense and Demandbase combine first-party scores with third-party intent data for predictive lead scoring. The trend is moving from rule-based scoring (manually assigning points) to predictive scoring (ML models trained on your historical wins).
Common Mistakes with Lead Scoring
Building a scoring model once and never updating it. Markets change, buyer behavior shifts, and what predicted a good lead two years ago might not predict one today. The most common failure is scoring heavily on content downloads because that's what marketing can measure most easily. But a VP of Engineering who visits your pricing page once is probably a better lead than a marketing intern who downloaded five ebooks. Review your scoring model against actual conversion data every quarter and adjust weights based on what predicts pipeline.
Real-World Example
A marketing automation platform scored leads on a 0-100 scale. Leads above 70 went to sales. The problem: their model gave 20 points for downloading any content piece. Leads with 3 ebook downloads and a webinar hit 70 without ever showing purchase intent. AEs reported that 60% of leads scored 70+ were "not ready to talk to sales." The RevOps team rebuilt the model using regression analysis against historical closed-won deals. Result: pricing page views got 25 points, demo requests got 30 points, and content downloads dropped to 5 points each. Sales acceptance rate on scored leads went from 40% to 78%.
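The rebuilt weighting from the example can be expressed as a simple lookup. The numbers mirror the anecdote; the event names are hypothetical, and in practice the weights would come from the regression fit, not be hand-typed.

```python
# Rebuilt weights from the example: intent signals dominate content activity.
WEIGHTS = {
    "pricing_page_view": 25,
    "demo_request": 30,
    "content_download": 5,  # per download; was 20 under the old model
}

def score(events):
    """Sum the weight of each recorded activity; unknown events score 0."""
    return sum(WEIGHTS.get(event, 0) for event in events)

# Old model: 3 ebooks + a webinar at 20 points each = 80, clearing the
# 70-point bar on content alone. Under the new weights (treating the
# webinar as a content download for this sketch), the same activity
# scores 4 * 5 = 20, while pricing view + demo request scores 55.
content_only = ["content_download"] * 4
intent_lead = ["pricing_page_view", "demo_request"]
```

The design point is that the weights now encode purchase intent rather than measurability: the activities easiest for marketing to count are worth the least.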
In Practice
A practical approach to lead scoring that avoids over-engineering: start with two dimensions scored 1-5. Fit score (1 = no ICP match, 5 = perfect ICP). Engagement score (1 = single touchpoint, 5 = pricing page + demo request + multiple visits). Route leads with combined score of 8+ to AEs immediately. Score 6-7 goes to SDRs for qualification. Score 5 and below goes to nurture. Review monthly: are 8+ leads converting at higher rates? If not, adjust the thresholds. This simple model outperforms complex 100-point systems because the team trusts and uses it.
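The routing rules above reduce to a short function. This is a sketch of the two-dimension model as described, assuming fit and engagement have already been scored 1-5 upstream; the tier names are taken from the text.

```python
# Two-dimension routing: fit (1-5) + engagement (1-5), thresholds at 8 and 6.
def route(fit: int, engagement: int) -> str:
    """Route a lead by combined score: 8+ to AE, 6-7 to SDR, else nurture."""
    if not (1 <= fit <= 5 and 1 <= engagement <= 5):
        raise ValueError("fit and engagement must each be scored 1-5")
    total = fit + engagement
    if total >= 8:
        return "AE"
    if total >= 6:
        return "SDR"
    return "nurture"
```

Because the whole model fits in one readable function, the monthly review is equally simple: compare conversion rates by tier, and move the two thresholds if the 8+ band isn't outperforming.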