Sales teams waste 67% of their time on accounts that will never buy. That's not a discipline problem — it's a scoring problem. Most B2B lead scoring models assign points based on firmographic fit (right industry, right size) while ignoring the strongest predictor of conversion: buying intent.
A 50,000-employee company in your target industry scores high in traditional models. But if nobody from that company has visited your site in six months, they're not a lead — they're a name on a list. Meanwhile, a 200-person company that's visited your pricing page four times this week, downloaded a comparison guide, and returned three days in a row is actively evaluating solutions. Traditional scoring often ranks the first account higher.
Intent-based scoring fixes this by weighting what accounts do over what they are. Here's how to build one that works.
Traditional Lead Scoring vs. Intent-Based Scoring
Traditional lead scoring is a checklist. Does the company match your ICP? Add 10 points. Is the contact a director or above? Add 15 points. Did they download a whitepaper? Add 5 points. Cross a threshold, send to sales.
This approach has three problems:
It rewards profile, not behavior. A perfect-fit account that passively downloaded one asset six months ago scores the same as one actively researching solutions right now. The profile match matters, but timing matters more.
It treats all actions equally. In most scoring models, downloading a whitepaper and visiting a pricing page earn similar points. But the intent signal is vastly different. Content consumption indicates interest. Pricing page visits indicate evaluation. These shouldn't be weighted the same.
It's static. Traditional scores only go up. An account that was active three months ago but has gone silent still carries those points. Without decay, your "hot leads" list fills up with cold accounts that scored well once.
Intent-based scoring addresses all three. It prioritizes behavioral signals over demographic fit, weights actions by their purchase-intent signal strength, and applies time decay so scores reflect current activity, not historical engagement.
Signals That Indicate Buying Intent
Not all website behavior is a buying signal. Reading a blog post might mean someone is doing research for their own content, not evaluating vendors. You need to distinguish between educational engagement and purchase intent. Here are the signals that reliably indicate buying behavior:
High-Intent Page Visits
Certain pages correlate strongly with purchase decisions:
- Pricing page — The clearest buying signal on your site. Someone checking pricing is mentally budgeting. Weight this 3–5x higher than a blog visit.
- Comparison/alternatives pages — They're evaluating you against competitors. This is active vendor selection.
- Integration/API documentation — They're checking technical compatibility with their stack. This is evaluation, not casual browsing.
- Case studies in their industry — Looking for proof that you've solved their specific problem.
- Implementation/onboarding pages — They're thinking past the purchase decision to what deployment looks like.
Blog posts, general "about" pages, and career pages are low-intent signals. Don't ignore them entirely — they indicate awareness — but weight them minimally.
Visit Patterns
Single visits mean little. Patterns mean everything.
Return frequency is one of the strongest intent signals. An account that visited once last month is mildly interested. An account that's visited three times this week is actively evaluating. Weight return visits within a 7-day window heavily — research shows that B2B buyers who visit more than three times in a week convert at 5–8x the rate of single-visit accounts.
Session depth matters too. A visitor who views one page and leaves is different from one who views seven pages across pricing, features, and integrations. The latter is doing a thorough evaluation. Track pages per session at the account level, not the individual level: several people from the same company each doing shallow research adds up to a committee-level evaluation.
Multi-stakeholder visits are a strong signal that's easy to miss. When you detect visitors from multiple IP ranges or departments within the same company, a buying committee is forming. Two people from marketing and one from IT visiting in the same week? That's a cross-functional evaluation. This signal alone can bump an account's score substantially.
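To illustrate how these pattern signals can be derived, here's a minimal sketch that rolls raw page-view events up to the account level. The event shape (`account_id`, `visitor_id`, `timestamp`) is an assumption about your analytics export, not any particular platform's schema; visitor identity would typically come from reverse-IP or form-fill enrichment.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def summarize_visit_patterns(events, window_days=7, now=None):
    """Roll raw page-view events up into per-account visit-pattern signals.

    Each event is assumed to be a dict with 'account_id', 'visitor_id',
    and 'timestamp' (a datetime) keys.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    accounts = defaultdict(lambda: {"visits": 0, "visitors": set()})

    for event in events:
        if event["timestamp"] < cutoff:
            continue  # only count activity inside the recent window
        acct = accounts[event["account_id"]]
        acct["visits"] += 1
        acct["visitors"].add(event["visitor_id"])

    return {
        account_id: {
            "visits_last_window": acct["visits"],
            "unique_stakeholders": len(acct["visitors"]),
            # 2+ distinct visitors in one window suggests a buying committee
            "multi_stakeholder": len(acct["visitors"]) >= 2,
        }
        for account_id, acct in accounts.items()
    }
```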
Content Consumption Velocity
Speed of content consumption indicates urgency. An account that downloaded your buyer's guide, visited three case studies, and read a technical whitepaper — all in two days — is moving fast. An account that consumed the same content over four months is interested, but far less urgent.
Track not just what content was consumed, but how quickly. Compress your content engagement into a velocity metric: content assets consumed divided by days of activity. High velocity = high urgency = higher score.
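A minimal sketch of that velocity metric, assuming you have a timestamp for each asset an account consumed (the example dates are illustrative):

```python
from datetime import datetime

def content_velocity(consumption_timestamps):
    """Assets consumed per day of activity: higher velocity suggests more urgency.

    `consumption_timestamps` is a list of datetimes, one per asset consumed.
    """
    if not consumption_timestamps:
        return 0.0
    span_days = (max(consumption_timestamps) - min(consumption_timestamps)).days
    # Treat same-day bursts as one day of activity so velocity stays finite.
    active_days = max(span_days, 1)
    return len(consumption_timestamps) / active_days

# Three assets in two days scores far higher than the same assets over four months.
fast = content_velocity([datetime(2024, 5, 1), datetime(2024, 5, 1), datetime(2024, 5, 2)])
slow = content_velocity([datetime(2024, 1, 10), datetime(2024, 3, 2), datetime(2024, 5, 2)])
print(fast, slow)  # 3.0 vs roughly 0.03
```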
Building Your Scoring Model
Here's a practical scoring framework you can implement in most marketing automation platforms:
Fit Score (0–40 points)
This is your traditional ICP match. Keep it, but cap it at 40% of the total possible score. It answers "could this account buy from us?" not "will they?"
- Industry match: 0–10 points
- Company size match: 0–10 points
- Revenue range match: 0–10 points
- Technology stack compatibility: 0–10 points
Engagement Score (0–60 points)
This is where intent-based scoring differs. Weight it at 60% of total possible score.
- Pricing page visit: 15 points
- Comparison/alternatives page visit: 12 points
- Case study view (industry-matched): 10 points
- Integration/API docs visit: 8 points
- Demo page visit (without conversion): 10 points
- Blog post view: 2 points
- Return visit within 7 days: 8 points (per return)
- Multi-stakeholder activity (2+ people from same company): 15 points
- Content download: 5 points
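Here's a minimal rules-based sketch of the fit and engagement scores using the weights above. The field and event names are placeholders; map them to whatever your marketing automation platform actually exposes, and adjust the weights to your own data.

```python
FIT_WEIGHTS = {
    "industry_match": 10,
    "company_size_match": 10,
    "revenue_range_match": 10,
    "tech_stack_match": 10,
}

ENGAGEMENT_WEIGHTS = {
    "pricing_page_visit": 15,
    "comparison_page_visit": 12,
    "case_study_view": 10,
    "integration_docs_visit": 8,
    "demo_page_visit": 10,
    "blog_post_view": 2,
    "return_visit_7d": 8,       # applied per return visit
    "multi_stakeholder": 15,
    "content_download": 5,
}

def fit_score(account):
    """Sum ICP-match points, capped at 40."""
    points = sum(FIT_WEIGHTS[attr] for attr, matched in account["fit"].items() if matched)
    return min(points, 40)

def engagement_score(events):
    """Sum behavioral points from a list of event names; repeats stack, capped at 60."""
    points = sum(ENGAGEMENT_WEIGHTS.get(event, 0) for event in events)
    return min(points, 60)

def total_score(account):
    return fit_score(account) + engagement_score(account["events"])

example = {
    "fit": {"industry_match": True, "company_size_match": True,
            "revenue_range_match": False, "tech_stack_match": True},
    "events": ["pricing_page_visit", "pricing_page_visit",
               "return_visit_7d", "multi_stakeholder"],
}
print(total_score(example))  # 30 fit + 53 engagement = 83
```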
Time Decay
Apply a decay function that reduces engagement scores over time. A common approach: reduce engagement points by 20% for every 14 days of inactivity. An account that scored 50 on engagement but hasn't visited in six weeks (three decay periods) drops to around 26. This ensures your sales team focuses on currently active accounts, not historically active ones.
Decay should only apply to engagement scores, not fit scores. A company doesn't stop being in your ICP because they went quiet.
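One way to express that decay, using the 20% per 14-day figures from above (compounding, and applied only to the engagement portion):

```python
def decayed_engagement(engagement_points, days_inactive,
                       decay_rate=0.20, period_days=14):
    """Reduce engagement points by 20% for every full 14-day period of inactivity.

    Fit points are never decayed; pass only the engagement portion here.
    """
    periods = days_inactive // period_days
    return engagement_points * (1 - decay_rate) ** periods

# Six weeks of silence = three decay periods: 50 -> 40 -> 32 -> ~26.
print(round(decayed_engagement(50, 42)))  # 26
```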
Negative Scoring
Add score deductions for signals that indicate poor fit or low purchase probability:
- Career page visits: -5 points (they're job hunting, not buying)
- Student/educational email domain: -20 points
- Country/region outside your serviceable market: -15 points
- Competitor employee (identified via company domain): -30 points
Negative scoring prevents false positives. Without it, a competitor doing extensive research on your site could trigger a sales alert — wasting everyone's time.
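A sketch of how those deductions might be applied. The competitor domains and serviceable-country list are placeholders for your own; email domain, country, and page-path data are assumed to be available on the account record.

```python
COMPETITOR_DOMAINS = {"rivalcorp.com"}           # placeholder: your known competitors
EDUCATIONAL_SUFFIXES = (".edu", ".ac.uk")
SERVICEABLE_COUNTRIES = {"US", "CA", "GB"}       # placeholder: your serviceable market

def negative_adjustment(account):
    """Return the total deduction for disqualifying or low-intent signals."""
    penalty = 0
    if account.get("visited_careers_page"):
        penalty -= 5
    if account.get("email_domain", "").endswith(EDUCATIONAL_SUFFIXES):
        penalty -= 20
    if account.get("country") and account["country"] not in SERVICEABLE_COUNTRIES:
        penalty -= 15
    if account.get("email_domain") in COMPETITOR_DOMAINS:
        penalty -= 30
    return penalty
```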
Setting Thresholds That Work
The threshold at which a lead gets routed to sales is where most scoring models fail. Set it too low and sales drowns in unqualified leads. Set it too high and hot prospects go cold while waiting to cross an arbitrary line.
Don't pick a threshold based on intuition. Use your historical data:
Step 1: Score your last 100 closed-won accounts retroactively. What would their scores have been at the point they first engaged with sales? Look for natural clustering: closed-won accounts typically concentrate within a fairly narrow score range.
Step 2: Do the same for your last 100 closed-lost opportunities. Compare the score distributions. The ideal threshold is the point where closed-won density is high and closed-lost density is low.
Step 3: Add a "warm" tier. Instead of a single threshold (below = marketing, above = sales), create three tiers:
- Cold (0–30): Marketing nurture only. Automated email sequences and retargeting.
- Warm (31–60): Marketing-qualified. SDR team monitors and engages via personalized outreach when ready.
- Hot (61+): Sales-qualified. Immediate routing to account executive with full context.
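A rough sketch of the step 1–2 comparison: retroactively score both groups, then pick the cut point where closed-won density is high and closed-lost density is low. The candidate thresholds and the example scores below are placeholders for your own historical data.

```python
def pick_threshold(won_scores, lost_scores, candidates=range(20, 81, 5)):
    """Pick the score that best separates closed-won from closed-lost accounts.

    For each candidate threshold, measure the share of won deals at or above it
    minus the share of lost deals at or above it, and keep the widest gap.
    """
    def share_above(scores, t):
        return sum(s >= t for s in scores) / len(scores)

    return max(candidates,
               key=lambda t: share_above(won_scores, t) - share_above(lost_scores, t))

# Retroactive scores at the point each account first engaged with sales.
won = [72, 65, 58, 80, 61, 69, 55, 74]
lost = [30, 42, 25, 51, 38, 29, 44, 33]
print(pick_threshold(won, lost))  # 55: every win above it, every loss below it
```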
Review and adjust thresholds quarterly. As your content library grows, scoring distributions shift. A threshold that was perfect six months ago might need adjustment as engagement patterns evolve.
Automating the Handoff to Sales
A scoring model is only as good as the action it triggers. When an account crosses your hot threshold, the handoff to sales should be instantaneous and information-rich.
What to include in the sales alert:
- Company name and key firmographic details
- Current score and what drove it (e.g., "Score: 72 — pricing page viewed 3x this week, comparison guide downloaded, 2 stakeholders active")
- Pages visited in the last 7 days, ranked by intent signal
- Known contacts at the account (from form fills or CRM data)
- Recommended talk track based on content consumed (e.g., "Interested in API integration and enterprise security — emphasize technical capabilities")
Route these alerts to the right rep automatically. If the account has an owner in your CRM, route there. If not, use round-robin assignment or territory-based routing. The worst outcome is a hot alert that sits in a shared inbox for 48 hours because nobody owns it.
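A sketch of that handoff: assembling the alert payload and picking an owner. The CRM lookup and notification functions are injected stand-ins for whatever your stack provides (a CRM API, Slack, email); none of the calls here belong to a specific product.

```python
import itertools

# Round-robin pool used only when the account has no CRM owner (placeholder names).
_rep_cycle = itertools.cycle(["rep_a@example.com", "rep_b@example.com"])

def build_alert(account):
    """Assemble the information-rich alert described above."""
    return {
        "company": account["name"],
        "score": account["score"],
        "score_drivers": account["score_drivers"],       # e.g. "pricing page viewed 3x this week"
        "recent_pages": account["pages_last_7_days"],     # ranked by intent signal upstream
        "known_contacts": account.get("contacts", []),
        "recommended_talk_track": account.get("talk_track"),
    }

def route_alert(account, crm_owner_lookup, notify):
    """Send the alert to the CRM owner if one exists, otherwise round-robin."""
    owner = crm_owner_lookup(account["name"]) or next(_rep_cycle)
    notify(owner, build_alert(account))
    return owner
```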
Speed matters here. Research from Harvard Business Review found that responding to leads within five minutes is 21x more effective than responding after 30 minutes. Your scoring model identifies the right accounts. Your automation ensures sales acts on them before the buying window closes.
Common Scoring Mistakes to Avoid
Scoring individual contacts instead of accounts. In B2B, the buying decision is made by a committee, not a person. If three people from the same company each visit your site once, that should score higher than one person visiting three times. Aggregate signals at the account level.
Over-weighting form fills. A form submission is a clear signal, but teams often assign 30–50 points to a single form fill. This means any account that downloads one gated asset jumps near the hot threshold, regardless of other intent signals. Cap form fills at 10 points and let behavioral signals do the heavy lifting.
Never recalibrating. Your scoring model is a hypothesis. After three months, compare scores at the point of SQL conversion for accounts that closed vs. accounts that didn't. If closed-won accounts don't consistently score higher than closed-lost at the SQL stage, your model isn't predictive — it's decorative. Recalibrate weights based on actual outcomes.
Ignoring inbound vs. outbound context. An inbound lead that scored 45 is fundamentally different from an outbound prospect that scored 45. The inbound lead came to you — their baseline intent is higher. Consider applying a 10–15 point bonus for inbound origin, or maintain separate thresholds for inbound and outbound accounts.
Building complexity before proving the basics. Teams sometimes build 50-variable scoring models with machine learning before proving that a simple model works. Start with 10 variables. Prove it predicts conversion. Then add complexity where it improves accuracy. A simple model that the sales team understands and trusts will outperform a sophisticated model they ignore.
Using AI to Improve Scoring Over Time
Once you have 6–12 months of scoring data paired with closed-won/closed-lost outcomes, you can train a predictive model that goes beyond rules-based scoring. Machine learning models can identify non-obvious patterns — maybe accounts that visit your careers page before your product page actually convert at a higher rate (they're evaluating you as a long-term partner, not just a vendor).
Start with logistic regression or a gradient-boosted tree model. Feed it your historical engagement data and conversion outcomes. The model will learn which signal combinations predict conversion better than your manually assigned weights.
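A starting-point sketch with scikit-learn's logistic regression. The feature columns and the tiny arrays below are synthetic placeholders standing in for your exported engagement history and closed-won/closed-lost labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# One row per account; columns: pricing visits, return visits in 7 days,
# unique stakeholders, content velocity. Labels: closed-won (1) / closed-lost (0).
# Placeholder data; in practice, export this from your CRM and analytics.
X = np.array([
    [3, 2, 2, 1.5],
    [0, 0, 1, 0.1],
    [1, 1, 1, 0.5],
    [4, 3, 3, 2.0],
    [0, 1, 1, 0.2],
    [2, 2, 2, 1.0],
    [0, 0, 1, 0.0],
    [3, 1, 2, 0.8],
])
y = np.array([1, 0, 0, 1, 0, 1, 0, 1])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)
model = LogisticRegression().fit(X_train, y_train)

# Does the learned model rank converters higher than non-converters?
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```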
But keep the rules-based model running in parallel for at least two quarters. Compare predictions. If the ML model is more accurate, gradually shift trust toward it. If it's not, your rules-based model is already good enough — and far easier to explain to sales.
What to Do Next
Pull a list of your last 50 closed-won accounts. Go back to your analytics and CRM data to reconstruct what their website engagement looked like in the 30 days before they entered your pipeline. Look for patterns: Which pages did they visit? How many times did they return? How many stakeholders were involved? Those patterns are your scoring model's foundation — real intent signals from real buyers, not theoretical weights assigned in a brainstorming session.