Sales forecasting is the process of predicting future revenue based on pipeline data, historical trends, and rep-level deal assessments, and it's the capability CROs are most frequently judged on.
It combines quantitative data (pipeline value, stage conversion rates, historical patterns) with qualitative judgment (rep confidence, deal momentum, competitive dynamics) to predict how much revenue the team will close in a given period. Accurate forecasting is the single most visible skill that separates great CROs from average ones.
Forecasting Methods
There are four common approaches. Bottom-up: sum individual rep forecasts based on deal-level assessments. Top-down: apply historical conversion rates to current pipeline value. Weighted pipeline: multiply each deal's value by its stage probability (Stage 3 = 40%, Stage 5 = 80%). AI-assisted: tools like Clari and Gong use machine learning on historical deal data to predict outcomes. Most CROs use a combination, triangulating bottom-up rep calls with weighted pipeline and AI signals.
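The weighted-pipeline method above is simple arithmetic. A minimal sketch, with hypothetical deals and the stage probabilities mentioned (Stage 3 = 40%, Stage 5 = 80%):

```python
# Illustrative weighted-pipeline calculation. Deal data and the
# stage-probability table are hypothetical examples, not a standard.

STAGE_PROBABILITY = {3: 0.40, 5: 0.80}

deals = [
    {"name": "Acme renewal", "value": 120_000, "stage": 5},
    {"name": "Globex expansion", "value": 80_000, "stage": 3},
]

def weighted_pipeline(deals, probabilities):
    """Sum each deal's value multiplied by its stage probability."""
    return sum(d["value"] * probabilities[d["stage"]] for d in deals)

print(weighted_pipeline(deals, STAGE_PROBABILITY))  # 128000.0
```

In practice the probability table would be derived from historical stage-to-close conversion rates rather than set by hand.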
Forecast Accuracy Benchmarks
Best-in-class sales organizations forecast within 5-10% of actual results; the industry average is 15-25% variance. Consistent under-forecasting (sandbagging) and consistent over-forecasting (happy ears) are both problems. CROs track forecast accuracy by rep to identify who needs coaching: a rep who misses forecast by 30% every quarter either can't qualify deals or won't give you honest assessments.
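Tracking rep-level accuracy is a straightforward variance calculation. A hypothetical sketch, using the 5-10% and 30% thresholds cited above (rep names and numbers are invented):

```python
# Hypothetical sketch: signed forecast variance by rep, flagging
# coaching candidates. Threshold and data are illustrative.

def forecast_variance(forecast, actual):
    """Signed variance as a fraction of forecast: positive = over-forecast,
    negative = under-forecast (sandbagging)."""
    return (forecast - actual) / forecast

reps = {
    "Alice": (500_000, 470_000),  # +6%: within best-in-class range
    "Bob": (400_000, 280_000),    # +30%: consistent miss, coaching flag
}

for rep, (forecast, actual) in reps.items():
    v = forecast_variance(forecast, actual)
    flag = "coach" if abs(v) > 0.25 else "ok"
    print(f"{rep}: {v:+.0%} {flag}")
```

The signed value matters: a persistent negative variance points at sandbagging, a persistent positive one at happy ears, and the coaching conversation differs for each.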
The CRO Forecasting Cadence
Most CROs run a weekly forecast cadence: Monday pipeline review with managers, Tuesday deal inspection on at-risk opportunities, Wednesday/Thursday rep-level coaching, Friday updated commit to the CEO/board. Monthly and quarterly roll-ups feed into board reporting. Our job posting data shows 'forecasting accuracy' or 'revenue predictability' mentioned in 42% of CRO postings, making it the most-cited operational capability.
Common Mistakes with Forecasting
Relying on rep self-reporting without data validation. Reps are systematically optimistic about their deals. They'll call a deal 'commit' because the prospect said they liked the demo, not because they've confirmed budget, timeline, and procurement approval. CROs who build forecasts purely on rep calls will over-forecast every quarter. The fix: triangulate. Take the rep's call, compare it to stage-weighted probability, and overlay AI predictions from tools like Clari or Gong. Where all three agree, you have confidence. Where they diverge, you have a coaching conversation.
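The triangulation logic can be sketched as a simple agreement check across the three sources. The 10% tolerance and the inputs are assumptions for illustration; real AI predictions would come from a tool like Clari or Gong, not a hand-entered number:

```python
# Sketch of three-source triangulation: rep call, weighted pipeline,
# and AI prediction. Tolerance is an assumed threshold, not a standard.

def triangulate(rep_call, weighted, ai_pred, tolerance=0.10):
    """Return 'confident' if all three sources fall within tolerance of
    their mean, else 'investigate' (i.e. a coaching conversation)."""
    sources = (rep_call, weighted, ai_pred)
    mean = sum(sources) / 3
    spread = max(abs(s - mean) / mean for s in sources)
    return "confident" if spread <= tolerance else "investigate"

print(triangulate(1_000_000, 950_000, 980_000))   # sources agree
print(triangulate(1_200_000, 900_000, 950_000))   # rep call runs hot
```

A divergence flag doesn't say which source is right; it says where to spend inspection time.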
In Practice
A typical CRO forecast cadence looks like this. Week 1 of quarter: establish the initial forecast from pipeline and historical conversion rates. Weeks 2-4: weekly deal reviews with managers, focusing on commit and best-case deals. Weeks 5-8: tighten the forecast, moving deals between categories based on deal progress, not rep optimism. Weeks 9-12: lock the commit number and focus execution on closing committed deals. Every week, the CRO should be able to tell the CEO within 5-10% where the quarter will land. Anything worse than that means the pipeline data or the inspection process is broken.
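The "within 5-10%" standard reduces to a one-line check once the quarter closes. A minimal sketch, with hypothetical numbers:

```python
# Illustrative check against the 5-10% landing band described above.
# The function name and inputs are hypothetical examples.

def forecast_grade(commit, actual, band=0.10):
    """Compare the committed number to where the quarter landed."""
    miss = abs(commit - actual) / actual
    return "healthy" if miss <= band else "inspect pipeline/process"

print(forecast_grade(4_800_000, 5_000_000))  # 4% miss
print(forecast_grade(3_500_000, 5_000_000))  # 30% miss
```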
Real-World Example
A CRO at a $60M ARR company inherited a team that had missed forecast by 15-25% for 5 consecutive quarters. She implemented a three-source forecast: rep call (bottom-up), weighted pipeline (mathematical), and AI prediction (Clari). Each week, she presented all three side by side. When they converged, confidence was high; when they diverged, she investigated. The rep call was consistently 15% higher than the weighted pipeline, revealing systemic over-forecasting by the team. She recalibrated by requiring reps to justify commit deals against 5 specific criteria. Forecast accuracy improved to +/-6% within 3 quarters.