The dominant approach to sales forecasting - in which every deal in a pipeline is assigned a probability based on its current funnel stage - is a prime example of the structural problems plaguing sales organizations. It produces flawed insights, reinforces weaknesses, and strips salespeople of accountability and learning opportunities, leaving them to play an increasingly passive role.
There is, however, a better way. We all have deals in an early pipeline stage that are all but certain to close and deals at later stages that live on a wing and a prayer. Decoupling forecasting confidence from pipeline stages gives us the ability to learn, to react, and to break out of the cycle of bad forecasting baked into our processes.
This cycle of sales processes consistently producing wildly variable, often unpredictable and disjointed outcomes has been a millstone on sales success for the past 50 years. But as I said last week to Mike Donelly, my cohost on The Blackline Between Sales & Marketing Podcast, we have to stop saying that sales is broken. It doesn’t work - for buyers or sellers - but it isn’t broken; it’s doing exactly what it’s designed to do.
As I noted on the podcast, it’s time to accept that the changes required to fix selling are different in kind, not just in degree. You can’t lean into best practices built on a flawed process and expect to achieve the results you want. We must rethink and redesign the underlying structures that drive outcomes, and sales forecasting is Exhibit A among the outdated, innovation-stymying, fundamentally flawed structures that need a serious overhaul.
Here are descriptions of five real-world (deidentified) opportunities that were recently in Lift's project sales pipeline. Our pipeline has six stages - the sixth of which is closed - and we’re about to walk through the specifics of each opportunity and why pipeline-stage-based forecasting paints a completely broken picture:
Company A is looking to implement a new application in their tech stack and is considering us for the design and implementation work.
They have not yet finalized their decision as to which application they will be using. They’re down to the final two choices. Our opportunity to win the sale is dependent on which application they choose.
We’ve just learned that they are leaning heavily against the application we would implement.
This opportunity is in the fifth stage of the pipeline, with a full scope of work and proposal delivered and reviewed.
Company B is looking to us to help them improve and scale their customer acquisition efforts. The focus of the engagement is implementing the CRM they have in place, advising them on improvements to their overall sales approach, and training them when the implementation is complete.
We uncovered some red flags through the discovery and design parts of our sales process, one of which was the constant changing of desired expectations and misalignment within their organization.
Despite these red flags, we’ve determined that they are not big enough to justify disqualifying the opportunity, and we’ve moved through to making recommendations and providing our scope and proposal.
After our meeting to review our proposal, we agreed to meet again in 10 days. In the interim, the members of the prospect’s team committed to watching two of our videos to understand the strategy behind our recommendations and to discussing our proposal internally, so that at the meeting we could clarify any final issues that would prevent them from making a decision.
It’s 24 hours before the meeting, none of the videos have been watched, and no one on their team has accessed the proposal.
This opportunity is in the fifth stage of the pipeline.
Company C is an existing customer for whom we provide ongoing implementation support.
In a recent review, we identified an area of conflict that, if addressed, could improve performance for their sales team.
On completing the diagnosis stage, we determined that, in the near term at least, the identified problem is a relatively low priority. The customer told us they would still like to consider addressing it and asked us to define what would need to be done to solve the problem and provide a proposal.
This opportunity is in the third stage of our pipeline.
Company D is also an existing customer.
They’ve identified an important area that needs to be addressed and have asked us to determine the best plan of action and provide a proposal.
The customer has determined that addressing this problem is a very high priority and has asked us to work as fast as possible to provide our recommendations.
This opportunity is in the third stage of our pipeline.
Company E is a brand new prospect. They’ve learned about us by following me on social media, subscribing to our blog, and regularly utilizing the resources we provide online.
A VP has reached out to us because the executive team has prioritized developing a true revenue operations capability. It was our content that directly led to them prioritizing this.
We just finished our first discovery conversation with the senior team and have agreed to conduct a complete analysis to make recommendations and, ultimately, a proposal for them. The prospect has committed to sharing specific examples of their current processes and has provided access to core elements of their tech stack for our review. Everything about this company and the discovery meeting indicates there is a solid fit.
This opportunity is in the second stage of our pipeline.
You’re the CRO, and you have a board meeting next week where you need to provide an updated forecast. How do you put that forecast together?
These five opportunities are just a fraction of your pipeline, so reviewing every deal individually is not viable. Besides, this is one of the primary reasons your company has invested so much in your CRM. Putting together a forecast for any reason should be easy, shouldn’t it?
To keep things simple for this illustration, I’m going to isolate the forecast on only these five opportunities, value each opportunity at $50,000, and assume that all five opportunities have the same targeted close date.
The dominant approach to pipeline forecasting assigns a “likelihood to close” probability to each stage, with the probabilities increasing at each step. The backend calculation would look something like this:
| Stage | Probability | # of Opps | Opp Value | Forecast Value |
| --- | --- | --- | --- | --- |
| 1 | 10% | 0 | $0 | $0 |
| 2 | 20% | 1 | $50,000 | $10,000 |
| 3 | 40% | 2 | $100,000 | $40,000 |
| 4 | 70% | 0 | $0 | $0 |
| 5 | 90% | 2 | $100,000 | $90,000 |
| Total | | 5 | $250,000 | $140,000 |
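The stage-weighted math is simple enough to sketch. Here is a minimal Python illustration using the five example opportunities from this scenario (the names and values come straight from the setup above):

```python
# Stage-based weighted forecast: each open opportunity is weighted by the
# close probability assigned to its current pipeline stage.
STAGE_PROBABILITY = {1: 0.10, 2: 0.20, 3: 0.40, 4: 0.70, 5: 0.90}

# The five deidentified opportunities, each valued at $50,000.
opportunities = [
    {"name": "A", "stage": 5, "value": 50_000},
    {"name": "B", "stage": 5, "value": 50_000},
    {"name": "C", "stage": 3, "value": 50_000},
    {"name": "D", "stage": 3, "value": 50_000},
    {"name": "E", "stage": 2, "value": 50_000},
]

forecast = sum(STAGE_PROBABILITY[o["stage"]] * o["value"] for o in opportunities)
print(f"Stage-weighted forecast: ${forecast:,.0f}")  # $140,000
```

Notice that the rep never enters a judgment anywhere in this calculation - the stage alone drives the number.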
Traditional forecasting would weight opportunities A and B more than twice as heavily as C and D, and more than four times as heavily as opportunity E.
What’s more, it would forecast the same probability for the two existing-customer opportunities (C & D) when it’s clear that the two are not equally likely to be won.
Several years ago, I read Annie Duke’s book Thinking in Bets. She shared that one of the most powerful things anyone can do to improve the quality of their decisions and actions is to establish a confidence level for what they think. For example, it’s not good enough for a sales rep to say, “I think we’ll win this deal.” They should also share how confident they are in that statement.
After reading the book, I did a little test and started asking reps how confident they were each time they made the statement, “I think we’re going to win this one.” The average probability in my tiny, very non-scientific survey was 33%. I was shocked by this. They sounded 100% confident, but when I made them “make a bet,” that confidence disappeared.
I immediately went into our CRM and added a new property - Forecast Confidence. To keep things simple for everyone, I made a rubric for establishing the confidence level, mapping levels one through five to probabilities ranging from 5% to 95%.
Using this rubric, let’s forecast again:
| Confidence Level | Probability | # of Opps | Opp Value | Forecast Value |
| --- | --- | --- | --- | --- |
| 1 | 5% | 2 | $100,000 | $5,000 |
| 2 | 33% | 1 | $50,000 | $16,500 |
| 3 | 50% | 0 | $0 | $0 |
| 4 | 67% | 1 | $50,000 | $33,500 |
| 5 | 95% | 1 | $50,000 | $47,500 |
| Total | | 5 | $250,000 | $102,500 |
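The same five opportunities can be reweighted by the rubric. One caveat: the table only gives the count of opportunities at each confidence level, so the per-opportunity assignments below are my illustrative guess, consistent with those counts:

```python
# Confidence-based forecast: each opportunity is weighted by the rep's
# stated confidence level, mapped to a probability via the rubric.
CONFIDENCE_PROBABILITY = {1: 0.05, 2: 0.33, 3: 0.50, 4: 0.67, 5: 0.95}

# Illustrative confidence assignments (only the counts per level are
# given in the scenario: two at level 1, one each at levels 2, 4, and 5).
opportunities = [
    {"name": "A", "confidence": 1, "value": 50_000},
    {"name": "B", "confidence": 1, "value": 50_000},
    {"name": "C", "confidence": 2, "value": 50_000},
    {"name": "D", "confidence": 5, "value": 50_000},
    {"name": "E", "confidence": 4, "value": 50_000},
]

confidence_forecast = sum(
    CONFIDENCE_PROBABILITY[o["confidence"]] * o["value"] for o in opportunities
)
stage_forecast = 140_000  # from the stage-weighted calculation earlier

print(f"Confidence-weighted forecast: ${confidence_forecast:,.0f}")
overestimate = stage_forecast - confidence_forecast
print(f"Stage-based overestimate: ${overestimate:,.0f} "
      f"({stage_forecast / confidence_forecast - 1:.0%})")
```

The two calculations use identical mechanics; the only change is that the weights now come from a human judgment instead of a pipeline stage.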
There are at least four direct, damaging implications created by the traditional approach:
The first and most obvious is that we’re delivering flawed forecasts, with a high likelihood of overestimating the strength of a pipeline. (In this case, we’d be overestimating by $37,500, or roughly 37%.) And people wonder why the expected tenure of a typical VP of Sales or CRO is so short.
What’s worse than assigning an 80% probability to two opportunities when their actual blended probability is 21.5%? You spend far more time and risk far more resources on them than you should. And where do that time and those resources come from? Earlier-stage opportunities that have far more potential but need more “care and feeding.”
What should a sales rep do with a late-stage opportunity they believe has an 80-90% chance of winning? Whatever they’ve been doing. To use a football term, they play a prevent defense.
But what should they do if it’s viewed as a 10-20% chance? One option is to opt out, either formally or informally. The other is to change course.
Let me share with you what we’ve been doing with the first opportunity I described above. We realized that they were heavily leaning in the direction of making a choice that not only would eliminate us from consideration but was one we truly believed would be the wrong decision for them. For those reasons, we set about redefining the perception of that choice. As the final decision has not been made, it’s too soon to tell, but the approach has increased our chances.
This is the implication that kills me. Everyone knows that the traditional approach doesn’t work, but they don’t feel like they can do anything about it. So what do they do? They ignore stage-based probabilities until the forecast is due and then scramble to pull something together. It never gets better.
We regularly review wins and losses, but we do it a bit differently than most. First, we look at opportunities that were won but were forecast at a confidence level of 1-3 within 30 days of closing. (The period can vary depending on the context of the sales/buying process.) We want to know why we thought we wouldn’t win and apply what we learn to other low-probability scenarios that could benefit.
We then look at losses that were rated 3-5 before closing. We’ve likely overallocated time, attention, and resources to these opportunities. What led us to think we would win? What can we learn from that?
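The two review filters described above can be sketched as simple queries over CRM deal records. This is only a sketch; the field names and example deals are hypothetical, not from any particular CRM:

```python
# Win/loss review filters:
# 1. "surprising wins"  - won deals rated at confidence 1-3 within 30 days of closing
# 2. "surprising losses" - lost deals rated at confidence 3-5 before closing
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=30)  # adjustable per sales/buying cycle

# Hypothetical deal records with a history of (date, confidence) ratings.
deals = [
    {"name": "X", "outcome": "won", "close_date": date(2024, 3, 1),
     "confidence_history": [(date(2024, 2, 10), 2), (date(2024, 2, 25), 4)]},
    {"name": "Y", "outcome": "lost", "close_date": date(2024, 3, 5),
     "confidence_history": [(date(2024, 1, 15), 4), (date(2024, 2, 20), 5)]},
]

def surprising_wins(deals):
    """Won deals rated confidence 1-3 inside the review window before close."""
    return [d for d in deals if d["outcome"] == "won" and any(
        rating <= 3 and d["close_date"] - when <= REVIEW_WINDOW
        for when, rating in d["confidence_history"])]

def surprising_losses(deals):
    """Lost deals rated confidence 3-5 at any point before closing."""
    return [d for d in deals if d["outcome"] == "lost" and any(
        rating >= 3 for _, rating in d["confidence_history"])]

print([d["name"] for d in surprising_wins(deals)])    # ['X']
print([d["name"] for d in surprising_losses(deals)])  # ['Y']
```

The point of both queries is the same: surface the deals where stated confidence and actual outcome diverged, because that is where the learning lives.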
This review has helped us continually make improvements to our processes. Far more important, though, is the improvement it brings to the sales rep. For example, the rep starts forecasting from the first touchpoint, automatically thinking about what has to happen to increase confidence and what could be happening that would decrease confidence.
Our pipeline reviews are drastically different from the vast majority of reviews I’ve seen (and I’ve seen hundreds).
First, we’re not asking for updates or other data that is (or should be) readily available in the CRM. Our reviews are ongoing mini-assessments, and one of the primary variables we’re testing is forecast confidence.
What’s beautiful is that as these reviews occur over time, reps begin to consider these things without active prompting. As a result, they become better sales strategists even when there is no other training or coaching intervention.
We’re also able to continually improve our approach, both individually and organizationally. For example, we’ve created a series of videos to be used for low-confidence opportunities. These videos serve as a test (i.e., do they watch the videos?) and free our reps to spend more time on higher-probability opportunities. It’s not at all unusual to learn that the videos we’ve created for low-probability opportunities are effective enough that we begin utilizing them for higher-probability opportunities as well.
Predictive scoring applications are growing in popularity. However, while I love the promise of the technology, I have two major concerns with them.
The first is their accuracy. If you’re going to rely on automated, algorithmic scoring systems, they’d better be accurate, and accuracy requires both volume and quality of data.
I am aware of some applications that, in some situations, appear to be very accurate (at least, so people I know who use them tell me). To be clear, I am 100% for using valid predictive scoring applications purely for forecasting purposes. If there’s a system you can rely on to give you a good signal on the likelihood of winning business, by all means, use it for forward-looking forecasting.
This leads to my more significant concern. I do not doubt that these applications will become increasingly accurate and valuable for a growing group of companies/sales organizations.
But even when they’re accurate, unless they can tell you why, I would be careful about using them with my sales team and would likely continue with my current forecast-confidence approach. The reason is that when salespeople are responsible for establishing their confidence levels on an ongoing basis, they create a self-learning loop. They become smarter salespeople. When an application tells me that this opportunity is likely to close and that one is unlikely, but I don’t actively contribute to that determination, I’m likely to become more passive in managing the process, and I’m not going to get smarter.
Simply put, I want sales reps to become smarter.
Oh yeah, I also want to make better forecasts.