
How to Optimise your Go-to-Market Machine


“Prevention is better than cure.”

It’s a common adage, accepted by many as the best approach to everything from your own healthcare to maintaining the smooth running of your car.

Can it equally apply to your SaaS business? Absolutely!

To really optimise your Go-to-Market machine and ensure it’s in tip-top shape, you must be proactive in identifying problems…don’t wait until they find you, or they’re already bigger than you want them to be! 

Here’s a handy guide on how to root out, understand and solve those minor fissures that inevitably arise during rapid scaling, so the whole machine continues to get stronger as it grows.

Below we’ll dig into a simple but robust framework that covers off all the areas that materially impact your Go-to-Market success, including:

  • Execution: health checks to assess the people & processes that will help you scale
  • Targets: the right and wrong ways to set KPIs and measure what’s working (or not)
  • Objectives: evaluating strategy and top level company direction
  • Data sources: the right way to build a balanced picture for making decisions
  • Resource allocation matrix: how to optimise your scarce time and money for maximum returns
  • Change: how to implement change that works, and sticks

To begin, we turn to execution.

Review Execution (weekly to monthly)

There are typically two issues that can arise at the execution level of your Go-to-Market strategy: people & processes.

Let’s take the latter first.

If you’re growing rapidly, you may find that you’ve taken that growth for granted and not codified how it’s being achieved. Which means when a new team member tries to replicate the success, they tackle things in an entirely different manner, with a less positive result. That’s not a set-up for repeatable, scalable success.

If the definition of insanity is repeating the same action while expecting a different result, then in SaaS the inverse is true: expecting the same results from many different approaches.

Enter process. Process doesn’t have to be a Kafkaesque nightmare involving 12 different stakeholders, 4 layers of approval and an algorithm set to ‘no’, which is what many people fear. 

It can be lightweight codification of the key steps taken to consistently achieve the results you’re looking for. For example, you might want to establish a checklist to ensure each piece of content written is optimised for SEO; or that your Account Executives have established a quantifiable pain before adding a prospect to their pipeline.

In essence, processes are a way of distilling the common elements of successful execution so others can follow them.

These last 5 words are critical. The second health check you should conduct after ‘do we have a process in place?’ is ‘is it being followed?’

In the fast paced, innovative world of startups, you may find a process exists on paper, but it’s not effectively operationalised so that your team follows it.

There are two ways to solve this: communication and technology.

If you’ve established that a new process should be followed, have you now added it to your onboarding schedule so all new starters are aware of it? Have you communicated it in your team (or even company) all hands, depending on the importance of it? Have you created a training module and ensured everyone has taken it and passed? Have you added it as a regular 1:1 item, so your direct reports know it’s important? These are a few of the simple steps you can take to help build muscle memory around the new process, so that it starts to become second nature.

If this isn’t enough, and you’re sure the process is a critical component of your future success, then you could also look to bake it into whatever tech systems you’re using to execute the work. This is typically in Salesforce (or whatever CRM you’re using) for the sales team. For example, not allowing a lead to convert into a contact without an Account being attached, or stopping a deal being closed/lost without a reason being selected. 
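
As an illustration of the kind of guardrail a CRM validation rule enforces (the `Deal` fields and the list of loss reasons here are purely hypothetical, not any specific CRM’s schema), a minimal sketch might look like:

```python
# A sketch of a "no closed/lost without a reason" guardrail, expressed in
# plain Python. In Salesforce this would be a validation rule; the field
# names and reason list below are illustrative assumptions.

VALID_LOSS_REASONS = {"price", "competitor", "timing", "no_budget", "other"}

def close_deal_as_lost(deal: dict, reason: str) -> dict:
    """Mark a deal closed/lost, refusing to proceed without a valid reason."""
    if reason not in VALID_LOSS_REASONS:
        raise ValueError(
            f"Cannot close deal {deal['id']!r} as lost: "
            f"reason must be one of {sorted(VALID_LOSS_REASONS)}"
        )
    deal["stage"] = "closed_lost"
    deal["loss_reason"] = reason
    return deal
```

The point of putting the check in the system rather than in a document is that the process can’t be skipped, only followed.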

It’s important you don’t default to this for every process, otherwise you end up with bloated systems (see the Kafkaesque nightmare scenario above), and this creates a host of other problems, from lack of adoption to bad data as people try to shortcut too many requirements.

Once you’ve reviewed processes, you can consider the people element of executing your Go-to-Market plans.

Cohorts are the most useful way for you to spot potential people issues, which is why it’s typically best practice to make commercial hires in pairs.

This gives you a useful reference point, so you can see if one person is considerably more successful than a directly comparable peer.

What if you don’t have a cohort to measure against? Let’s say you have a single product marketer. In this case, the KSAR model is really useful.

Knowledge, Skills and Attitude (KSA) is a very common framework for assessing candidates and employees against a set of objective expectations. I like to add in Results to complete the picture.

Here’s how to test for each.

The biggest clue to there being a knowledge gap is the prevalence of ‘what’ questions. 

This is indicative that they’re lacking the context and basic understanding for them to make well informed, autonomous decisions – and it’s your job to fill those gaps! 

Note here there’s a big difference between ‘what’ questions and a pause because they’re looking for a simple answer, versus probing questions that are challenging the status quo to see if there’s a better way to do things. 

E.g. “I know that because X, we currently do Y…but what about if we tried Z because…” is very different to “what is X process?” or “what is the target?” or “can our product do that?”

A skills gap may present itself in the form of many ‘how’ questions e.g. “great, how would you like this done?” or “how would you achieve this?” 

As the demands of the role increase, you may start to notice a gap between the calibre of their work and your expectations, or that more support and help is needed to achieve your goals.

This is a common scenario as earlier stage startups grow and mature, demanding more specialist skill sets from the generalists who made up the bulk of earlier employees. It may be a signal you need to hire in a functional specialist to help with the skills gap emerging.

A shift in attitude isn’t always obvious. Most people don’t get a personality transplant overnight and wear their newfound bad attitude on their sleeves. Instead it’s often a subtle shift in motivation that can be missed. 

To catch those gradual shifts, activity metrics measured over time are a useful yardstick. Has [activity-level KPI] taken a sudden (or gradual) dip over time? And if so, why? Remember that you’re managing people, not cogs, and events in their personal life may be affecting their motivations in work – take the time to see if you can help support them through a slump.

Engagement is the other signal. Are they less proactive in bringing new ideas to the table? Quieter in meetings? Less inclined to join the team for lunch or other social gatherings? You need to be attuned to these shifts in behaviour. Remember there could be underlying issues affecting the individual that you can help and support them with (not always work related).

Finally, consider their results. Are they meeting expectations? Are they improving or declining versus a previous time period? Are they inconsistent? 

Results normally follow naturally from your team executing good, clear processes and scoring well against the KSA model. 

But what if they’re not? What if you have well documented and operationalised processes and a team with unimpeachable KSARs…and yet you’re not achieving the higher-level business objectives you need?

Review Targets (Monthly to Quarterly)

If everyone’s doing what they’re supposed to, and you’re hitting the targets you’ve set, but your objectives aren’t being met…this is a good time to review those targets.

The surest signs of targets being incorrect are weak correlations.

In any SaaS Go-to-Market strategy, you’ll likely have a funnel of targets that should be strongly correlated. If you produce x-MQLs, you’ll generate y-SQLs; x-Qualified Pipeline will yield y-New Business etc.

If you’re tracking these in monthly or quarterly cohorts, you can keep a close eye on the strength of the correlation between your key SaaS metrics, to ensure you’re always measuring the right targets.

What happens if you notice a divergence?

If, for example, you start to notice that leads are up 5%, MQLs are up 15%, but SQLs and pipeline are down 10% after a few quarters of everything tracking in-line, you know where to start looking for issues in your Go-to-Market system, with a few targeted questions:

  • Does our MQL scoring system still make sense? Has it recently changed?
  • Did we put more resources into a new channel or vertical that’s generating more leads (and many more MQLs) but not meeting our SQL criteria?
  • Has there been a change in the SLA between the SDR and Demand Gen team?
  • Did we lose a superstar SDR or have a new cohort of SDRs learning the ropes?
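
A simple way to surface that kind of divergence automatically is to compare period-over-period changes at each stage and flag any stage moving against the one above it. This sketch uses the example numbers from above (all figures illustrative):

```python
def pct_change(prev: float, curr: float) -> float:
    """Percentage change between two periods."""
    return (curr - prev) / prev * 100

# Illustrative quarter-over-quarter funnel counts (previous, current),
# ordered top of funnel to bottom.
funnel = {
    "leads": (1000, 1050),   # +5%
    "mqls":  (400, 460),     # +15%
    "sqls":  (100, 90),      # -10%
}

changes = {stage: pct_change(*counts) for stage, counts in funnel.items()}

# Flag any stage moving in the opposite direction to the stage above it.
stages = list(changes)
for upper, lower in zip(stages, stages[1:]):
    if changes[upper] > 0 and changes[lower] < 0:
        print(f"Divergence: {upper} up {changes[upper]:.0f}% "
              f"but {lower} down {abs(changes[lower]):.0f}%")
```

The output won’t tell you *why* the funnel has diverged, but it tells you exactly where to point the questions above.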

Reviewing the correlation between your targets this way gives you a laser focus on where to look, and what to ask, before you go too far off track!

But what if there are no weak correlations…and you’re still not hitting your objectives?

You could be experiencing a version of Goodhart’s law, which states that “when a measure becomes a target, it ceases to be a good measure.”

What does this mean in practical terms?

It generally means you may have noticed that certain behaviours correlate with successful outcomes, but instead of codifying these into processes or best practices, they become explicit targets. Eventually the context is lost, and you’ll find teams being managed to KPIs that no longer relate to the original behaviour and intent that made them useful in the first place.

Let’s look at a couple of examples:

It’s best practice to optimise each piece of content for SEO…but this becomes an explicit target to have 100 organic visits for every blog post. This will completely alter the way your team goes about creating content, blindly optimising their output for a KPI that won’t necessarily correlate to the overall health of your GTM strategy.

Similarly, you may target the SDR team on dials because you notice the most proactive and persistent SDRs have higher call volumes. But instead of instilling a culture of hustle and tenacity (what you really want to achieve), you train the SDRs to hit their dial KPIs as quickly as possible, regardless of the output. The opportunity cost is that they could have found more effective ways to achieve the overall goal of setting well-qualified meetings with prospects.

In other words, be careful what you measure, because it might actually take you off-track.

But what if your destination was wrong in the first place? Well, this is where we need to periodically review your objectives!

Review Objectives (Quarterly to Annually)

If you’re executing a flawed strategy flawlessly, you’re just heading further in the wrong direction, faster.

And this scenario can happen more easily than you might imagine.

Your Ideal Customer Profile at $1m ARR might be the entirely wrong ICP at >$10m ARR. Your ‘grab market share (and margins be damned!)’ strategy might be less attractive in the post Covid-19 world (or pre-IPO). Building a new self-service acquisition strategy may not fit your product experience, or lead to horrible unit economics. Setting your objective to ‘go enterprise’ may be premature and lead to terrible churn.

Getting your objectives wrong is the most serious (and expensive) problem, because it means the whole organisation has been pointing in the wrong direction. 

To run a health check on your objectives, you’ll want to conduct periodic stakeholder interviews; and keep an eye on the relationship between your 4 most important metrics: your North Star, New ARR, Net Retention and Unit Economics.

Let’s start with stakeholder interviews (these should include Individual Contributors, team leads, executives and clients), which can follow some simple questions:

Internal:

  • Why was this objective set?
  • Does it still make sense?
  • What evidence do you have to support that?
  • How would you know if it stopped making sense?

External:

  • Why did you choose to buy x?
  • What pain/opportunity were you looking to solve for?
  • Which alternatives did you consider?
  • How closely are we meeting your expectations?
  • Would you recommend us to your peers/colleagues?

These are by no means exhaustive, but what you’re looking out for in the Internal stakeholder responses is a sense of clarity (good) or shrugs (bad).

If folks are saying things like:

  • ‘well it was set because we had x set-up back in [year]’ (referencing out-of-date contexts or constraints)
  • ‘Oh yeah, so-and-so was really passionate about y’ (referencing a departed leader)
  • ‘I’m not sure/don’t know/it’s above my pay grade’ (disconnected from the strategy)
  • ‘It’s been really successful for z, so…’ (blindly copying someone else’s playbook)

Then you have signs of trouble.

And if clients are unsure about their buying reasons or ambivalent about your value, this isn’t a recipe for repeatable, scalable growth. It should trigger an immediate deep-dive on ICP and product-market fit, particularly if paired with poor net retention metrics (discussed more just below).

Now let’s consider those four key metrics and what they may signal:

North star metric is out of whack with your New ARR or net retention…has the core product experience (i.e. what your market values most) shifted? Has it become victim to Goodhart’s Law and become over-optimised, to the detriment of the wider product experience?

Net Retention heading in the wrong direction…what’s different about your new cohorts? Are clients being oversold? Is your product less competitive? Does your ICP need updating?

Unit Economics becoming less healthy (even if ARR is rising)…are you struggling to operationalise new channels? Is there a channel-product fit issue? Is your market less profitable than it was/you thought it would be?

New ARR/New Business Revenue is decelerating…do you really have product-market fit? Is your model (or a part of the machine) broken? Wrong ICP? 
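
As one concrete way to keep an eye on the unit economics signal above, a rough LTV:CAC sketch can be run on a napkin or in a few lines (all inputs here are illustrative assumptions, and real LTV models are more nuanced):

```python
def ltv(arpa_annual: float, gross_margin: float, annual_churn: float) -> float:
    """Rough customer lifetime value: margin-adjusted ARPA over churn rate."""
    return arpa_annual * gross_margin / annual_churn

def cac(sales_marketing_spend: float, new_customers: int) -> float:
    """Blended customer acquisition cost for a period."""
    return sales_marketing_spend / new_customers

# Illustrative annual figures.
ltv_val = ltv(arpa_annual=12_000, gross_margin=0.75, annual_churn=0.15)
cac_val = cac(sales_marketing_spend=900_000, new_customers=60)
ratio = ltv_val / cac_val  # a common rule of thumb is to stay above ~3
```

If ARR is rising while this ratio falls period over period, that’s the “less healthy even if ARR is rising” scenario worth exploring.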

Thankfully, if you catch them early enough, you can adjust your objectives accordingly to get everything back in balance and your SaaS business back on course.

Sources: people & data 

Above I’ve spoken quite a lot about the various places to look for signals that you may have cracks in your GTM operations. But it’s worth being explicit about the importance of (almost) always considering two vital sources of information when trying to identify and understand problems. 

For many SaaS leaders, who pride themselves on being data-driven, there is a danger they will focus on metrics to the exclusion of everything else. This is known as the McNamara Fallacy, after Robert McNamara, the former president of Ford Motor Company and United States Secretary of Defense (serving during the early years of the Vietnam War).

This can lead to some serious mistakes. For example, imagine you see SQLs trending downwards while MQLs continue upwards. If you don’t ask all of the contextual questions previously suggested, your conclusions will be incredibly limited and hugely biased by the way you read the data:

  • The SDRs must be losing their way, fire their manager or replace them
  • MQLs are a useless vanity metric and inbound demand gen is failing

Or perhaps pipeline is growing exponentially and your ARR is starting to lag:

  • The sales team are less effective, we need an upgrade
  • We don’t have enough AEs to keep up with demand, let’s hire more

What you might be missing is that pipeline volume is up, but quality is way down, and you’re about to spend a six-figure sum on more expensive AEs who will arrive and immediately be faced with junk opportunities.

This is why people are always an equally valuable source of insights. They can help you to qualify and contextualise the data.

They can also shed light on less quantifiable sources of potential friction – team misalignment, a bad hire, politics, poor management, stress etc. – which won’t be evident in the data alone.

Resource allocation matrix

We’ve spent some time thinking through how to systematically spot and better understand potential problems in your Go-to-Market operations. But that’s like fielding only a defence on your football team. You also need to know when and where to press an advantage, so you’re scoring goals too. 

Enter the resource allocation matrix, something I developed several years ago, and have found incredibly useful ever since.

[Figure: the resource allocation matrix]

It takes into account two vectors: volume and conversion.

This can work across many different functions: marketing may look at traffic volume and lead conversion rates or lead volume and MQL conversion rates; SDRs may look at email or call volumes and meetings booked conversions across different cadences; AEs might look at volume of discovery calls and closed/won rates across different ICPs; and at an exec level you may look at hiring volume (headcount growth) against conversion to new ARR.

Here’s how you run through it:

Core. As builders, we often enjoy fixing things; and as leaders it’s our natural tendency to want to solve problems. It can therefore feel counterintuitive, but it makes more sense to invest more time and money where things are already going really well, than diverting resources to problem areas. Leaving fires burning can actually be the right things to do. I think of this as pushing on an open door, where the output-to-input required is massively favourable. 

You need to keep an eye out for diminishing returns, but you’ll be surprised at how far you can juice a particular ‘core’ area. This is nicely explained by Brian Balfour in the 3rd part of his excellent series on scaling to $100m ARR, where he talks about the Power Law of distribution:

[Figure: the power law of distribution]

More. Next you can turn your attention to where you’re seeing strong conversions. Why go here before fixing poor conversions in the high volume quadrant? In my experience, solid conversions are a great signal you’ve found some kind of initial traction or ‘fit’, which is generally harder to achieve than volume, which can often be more easily improved with money/bodies.

So if you’ve already achieved some level of fit in your GTM operations, you should now be able to scale this by adding more headcount or budget; and assuming the conversion rates don’t diminish with scale, you can quickly spin out a new ‘core’.

For example, you may have been almost fully reliant on outbound for pipeline generation as you explored how to drive inbound prospects…and after several iterations you discover that a certain group of keywords are driving a decent conversion from PPC…now you can start to invest more in PPC, expand the keywords etc.

Or you may have been tentatively exploring selling to enterprise. Once you’ve got a couple of reps consistently hitting quota, now you can consider hiring more and more to scale up your enterprise sales team.

Explore. Leaky buckets typically take more work to fix, which is why they’re 3rd on the resource allocation matrix. They often represent a misalignment between the source and destination, which means you may have to significantly alter one or the other to plug the leak.

Think of churn because you were selling to the wrong profile of client; or driving a ton of SMB leads from Facebook…for an enterprise solution with a 6-figure sticker price.

Occasionally it can be a quick fix – you’re asking for too much info up front on your contact forms, or you need to tweak the product onboarding experience – but frequently you’ll need to deep dive into the issue (hence ‘explore’) and the solution may be a significant adjustment.

Ignore. No traction and no volume to optimise? Now you’re looking at banging against a locked door. It’ll take a huge amount of energy to even open it a crack. This is where it takes guts to say ‘no’ to the ‘great opportunity’ your colleagues keep telling you about. Strategy is often saying no, to focus on the areas of greatest return.

Only if you see a really compelling case that the payoff could be genuinely transformative to your business would you focus resources in this quadrant. And then, you’d be smart about testing the hypothesis in a lean and agile way, to minimise your investment until you’ve found some traction on which to build.

Other times (if you’ve already made that initial investment), it’s knowing when to say ‘enough is enough’ and canning the project. This is hardest of all, and even the most rational leader will fall victim to the sunk cost fallacy once in a while. But if you find yourself putting x project, or y team consistently into this bucket, you need to take a good hard look and decide if the juice really is worth the squeeze…
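
The four quadrants above can be sketched as a simple classification over the two vectors, volume and conversion. The thresholds and channel figures here are illustrative assumptions; in practice you’d use medians or benchmarks from your own funnel:

```python
def quadrant(volume: float, conversion: float,
             volume_threshold: float, conversion_threshold: float) -> str:
    """Place a channel in the resource allocation matrix."""
    high_vol = volume >= volume_threshold
    high_conv = conversion >= conversion_threshold
    if high_vol and high_conv:
        return "core"      # invest further: pushing on an open door
    if high_conv:
        return "more"      # fit found: add headcount/budget to scale volume
    if high_vol:
        return "explore"   # leaky bucket: deep-dive the conversion problem
    return "ignore"        # locked door: say no, or test very leanly

# Hypothetical channels as (monthly volume, conversion rate).
channels = {
    "ppc":      (5000, 0.06),
    "outbound": (800, 0.08),
    "webinars": (2000, 0.02),
    "events":   (300, 0.01),
}
labels = {name: quadrant(v, c, volume_threshold=1000, conversion_threshold=0.05)
          for name, (v, c) in channels.items()}
```

Re-running a classification like this each quarter makes it obvious when a ‘more’ channel has matured into a new ‘core’, or when a pet project keeps landing in ‘ignore’.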

Changes

Let’s take a quick look at how you might frame your thinking about the best way to fix problems or optimise opportunities. Here are the questions I’d run through:

Why do you think this is the right solution? What evidence exists to support this? What other viable alternatives have been considered and why were they discarded?

How can you test this as quickly and cheaply (but as robustly) as possible? How will you roll out, implement and communicate the change(s) so you’re sure it doesn’t fail due to People or Processes?

What would indicate you have indeed hit on the right solution? What leading indicators can you put in place as your early warning system that things are on/off track?

What impact does it have on other teams? Who needs to be Responsible Accountable Consulted Informed?

What is the opportunity cost? (See the Resource Allocation Matrix).

Is this incremental or fundamental change? How easy is it to reverse (is it a Type 1 or Type 2 decision?)

Wrapping up

Go-to-Market machines are ever-more complicated, and as you scale, the number of failure points only increases. This is partly a good thing (having a single point of failure is nail-biting), but it means your ability as a leader to spot cracks becomes ever more challenging.

And yet to scale to hundreds of people and hundreds of millions in revenue, you need to keep your system as optimised as possible; otherwise the cumulative effect of many tiny problems becomes a drag on your growth rates and team performance.

This framework can help act as your early-warning system, to methodically find, understand and solve issues as they naturally arise.