Here’s one of my favorite clickbait headline tropes: Company X boosted the leads from their traffic by 25% after changing their ‘contact us’ button from red to green.

Sounds awesome, right? How many 25% boosts could be hiding right under your nose? I’m practically planning my next vacation already!

Don’t get me wrong, these are real results and often come from smart, well-respected experts. They’re intellectually rigorous and would pass muster even with a PhD in Statistics.

The problem is, those results probably don’t apply to anything you’re trying to do with your law firm’s marketing. And you could be wasting money, delaying your results, or worse if you assume they do.

Data Dogmatism

The word data gets thrown around a lot in marketing these days. We’re in the age of machine learning and Don Draper has been thrown out in favor of a server farm in Mountain View.

Data are objective and objective is undeniable. It makes about as much sense to argue against the numbers as it does to argue that the sun rises in the west. So we stopped having the spirited discussions over what creative would play out better in the market in favor of running them both and seeing what people went for.

Largely, I think the practice of marketing is better off for it. Data-driven is objective, and that works. Except when it doesn’t.

Company X, which could afford the Stanford-trained statistician and the $5,000/month split-testing software, is working in a totally different world than your law firm, unless your last name happens to be Morgan, Cellino, or Barnes.

You both have data, but only the biggest guys have statistical significance.

The ugly truth about statistical significance for law firms

If you’ve taken a course in statistics or can calculate poker odds in your head, skip this section. Otherwise hang on for a bit.

Let’s say you have a coin. We all know the chances of heads or tails are 50/50. You flip that coin twice and it lands heads both times. By the data, coins land on heads 100% of the time.

Now that’s an objective fact, but we know it’s not representative of the true likelihood.

If the coin were flipped 10 times, the count would more likely land near 5 heads; flip it 100 times and it would more likely still land near 50, and so on. A million coin tosses would come pretty darn close to 500,000 heads. Over large numbers, the true likelihood of something overcomes randomness, which dominates at small numbers.
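If you want to watch that convergence happen, here’s a minimal simulation in Python (the trial counts are just illustrative):

```python
import random

# Simulate fair coin flips and watch the observed heads rate
# drift toward the true 50% as the number of trials grows.
for n in (2, 10, 100, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9,} flips: {heads / n:.1%} heads")
```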

Here’s the problem: most law firm marketing projects are closer to 2 coin tosses than 1 million, almost any way you slice it.

Let’s play with some numbers

Let’s take a solo practice that’s spending $500 per month on AdWords. In most practice areas, that would get you about 50 clicks and, if you’re really hot with conversion rate optimization, about 10 client opportunities.

They try out their red-button/green-button test and get 6 conversions for red vs. 4 for green. What a win! That’s a 50% lift where those dopes in the article only got 25%! Or is it?

Running a chi-square test (check this calculator out here), we find out that it’s not. Not only that, but there’s a 48% likelihood that this outcome was pure chance.
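If you’d rather check that in code than in a web calculator, here’s a minimal sketch using scipy, assuming the 50 clicks split evenly at 25 per variation. It reproduces the same ballpark figure:

```python
from scipy.stats import chi2_contingency

# Rows are the variations, columns are [converted, didn't convert].
# Red: 6 of 25 clicks converted; green: 4 of 25.
table = [[6, 19],
         [4, 21]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"p-value: {p:.2f}")  # ~0.48 -- nearly a coin flip that the "win" was noise
```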

Now let’s say it’s a mid-size firm spending $5,000 a month, and add a zero to all of those numbers: 250 clicks per variation, with 60 conversions for red vs. 40 for green.

There’s still a 2.53% chance this was attributable to randomness, but that’s a level most marketers, including myself, can live with.
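Same sketch with the extra zero, which lands right on that figure:

```python
from scipy.stats import chi2_contingency

# Ten times the traffic: 60 of 250 clicks vs. 40 of 250.
table = [[60, 190],
         [40, 210]]

chi2, p, _, _ = chi2_contingency(table, correction=False)
print(f"p-value: {p:.4f}")  # ~0.0253 -- under the usual 5% threshold
```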

How this could be losing your firm money

There’s no crime in not having statistically significant data. The problem comes in making marketing decisions as if the data were significant.

The first way this can go sideways is making decisions that aren’t backed up. If you pick a winner based on statistically non-significant data, you’re running the risk that the lower-performing variation is the one you keep, leaving your conversion rate, click-through rate, or whatever the test measured underperforming for as long as you keep it, maybe even years!

This can be compounded by the second and more insidious issue – taking an incremental approach to your marketing decisions.

Large data sets afford the ability to be incremental. With tens of thousands of trials, you can run a red/green button test and get statistically significant lifts. There’s an allure in the data-driven approach of almost letting machine learning take over the decision-making process for you. Just plug in elements and let the machine chop away bit by bit as your conversion rate keeps going up.

The problem with that is letting go of the ability to go for big changes. Incremental changes require a lot less courage, and after getting used to that approach you might lose your willingness to take a creative leap with big potential rewards.

Small changes make for small results. Where you end up seeing big results is (not surprisingly) on big things like overhauling a site design, changing up an offer, or switching up creative from end to end.

Making data-driven decisions on a smaller budget

Change the parameters you’re working with.

For example, that solo might have a tough time optimizing a “low funnel” data set like leads because there are fewer of them, but may be able to target a “high funnel” data set like clicks because there are more of them.

For example, take the same solo firm one step up the funnel: each of two ads serves the roughly 1,000 impressions it takes to generate those 50 clicks between them, and one ad moves from a 1.5% click-through rate to 3.5%.
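Here’s that test in code, under the same assumption that each ad served 1,000 impressions:

```python
from scipy.stats import chi2_contingency

# 1,000 impressions per ad: 1.5% CTR = 15 clicks, 3.5% CTR = 35 clicks.
table = [[15, 985],
         [35, 965]]

chi2, p, _, _ = chi2_contingency(table, correction=False)
print(f"p-value: {p:.4f}")  # ~0.004 -- comfortably significant
```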

Now we’re talking. But there’s something built into those test results that’s worth mentioning.

Quick note: the way significance is calculated, the difference has to be larger with smaller data sets. Going from a 1.5% to a 3.5% click-through rate is over a 100% relative difference, which leads to our next point…

Make bigger changes

From a practical perspective, this is how we rank the largest potential impacts at Casefuel (examples in parentheses):

1. Channel (AdWords/call now > Facebook/download/SMS marketing)
2. Offer (“Start rebuilding credit today” vs. “Start for $0 down”)
3. Headline (“We fight for men’s rights” vs. “Trusted Boston Family Law Attorney”)
4. Demographics and geography (city vs. statewide)
5. Landing page template (single column vs. two column)
6. Above-the-fold landing page elements (header image)
7. Minor elements (colors, fonts, body text)

As you’ll notice, we aren’t testing button colors until we’ve gotten the juice out of the bigger levers. It’s hard work to come up with ways to switch your offer, write new headlines, or find creative ways to target new people, but that’s where the big swings live.

Run longer tests

It’s not about spend per month; it’s about total trials for a split test. You might not be able to run a new test every week on a smaller budget, but you might be able to run one and get statistically significant results every month or quarter.
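As a rough planning tool, here’s a sketch of the standard two-proportion sample-size formula, which estimates how many trials each variation needs before a given lift becomes detectable. The traffic math at the end assumes the solo firm’s numbers from earlier:

```python
from math import ceil, sqrt
from scipy.stats import norm

def trials_per_variation(p1, p2, alpha=0.05, power=0.80):
    """Approximate trials per arm for a two-proportion z-test."""
    z_a = norm.ppf(1 - alpha / 2)  # two-sided significance cutoff
    z_b = norm.ppf(power)          # chance of detecting a real lift
    p_bar = (p1 + p2) / 2
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    return ceil(n)

n = trials_per_variation(0.015, 0.035)  # the CTR lift from the example above
print(n)  # ~956 impressions needed per ad
# At the example's ~1,000 impressions per ad per month, that's about a month
# of traffic per test -- consistent with testing monthly rather than weekly.
```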

Defeating Dogma and getting real results

Ironically, the statistics PhDs actually have an easier job when it comes to split-test suggestions, because with the amount of data they’re working with, they don’t need a huge shift to reach significance!

It takes creativity to make big changes, and guts to accept that your results may end up worse than what you started with. That will happen, and when it does you have to dust yourself off and try again. But frankly, with the budgets most law firms are working with, that’s the only way to play.

Don’t be blinded by the science. At best, the data-dogma approach enables lazy thinking when it comes to split tests; at worst, it’s being used like a swinging pocket watch to separate hard-working law firms from their money.

For more information on topics like this, make sure to check out the Law Firm Growth Podcast.