Evidence-Based Leadership with A/B Testing
FRIDAY, OCTOBER 20, 2023

Businesses and organizations often make decisions based on passing trends, fashionable notions, and the success stories of renowned CEOs. Meanwhile, both traditional models and cutting-edge solutions frequently fall short of delivering their promised outcomes. This predicament presents a significant challenge for managers, business leaders, consultants, and policymakers: How can we avoid following fleeting trends and quick fixes, and instead, rely on solid, dependable evidence to support our organizations?  In response to this dilemma, evidence-based management has emerged with the aim of enhancing the quality of decision-making.

It achieves this by leveraging rigorously assessed evidence from various sources, including organizational data, expert insights, stakeholder values, and scientific literature. Evidence-based management elucidates the specific competencies necessary to collect, comprehend, and utilize evidence to make more well-informed choices within organizations.


Evidence-based management essentially means operating in the realm of facts and reality. Google’s distinctive approach to its people strategy was grounded in evidence-based management, which enabled the company to distinguish between what it "hoped to be true" and actual reality. Evidence-based management seeks to enhance the decision-making principles taught in business schools and the frameworks that business managers and leaders routinely employ. Had Enron’s board, risk management department, accounting, finance, and legal teams embraced the principles and practices advocated by evidence-based management, Enron might well still be thriving today. As we transition from the era of intuition to the age of evidence, leaders require guidance for data-driven decision-making.


Application. Why do some leaders appear to be better decision-makers than others?  Why do some teams appear to outperform others?  Why are some leaders and teams more innovative, collaborative, and co-creative than others?  And why is it that you can sometimes spend tons of money to recruit top talent, put that team up against a lesser-qualified, lesser-funded team, and lose?  Pick almost any sport and you’ll find the recurring story of the team that spent tons of money to acquire top talent, only to find it doesn’t work out so well. 


As a professor and as an experimental social psychologist, I’ve been obsessed with questions like these for the better part of two decades, but as a writer I’ve been lucky enough to collect the stories of some of these high-performing leaders and teams.  And when you combine the research with a lot of those different experiences you find a couple things about the way those leaders and their teams operate—the way they make decisions—what sets them apart. 


You also find quite a few misconceptions about what makes for high-performing leadership and teamwork.  For instance, one prevailing belief is that people in teams need to like each other first, before they can be productive.  “Well, we don’t have enough money to get them to like each other.  So, let’s have our team do some team-building exercises.  I know, let’s have them go on a retreat—and just talk.  Let’s get the ladder and do that trust fall exercise first.  Or better yet, let’s have them all take a personality test first and learn about each other’s personalities.”  Has anyone actually looked at whether these things drive key objectives, key results, or KPIs?


What is clear is that the very best decision-makers find, in one way or another, a way to promote an evidence-based culture within the contexts in which they work.


The thing that has fascinated me about great decision-making through the promotion of an evidence-based culture is that it’s: 1) not a particularly new idea; and 2) certainly not a secret.  The high proficiency and performance of evidence-based cultures is extremely well documented.  That’s why it amazes me when I hear so many people selling the new secrets of success, the new secret culture code, the only secret three-step solution you’ll ever need.  No, there really are no secrets with regard to evidence-based reasoning and communication.  And the great thing is that very few organizations actually adopt and promote an evidence-based culture.  If you choose to do so, your competition will be very unlikely to imitate you.  They’ll likely stay right where they are.


Promoting evidence-based communication in the workplace begins at the top, with Evidence-Based Leadership.  Evidence-Based Leadership is a way of thinking and of making the most informed decisions possible.  It means 1) subjecting your business practices and decisions to the same rigorous analysis you would expect from a scientist or medical lab technician; and 2) understanding that every day is another opportunity for companies to use better information to gain an advantage over the competition.


One of the most important components of evidence-based leadership and communication is ensuring that actions are guided by solid research.


When I think about actions guided by solid research, I can’t help but think of another professor, a man by the name of Gary Loveman.  In 1998, Harvard Business School professor Gary Loveman packed it all up and traded his career as a professor for the job of chief operating officer of Harrah’s casinos.  At the time, Loveman knew very little about the details of casino operations, but he set about studying the industry carefully.  He arrived with a professor’s commitment to rigorous analysis, and he soon made that commitment part of the company’s culture.  Indeed, one of the first policies Loveman instituted was a new firing policy.  There were only three ways to get fired at Harrah’s: 1) stealing from the casino, 2) sexual harassment, or 3) instituting a program or policy without first conducting an experiment to support it.


Now, casinos produce lots of data—on all sorts of things (like revenues, occupancy, profitability, and staff turnover).  Loveman was determined to use those data, and to collect more information by constantly running small experiments, to uncover facts that would help the company make more money.  Loveman and his colleagues soon discovered that much of the conventional wisdom in the industry was wrong and changed company practices to reflect what they learned.


In one of Loveman’s first experiments, Harrah’s offered one group the conventional promotional package worth $125 (a free room, two steak dinners, and $30 worth of free chips); BUT, it had been proposed that perhaps a $60 chips promotion would work just as well—this way the casino wouldn’t actually lose anything since their likelihood of winning back all the chips was high anyway—so customers in another group were only offered $60 worth of free chips.

In the end, the $60 promotion didn’t just work as well as the $125 offer.  It worked twice as well: it generated twice the gambling revenue that the $125 offer did, at less than half the cost.  Harrah’s soon learned that its most profitable customers were locals, often older retired or semi-retired people, who visited the casino frequently to play for entertainment.  These people weren’t as interested in discounted rooms and meals; they just wanted the complimentary chips.
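The economics of that comparison can be sketched in a few lines of code. The figures below are purely illustrative (Harrah’s actual revenue numbers aren’t given in the story); the sketch just shows how net return per promotional dollar makes the winner obvious once both arms of the experiment are measured on the same yardstick.

```python
# Hypothetical sketch of a Loveman-style promotion comparison.
# All dollar figures are illustrative, not Harrah's actual results.

def promo_return(revenue_per_guest: float, cost_per_guest: float,
                 guests: int) -> float:
    """Net revenue generated per promotional dollar spent."""
    total_revenue = revenue_per_guest * guests
    total_cost = cost_per_guest * guests
    return (total_revenue - total_cost) / total_cost

# Control: the conventional $125 package (room, two dinners, $30 in chips).
control = promo_return(revenue_per_guest=150, cost_per_guest=125, guests=1000)

# Variation: $60 in chips only (illustrative: double the revenue at under
# half the promotional cost, as in the Harrah's result).
variation = promo_return(revenue_per_guest=300, cost_per_guest=60, guests=1000)

print(f"control:   {control:.2f} net dollars per promo dollar")
print(f"variation: {variation:.2f} net dollars per promo dollar")
```

Framing both offers as return per promotional dollar, rather than raw revenue, is what makes a cheaper promotion comparable to a richer one.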

Loveman devised a deceptively simple evidence-based framework that has proven very powerful: 1) know your customers; 2) target the most valuable among them; and 3) build their loyalty around your brand.  Under Loveman’s watch, this evidence-based formula, and the research-guided actions behind it, tripled revenues at a number of casinos; profits continued to grow, and so did the stock price.


A/B Testing.  Arguably the most valuable tool at the disposal of evidence-based leaders and decision-makers is A/B testing, often referred to as split testing.  A/B testing is a randomized experimental process in which two or more versions of a variable are distributed to different groups, units, or stores, or presented to distinct segments of website visitors simultaneously.  The aim is to determine which version has the greatest positive impact on business metrics.  In essence, A/B testing removes the guesswork from the optimization process and empowers optimization experts to make decisions rooted in data.  In A/B testing, ‘A’ denotes the ‘control,’ the original variable under examination, while ‘B’ signifies the ‘variation,’ a new iteration of that variable.  The version that demonstrably improves your business metric(s) is hailed as the ‘winner.’  Implementing the changes from the winning variation on your tested page(s) or element(s) can effectively enhance your website and boost your business’s return on investment.
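Declaring a ‘winner’ shouldn’t rest on eyeballing two conversion rates; the standard check is a two-proportion z-test on the control and variation counts. Here is a minimal, self-contained sketch (the traffic numbers are hypothetical) using only the Python standard library:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a / n_a: conversions and visitors in the control (A)
    conv_b / n_b: conversions and visitors in the variation (B)
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical split: A converts 200 of 10,000 visitors, B converts 260.
z, p = two_proportion_z(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) is what licenses calling B the winner; with the numbers above, a 2.0% versus 2.6% split on 10,000 visitors per arm clears that bar.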


The metrics for conversion vary according to the specific key result or key objective relevant to your company or organization.  For example, in the context of eCommerce, it might revolve around product sales, while in a B2B setting, it could focus on generating qualified leads.


A/B testing is an integral part of the comprehensive Conversion Rate Optimization (CRO) process, which enables you to accumulate both qualitative and quantitative insights from users.  This collected data can be further utilized to gain insights into user behavior, engagement rates, pain points, and overall satisfaction with website features, including newly introduced elements and revamped page sections, among others.  If you’re not implementing A/B testing on your website, you’re undoubtedly missing out on significant potential business revenue.


Why Any Leader Should Seriously Consider A/B Testing.  B2B enterprises may find themselves dissatisfied with the influx of unqualified leads each month, while eCommerce stores grapple with persistently high cart abandonment rates.  Meanwhile, media and publishing houses wrestle with the challenge of low viewer engagement.  These core conversion metrics are commonly hampered by issues such as leaks in the conversion funnel and drop-offs on payment pages, among others.


It's worth noting that the average conversion rate for online customers, which quantifies the proportion of visitors who become actual customers, stands at a mere 2% across the digital landscape.  In other words, a staggering 98% of website visitors leave without converting.


A straightforward concept, A/B testing involves the presentation of multiple variations of a webpage to real-time traffic and the subsequent measurement of the impact of each version on visitor behavior.  Through the implementation of A/B testing, organizations can enhance the efficiency of their marketing strategies and user experiences, often achieving the remarkable outcome of doubling or even tripling their conversion rates.  Notably, testing has played a pivotal role in the successes of industry giants like Google, Amazon, Netflix, and other prominent tech companies.  Even during the 2012 Presidential race, both Barack Obama and Mitt Romney assembled dedicated teams to engage in A/B testing for their campaign websites.
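Presenting variations to real-time traffic depends on one mechanical detail: each visitor must be assigned to a variant at random, yet see the same variant on every return visit. A common way to get both properties is deterministic hashing of a user ID together with an experiment name. The function below is a minimal sketch of that idea (the IDs and experiment name are made up for illustration):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("A", "B")) -> str:
    """Deterministically bucket a visitor into a variant.

    Hashing (experiment, user_id) spreads visitors evenly across
    variants while guaranteeing the same visitor always sees the
    same version of the page.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same visitor lands in the same bucket every time:
print(assign_variant("visitor-1017", "homepage-headline"))
print(assign_variant("visitor-1017", "homepage-headline"))
```

Keying the hash on the experiment name as well as the user ID means a visitor’s bucket in one test is independent of their bucket in the next, which keeps concurrent experiments from contaminating one another.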


In the past, marketing teams grappled with the challenge of harnessing the potential of A/B testing, primarily due to its reliance on expensive engineering and IT resources.  However, a new wave of technology has emerged, allowing marketers to conduct A/B tests independently, thereby swiftly establishing itself as one of the most potent tools for data-driven decision-making.


The adoption of an A/B testing mindset within a company or organization typically unfolds through four discernible stages:


Intuition and Common-Sense (i.e., Conventional Management Strategies): Initially, decisions are guided by intuition, with limited user feedback and scant data.  Decision-makers rely on intuition, and data utilization is minimal, often accompanied by rudimentary data processing methods.  Causal inferences are drawn from situations that may not necessarily warrant them, and data is not given due respect.


Data-Driven: As the organization progresses, decision-makers begin to incorporate data to complement their intuitive judgments, particularly in scenarios where confidence is lacking.  However, the data may still lack depth and sophistication in data processing.  Causal relationships are sometimes inferred inappropriately.


A/B Testing: At this stage, the company introduces the practice of A/B testing and aligns itself with the well-established practices of successful enterprises.  While A/B testing garners followers, the infrastructure remains in its infancy, and the statistical methods employed may raise questions.  The primary objective here is to obtain a numeric result, although the correctness of the number is not always guaranteed.  There's often an implicit trust in numbers derived from A/B tests.


Sound A/B Testing: The company reaches a level of maturity in A/B testing, gaining a thorough understanding of its fundamentals.  As a result, the organization adopts sound practices, producing reliable and trustworthy numerical data.  With this newfound expertise, they make well-informed decisions regarding whether to proceed or not.


Why not take the short route to better decision-making today, through the better information that sound A/B testing provides?

