There is a downside to businesses that focus heavily on standardization, optimization, and driving out variability: Such organizations leave themselves vulnerable to underinvesting in experimentation and variation, which are the lifeblood of innovation. Good experimentation helps firms better manage myriad sources of uncertainty (for example: Does the product work as intended? Does it address actual customer needs?) when past experience can be limiting. And it is only through such experimentation, which might include structured cause-and-effect tests, informal trial-and-error experiments, and rigorous randomized field trials, that companies can unlock their true capacity for innovation.
When W. James McNerney Jr. became CEO of 3M in the early 2000s, he quickly went about remaking the company into a leaner, more efficient version of itself. He tightened budgets, let go thousands of workers, and implemented Six Sigma, the rigorous process-improvement methodology. On the surface, McNerney’s plan seemed sensible enough. After all, such measures had worked so well at General Electric, where he served as a senior executive for more than a decade. But something was getting lost in 3M’s aggressive drive toward peak efficiency. The company, which had invented Thinsulate, Scotchgard, Post-it notes, and a host of other blockbuster products, was starting to lose its innovation edge. One telling statistic summarized the problem: In the past, one-third of sales had come from new products (released in the past five years), but that fraction had since fallen to one-quarter.1
3M is hardly alone. Many companies have been on a quest to cut waste and increase efficiency. To support that effort, they have adopted quality-control programs like Six Sigma and have encouraged managers to maximize the utilization of resources, to standardize processes, and so forth. And as the worldwide economy took a downturn in the late 2000s, those trends only accelerated. Unfortunately, as 3M discovered, methodologies that were originally designed to stamp out manufacturing variability can sometimes have unintended consequences for innovation when they are applied to the organization as a whole. Indeed, eliminating variability can also drive out experimentation, and experimentation is the lifeblood of innovation.
If anything, companies should experiment more and not less. This is true even when business slows (or, as some might argue, especially during a market downturn). Otherwise, a company’s pipeline of new products, services, and business models could dry up, leaving it extremely vulnerable to the competition. As we shall see, those companies that maintain their experimentation when business is slow will be all the more prepared when the market eventually picks up. Moreover, it is important to note that experimentation has never been cheaper. Computer simulations and rapid prototyping, for example, enable companies to run relatively inexpensive experiments that can answer myriad “what if” questions.2 Also, many companies now have a direct link to their customers, often through the Internet or other IT tools, and this enables timely feedback from various experiments. And analyzing the results of such tests can be done much more cheaply and efficiently than in the past, thanks to more powerful, sophisticated, and cheaper IT tools. Why, then, have some companies been reluctant to increase their experimentation activities?
Innovation Requires Experimentation
Part of the problem is that the average executive does not fully appreciate the crucial role and importance of experimentation. This fundamental issue goes to the very heart of how we learn. To acquire knowledge, we can rely on passive activities (such as reading) or we can participate more actively through observation, exploration, and experimentation. Observers wait for changes to occur and then carefully study what is presented to them. Exploration assumes a more proactive role but still lacks the manipulative character of an experiment. In the sciences, astronomers are perhaps the most patient observers, while anatomists take a more active role when they dissect organisms.
In a perfect business experiment, managers separate an independent variable (the presumed “cause”) from the dependent variable (the “effect”) and then manipulate the former to observe changes in the latter. Ideally, this will then give rise to learning about the relationships between cause and effect that can then be applied to or tested in other settings. In the real world, however, things are much more complex: Environments are constantly changing, linkages between variables are complex and poorly understood, and the variables are often uncertain or unknown. Managers must therefore not only move between observation, exploration, and experimentation; they must also iterate between experiments. This overall process can be daunting, even for experienced managers, and that explains why so many companies have chosen instead to rely on the intuitions of executives and other so-called experts. Gary Loveman, the CEO of Caesars Entertainment, has a term for that: “the institutionalization of instinct.”3 But managerial intuition can often be misleading (if not downright wrong) and is simply no substitute for rigorous experimentation. And experimentation need not be an overwhelming task if companies follow some basic principles.
When managers know all the relevant variables, they can use formal statistical techniques and protocols to develop the most efficient design and analysis of experiments. Such structured experiments, which can be traced back to the first half of the 20th century when they were first deployed in the agricultural and biological sciences, are now being used both for incremental process optimization as well as for studies in which large solution spaces are investigated to find an optimal response of a process.4 The techniques have also formed the basis for improving the robustness of production processes and new products.5
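To make the idea of a structured experiment concrete, here is a minimal sketch of a two-level full-factorial design, a workhorse of classical design of experiments. The factor names and response values are hypothetical, chosen only to show how main effects are read off such a design.

```python
from itertools import product

# A two-level full-factorial design for three hypothetical process
# factors. All factor names and response values are illustrative.
factors = ["temperature", "pressure", "catalyst"]
design = list(product([-1, +1], repeat=len(factors)))  # 2^3 = 8 runs

# Hypothetical measured responses for the 8 runs, in standard order.
response = [60, 72, 54, 68, 52, 83, 45, 80]

def main_effect(i):
    """Average response at the factor's high level minus its low level."""
    high = [y for run, y in zip(design, response) if run[i] == +1]
    low = [y for run, y in zip(design, response) if run[i] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

for i, name in enumerate(factors):
    print(f"{name}: main effect {main_effect(i):+.2f}")
```

With all eight factor combinations run, each main effect is estimated from every observation, which is what makes such designs efficient compared with varying one factor at a time.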
But what if the relevant variables are uncertain, unknown, or difficult to measure? In such cases, experimentation must be much more informal or tentative. With informal experiments (and when very small samples are used), the objective is often hypothesis generation rather than rigorous testing. A manager might be interested in investigating what type of employee bonus will lead to higher productivity, or a software designer might want to know if changing a particular line of code will remove a software error. Such trial-and-error experiments occur all the time and are so integral to innovation processes that they become like breathing: people conduct them without being fully aware that they are experiments.
It is important to note that good experimentation goes well beyond the individual tests and their protocols. It is about the way in which companies use such tests to manage, organize, and structure their innovation processes. Specifically, it is about how firms can learn so that they can better manage various sources of uncertainty when past experience can be limiting. Such sources of uncertainty include those with respect to the R&D process (does the product work as intended?), production (can it be effectively manufactured?), customer needs (does it address actual needs?), and the business itself (does the opportunity justify the investment in resources?). Only by using experimentation to manage those types of uncertainty can companies unlock their capacity for innovation. Indeed, experimentation is inextricably connected to innovation, and managers need to understand that fundamental link. Simply put, there can be no innovation without experimentation. Or, in other words, no product or service can be a product or service without first having been an idea that was subsequently shaped through experimentation.
And that is where companies like 3M make fundamental mistakes when they try to apply a technique like Six Sigma to innovation. In short, companies cannot treat R&D like manufacturing because the two processes are inherently different. In manufacturing, the tasks are mostly repetitive and the activities are reasonably predictable. Not so in R&D, where the tasks are often unique and the activities are difficult to anticipate because of changing project requirements or discoveries along the way. Such fundamental differences have profound implications. In repetitive processes like manufacturing and transaction processing, the goal is to minimize any system slack in order to achieve peak efficiency, and the relationship between added work and required time is straightforward: Add 5% more work, and it may take 5% more time to complete. Things are not so simple with processes that have high variability, such as R&D. The amount of time that projects spend on hold, waiting to be worked on, rises sharply as the system slack decreases.6 Add 5% more work in R&D, and it might take 100% longer to complete the project. And if R&D work involves experiments, long delays will inhibit feedback and learning. The bottom line is that when R&D employees are working near full tilt, project speed, efficiency, and output quality (namely, innovation) will inevitably suffer.
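The nonlinear relationship between workload and waiting time can be illustrated with a standard queueing-theory sketch. The M/M/1 formula used here is a simplifying assumption; real R&D pipelines are messier, but it captures the behavior described above.

```python
# Expected wait in an M/M/1 queue is proportional to u / (1 - u),
# where u is utilization. This textbook model is a simplification,
# but it shows why queues explode as slack disappears.

def relative_wait(utilization):
    return utilization / (1.0 - utilization)

for u in (0.50, 0.80, 0.90, 0.95):
    print(f"utilization {u:.0%}: relative wait {relative_wait(u):5.1f}")

# Moving from 90% to 95% utilization, roughly 5% more work, more than
# doubles the expected time a project spends waiting in the queue.
```

At 50% utilization the wait term is small; near 100% it grows without bound, which is why adding a little work to a nearly full R&D system can delay projects far more than proportionally.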
Learning to Embrace Failures
Another difficult lesson about experimentation (and innovation) is that, to be successful, companies need to be willing to fail. Only through such failures can they discard bad ideas, such as a confusing software interface, a packaging concept that will lead to considerable waste, a drug with dangerous side effects, and so on. And eliminating what does not work will then free people to pursue other solutions. But many companies instead have a “get it right the first time” mentality, which discourages people from pursuing breakthrough ideas. The result: incremental product improvements rather than innovative offerings.
That said, companies have to be smart about how they fail. For one thing, failing early on – when an idea is far upstream so that it can be scrapped without wasting considerable time and resources – is far preferable to failing late in the process. The lesson here is that early “failures” can lead to more powerful successes faster. IDEO, an innovative product development firm, even has a motto for that mindset: “Fail often to succeed sooner.”
Companies, however, are not typically set up to embrace failure. “Our experience has been that most big institutions have forgotten how to test and learn. They seem to prefer analysis and debate to trying something out, and they are paralyzed by fear of failure, however small,” argued T. Peters and R. Waterman in the bestseller “In Search of Excellence.”7 Peters and Waterman wrote those words about 30 years ago, but their keen observation could hardly be truer today.
Of course, not every business has an acute fear of failure. Some, in fact, have a healthy acceptance that failure is just part of the process. Google, for instance, runs myriad experiments to continually improve its search algorithm. In 2010 alone, the company investigated more than 13,000 proposed changes, of which around 8,200 were tested in side-by-side comparisons that were evaluated by raters. Of those, 2,800 were further evaluated on a tiny fraction of the live traffic in a “sandbox” area of the website. Analysts prepared an independent report of those results, which were then evaluated by a committee. That process led to 516 improvements to the search algorithm from the initial 13,000 proposals. In other words, Google’s failure rate is higher than 95%.8 “This is a company where it’s absolutely okay to try something that’s very hard, have it not be successful, and take the learning from that,” contends Eric Schmidt, former CEO of Google.9
The Value of Randomized Field Trials
When conducting experiments with customers like Google does, companies have a powerful tool at their disposal: the randomized field trial. These tests have been invaluable in medicine, helping researchers determine whether a particular treatment is effective or not. The basic concept is simple. Take a large population of individuals with the same affliction and randomly select two groups. Administer the treatment to just one group and closely monitor everyone’s health. If the treated (or test) group does statistically better than the untreated (or control) group, then the therapy is deemed to be effective.
Similarly, randomized field trials can help companies determine whether specific changes (such as a new layout for a chain of retail stores) will lead to improved performance (a significant bump in sales). Consider, for instance, Capital One, the financial services company. From its inception in 1988, the company has based its very existence on the use of randomized field tests to investigate potential innovations, no matter how seemingly trivial. Capital One has used controlled experimentation to test just about everything, including new product and service offerings, operational changes, marketing campaigns, and so on. The company might, for instance, test the color of the envelopes that product offers are mailed in by sending out two batches (one in the test color and the other in white) to determine any differences in the responses.10
Randomized field tests are indeed a powerful experimental tool, but they are not without their challenges. For the results to be valid, the field trials must be conducted in a statistically rigorous fashion. Specifically, people need to be assigned to either the test or control group through a selection process that is purely random to help ensure that the two groups don’t differ in any pertinent way other than with respect to the independent variable being studied. Otherwise, other variables could easily skew the results. In the Capital One envelope example, if a larger percentage of men had been assigned to the test group, a lower response for that group might have nothing to do with envelope color; it might simply mean that men are less apt to respond to such offers. Moreover, both the test and control groups must be representative of the larger customer base. If, for instance, both the test and control groups in the envelope test had contained a much higher percentage of women, then Capital One wouldn’t know for sure whether the results were applicable to men. Thus randomization plays an important role in experimentation: it helps to prevent systematic bias, assigned consciously or unconsciously, and evenly spreads any remaining (and possibly unknown) bias between test and control groups.
Such caveats notwithstanding, randomized field trials have become standard practice in direct marketing, and they have even begun to spread to unlikely areas like the gaming industry. Caesars Entertainment, which operates Harrah’s, Caesars, and other casino resorts, regularly uses controlled experiments to develop and fine-tune its various marketing efforts. The company might, for instance, test which perk – a complimentary meal versus a free night of lodging – would ultimately induce customers to spend more during their stays. Gary Loveman, the CEO of Caesars Entertainment, has famously stated that there are three ways to get fired from the hotel and casino company: theft, sexual harassment, or running an experiment without a control group.11
Going Against Instincts
Given how valuable experimentation can be, the question must be asked: Why don’t companies experiment more? Certainly the drive toward increased efficiency has been an issue, but another factor might also be at play. Consider how senior management often has strong incentives to focus on the near term and get rewarded for sticking to plans. But innovation activities can be highly variable and difficult to plan and predict, especially over short timeframes. Dan Ariely, the noted behavioral economist, contends that businesses often shy away from experimentation because they are not good at tolerating short-term losses in order to achieve long-term gains. “Companies (and people) are notoriously bad at making those trade-offs,” he argues.12 And, as mentioned earlier, such business myopia becomes all the more acute in bad times, when market conditions force many companies to tighten their belts. But not all businesses have fallen into that trap.
Take, for example, ams, the Austrian-based manufacturer of analog semiconductors. Employing around 1,200 people in more than 20 countries, ams develops and manufactures sensors, wireless chips, and other high-performance products for customers in the consumer, industrial, medical, mobile communications, and automotive markets. Typical applications require extreme precision, accuracy, dynamic range, sensitivity, and ultra-low power consumption.
To maintain its technical edge, ams implemented a major initiative for business experimentation in January 2007. Throughout the company, all employees were encouraged to run experiments and submit them to a central coordinator. These activities have distinct learning objectives, and they do not include regular tasks such as feasibility studies and normal project work. Of the proposed experiments, the company approves about two-thirds, but their costs are not measured or accounted for in timesheets or work statements. The point is that management does not oversee the experiments: employees come up with the ideas, design the tests, and run them – all in addition to their normal responsibilities.
To document those activities, the company publishes annual proceedings of the experiments. As of November 2012, ams had documented 369 completed tests, of which more than 80% were technical in nature, about 10% were organizational, and the remainder related to marketing and sales. Bonuses have been awarded to the best experiments, with success measured by learning objectives or outcomes. So far, those bonuses have totaled 124,000 euros. In addition, ams has run company-wide experiments such as a “24h Day” event, during which employees dropped all their regular duties and spent 24 hours nonstop working on their own ideas.
It should be noted that many of those experiments have become the starting points of new projects, product improvements, patents, and new product proposals. As such, they help ensure that ams will have a healthy number of offerings in the pipeline for when the economy recovers. In other words, while other businesses were cutting back their innovation activities, ams not only stayed the course; it upped the ante by launching a major initiative, successfully managed the delicate balance between efficiency and building a culture of experimentation, and empowered its employees to try out new things. And, as a result, the company should be ready for any market upswing, whereas other businesses could easily be caught off-guard.
According to various accounts, companies have been hoarding cash. In mid-2012, for example, corporations in the S&P 500 had stockpiled about $900 billion.13 Certainly, such financial liquidity has its advantages, and investing recklessly on ill-advised initiatives should never be encouraged. But, when it comes to innovation, being too frugal can also have its drawbacks, particularly if the result is that a company’s pipeline of new products and services begins to dry up. And that’s the danger facing extremely efficient businesses that value standardization, optimization, and low variability: They leave themselves vulnerable to underinvesting in experimentation and variation.
That lesson was something that 3M learned the hard way. After CEO McNerney left the firm, the new CEO George Buckley began to undo some of his predecessor’s actions. He increased the R&D budget substantially and freed research scientists from the grips of Six Sigma. “Invention is by its very nature a disorderly process,” explained Buckley. “You can’t put a Six Sigma process into that area and say, well, I’m getting behind on invention, so I’m going to schedule myself for three good ideas on Wednesday and two on Friday. That’s not how creativity works.”14 Buckley’s wise words capture, in a nutshell, why innovation (and experimentation) will never be entirely predictable or highly efficient. Other executives and companies would do well to remember that simple managerial truism.
About the Author
Stefan Thomke is the William Barclay Harding Professor of Business Administration at Harvard Business School in Boston.
References
1. Brian Hindo, “At 3M, a Struggle between Efficiency and Creativity,” BusinessWeek (June 6, 2007).
2. To understand how new technologies have changed the economics of experimentation, see Stefan Thomke, Experimentation Matters: Unlocking the Potential of New Technologies for Innovation (Harvard Business School Press, 2003).
3. “The Experimenter,” Technology Review (February 18, 2011).
4. Beginning with Ronald Fisher in 1921, many articles and books have been written on experimental design. Douglas Montgomery’s textbook Design and Analysis of Experiments (Wiley, 1991) provides a very accessible overview and is used widely by students and practitioners.
5. Techniques for improving product and process robustness (also known as Taguchi methods) are discussed in Madhav Phadke’s book, Quality Engineering Using Robust Design (Prentice Hall, 1989).
6. Stefan Thomke and Don Reinertsen, “Six Myths of Product Development,” Harvard Business Review, May 2012.
7. Thomas J. Peters and Robert H. Waterman Jr., In Search of Excellence: Lessons from America’s Best-Run Companies (Harper & Row, 1982): 134-135.
8. http://www.google.com/competition/howgooglesearchworks.html
9. http://techcrunch.com/2010/08/04/google-wave-eric-schmidt/
10. Jim Manzi, Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society (Basic Books, 2012): 144-145.
11. “The Experimenter,” Technology Review (February 18, 2011).
12. Dan Ariely, “Why Businesses Don’t Experiment,” Harvard Business Review (April 2010).
13. “Dead Money,” The Economist (November 3, 2012): 71-72.
14. Brian Hindo, “At 3M, a Struggle between Efficiency and Creativity,” BusinessWeek (June 6, 2007).