How pollsters fight for relevance in the age of big data

Elections are less than 5% of a pollster’s business. In the other 95%, competition is growing, resources are shrinking—and no one agrees what to do next

[Image: People standing in lines, being categorized by overlaid check marks and X’s (iStock)]

It took a couple of big disappointments to set Nanos Research on the path from small market research firm to nationally recognized pollster. In 2002, the Toronto-based company had submitted proposals for two large corporate contracts and had secured the favour of VP-level executives. But Nanos didn’t win the work in either instance. “The CEOs both said, ‘If Nanos is so good, why haven’t I seen them in the news?’” Nik Nanos recalls. “When it happened twice in one week, I remember coming back to the office and telling my brother John—who I’m partners with—‘We’re going to be in the national polling business, starting today.’”

But while political polling has brought market research companies like Nanos public attention and name recognition, it’s not always positive. David Coletto was sitting in the SUN TV newsroom when the first results from the Alberta general election of 2012 started to trickle in. Earlier that day, the CEO of Abacus Data had told viewers that Danielle Smith of the Wildrose Party would be the province’s next premier, a prediction that polls from his firm and many others supported. Instead, voters gave Alison Redford’s Progressive Conservatives another majority. “I felt ill at times, and yet I still had to perform on TV and explain what was happening,” he recalls. “I was getting asked by journalists, live on-air, ‘Why were the polls so wrong?’ and I didn’t have an answer at that point.”

In the wake of Alberta 2012 and another widely miscalled provincial election in British Columbia the following year, Canadian pollsters were subjected to the scathing critiques of columnists and bloggers of all political affiliations. Their methods were pooh-poohed and their reliability called into question. In the tight three-way race of this year’s federal election, the focus has shifted from the reliability of any single research firm to the wisdom of the collective pollster mind, with news outlets arguing over who can aggregate the polls in the most reliable fashion.

Reasons have since been offered for the high-profile provincial misses—in the case of Alberta, a last-minute swing in voting intentions from Wildrose to the PCs; in B.C., the failure of those who said they’d vote NDP to show up on election day. And though pollsters have called several subsequent elections correctly, methodological concerns persist among amateur statisticians, although their theories are often contradictory or inaccurate.

Despite all the fury, political polling accounts for less than 5% of the revenue of most market research firms. “It’s a loss leader to generate some marketing and get people to consider us,” says Coletto. Yet these high-profile misses are simply the most overt symptom of an industry in turmoil. Market research firms face a threatening combination of technological disruption, upstart competitors and evolving customer expectations. But there seems to be no consensus within the industry about how to respond to these challenges—and that may only be making the situation worse.


 

When market research first rose to prominence in Canada in the 1980s, random digit dialing generated lists of numbers for call centre agents to try. And when the phone rang, most callees were happy to give a few minutes of their time and their opinions to the person on the other end of the line. “Telephone surveys were the most popular way to do surveys, and there was a barrier to entry in the business, because you needed money and access to a call centre in order to do the work,” explains Nanos. The giants of the industry at the time—Angus Reid’s eponymous firm, American invader Gallup, Allan Gregg’s Decima Research and Goldfarb Consultants—carved up the market between them, delivering the authoritative word on everything from elections to electronics brands.

Those days are gone. Callees now see sharing their opinions as a waste of time, not a novelty. The Pew Research Center reported that response rates were down to 9% in 2012, a quarter of what they were just 15 years earlier. Coletto blames the growth of telemarketing. The lower response rates have increased costs, because survey companies must contact a larger number of people to get the same number of answers. Meanwhile, the barrier to entry that telephone calling produced has been wiped away. “Anyone can now put up a website and an online survey for almost no cost, and they can get respondents in there somehow,” says Marshall David Rice, an associate professor at York University’s Schulich School of Business.
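The cost pressure Coletto describes is simple arithmetic. A quick sketch using the Pew figures cited above (roughly 36% response in 1997 versus 9% in 2012; the target of 1,000 completed interviews is a hypothetical, not a figure from the article):

```python
# How many numbers must be dialed to reach a fixed number of completes?
# completes / response_rate = dial attempts needed (rough approximation,
# ignoring bad numbers and callbacks).
completes = 1000  # hypothetical survey target

for year, rate in [(1997, 0.36), (2012, 0.09)]:
    attempts = round(completes / rate)
    print(f"{year}: ~{attempts} numbers dialed for {completes} completes")
```

At a 9% response rate, the same survey takes roughly four times as many call attempts as it did fifteen years earlier, which is where the added cost comes from.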

Traditionalists dismiss web-based surveys, which favour the young and tech-savvy. They don’t like Interactive Voice Response, which collects answers via speech recognition or respondents’ input on telephone keypads, either. (It’s much easier to hang up on a robocall than a human.) But even live agents using random digit dialing are arguably not as effective as they once were. Some polling critics believe phone surveys oversample an older segment of the population who still answer their landlines. But this accepted wisdom isn’t necessarily true; in fact, phone polling may have the opposite problem. Because roughly equal shares of phone numbers now connect to mobile phone–wielding young people and to elderly landline users, it may actually be younger respondents who dominate a sample created by random digit dialing, says Marc-David Seidel, an associate professor at the Sauder School of Business at the University of British Columbia. He blames this, and not the oft-cited low turnout among NDP supporters, for the blown election call in British Columbia.

Sampling has also become less random because many research companies get respondents from panels—lists of people who have agreed to participate in surveys. After the sample has been chosen and the survey conducted, researchers apply their secret sauce: weighting, or adjusting the data to be representative of the population being surveyed. But if you’re looking for a consensus in the industry as to which combination of respondents produces the most accurate results, tough luck. As Rice wryly notes, market research firms have a habit of expounding the virtues of whatever methodology they’ve chosen to employ.
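The weighting step described above can be sketched in miniature. This is a deliberately simplified post-stratification on a single demographic variable with hypothetical numbers; it is not any firm’s actual “secret sauce,” which typically balances many variables at once:

```python
from collections import Counter

def poststratify(sample_groups, population_shares):
    """Compute a weight per respondent so the weighted sample matches
    known population shares for one demographic variable.
    weight = (population share of group) / (sample share of group)"""
    n = len(sample_groups)
    sample_shares = {g: c / n for g, c in Counter(sample_groups).items()}
    return [population_shares[g] / sample_shares[g] for g in sample_groups]

# Hypothetical sample: young respondents are over-represented
# (60% of the sample versus 40% of the population).
respondents = ["young"] * 6 + ["older"] * 4
census = {"young": 0.40, "older": 0.60}

weights = poststratify(respondents, census)
# Each young respondent is down-weighted and each older one up-weighted,
# so the weighted age mix matches the census shares.
```

The disagreement in the industry is less about this mechanic than about what to weight on, and whether weighting can rescue a panel that was never random to begin with.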

Meanwhile, the massive amount of information readily available could empower businesses to cut out the research middlemen altogether. Today, companies can track customers in-store via their smartphones, interact with them on social media and understand their responses using artificial intelligence–powered sentiment analysis. “Twenty years ago, I was at the table by myself,” says Nanos. “I’d be hired to do a study, and I’d come in and tell them, ‘This is the world, the way it is.’ Fast-forward to now—there’s a data mining expert, and there could be a social media sentiment expert, and then there’d be Nik.”

But big data and the technologies that produce it are simply new ways of connecting with people, says Darrell Bricker, Global CEO of Ipsos Global Affairs and one of the industry’s most experienced names. Framed this way, big data is the next stage in the evolution of market research. “There’s way more data than there has ever been,” he says, “and it’s the job of people in market research to try to find a way of taking all of this data, reducing it to a story and explaining to our clients how they should change their business as a result of what we learned through this.”

Polling can also help market research firms better understand the public—who, when they’re not voting, are consumers. “Political polling is a big help to have a holistic understanding about what’s happening in the economy and society and everything else,” says Bricker. His 2013 book, The Big Shift, written with John Ibbitson, was inspired by the sociological and demographic changes he’s observed in his three decades in the industry. Polling is “an interesting laboratory for testing out methodologies,” he says, such as who tends to respond to telephone versus online surveys, or different demographic groups’ values, which are starting to be recognized as an important factor in consumer decisions.

But Bricker cautions that voting-intention surveys and consumer research are so different that the transferability of conclusions from one to the other is low. Corporate clients rarely want information on the general population, unless it’s to figure out their brands’ market share. Consumer preferences are more stable than electoral ones—people are volatile in the voting booth and predictable at the checkout. Those differences tend to insulate market research firms’ corporate business from the effects of high-profile electoral misses.

Whatever firms do learn about market research from their polling triumphs and tribulations, they’re keeping it to themselves. “You have pollsters always fighting each other, it seems,” observes Coletto. “We never can really get along, and we’re so competitive that we can’t just put down the gloves and try to solve these problems.” He contrasts the prickly Canadian market research world with his contemporaries in the United Kingdom, who he says undertook a tough process of collective reflection after miscalling the 1992 election and are doing so again after this year’s big miss, in which the polls failed to predict a Conservative majority (the British Polling Council is about to launch an inquiry led by an independent statistician).

There have been some attempts to bring the industry together. Bricker is chairman-elect of the soon-to-be-formed Canadian Association for Public Opinion Research, an organization aiming to re-establish public confidence in polling. But the new group doesn’t include everyone in the industry, and some research firms have expressed their reservations rather openly, with Mainstreet Research filing a complaint with the Competition Bureau over the possibility that the association will seek to set and enforce its own industry standards.

In a perverse way, the kinds of high-profile electoral misses that have eroded public faith in polling may actually be good for market research firms and their corporate clients. High-profile but low-revenue, these mistakes force pollsters to question their methods and address their problems publicly, while leaving the bottom line intact. “Traditional market research still has the best track record and is more empirically mature than the other [methods],” insists Nanos. “The era of blind faith is gone. There’s faith—it’s just not blind faith.”
