JUST Capital’s Survey Research Is Not Political Polling: It’s a Blueprint of Society’s Expectations of Corporate America

After yet another election cycle in which the pollsters seemed off the mark, many Americans were left wondering whether they could trust not only political polls, but polling in general. As Managing Director of Survey Research at JUST Capital, I take this concern very seriously. Research is at the heart of our work, but it differs in both methodology and substance from political polling. JUST’s survey research, fielded in partnership with The Harris Poll, strives to present the most accurate assessment of what the public expects from America’s largest companies and their business leaders. Below, I explain in more detail how we arrive at those insights.

How is our survey research different from political polling?

In 2020, presidential polls and forecasts showed that Joe Biden would not only beat Donald Trump but do so by a wide margin, and that Democrats had a better-than-average chance of picking up enough seats to shift the balance of power in the Senate. Even though the outcome of the presidential election was ultimately close to what was projected, there were key misses in predicting which states would flip (like Florida and Texas) as well as in down-ballot races (such as Maine’s Senate seat).

Political polls have rarely forecast presidential elections with pinpoint precision; in fact, the margin of error has been remarkably consistent throughout the modern era of polling. Polling around the 2016 election is largely remembered as a failure, but as Nate Silver wrote in a 2018 FiveThirtyEight article, “polls of the November 2016 presidential election were about as accurate as polls of presidential elections have been on average since 1972.”
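
For context, the statistical margin of error in a poll is driven mainly by sample size, and it captures only sampling variability – not the non-response problems discussed later in this piece. Here is a quick back-of-the-envelope calculation in Python (the sample sizes are illustrative, not those of any particular poll):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion p in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national political poll vs. a larger survey (illustrative sizes)
for n in (1000, 4500):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
# n = 1000: +/- 3.1%
# n = 4500: +/- 1.5%
```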

Now, in an already tumultuous year, the perceived misses of 2020 have compounded those of 2016, feeding a new narrative that “the polls cannot be trusted.” This means that the polling profession in general, and my work specifically, will surely come under increased scrutiny. But while I cannot control the media narrative about the industry at large, I can offer more insight into the work we do at JUST Capital.

I am responsible for managing the design and execution of all stages of the research that drives our mission. A critical piece of our work involves capturing the sentiments, preferences, and values of the American people regarding just corporate behavior, to help align the priorities of corporate America with those of the public.

All of our work is informed by public opinion research, which we refer to as “polling” as a form of shorthand. While public opinion research and political polling are of the same stripe, they are ultimately very different animals. Specifically, the research that JUST Capital undertakes does not predict outcomes (e.g., the winner of a political race); rather, it is a means of understanding the American public’s opinion on the business-behavior issues that matter most to them. In this sense, it is more akin to consumer market research than to political polling.

While political polling and our research both take a snapshot of the opinions of a random sample of Americans at a moment in time, the approach we take with our Annual Survey is in actuality a form of consumer choice modeling: respondents must discriminate between items, making tradeoffs between the most and least important within a subset, which yields the relative priority of each item (a minimal sketch of this kind of scoring appears after the list below). The methodology of our Annual Survey is arguably more robust than political polling in the following ways:

  • Number of respondents: We reach 4,500 people for our Annual Survey, more than double the sample size of many political polls.
  • Type of sample: Our respondents are drawn from a probability sample, meaning any one American has the same probability of being chosen as a respondent as any other American. Most – but not all – political polls do not rely on probability samples, as outlined in the section below titled “The election polling wasn’t representative.”
  • Methodologies: Additionally, the quantitative work we do is paired with qualitative interviews and focus groups, which enables us to have more substantive discussions and to understand attitudes and motivations at a deeper level.
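
As promised above, here is a minimal sketch of how best-worst (MaxDiff-style) choice modeling turns tradeoff responses into relative priorities. The issue items and responses below are hypothetical, not our actual survey instrument:

```python
from collections import Counter

# Hypothetical responses: each respondent sees a subset of issue items and
# picks the most and least important one in that subset.
responses = [
    {"shown": ["fair pay", "job creation", "emissions", "data privacy"],
     "most": "fair pay", "least": "emissions"},
    {"shown": ["fair pay", "data privacy", "board diversity", "emissions"],
     "most": "data privacy", "least": "board diversity"},
    {"shown": ["job creation", "board diversity", "fair pay", "data privacy"],
     "most": "fair pay", "least": "data privacy"},
]

def best_worst_scores(responses):
    """Relative priority of each item: (# times picked most important
    - # times picked least important) / # times shown."""
    most, least, shown = Counter(), Counter(), Counter()
    for r in responses:
        most[r["most"]] += 1
        least[r["least"]] += 1
        shown.update(r["shown"])
    return {item: (most[item] - least[item]) / shown[item] for item in shown}

for item, score in sorted(best_worst_scores(responses).items(),
                          key=lambda kv: -kv[1]):
    print(f"{item:15s} {score:+.2f}")
```

Because every choice forces a tradeoff, the resulting scores rank items against one another – unlike standard rating questions, where a respondent can call everything “very important.”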

In a nutshell, JUST’s public opinion research differs from political polling because of who we survey and the methodologies we use.

Why was the election polling wrong?

Now that we’re a couple of weeks removed from Election Day (or Week, really), the assessment from many reputable sources is that the polls were actually pretty accurate. However, elections are inherently high-stakes, with the public demanding a level of precision that is virtually impossible to achieve. Nate Silver of FiveThirtyEight cautioned that we all should adjust our expectations of what polls can do because “if you want certainty about election outcomes, polls aren’t going to give you that.” What transpired in both 2020 and 2016 is that the polls couldn’t deliver a level of accuracy sufficient to satisfy the court of public opinion. So what went wrong?

The election polling wasn’t representative

Fundamentally, an election poll must be representative of the people who actually vote. This year saw historic levels of voter turnout, both in person and via absentee ballots. Past voting habits and likelihood to vote are key inputs from which polling projections are estimated, yet both are self-reported and thus subject to error. One key challenge of 2020 – much the same as in 2016 – is that most polls did not survey enough Trump supporters to be representative of that group, resulting in an overestimation of Biden’s lead at both the national and state levels.

Why didn’t pollsters find enough Trump supporters? The case of the “Shy Trump Voter” (the idea that Trump voters would rather lie to a polling outlet they perceive as liberal) is a popular narrative, but it is far less likely to drive sampling error than the increasingly pervasive problem of non-response. Non-responders are those who either cannot or will not respond to surveys, and non-response is unfortunately a growing problem for the survey community at large. The profile of non-responders generally differs in meaningful ways from that of responders, leaving pollsters with a higher level of sampling error. In the case of the 2020 election, non-responders were significantly more likely to be Trump voters. As The New York Times’ Nate Cohn noted, the evidence suggests that Trump supporters suspicious of an outlet they believe is ideologically opposed to them would rather not participate at all than lie.
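
A toy simulation makes the mechanism concrete: even a perfectly random sample produces a skewed estimate when one group responds at a lower rate. The response rates below are invented purely for illustration:

```python
import random

random.seed(0)

# True population: an even 50/50 split between candidates A and B
population = ["A"] * 50_000 + ["B"] * 50_000

# Invented response rates: B supporters answer surveys half as often
response_rate = {"A": 0.10, "B": 0.05}

sampled = random.sample(population, 10_000)          # random contact attempts
respondents = [v for v in sampled if random.random() < response_rate[v]]

support_a = respondents.count("A") / len(respondents)
print(f"Observed support for A: {support_a:.1%}")    # roughly 67%, vs. a true 50%
```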

One way to ensure the representation of underrepresented groups is to use probability-derived samples, which simply means that any American has the same probability of being chosen as a respondent as any other American – including people in sparsely populated areas, those without internet access, and those who do not generally respond to surveys. But data reliability is costly, and many political pollsters instead rely on upweighting certain demographics to compensate for not reaching enough of a given population.
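
To show what that upweighting looks like in practice, here is a minimal post-stratification sketch; the demographic cells, population targets, and answers are all hypothetical:

```python
# Hypothetical sample in which one demographic cell is under-reached
sample = [
    {"group": "college",    "answer": "yes"},
    {"group": "college",    "answer": "yes"},
    {"group": "college",    "answer": "no"},
    {"group": "no_college", "answer": "no"},   # only one respondent reached
]

# Hypothetical known population shares (e.g., from census benchmarks)
population_share = {"college": 0.40, "no_college": 0.60}

sample_share = {g: sum(r["group"] == g for r in sample) / len(sample)
                for g in population_share}

# Each respondent is weighted up or down so the sample matches the population
weights = [population_share[r["group"]] / sample_share[r["group"]] for r in sample]

unweighted = sum(r["answer"] == "yes" for r in sample) / len(sample)
weighted = (sum(w for r, w in zip(sample, weights) if r["answer"] == "yes")
            / sum(weights))
print(f"Unweighted 'yes': {unweighted:.0%}")   # 50%
print(f"Weighted 'yes':   {weighted:.0%}")     # 27% - one respondent carries the cell
```

Note how the weighted estimate hinges entirely on the single no_college respondent – exactly the fragility that probability sampling with follow-up outreach is designed to avoid.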

For the past five years, JUST Capital has drawn the sample for its Annual Weighting Survey from the National Opinion Research Center (NORC) at the University of Chicago, which uses probability sampling methods to achieve a representative sample frame projectable to 97% of U.S. households. Representativeness of the sample is based on two factors: oversampling of traditionally underrepresented populations, and follow-up efforts to ensure that traditionally non-responsive households are included. As such, we capture perspectives across generational and ideological divides, varying income and education levels, race, gender, and more, giving us the utmost confidence that our public opinion research is truly representative of all Americans.

The election polling made too many assumptions

Though political polls vary in length, complexity, and construction, most political polling is derived from the methodology standardized by Gallup, which matches the country’s demographic mix to voter intent to make predictions. This methodology may have worked well in the past, but in recent years it has yielded little more than a strong signal about who has the better chance of winning – not a definitive call on who will win.

A central tenet of research holds that the more information you have about a target audience, the more accurate your projections. Market researchers are very good at sizing an audience they want to sell to, and a key part of that involves building attitudinal and behavioral questions into their modeling. By contrast, most political polling gives voters’ motivations little scrutiny. In other words, a lot of polls miss the nuance of how and why people behave the way they do. The New York Times’ postmortem cited “a fundamental mismeasurement of the attitudes of a large demographic group” as one key reason for the errors in the 2020 polls.

Take, for example, the “Latino vote.” Most pollsters expected to see the same patterns of voting behavior throughout this demographic group. Yet one of the biggest surprises on election night was the GOP’s performance in Florida, particularly among Cuban Americans in Miami-Dade County. Here we see in stark relief the dangers of modeling outcomes based solely on ethnicity: The misconception that Latinos are a homogeneous population resulted in a miscall for Florida. If pollsters had looked more closely at Latino subgroups, they would have learned that the reasons that drove Puerto Ricans in New York to vote for Biden (e.g., social justice issues, the possibility of future Puerto Rican statehood) are very different from the reasons that turned out Cuban Americans for Trump (e.g., the perpetual hangover of life under Castro’s Communist regime, which pushed voters away from a Biden agenda branded as “socialist” by the right).

Writing in the New York Times, David Shor agreed that making too many assumptions is akin to “garbage in, garbage out,” saying, “Demographic factors are not enough to predict partisanship anymore.” If the polling profession wants to right its reputation by improving its level of accuracy, it’s time that pollsters take a page out of the market researcher’s playbook and broaden their mix of methodologies to include more qualitative and attitudinal research.

What we’ve learned from all this

The President of Pew Research Center, Michael Dimock, recently wrote a blog post outlining how the industry can and should move forward. “Good pre-election polls try to get inside people’s heads,” he wrote. “They attempt to understand the reasoning behind Americans’ values, beliefs and concerns. They explore…which factors are motivating them to vote for a particular candidate, or whether to vote at all.”

One recent discussion thread among members of The American Association of Public Opinion Research (AAPOR) put it even more bluntly: “Where was the qualitative research to be added to the quantitative polling of voters in the 2020 campaign?”

JUST’s mission is supported by a robust and rigorous research program that polls Americans on the issues they believe U.S. companies should prioritize in order to embrace just business behavior. Though the research that we do is very different from political polling, the fundamental goal is the same: to represent the voice of the public. We do this by capturing the sentiments, preferences, and values of the American people through qualitative discussions coupled with quantitative surveys. This mix of methodologies allows us to mine insights that help companies build a more just marketplace and become a positive force for change.

We continuously strive for diversity in our methodology, and to that end, in 2021 we will forge a path forward by pairing traditional polling with broader insights gathered from other methodologies, such as additional qualitative work, sentiment analysis, natural language processing, and social media listening (a brief illustration of sentiment analysis appears below). Taken together, they can, according to Emilio Ferrara of the University of Southern California Information Sciences Institute, “give us a window into enthusiasm among populations that polls are missing.” In addition, JUST will be adding a layer of oversight to our research by forming an independent Polling Committee composed of industry leaders and academics.
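
As a minimal illustration of what off-the-shelf sentiment analysis looks like – this sketch uses NLTK’s VADER analyzer on invented example posts, not our actual pipeline or data:

```python
# pip install nltk
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Invented social media posts about corporate behavior
posts = [
    "Proud to see this company raise its minimum wage to a living wage!",
    "Another round of layoffs while executives collect bonuses. Shameful.",
]
for post in posts:
    compound = sia.polarity_scores(post)["compound"]   # -1 (negative) to +1 (positive)
    print(f"{compound:+.2f}  {post}")
```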

I am supremely confident in the quality of our public opinion research, but now more than ever we approach that work with humility as we strive to make it even more robust, giving us deeper and richer insight into the thoughts and opinions of the American public.

Have questions about our research and rankings? We want to hear from you!