Our Unity candidate selection data

When we launched this campaign, we set out to be fully open and transparent in our process. We did this because we feel it's the right thing to do, but also because there are - let's face it - any number of disingenuous, deceptive, and downright false "strategic voting" efforts every election.

We've seen plenty this time, as well:

  • Province-wide polling used to imply that a specific candidate was a strategic vote in their riding

  • Past election results used to imply that a candidate was the closest to defeating the PCs in their riding this time, four years later

  • Claims that 2018 performance in provincial seat count should be extrapolated to 2022 performance in individual ridings

  • Questionable "internal polling", completely arbitrary polls with no sources, and in one case, a truly outlandish "poll" with no sourcing or even numbers at all, dressed up with federal logos and party names.

On the one hand, this indicates that the parties in question understood the appetite for strategic voting!

On the other, these competing claims, none backed by evidence, tend to cancel each other out and leave voters frustrated and confused.

We were determined to do it differently: in as data-driven and transparent a way as possible.

To start, our volunteers reached out to campaigns across the city, particularly in target ridings, to interview candidates and ask questions about them, their campaign, and their connection to the riding. They sent that information back to our research team. Because we did not inform candidates that their data would be shared publicly (they would never have agreed to this mid-campaign), we won't share their answers here, but you can see what questions we asked at this link:

In addition to the qualitative data gathered from campaigns, we collected a great deal of numerical data. We looked at past election results and trends over time, any public local polling we could find, and a wide array of polling sources. We assessed these sources for reliability by looking at their weighting and modeling as well as their past accuracy. They were then combined into an expansive model, incorporating each source into a cohesive frame through a pivot table (which our research team can explain better than we can), and used first to make our prioritization decisions and, eventually, to inform our final calls. We monitored trends over time as well as the polls themselves. You can find a read-only copy of our data sheets, with all crosstabs at the bottom, on this page:
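To give a sense of what combining polling sources of varying reliability looks like, here is a minimal sketch. It is not our actual model (our team's spreadsheet is linked above); the pollster names, weights, decay rate, and vote shares are invented for illustration:

```python
# Hypothetical sketch of a reliability-weighted polling average.
# All pollsters, weights, and numbers below are invented for illustration.

polls = [
    # (pollster, days_old, reliability_weight, ndp_share, pc_share)
    ("Pollster A", 2, 1.0, 0.38, 0.35),
    ("Pollster B", 5, 0.8, 0.34, 0.40),
    ("Pollster C", 9, 0.5, 0.36, 0.37),
]

def weighted_average(polls, decay=0.9):
    """Combine polls, discounting older and less reliable ones."""
    total_w = ndp = pc = 0.0
    for _, days_old, reliability, ndp_share, pc_share in polls:
        w = reliability * (decay ** days_old)  # recency decay
        total_w += w
        ndp += w * ndp_share
        pc += w * pc_share
    return ndp / total_w, pc / total_w

ndp_avg, pc_avg = weighted_average(polls)
print(f"NDP: {ndp_avg:.1%}, PC: {pc_avg:.1%}")
```

The key idea is simply that a fresh poll from an accurate pollster should move the estimate more than a stale one from an unreliable source.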

In a few cases, campaigns really stood out and made the critical difference. Two in particular were exemplary: Doly Begum in Scarborough Southwest impressed us with a massive field operation and deep grassroots commitment, with a very well-organized team and many, many conversations. Mazhar Shafiq in Scarborough Centre similarly ran a remarkable field operation that had reached a great many voters. Both made a point of noting that they were running as progressives, as well.

We also collected Unity pledges and gave our pledge signers the opportunity to indicate which party they would prefer if they did not feel they had to vote strategically to stop Ford. This information helped us estimate how many votes we were flipping one way or the other, which will help us analyze our impact when all is said and done.
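The pledge tally itself is straightforward: a vote is "flipped" when a signer's strategic pledge differs from their stated preference. This is a toy sketch with invented records, not our actual pledge data or its format:

```python
from collections import Counter

# Hypothetical pledge records:
# (party pledged to strategically, party the signer said they would
#  otherwise prefer). All records below are invented for illustration.
pledges = [
    ("NDP", "Liberal"),
    ("NDP", "NDP"),
    ("Liberal", "Green"),
    ("NDP", "Green"),
]

# A pledge counts as a flipped vote when the strategic choice
# differs from the signer's stated preference.
flipped = sum(1 for choice, preferred in pledges if choice != preferred)
preferred_counts = Counter(preferred for _, preferred in pledges)

print(f"Flipped votes: {flipped}")
print(dict(preferred_counts))
```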

Our research team included data analysts and experts who put in long hours and late nights. We cannot claim, and do not expect, to be perfect. But we feel we have honoured our commitment to be guided as much as possible by data, to incorporate qualitative judgments such as campaign and candidate strength in an evidence-based way, and to be transparent about our process.
