JGHali
Active Member
Looking at the pre-election polls, I can't see any evidence that Forum was "most accurate". EKOS was actually a lot closer, estimating support for the Liberals at 37.3% near the end where Forum had them at 41% (compared to 38.6% in the result). Add to that the gross oversampling of the older cohorts and we do not have a very reliable poll.
If I may repost my comment from elsewhere:
There are so many problems with this poll, with polling generally, and, especially, with how the media report it. It's true that big shifts in voting preferences happen - but not all the time, and usually only in the context of some other event, be it a scandal or a closely watched campaign moment (which is certainly not where we are now!). My point here is simply that a reliable poll should produce relatively consistent results over short "quiet" periods, with fairly limited variability. Instead we have Forum producing results that swing wildly (especially within the strata), with unknown sampling weights, non-response rates, and sampling frame. The media (and pollsters who offer comment) often ascribe such swings to voter "volatility" when in fact they show the degree of error and bias in the results. Then, when a pollster is seen to "get it right" by being closest to the election result, this is ascribed to superior methodology, even though it may have just as much to do with luck.
This kind of poll grossly oversamples certain strata, yet apparently doesn't account for the intrinsic sampling (and non-response) bias this represents. All the post-hoc sampling weights in the world will not fix a sample with major bias from the get-go. Specifically, the 18-34 age sample is weighted up to its population share, but given how small it is (74 respondents), its error margins are enormous - to say nothing of the obvious, non-random non-response bias in a poll that obtained responses from 74 people aged 18-34 and 466 aged 65 and over. Considering that the 18-34 group is actually larger than the 65+ group in the population, the problem should be obvious, even accounting for lower participation amongst younger people.
(And to be clear, I'm using the proper statistical definitions of error and bias.)
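To put rough numbers on those "enormous" error margins, here is a quick back-of-the-envelope sketch in Python using the standard 95% confidence interval for a sample proportion. It assumes simple random sampling within each cohort, which actually flatters the poll - design effects from post-hoc weighting only widen these intervals further:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample.

    Uses the worst case p = 0.5, which maximizes p*(1-p) and so gives
    the widest (most conservative) interval for a given sample size n.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Subsample sizes quoted above
moe_young = margin_of_error(74)   # 18-34 cohort
moe_old = margin_of_error(466)    # 65+ cohort

print(f"18-34 (n=74):  +/-{moe_young:.1%}")  # +/-11.4%
print(f"65+   (n=466): +/-{moe_old:.1%}")    # +/-4.5%
```

So before weighting even enters the picture, any figure quoted for the 18-34 cohort carries an interval of roughly eleven points either way - wide enough that a dramatic "swing" in that stratum between two polls can be pure sampling noise.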