The effects of poll weighting

Valcazar

Just a bundle of fucking sunshine
Mar 27, 2014
32,700
60,742
113
I've discussed before how much the weighting of a poll can affect the result, but it's nice to see a piece lay it all out in an easy-to-follow way.


There's an 8-point shift in the result depending on the choices made about weighting and I think it is something people tend not to appreciate about polling.
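That shift comes entirely from the weighting step: the same raw responses, post-stratified to two different population targets, produce two different toplines. A minimal sketch in Python (the respondent counts, support rates, and education targets below are invented for illustration, not taken from the piece):

```python
# Toy post-stratification: one raw sample, two weighting schemes, two toplines.
# All numbers are invented for illustration.

# Raw sample: group -> respondent count and share supporting option A.
sample = {
    "college":    {"n": 600, "support": 0.60},
    "no_college": {"n": 400, "support": 0.40},
}

def weighted_topline(sample, targets):
    """Weight each group's observed support by its assumed share of the population."""
    return sum(targets[g] * sample[g]["support"] for g in sample)

# Scheme 1: weight to a 50/50 education split.
targets_a = {"college": 0.50, "no_college": 0.50}
# Scheme 2: weight to a 35/65 split instead.
targets_b = {"college": 0.35, "no_college": 0.65}

print(round(weighted_topline(sample, targets_a), 3))  # 0.5
print(round(weighted_topline(sample, targets_b), 3))  # 0.47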
 

Butler1000

Well-known member
Oct 31, 2011
30,426
4,635
113
I've discussed before how much the weighting of a poll can affect the result, but it's nice to see a piece lay it all out in an easy-to-follow way.


There's an 8-point shift in the result depending on the choices made about weighting and I think it is something people tend not to appreciate about polling.
Um, I think the word "appreciate" is wrong in this instance.

More like, take with a barrel of salt.

Once opinion comes in to adjust results it's no longer purely scientific. It now has a "Russian judge component" that makes some "sports" results really exhibitions and not real competition.

I would much prefer a poll where the same larger sample of volunteers (minimum 10,000 statewide, 250,000 national) are continually polled four times a year as to where they are. Not just support but on several individual policy topics. And see how they change with reflection time.
 
  • Like
Reactions: southpaw

Valcazar

Just a bundle of fucking sunshine
Mar 27, 2014
32,700
60,742
113
Um, I think the word "appreciate" is wrong in this instance.

More like, take with a barrel of salt.
I stand by my word choice, but yes - people don't take it into account nearly enough, I find.
In general, people don't understand polling and samples and statistics very well at all.
This is a problem.

Once opinion comes in to adjust results it's no longer purely scientific.
That's not correct.
Science involves ALL kinds of judgments about measurements.
I do agree that polls that won't PROVIDE the raw data should be viewed with more suspicion than those that do.

It now has a "Russian judge component" that makes some "sports" results really exhibitions and not real competition.
It has always had this component.
How to weight a sample given what is known is ALWAYS part of the discussion in polling and sampling anything.

I would much prefer a poll where the same larger sample of volunteers (minimum 10,000 statewide, 250,000 national) are continually polled four times a year as to where they are. Not just support but on several individual policy topics. And see how they change with reflection time.
That's lovely, but has all kinds of barriers to making it work.
First off, the cost for that would be enormous.
You may not think cost should be a factor, but it will be.
Secondly, the sample will then have to be weighted since it will be, by its very nature, a skewed sample: they are all volunteers.
(Maybe you can get around that by calling people randomly and asking them to volunteer until you get your numbers.)
Thirdly, you will lose people to follow-up, so you have to build in something to accommodate that.
(You could of course be picking such a huge sample just to allow for that - you could have massive drop off with a sample that size and still get accurate results.)
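That last parenthetical is easy to quantify. Under simple random-sampling assumptions (which a volunteer panel violates, so treat these as best-case figures), even heavy attrition from a 10,000-person statewide panel leaves the margin of error small. A sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

panel = 10_000
for retained in (1.0, 0.5, 0.25):  # fraction of the panel still responding
    n = int(panel * retained)
    print(f"{n} respondents -> +/- {margin_of_error(n) * 100:.1f} points")
```

Losing three-quarters of the panel only widens the margin from about 1 point to about 2.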

Now, all that said, these do exist (with smaller sizes).
Whenever you see a "tracking poll", that's what they are doing - the same sample of people repeatedly asked questions over time.
YouGov does something like this, maintaining a huge pool across multiple countries that they can contact again and again.
Then you have people doing things like this: https://derivativepolling.com/ where they track a single pollster over time to look for trends.

None of these are new problems, and people have been trying to find ways to do this well forever.
It sounds like you might want to dig into the methodology of some of the tracking pollsters and look at their sample sizes and rate of recontact and see if one of them at least gets close to what you want and keep an eye on them in particular.
 

Butler1000

Well-known member
Oct 31, 2011
30,426
4,635
113
I stand by my word choice, but yes - people don't take it into account nearly enough, I find.
In general, people don't understand polling and samples and statistics very well at all.
This is a problem.



That's not correct.
Science involves ALL kinds of judgments about measurements.
I do agree that polls that won't PROVIDE the raw data should be viewed with more suspicion than those that do.



It has always had this component.
How to weight a sample given what is known is ALWAYS part of the discussion in polling and sampling anything.



That's lovely, but has all kinds of barriers to making it work.
First off, the cost for that would be enormous.
You may not think cost should be a factor, but it will be.
Secondly, the sample will then have to be weighted since it will be, by its very nature, a skewed sample: they are all volunteers.
(Maybe you can get around that by calling people randomly and asking them to volunteer until you get your numbers.)
Thirdly, you will lose people to follow-up, so you have to build in something to accommodate that.
(You could of course be picking such a huge sample just to allow for that - you could have massive drop off with a sample that size and still get accurate results.)

Now, all that said, these do exist (with smaller sizes).
Whenever you see a "tracking poll", that's what they are doing - the same sample of people repeatedly asked questions over time.
YouGov does something like this, maintaining a huge pool across multiple countries that they can contact again and again.
Then you have people doing things like this: https://derivativepolling.com/ where they track a single pollster over time to look for trends.

None of these are new problems, and people have been trying to find ways to do this well forever.
It sounds like you might want to dig into the methodology of some of the tracking pollsters and look at their sample sizes and rate of recontact and see if one of them at least gets close to what you want and keep an eye on them in particular.
Oh, I get all the barriers, and the cost. But I wonder if something like that would have enough value to both political parties and some investment houses, think tanks, and university programs (both as volunteers and as teaching tools) to be able to fund it.

And yes, the logistics would be incredible. And I do know about tracking polls. This is just those writ large. It's why I suggested the state level as the base. 10,000 or so is not that big, and you would only run it as, say, a 5- to 10-year program before adding a new group. State-level universities could be the base. And the above-mentioned funders are always looking for information.
 
  • Like
Reactions: Valcazar

Valcazar

Just a bundle of fucking sunshine
Mar 27, 2014
32,700
60,742
113
Oh, I get all the barriers, and the cost. But I wonder if something like that would have enough value to both political parties and some investment houses, think tanks, and university programs (both as volunteers and as teaching tools) to be able to fund it.

And yes, the logistics would be incredible. And I do know about tracking polls. This is just those writ large. It's why I suggested the state level as the base. 10,000 or so is not that big, and you would only run it as, say, a 5- to 10-year program before adding a new group. State-level universities could be the base. And the above-mentioned funders are always looking for information.
If someone could get the money together, I don't think anyone would object to people running it.
It would only be a tiny bit more accurate than most other polls overall BUT might give you some good strong numbers on some of the cross tabs.
The other major problem with it from a political point of view is that running it 4 times a year won't tell you much about the specific election at play. (It would keep costs down, though.)
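Both halves of that trade-off fall out of the same square-root law: going from a typical 1,000-person poll to 10,000 barely improves the topline, but transforms small crosstab cells. A sketch under simple random-sampling assumptions (the 5% subgroup share is an invented example):

```python
import math

def moe(n, p=0.5, z=1.96):
    # Approximate 95% margin of error for a proportion (simple-random-sample assumption).
    return z * math.sqrt(p * (1 - p) / n)

for total in (1_000, 10_000):
    subgroup = int(total * 0.05)  # a crosstab cell covering 5% of respondents
    print(f"n={total}: topline +/- {moe(total) * 100:.1f} points, "
          f"5% subgroup +/- {moe(subgroup) * 100:.1f} points")
```

The topline margin only tightens from about 3.1 to 1.0 points, while the 5% crosstab cell goes from roughly 13.9 to 4.4.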

As a long-term research project on American political attitudes, it could maybe get some funding.
 

Butler1000

Well-known member
Oct 31, 2011
30,426
4,635
113
If someone could get the money together, I don't think anyone would object to people running it.
It would only be a tiny bit more accurate than most other polls overall BUT might give you some good strong numbers on some of the cross tabs.
The other major problem with it from a political point of view is that running it 4 times a year won't tell you much about the specific election at play. (It would keep costs down, though.)

As a long-term research project on American political attitudes, it could maybe get some funding.
The point would, imo, be to have something to contrast with the day-to-day election polling. The first complaint about those is that they are just the mood of that moment, generally taken after a gaffe, statement, or scandal being exposed. Having this as a baseline (and yes, those crosstabs would be huge) would, imo, give analysts a better place from which to judge how things change, especially over, say, the primary season leading up to the election. It would be 4 polls of that nature, timed to come out in, say, Feb, May, August, and October.
 

Frankfooter

dangling member
Apr 10, 2015
91,688
22,201
113
I've discussed before how much the weighting of a poll can affect the result, but it's nice to see a piece lay it all out in an easy-to-follow way.


There's an 8-point shift in the result depending on the choices made about weighting and I think it is something people tend not to appreciate about polling.
Thanks, that's a good read.

Combine that with the issues with polling over cell phones vs. landlines, online polling, biases with pollsters like Angus Reid, and then people commissioning polls for the results, and it's no wonder nobody trusts them or has any idea where the numbers really sit.

 

Valcazar

Just a bundle of fucking sunshine
Mar 27, 2014
32,700
60,742
113
The point would, imo, be to have something to contrast with the day-to-day election polling. The first complaint about those is that they are just the mood of that moment, generally taken after a gaffe, statement, or scandal being exposed. Having this as a baseline (and yes, those crosstabs would be huge) would, imo, give analysts a better place from which to judge how things change, especially over, say, the primary season leading up to the election. It would be 4 polls of that nature, timed to come out in, say, Feb, May, August, and October.
Which is why you would have to fund it as a long-term research project.
It would be mostly useless for campaigning or for analyzing a race except as an occasional check point against other polls taken that particular month.

Again, I'm all for more data that is useful for more than horse race analysis.

I presume you are a fan of the more long term panel stuff Pew and Gallup does, then?
 

Butler1000

Well-known member
Oct 31, 2011
30,426
4,635
113
Which is why you would have to fund it as a long-term research project.
It would be mostly useless for campaigning or for analyzing a race except as an occasional check point against other polls taken that particular month.

Again, I'm all for more data that is useful for more than horse race analysis.

I presume you are a fan of the more long term panel stuff Pew and Gallup does, then?
Yup.
 

basketcase

Well-known member
Dec 29, 2005
61,454
6,700
113
It should be pretty obvious that polling is like that quote about democracy; it's a shitty way to describe public opinion but is better than any other way I know.


And of course people with an agenda will distort things for their own benefit. Can't remember the name, but there's a new polling agency in Canada with an authentic-sounding name that's run by a guy with no background or experience in the field.
 

Valcazar

Just a bundle of fucking sunshine
Mar 27, 2014
32,700
60,742
113
It should be pretty obvious that polling is like that quote about democracy; it's a shitty way to describe public opinion but is better than any other way I know.
Pretty much.

Sampling is sampling - you have to pay attention to how it is being done.

And of course people with an agenda will distort things for their own benefit.
Absolutely.

Can't remember the name but there's a new polling agency in Canada with an authentic sounding name that's run by a guy with no background or experience in the field.
There is a long history of people throwing out lots and lots of shitty polls.
People also misunderstand what polls can and cannot tell you.
 