I stand by my word choice, but yes - people don't take it into account nearly enough, I find.
In general, people don't understand polling, samples, and statistics very well at all.
This is a problem.
That's not correct.
Science involves ALL kinds of judgments about measurements.
I do agree that polls that won't PROVIDE the raw data should be viewed with more suspicion than those that do.
It has always had this component.
How to weight a sample given what is known is ALWAYS part of the discussion in polling and sampling anything.
That's lovely, but it has all kinds of barriers to making it work.
First off, the cost for that would be enormous.
You may not think cost should be a factor, but it will be.
Secondly, the sample will then have to be weighted, since by its very nature it will be skewed - everyone in it is a volunteer (a rough sketch of that kind of reweighting follows below).
(Maybe you can get around that by calling people randomly and asking them to volunteer until you get your numbers.)
Thirdly, you will lose people to follow-up, so you have to build in something to accommodate that.
(You could of course be picking such a huge sample just to allow for that - you could have massive drop-off with a sample that size and still get accurate results; the rough arithmetic below shows why.)
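To put some numbers on that drop-off point, here's the back-of-the-envelope arithmetic in Python. The starting panel size and dropout rate are made up for illustration, and it uses the textbook simple-random-sample margin of error (it ignores the design effects that weighting adds, so real-world numbers would be somewhat worse):

```python
import math

# 95% margin of error at p = 0.5 (the worst case): z * sqrt(p(1-p)/n)
def moe(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

start = 100_000            # hypothetical starting panel size
dropout = 0.70             # hypothetical: lose 70% to follow-up over time
remaining = int(start * (1 - dropout))

print(f"start: n={start}  MOE={moe(start):.2%}")          # ~0.31%
print(f"after: n={remaining}  MOE={moe(remaining):.2%}")  # ~0.57%
```

Even after losing 70% of the panel, the margin of error only moves from about 0.31% to about 0.57% - still far tighter than a typical one-off poll of ~1,000 people (~3.1%).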
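And on the weighting point above, here is a minimal sketch of what that reweighting looks like (post-stratification style). Every share and response below is invented for illustration - real pollsters use many more demographic cells and more sophisticated techniques like raking:

```python
# Toy post-stratification: reweight a volunteer panel so its
# demographic mix matches known population shares.
# All shares and responses below are made up.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# (group, yes/no answer) - the volunteers skew young here.
panel = [
    ("18-34", 1), ("18-34", 0), ("18-34", 1), ("18-34", 1), ("18-34", 0),
    ("35-54", 1), ("35-54", 0), ("35-54", 1),
    ("55+", 0), ("55+", 1),
]

n = len(panel)
sample_share = {g: sum(1 for grp, _ in panel if grp == g) / n
                for g in population_share}

# Each respondent's weight: population share / sample share for their group.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(ans for _, ans in panel) / n
weighted = (sum(weights[g] * ans for g, ans in panel)
            / sum(weights[g] for g, _ in panel))

print(f"raw estimate:      {raw:.3f}")       # 0.600
print(f"weighted estimate: {weighted:.3f}")  # ~0.588
```

The over-represented young group gets weighted down and the under-represented older group gets weighted up - exactly the kind of judgment about measurements I mentioned above.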
Now, all that said, these do exist (with smaller sample sizes).
Whenever you see a "tracking poll", that's what they are doing - the same sample of people repeatedly asked questions over time.
YouGov does something like this, maintaining a huge pool across multiple countries that they can contact again and again.
Then you have people doing things like this: https://derivativepolling.com/ - where they track a single pollster's results over time to look for trends.
None of these are new problems, and people have been trying to find ways to do this well forever.
It sounds like you might want to dig into the methodology of some of the tracking pollsters - look at their sample sizes and recontact rates, see if one of them at least gets close to what you want, and keep an eye on that one in particular.