Question: What Do Titian, Virginia Woolf, and Polling Have In Common?


Answer: Beyond being together in this article, not a damn thing.

_____________________

Pic of the day:  Diana and Actaeon by Titian

________________________

Yet there are moments when the walls of the mind grow thin; when nothing is unabsorbed, and I could fancy that we might blow so vast a bubble that the sun might set and rise in it and we might take the blue of midday and the black of midnight and be cast off and escape from here and now.

Virginia Woolf, The Waves

________________________

Santorum is done.  Finished.  Dead in the water.  It’s been nearly three weeks since he has led in any poll anywhere, for what little that’s worth.  The last poll I read where he had a lead drew on a remarkably small pool: only 301 people.  You couldn’t get reliable numbers from a sample that small here on Staten Island, population 500,000.  Using it for a national poll is ridiculous.

But then again, most polls are pretty damn ridiculous to begin with.  Think about this: the largest sample in any of these polls is the 1,200 people surveyed by Gallup.  That is roughly 0.0009% of the voting population, using the number of people who voted in the 2008 Presidential election as a yardstick.
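For the curious, here’s a quick back-of-the-envelope sketch of those numbers in Python.  The 131 million turnout figure and the textbook margin-of-error formula are my own additions here, not anything a pollster published:

```python
import math

# Rough 2008 general-election turnout (popular vote), used only as a yardstick.
TURNOUT_2008 = 131_000_000

def sample_fraction(n, population=TURNOUT_2008):
    """Share of the voting population a poll actually contacts."""
    return n / population

def margin_of_error(n, z=1.96):
    """Textbook 95% margin of error for a simple random sample,
    using the worst case p = 0.5."""
    return z * math.sqrt(0.25 / n)

for n in (1_200, 301):
    print(f"n = {n:>5}: {sample_fraction(n):.6%} of 2008 voters, "
          f"MOE ≈ ±{margin_of_error(n):.1%}")

# n =  1200: 0.000916% of 2008 voters, MOE ≈ ±2.8%
# n =   301: 0.000230% of 2008 voters, MOE ≈ ±5.6%
```

Which is part of the puzzle: by the standard formula, even a vanishingly small slice of the electorate gives a margin of error of only a few points, provided the sample is genuinely random.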

I just don’t see how it can be accurate, at least not for the nation as a whole.  And yet the numbers these people come up with are, overall, at least somewhat correct.  Perplexing.  Most polls from 2008 (to use that election as an example) were close, but they were a fair bit off in a number of instances, and one could presume that too small a sample size hurt at least a bit.

Don’t remember it, you say?  Let me remind you.  Gallup had then-Senator Obama winning by 11 percentage points, as did Reuters/Zogby, both with a margin of error (MOE) under 3 points.  Battleground had him winning by only 2%, with an MOE of 3.5%.  And of the 15 major polling organizations, only two nailed it.  Two.

He actually won with 52.9% of the vote, by a margin significantly smaller than the 11 points and larger than the 2 points given in the examples above.  It does surprise me that pollsters get as close as they do, but the fluctuations in those numbers, and the reason a lot of these guys miss when they do, come down to a number of factors, methinks.
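To put numbers on that: McCain’s 45.7% share (from the official result, my addition, not stated above) makes the actual margin about 7.2 points.  A small sketch comparing that to the predicted margins quoted above:

```python
# Actual 2008 margin: 52.9% (Obama) - 45.7% (McCain) ≈ 7.2 points.
ACTUAL_MARGIN = 52.9 - 45.7

# Final predicted Obama margins quoted above, in points.
predictions = {"Gallup": 11.0, "Reuters/Zogby": 11.0, "Battleground": 2.0}

for pollster, margin in predictions.items():
    error = margin - ACTUAL_MARGIN
    print(f"{pollster:<14} predicted +{margin:.0f}, off by {error:+.1f} points")

# Gallup         predicted +11, off by +3.8 points
# Reuters/Zogby  predicted +11, off by +3.8 points
# Battleground   predicted +2, off by -5.2 points
```

Misses of four or five points are well outside the stated margins of error, which is the author’s point: something beyond raw sampling noise is going on.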

An important one, beyond small sample size, is the wording of the questions.  I’ve read a number of these polls, and you would be surprised at how biased most of them are.  Even the ones that claim to be non-partisan carry some level of bias; I have never seen a single one without it, and that bias colors the way respondents answer the questions.

If you ask a series of questions about a candidate early in a poll, then later ask whether someone would vote for that candidate, the earlier questions will obviously affect the answers, which in turn affect the poll numbers, which in turn affect the people who read and believe the poll.  It’s a snowball effect.

Pollsters seem not to realize this.

__________________________

That’s it from here, America.  G’night.
