OTTAWA - Canada's notoriously competitive pollsters have some surprisingly uniform advice about the parade of confusing and conflicting numbers they're about to toss at voters ahead of a possible spring election: Take political horse race polls with a small boulder of salt.

"Pay attention if you want to but, frankly, they don't really mean anything," sums up Andre Turcotte, a pollster and communications professsor at Carleton University.

He has even more pointed advice for news organizations that breathlessly report minor fluctuations in polling numbers: "You should really consider what is the basis for your addiction and maybe enter a 12-step program."

And for fellow pollsters who provide the almost daily fix for media junkies: "I think pollsters should reflect on what this does to our industry. It cheapens it."

Turcotte's blunt assessment is widely shared by fellow pollsters, including those who help feed the media addiction to political horse race numbers.

Some of the most prominent pollsters are pondering ways to break the habit. They're considering cutting back the number of media surveys they provide or even pooling resources to provide bigger, better quality polls at regular -- but less frequent -- intervals.

"The way it's working now is a real dog's breakfast. It's not working," says Ekos Research president Frank Graves, who provides bi-weekly surveys to the CBC.

There's broad consensus among pollsters that proliferating political polls suffer from a combination of methodological problems, commercial pressures and an unhealthy relationship with the media.

Start with the methodological morass.

"The dirty little secret of the polling business . . . is that our ability to yield results accurately from samples that reflect the total population has probably never been worse in the 30 to 35 years that the discipline has been active in Canada," says veteran pollster Allan Gregg, chairman of Harris-Decima which provides political polling for The Canadian Press.

For a poll to be considered an accurate random sample of the population, everyone must have an equal chance to participate in it. Telephone surveys used to provide that but, with more and more people giving up land lines for cell phones, screening their calls or just hanging up, response rates have plummeted to as little as 15 per cent.

Gregg says that means phone polls are skewing disproportionately towards the elderly, less educated and rural Canadians. Pollsters will weight their samples to compensate but that inevitably means "messing around with random probability theory" on which the entire discipline is based.
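A simplified illustration of how that weighting works, using hypothetical figures rather than numbers from any actual survey: if a group makes up a share P of the population but a share S of the sample, each of its respondents is typically assigned a weight of roughly

    w = P / S

So seniors who are 20 per cent of the population but 35 per cent of a phone sample would each count as about 0.20 / 0.35 ≈ 0.57 of a respondent, while under-represented groups get weighted up. The heavier the weighting, the further the poll drifts from the random sample its error calculations assume.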

Just as pollsters made the transition to phone surveys from door-to-door polling, they're now in the midst of another transition -- to online polls. According to an industry survey by the Marketing Research and Intelligence Association (MRIA), online polling supplanted telephone surveys in 2009.

Online polls can be extremely accurate for a survey of, say, readers of a specific newspaper. But they're more controversial when it comes to surveys of the general population, which is what political polls purport to be.

An online poll is conducted using a sample taken from a panel of respondents set up by the polling company. The panel includes only those connected to the Internet, who tend to be younger, better educated and urban. Participants are self-selected and enticed by ads that sometimes offer small cash payments or other inducements.

As a result, the MRIA, the industry's voluntary self-regulating body, warns that online poll results "may be skewed." Under the association's code of conduct, reporting margins of error, which can only be applied to random samplings of the entire population, is "misleading and prohibited" for Internet surveys.

But that hasn't stopped some pollsters or media outlets from reporting margins of error for online surveys. Or from comparing results of phone polls to online polls, as if there were no difference.

Jaideep Mukerji, vice-president of public affairs for Angus Reid Public Opinion, which conducts online political surveys, argues they generally produce comparable or more accurate results than phone polls. He maintains that pure margins of error are no longer attainable for phone polls either, given the huge drop in response rates.

Gregg says both phone and Internet polls produce "highly imperfect" samples. There are ways to test how well a sample reflects the total population but that costs money so, instead, pollsters continue to use standard tables to calculate margins of error "as if we're generating perfect samples and we're not any more."

So why do pollsters continue to trumpet their imperfect data to the media?

Money. Or more precisely, the lack of it.

When Gregg started polling in the 1970s, there were only a handful of public opinion research companies. Polls were expensive so media outlets bought them judiciously.

Now, Gregg laments, almost anyone can profess to be a pollster, with little or no methodological training. There is so much competition that political polls are given to the media free of charge, in hopes the attendant publicity will boost business.

Turcotte says political polls for the media are "not research anymore" so much as marketing and promotional tools. Because they're not being paid for the work, pollsters don't put much care into the quality of the product, often throwing a couple of questions about party preference into the middle of an omnibus survey on other subjects, which can taint the results.

And there's no way to hold pollsters accountable for producing shoddy results since, until there's an actual election, there's no way to gauge their accuracy.

"I believe the quality overall has been driven to unacceptably low levels by the fact that there's this competitive auction to the bottom, with most of this stuff being paid for by insufficient or no resources by the media," concurs Graves.

"You know what? You get what you pay for."

The problem is exacerbated by what Gregg calls an "unholy alliance" with the media. Reporters have "an inherent bias in creating news out of what is methodologically not news." And pollsters have little interest in taming the media's penchant for hype because they won't get quoted repeatedly saying their data shows no statistically significant change.

"In fact, they do the exact opposite. They will give quotes, chapter and verse, and basically reverse and eat themselves the next week," says Gregg.

"You just say, 'Oh geez, the gender gap is gone' (one week) and then, 'Oops, sorry, it's back (the next week).' It's unconscionable."

Gregg, who rose to prominence as a Progressive Conservative party pollster, recalls the initial "giddy" feeling of being treated like a media celebrity. Now, he feels partly responsible for creating a bunch of "mini-Frankensteins."

"You've got this situation where the polling profession has sort of fallen in love with the sound of its own voice and says things, quite frankly, that the discipline can not support."

But it's not all the pollsters' fault. Turcotte says journalists used to be more knowledgeable about methodological limits and more cautious about reporting results. Now, they routinely misconstrue data and ignore margins of error.

The MOE, as it's known in the biz, is usually relegated to the tag line at the end of a poll story, advising that the survey is considered accurate within plus or minus so many percentage points, 19 times in 20. It's rarely explained what that really means.

Take a poll that suggests Tory support stands at 35 per cent, the Liberals at 30. If the MOE is, say, 2 percentage points, that means Tory support could be as high as 37 and the Liberals as low as 28, a nine-point gap. Or the Tories could be as low as 33 and the Liberals as high as 32, a one-point gap.

If support falls within those ranges the following week, it should be reported as no change -- but rarely is. A two or three point change is more likely to be touted as one party surging or the other collapsing.
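For readers who want the arithmetic behind that tag line: under the textbook assumption of a simple random sample -- the very assumption pollsters here say is fraying -- the margin of error at 95 per cent confidence, the "19 times in 20," is

    MOE = 1.96 \sqrt{ p(1 - p) / n }

where p is the reported share and n the sample size. For a party at 35 per cent in a typical 1,000-person media poll, that works out to 1.96 × √(0.35 × 0.65 / 1,000), or about three percentage points -- half again as large as the two-point figure used in the example above.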

Worse, the media often trumpet shifts in provinces or other small sub-samples of the population, like urban women or educated males. But with MOEs of as much as 10 percentage points, seemingly huge 20-point fluctuations are actually statistically meaningless.
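The sub-sample arithmetic follows from the same formula: the margin of error grows as the sample shrinks, in proportion to 1/√n. A regional or demographic slice of 100 respondents carved out of a 1,000-person poll carries, in the worst case (p = 0.5),

    MOE = 1.96 \sqrt{ 0.5 × 0.5 / 100 } ≈ 9.8 points

which is roughly the 10-point margin cited above. And because a week-to-week "shift" compares two such noisy estimates, the margin on the difference is larger still -- about √2 times the single-poll figure, or some 14 points.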

"I've seen pollsters comment one week, you know, 'The Tories are dead in Quebec' only to have this magical resurrection the week after and there's a pressure to sort of explain that and you come up with saying, 'You know, well, (Prime Minister Stephen Harper) made this statement or he wore this tie,"' says Mukerji.

"I think if you take a step back and look at the general trend, there hasn't actually been all that much that's changed, quite frankly, in the party standings."

Ironically, journalistic fascination with polls stems largely from a desire to access the same kind of data as political parties, to better understand their strategies and motivations. But a media poll -- a couple of questions put to a sample as small as 1,000 -- simply can't compare to the much larger party surveys, which allow pollsters to drill down into the regional and demographic sub-samples with much greater accuracy and to divine how best to move target voters.

Liberal party pollster Michael Marzolini's last poll surveyed 5,000 people and asked 150 questions. He says raw horse race numbers are virtually irrelevant in plotting strategy.
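The gap in precision is easy to quantify with the earlier formula. At n = 5,000, the worst-case margin is

    MOE = 1.96 \sqrt{ 0.25 / 5,000 } ≈ 1.4 points

and even a one-fifth regional slice of such a survey -- 1,000 respondents -- is as precise as an entire typical media poll.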

Politicians like to say the only poll that really counts is on election day. But Marzolini worries the plethora of shoddy media polls, an annoyance between elections, can be self-fulfilling during a campaign.

With little or no time between polls and the media fixated on the flood of horse race numbers, he says voters don't get a chance to reflect on platforms or leaders or their campaign messages. Hence, "the only movement in the polls is in fact motivated by the previous polls."

Polls have always had some influence on elections, helping to drive strategic voting. But Marzolini fears the sheer volume now is creating a situation in which the media -- and, by extension, voters -- "just want to get the score for the game; they don't want to watch the game."