More on why factcheck ratings comparisons aren't useful RT @monkeycageblog Is Politifact Biased Against Republicans? bit.ly/13iqCME
— Brendan Nyhan (@BrendanNyhan) May 29, 2013
CMPA hinted (but, as usual, did not outright say) that these results arise (at least in part) from PolitiFact's liberal bias. U.S. News & World Report contributing editor Peter Roff took that idea and ran with it, writing:
As the first person to empirically demonstrate the liberal, pro-Democrat bias in the Washington press corps, [CMPA Director] Lichter's analysis is worth further study and comment. [my emphasis]
Politifact [sic] isn’t randomly sampling the statements of Republicans and Democrats. They’re just examining statements they consider particularly visible, influential, or controversial. The data are consistent with any number of interpretations and so we can’t say all that much about the truthfulness of political parties, about any biases of Politifact, etc.
While I agree that we don't know how biased fact checkers are (we've simply never measured their ratings against a reasonable baseline), I've always disagreed with John Sides' and Brendan Nyhan's view that the newsworthiness bias in what fact checkers cover makes the sample of fact-checked statements too skewed for comparisons to be useful.
Fact checkers cover statements that actually matter to people because they want people to read what they write. What matters to the public is the truthfulness of the statements politicians make about the issues we find important. A pure random sample of political statements would turn up plenty of innocuous claims about topics nobody cares much about. A systematic analysis could weight statements by the importance of the issue to the public (or by some other rubric), but looking at the statements fact checkers choose is a good first pass.
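To make the weighting idea concrete, here is a minimal sketch of importance-weighted sampling. Everything in it is invented for illustration: the statements, the importance weights, and the `weighted_sample` helper are hypothetical, not anyone's actual methodology.

```python
import random

# Hypothetical corpus: each statement tagged with an importance weight,
# e.g., the share of the public naming that issue as a top concern.
# These texts and numbers are made up for illustration.
statements = [
    {"text": "Claim about the economy",    "importance": 0.45},
    {"text": "Claim about health care",    "importance": 0.40},
    {"text": "Claim about foreign policy", "importance": 0.30},
    {"text": "Claim about a local parade", "importance": 0.05},
]

def weighted_sample(corpus, k, seed=None):
    """Draw k statements (with replacement), each selected with
    probability proportional to its importance weight."""
    rng = random.Random(seed)
    weights = [s["importance"] for s in corpus]
    return rng.choices(corpus, weights=weights, k=k)

# A sample of 100 statements: high-importance issues should dominate.
sample = weighted_sample(statements, k=100, seed=0)
```

The point of the sketch is only that weighting is cheap once you have an importance rubric; the hard part, as the post says, is deciding what that rubric should be.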
As I design the methods for SoundCheks, a fact-checking (well, fallacy-checking) research institute I dream of founding and making successful, I'll think a lot about statement sampling. I've also been thinking about how to measure fact checkers' bias relative to nonprofessionals and to obvious partisans.