Over at Science and The Media, the blog of my university class of the same name, a small debate is going on about balance in science reporting (obviously a hot-button topic because of climate change).
ScientistMags stated in reply: “I think it’s time to balance out the scales of critique by highlighting good science journalism. It would also be an opportunity to demonstrate critical thinking in practice to people who normally do not think about the credibility of what they come across.”
I think this is a really good point that should be taken up. However, it raises the hoary old MOP vs MOE debate. Measures of Performance (MOP) in communication are relatively easy: we can objectively count the number of good media pieces (basically correct and informative) against the number of bad ones (sensationalised, wrong, misquoted, wrongly biased).
Measures of Effectiveness (MOE) are harder. I suspect the risk of mis-educating the public through bad science stories is greater than the risk of leaving them uneducated because they never read the good ones; that is to say, bad stories are probably more effective than good ones.
The highlighting of bad science sells books by the truckload, so we know that people are reading that. So, how do we measure the positive effect of good science (and good science journalism) so that we can get out there and reinforce this work and, most importantly, measure its effectiveness?