Naomi Oreskes is here in Australia promoting her new book, co-authored with Erik Conway, called “Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming”. I will definitely be reading it. Tonight I went to her public talk at the University of Western Australia. She is a very good speaker: clear and concise, conveying precisely what she means to say without confusing any of the issues. Impressive. Oreskes is a Professor of History and Science Studies at the University of California, San Diego.
Her thesis concerns scientific uncertainty and how it has been used by a small group of scientists to create doubt in people’s minds about big issues such as the dangers of tobacco smoke and the reality of climate change (or global warming, if you prefer the older, arguably more accurate term). It is an eye-opening study of recent history.
If you’ve read my blog before, you’d know that I have an interest in the role of uncertainty in science. I see it as especially critical to the communication of science, and so this talk was particularly interesting. Good scientists embrace uncertainty. So much so that they use sophisticated statistical techniques to quantify it. A good scientific study knows its limits.
Uncertainty, in the scientific sense, does not equate to doubt about a study’s ability to illuminate our understanding of the world. It does, however, appear to be very useful in making scientific findings hard for the general public to understand. In the public’s eye, scientific uncertainty may well be interpreted as ‘doubt’. This is a shame, because genuinely doubtful scientists will say that they are doubtful (doubtful here meaning that the implications of the results are dubious). Doubt is not what the error bars of science mean; error bars simply show how precise the findings are. If the error bars of two results overlap, the difference between them is very likely not ‘significant’, and so the scientist might have nothing definitive to say about it.
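To make the error-bar point concrete, here is a minimal sketch of my own (with invented measurements, not data from any real study) of computing approximate 95% confidence intervals for two sample means and checking whether they overlap:

```python
import math
import statistics

def ci95(sample):
    """Approximate 95% confidence interval for the mean,
    using the normal critical value 1.96."""
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(len(sample))
    return (mean - 1.96 * sem, mean + 1.96 * sem)

def intervals_overlap(a, b):
    """True if the two (low, high) intervals share any points."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical repeated measurements of the same quantity
# under two conditions.
group_a = [20.1, 19.8, 20.5, 20.3, 19.9, 20.2]
group_b = [20.3, 20.6, 20.1, 20.8, 20.4, 20.5]

ci_a, ci_b = ci95(group_a), ci95(group_b)
print("A:", ci_a)
print("B:", ci_b)
# Here the intervals overlap, so the difference between the two
# means is unlikely to be 'significant' on its own.
print("overlap:", intervals_overlap(ci_a, ci_b))
```

Overlapping intervals are a rough visual heuristic rather than a formal test; a proper comparison of the two means would use something like a t-test on the difference itself.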
This, however, has not been the case in climate science, as Oreskes makes plain. Climate science, unlike most fields of science, has been very definite indeed as to global warming. It is happening, and it is almost certainly contributed to (if not entirely caused by) humans. The level of agreement amongst scientists is extraordinary. Unfortunately, along the margins, the error bars and minor disagreements have been interpreted as doubt as to the general findings and implications. Oreskes’ contribution is to say that this might have its roots in the political ideology and personal motivations of some influential individuals, rather than actual doubt in scientific circles.
Oreskes has conducted a historical study, using the mainstays of historical technique: what she speaks of is the actual documentary record of the individuals concerned. In that sense, what she says should be uncontroversial. Her interpretations may remain contested, but some of the things said by the scientists she writes about have to be seen to be believed.
Whichever side of the mythical climate fence you sit on, the historical record remains. It does not paint a pretty picture of the deeper motivations of the anti-global-warming movement. It also carries some important warnings: we need to be careful with science and how it is used in the public domain. This is a lesson that applies to both ‘sides’. Equally.
Interestingly, unlike how these things usually go, there were no ‘skeptic’ questions asked. Is this because her work has revealed a particularly inconvenient truth? Is the history of science a domain where skeptics fear to tread?
There are a few dramatic climate-change-related videos going round at the moment, frequently going for the shock factor. Whilst shock is not always effective, I think the one below is very good, because it has a cognitive component, linking polar bears with your own behaviour. No longer are they stranded on a melting iceberg; instead, it delivers a nice little way of thinking about your carbon footprint:
Over at Science and The Media, the blog of my university class (of the same name), a small debate is going on about balance in science reporting (obviously hot button because of climate change).
ScientistMags stated in reply: “I think it’s time to balance out the scales of critique by highlighting good science journalism. It would also be an opportunity to demonstrate critical thinking in practice to people who normally do not think about the credibility of what they come across.”
I think this is a really good point that should be taken up. However, it brings up the hoary old MOP vs MOE debate. Measures of Performance in communication are relatively easy. We can objectively assess the number of good (basically correct and informative) media pieces versus the number of bad ones (sensationalised, wrong, misquoted, wrongly biased).
What are harder are Measures of Effectiveness. However, I suspect the risk of mis-educating the public through bad science is higher than the risk of no education at all from not reading good stories; that is to say, bad stories are probably more effective than good ones.
The highlighting of bad science sells books by the truckload, so we know that people are reading that. So how do we measure the positive effect of good science (and science journalism) so that we can get out there and reinforce this work and, most importantly, measure its effectiveness?
I am interested in the information that is lost as scientists proceed from experiment to publication. The real factors that slip through the cracks of expediency. What is more important in the communication of research, the method, or the factoid results that come from it? Are we too trusting in the scientific method? Has peer-review become a substitute for a wider interrogation of method?
These are just a few questions going through my mind as I read “Simplification in Scientific Work: An Example from Neuroscience Research”, a 1983 article by the late Susan Leigh Star. I was particularly struck by an early observation in the article that “published scientific conclusions tend to present results as faits accomplis, without mention of production of decision-making processes.” I am not sure that this is as true today, but I am intrigued by the possibility that it is exactly that loss of information (as research is presented with a higher degree of ‘granularity’) that opens a door for skepticism in the wider community. When a large body of research by multiple scientific schools tends to agree on a matter, there is sometimes an impression given that they are all doing exactly the same experiments. Whilst the broad methods are the same, expedient decisions are of course made, and these cause subtle differences. They are not always thoroughly explained, even when they are justified. I can’t help but think that something in this is relevant to the skeptical program in climate change. Is this what lets in the calls of “conspiracy”?
More to read, more to do. I have a few other things on my plate, but this is an intriguing line of research.
We have a lawn at home, and we only water it on our water-restricted watering days (except very occasionally by hand if it’s a 40+ degree day). It keeps the garden cooler, and we assumed it was also a good carbon sink. Well, it is a good carbon sink, but it seems that all the grooming efforts eliminate that benefit, or worse:
I know that ours is only a garden lawn, and this study focussed on public parks, but some of the results must be transferable – we mow our lawn, and only a month or so ago we applied some fertiliser for the summer. We use an electric mower, so that probably helps, as it is only drawing baseload from the power station (which is of the super-green, brown-coal-fired type!) – arguably better than an inefficient 2-stroke mower. But it calls into question the “greenness” of the backyard lawn, which does, after all, take up a considerable portion of the garden.
So, what do you do?
You could replace it with synthetic turf, but that has all sorts of issues relating to its manufacture. Or you could replace it with gravel and the odd plant, but that would lose the cooling effect and have even less carbon storage capacity. Perhaps it’s best to keep the lawn, not fertilise, and mow less. It’s a tricky area in a climate where everyone wants to do their bit.
As always with these issues, there is more than one aspect. Here we balance CO2 emissions against water security. Where we live, water is scarce and resources are stretched. Electricity generation produces tonnes of CO2, but it will be water that gets us first. An increasing population demands greater water supply. In an area at least 50% supplied by groundwater, much of which is “fossil water”, it is obvious that a limit will be reached. What is worse, increasing water demand forces government to install desalination plants, further increasing electricity demand. A vicious cycle is established. So, ultimately, water demand reduction could have a double benefit – reducing the need to enhance supply and reining in expanding electricity demands.
Where does this leave the lawn? Well, I guess it means replacing it with a garden bed full of drought-friendly plants that require little water. Either way, it reminds us that reducing water use is one of the best things you can do for the environment. Still, that lawn is a nice thing to have…
One of the great frustrations in science is getting good data. Collecting it yourself can be a boring, long-winded and seemingly pointless exercise, especially if you are collecting data on multiple variables when you know you’ll only use a few (a common thing in geology). Getting legacy data from others can be even harder. Incomplete data sets, different files, wrong formats, wrong headings, and so on… These are all issues we face. However, having complete data is golden – you don’t want your work usurped by another on the basis that they had more complete data and so could see the real picture.
So it seems in the climate change debate, we now see a real problem emerge. One that actually does cause a few problems for the climatologists who have provided the evidence for “AGW”. New Scientist recently published a piece correctly (in my opinion) highlighting this as a significant concern, but one with some seemingly intractable barriers to resolution. Large chunks of important data are sitting with commercial rights within the vaults of institutes around the world. Governments would pay penalties for their general release. This is not good for the science and only fuels speculations from the deniers. It is indeed a pity that the deniers can’t get their hands on it because then they could do the same tests and come to conclusions that add to the debate. However, all this should not be mistaken as a conspiracy – it is normal in many scientific fields to have data sets locked up under commercial arrangements (or government legislation). Science has worked around this for years and continues to do so. Climate science itself has worked successfully under this regime too. Perhaps this is just another storm in a teacup.
We’ve had government bailouts for banks; perhaps it’s time for governments to put some money and legislation behind freeing up these data sets completely. Pay off the commercial interests; legislate for data freedom. It would be a nice shot in the arm for a needlessly troubled science. I suspect only the deniers have anything to fear.
This news piece came out today. The president of the NSW Farmers Association is quoted:
“We’re very concerned that the ability for people to put the other side of the argument in the science debate has been totally gagged over the last year or two and the only way you can bring that out and have it out in to the open without the vilification that’s occurred in the past is to have a royal commission”
Now, a piece-by-piece dissection of what is obviously a politically motivated statement is probably not necessary, BUT: how many on this “other side” are being vilified? Who is actually doing the gagging? Is it really happening at all?
Perhaps the rational climate scientists would be interested in all that too; it might shut a few people up. Here again we have an example of someone not understanding that science is not simply a debate that someone wins (presumably on points, which to some extent would have the deniers ahead at the moment). The notion that a Royal Commission could “answer” the question is a crazy one. It isn’t answered now, and it won’t ever be answered in time to satisfy these perpetual critics.
Forcing scientists in front of a trained lawyer will not reveal anything more than having other scientists read and criticise their work, which happens every day already. That applies to both “sides” of this “debate”. Frankly, some better-informed journalists and feature writers might help – people who can present a balanced picture without creating the impression of “balance”.
There seems to be a common thread amongst sceptics out there that science is done via something that looks a little like the Council of Nicaea. That is to say, that a committee of scientists decides what is “doctrine” before instructing publishers what to print. There is confusion between the ways the legal system (or political system) works and how science works.
Let’s have a look at some typical tasks in a scientist’s professional life:
1. Data collection. This can be the longest and hardest (and most boring) phase. This is where hours are spent over test-tubes, or, in my case, hours in the hot sun staring closely at rocks. Whilst you may be thinking about the end-game in this phase, the task is usually so routine that bias hardly exists (if it does, it is because the method itself is biased, or you’re just sloppy). There will be mistakes, but random errors tend to cancel out in the final analysis.
2. Hypothesis generation. I put this after data collection to bait some people, but actually, it has to be said that hypotheses are generated throughout the scientific process. The important thing is that you are only testing the original hypothesis whilst conducting an experiment designed to test that hypothesis. Other ones must wait for other experiments. There is no harm in “hypothesis-driven research” – this is what science is. However, this is different to biased research driven towards a pre-determined conclusion. Note the difference – a hypothesis is actually tested, a pre-determined conclusion is circular.
3. Data analysis. Here come the statistics. You have the data, and you see patterns. Are they significant? This is a technical, statistical question that determines whether you can use your data (gathered in 1. above) to test the hypothesis (2. above). If there is no significant result, then there is no support for the hypothesis from your results. THIS DOES NOT MEAN IT IS DISPROVEN. It is more like an absence of evidence, which, as the saying goes, is not evidence of absence. If the results do show a statistically significant pattern, then you can compare it with your hypothesis. Now a hypothesis can be disproved – proposing that the sky looks blue and finding it to look green would be an example. Unfortunately, the opposite does not apply. If your result concurs with your hypothesis, it lends support to it, but does not prove it. It can never prove it, thanks to a quirk of inductive logic: no matter how many positive examples you show in support of a proposition, the set of possible examples is infinite, so you cannot rule out a counter-example emerging next. This is quite different from the deductive logic of mathematics, where 2 + 2 = 4 as a result of the system itself.
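As a concrete illustration of step 3 (entirely my own sketch, with invented numbers), here is one common way of asking “is this pattern significant?”: a permutation test on the difference between two sample means. If shuffling the pooled data rarely reproduces a difference as large as the one observed, the pattern is unlikely to be chance alone.

```python
import random
import statistics

def permutation_p_value(sample_a, sample_b, trials=10_000, seed=42):
    """Two-sided permutation test for a difference in means.

    Shuffle the pooled data many times; the p-value is the fraction
    of shuffles whose mean difference is at least as extreme as the
    observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(sample_a) - statistics.mean(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n = len(sample_a)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if diff >= observed:
            extreme += 1
    return extreme / trials

# Hypothetical measurements from two conditions.
control = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]
treated = [5.9, 6.1, 5.7, 6.0, 5.8, 6.2]

p = permutation_p_value(control, treated)
print(f"p = {p:.4f}")  # a small p means the observed difference
                       # rarely arises from random shuffling
```

A small p-value licenses the claim “the pattern is significant”, nothing more; as the paragraph above says, a supportive result lends weight to the hypothesis without ever proving it.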
To make a long story short, the last juicy step is publication.
Now you run into trouble. You’ve done your experiment, and supported your brilliant earth-shattering hypothesis. Will anyone believe you?
To find out, you detail your method (and the back story – why you felt it worthy of research), your results, and a bit of discussion on what it all means. Then you send it for peer review. This is a blind (well, semi-blind – identities are sometimes worked out on both sides) process in which a reviewer who does not know who wrote the paper is asked to appraise the science, comment, and give an opinion on whether it is fit to publish. Most papers fail this test on the first pass, and many never make it to publication. What tends to define success is that the paper details a properly conducted line of research taking into account previous work in the field. Failure in peer review doesn’t mean there is a conspiracy against you – it usually means your paper is either not relevant to the journal in question, or that you need to write up your science better. Without peer review, none of this can be said about a paper with any certainty.
Also, consider that how the media treats science is not the same as the science itself. Science is only balanced in its reporting in so far as it “objectively” reports the outcomes of research and the opinions of researchers. So 90% of scientists might agree with a broad-based position, but it only takes one from the 10% to balance a journalist’s report – giving a 50/50 impression. Note also the diversity of opinion within the 90% who agree – these people do not speak to a common mantra; they merely assent to certain generalisations.
So next time you see controversy about methods and “conspiracies” to promote one “side” of an argument over another, consider the above and consider that most scientists are too busy with the steps involved to also hold some sort of cabinet meeting on how to bend the entire scientific community. After all, that would be like herding cats.