
When I was young I wanted to be a SCIENTIST. I wanted to pore over the literature; I wanted to argue about method. I especially wanted to spend hours traipsing about the wilderness in search of tiny little things that may or may not help my research, and I then particularly wanted to spend back-breaking hours in a laboratory dissolving things in solutions and then blasting them with heat and lasers and collecting gases and measuring them to within inches of their lives.

Naturally, I had visions of spending hours in front of a computer calculating the statistics and determining, within certain bounds, the exact results, all culminating in the mystical dream of writing up the research. Of course I would then have it go back and forth between peers (each with their own particular flavour of review) and finally, after six good productive months of politics and writing, those results would be published in a journal for most of the world not to read.

Of course, none of the above is true. That was not my childhood vision. I mean, I didn’t want to be a fireman or anything like that, sure, but what I’ve just described? No thanks!

No, I had dinosaurs! I grew up in the time when it was discovered that it was an asteroid that hit Earth and killed them! I was a child of the time of ‘Transformers’, and I was especially proud of a T-Rex transformer that I had that none of the other kids did. In hindsight, that just made me a spoilt brat.

I watched the occasional documentary, mostly because my grandmother and mother encouraged it. I remember seeing David Attenborough crack open a rock to reveal a fossil. I guess it was a trilobite, can’t remember, but wow, I was amazed!

I also had politics. My parents always had the ‘adult news’ on at 7pm – the national news, and the current affairs after. If I wanted to spend that time with my mum and dad, I was watching that. I didn’t have half a clue what these people were on about, but clearly it was important. My subsequent high school education ended up being all about science and mathematics, hardly surprising for a son of doctors. Although my best final marks were in English – pretty uncommon for a science and maths nerd.

Fast-forward about 10 years and I was studying science, specifically geology, at university. If you’re wondering about the time gap, I studied Law for about 3 years at one point, but it WILL NOT appear on my resume since I gracefully withdrew after, shall we say, attendance-related performance issues. I attended enough though to learn a bit about argument, or as they more commonly say, enough to be dangerous. I then worked in real estate for a year, which one might justifiably say, combined with my half-baked and inadequate legal training, made me positively lethal.

So anyway, after learning the art of snakes and snake oil, I went into learning about rocks and, well, other rocks. This 3-year journey of rocks and their interaction with other rocks and how they all relate to each other was so fascinating that I decided to do an honours project in geology for an extra year, in which I determined the age of some rocks from a coastal area in my home state of Victoria (about 50 million years old, in case you’re wondering – I wouldn’t want to leave you hanging).

These rocks were basalts as it happens (like that which erupts from the volcanoes of Hawaii) but what is really important is that this got me out there, cracking open rocks and finding samples. It also involved considerable literature review and statistics and report writing. Remember the start of this essay?

One thing I did do while collecting little bits of rock was stand on a rocky shore platform looking for samples, all alone, in complete disregard of the risks. I am not exaggerating when I say that I could have died that day. The water from that freak wave only reached my waist, but any further and I would have found myself in the water, possibly kilometres from land. It does happen. That’s why universities don’t let people do what I did alone (actually they didn’t then either, I just kinda, well, you know…)

Why am I telling you this? Well, it’s because there, there on that platform being hit with a freak wave, I did consider why I was doing it. At the time, I guess I just needed that sample. Badly – my degree depended on it! Plus, what a beautiful place to die! Only kidding.

Nine or so years later, I am asking the same questions, but this time, there are no freak waves. Since then, I’ve been staring at rocks and reports and trying to decide where the next big gold or nickel deposits might be found. I’ve worked for a couple of the biggest mining companies in the world, and I’ve also slept in a swag in the middle of a frigid desert night between stints supervising dusty, loud drill rigs.

Through all of this, I have continued to have an almost child-like fascination with science and nature. I have read all the famous authors – Gould, Dawkins, Sagan, you name it. I watch the docos. I watch them again. I sometimes write stuff about things on my blog. I follow my little curiosities down the rabbit hole that might begin with a name, lead to a wiki search, and end in several journal articles and a whole new ‘issue du jour’ for the week. I get involved in skeptical arguments and pursue the philosophical reasoning they entail. I pity those around me sometimes; I suspect I am quite, as they politely say, “intense”.

So, I could tell you something about evolution, I could tell you a little about quantum physics (which is only to say, for example, that I could tell you about Schrödinger’s Cat, and also explain that no cats are involved). I could describe the Monty Hall problem and why it demonstrates how flawed our thinking can be.

Why would you listen to me though? I’m a geologist, not a philosopher, or physicist, or even a biologist. In fact, I have never done a single formal course in biology, and that includes school level biology (I did physics and chemistry and geography, there’s only so much room). I did read A Brief History of Time by Stephen Hawking, when I was in high school, but I’m not about to say I’m some sort of cosmologist.

I would ask you to listen only because it is interesting; for no other reason. I would appeal to your sense of intrigue. I am suggesting to you that the real world is far more interesting than anything that you might have seen in the Da Vinci Code or, God forbid, stuff about the megalodon shark in Shark Week.

The real world has every mystery of the sort that Dan Brown wrote his fiction about. But the thing is, it’s not fiction. Even Brown’s fiction contains some real science and real history. And that is the point. It’s not just science, it is history and culture too. They all link together to make this great big wonderful story. And anyone can be part of that. Anyone can be that investigator. The scientist at the frontline, and even, with time, the one that the President calls when the aliens land.

For that though you’re going to need skills. Investigative skills, particularly of the sort that can be verified and tested. And this is where you will need some scientific training. Or at least, a good appreciation of rational thought and how something can be known, objectively. You need to have an appreciation of why scientific reasoning has led to the advances it has. After all, you trust planes not to fall out of the sky, right?

Either way, rejoice in your fascination for all things interesting. Science is not a specialized territory only inhabited by nerdy, bespectacled introverts who are always portrayed as nerdy, bespectacled introverts. In fact, most scientists are normal. Really, they are.

Most importantly, don’t worry if you don’t want to be a scientist. I was almost a lawyer, and there are plenty of other valid, interesting and important pursuits in life, and no, science is not the only ‘way of knowing’. All that said though, please learn about science and its methods. The modern world demands it.

 

Reposted from Medium: https://medium.com/architecting-a-life/369f86f61f79

It’s all very well to look at the history of science and the prominent revolutionary figures therein and conclude that these were ‘Masters of Science and Scientific Method’; but did they really follow set rules handed down to them? After all, it was precisely their radical ideas that caused change. A traditional scientist of the day might have dismissed those radical scientists as rogues, or worse, pseudoscientists (if they’d had that word back then!).  Clearly, following prescribed rules does not imply scientific progress.

We’ve met Karl Popper and falsificationism, which seemed to help us decide what is scientific or not, but then we found that this is problematic if the evidence partially supports the hypothesis. We’ve seen that enough of this can lead to a paradigm shift (Thomas Kuhn). Now we return to look at the activity of scientists, and we find that in reality, scientists who progress science seem to be just proposing whatever theory they like.

Paul Feyerabend was an Austrian-born philosopher of science who, true to his theories of science, lived on four different continents and in at least six different countries. He rejected the notion of a universal method in science, instead advancing what is termed ‘epistemological anarchism’, which is to say, there is no absolutely fixed way of knowing things and that ‘anything goes’ would better describe scientific method in ‘revolutionary science’. Furthermore, he suggested that science cannot claim the role of ultimate arbiter of truth.

Controversially, Feyerabend looked at heroes of science, such as Galileo, and said that instead of being sticklers for persistent and careful methods who finally got their day in the Sun, these guys were really just great persuaders. So forceful was their campaign that their theories won the day. Crucial to this is that their theories were, at the time of their proposing, either not fully supported by the facts, or the technology did not exist to fully test them (a number of Copernican-era predictions went untested for centuries, for example). This implies that their theories, at the time, were not necessarily scientific, meaning they shared the stage with competing theories of a more mystical nature (for example, astrology or religious doctrine). The success of the ‘scientific’ theories has led to science becoming the dominant way of knowing about the world, and that in turn has led to science oppressing other epistemologies.

All this leads to a rather dismal postmodernist view of science – that there really is no way to decide between science or magic or religion when it comes to understanding the world, that they are all ‘relative’. This simply does not bear scrutiny when we look at the advances science has made. Feyerabend is perhaps best understood as a product of huge social change that resulted in postmodernism becoming a dominant philosophy in the post-war era. His work, however, shed light on that suspicion we always had, that science is not strictly rule-bound, that advances are made through radical activities. This much is probably true; even if we discard the notion that religion could be on equal footing to science when it comes to understanding the world.

This article first appeared in print in my column in Woroni, the student newspaper of The Australian National University, 15 August 2013.

“New opinions are always suspected, and usually opposed, without any other reason but because they are not already common.”

-John Locke, 1690

Revolutions are usually considered bloody affairs. In science, this is rarely the case, as those white lab coats do stain easily. It would seem crazy to think of science as having a revolutionary history, a sort of dialectic that puts one theory in the red corner and another in the blue. Science is normally thought to involve thousands of tedious hours of hard work gradually adding to our understanding of the world.

Yet we have already encountered the ‘Black Swan’ – that piece of evidence that refutes a hypothesis, and we have struggled with how to proceed from there. Standard practice would be to reject the hypothesis, and continue further research, and in fact, that is how the majority of science is conducted.

What happens when someone develops a new theory though, one that fully explains all the existing observations, but in a novel way? Sometimes, this shakes the foundations of science – new science can only progress in light of this new idea.


“Plurality must never be posited without necessity”

-William of Ockham

People prefer simple solutions to problems. This is pretty obvious – it takes less unnecessary hard work. So, does this idea apply in science? Welcome down the rabbit hole of the history and philosophy of science.

Ockham’s Razor is a hugely influential heuristic (rule of thumb) in science. The Razor provides a way to decide between competing explanations that are equally supported by the evidence at hand. It suggests that the favoured explanation is that which posits fewer variables.

However, we all know that science is not often ‘simple’. How do we translate this position to science? This is where ‘falsifiability’ comes in, a concept made famous by Karl Popper in “The Logic of Scientific Discovery”. If you cannot falsify a hypothesis, then it is not scientific. The famous example is “all swans are white”. By inductive logic, no amount of white swans can prove this statement; instead it is supported until a single black swan is found.

Falsifiability alone does not, however, reduce possible explanations to one. Competing theories may all be falsifiable, thus scientific. Having established that swans can be black or white, we propose three competing ideas: 1. All swans are either black or white. 2. All swans are either black or white, but location determines which. 3. All swans are either black or white, but location and season determine which. Having taken samples from Australia and England to test these hypotheses, you would see that all statements are supported; however, number 2 has an extra variable, and number 3 has two.

Strict application of the Razor would suggest you accept hypothesis 1. However, the extremely strong correlation between location and swan colour suggests that 2 is also acceptable. In this case, you decide that the ‘simplest’ hypothesis is the weaker, because it has less explanatory power. That is, even though clearly swans are either black or white (hypothesis 1), the black swans are all in Australia, and so hypothesis 2 suggests an explanation determined by geography. What about hypothesis 3?  Well, seasons are dependent on location, and so the seasons variable is superfluous, regardless of how well supported it is by the results. We take hypothesis 2 and move on, because that result has thrown up new hypotheses (e.g. around species and evolution) – the very fodder of science.
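If it helps to see the bookkeeping behind that choice, here is a minimal sketch of the comparison in Python. The swan observations are entirely made up for illustration; the point is simply to check whether adding a season variable improves on location alone.

```python
from collections import Counter

# Toy illustration of the swan example (invented observations, purely for the sake of argument):
# does adding 'season' to 'location' buy any extra ability to explain swan colour?
observations = [
    # (location, season, colour)
    ("Australia", "summer", "black"),
    ("Australia", "winter", "black"),
    ("Australia", "summer", "black"),
    ("England",   "summer", "white"),
    ("England",   "winter", "white"),
    ("England",   "winter", "white"),
]

def explained(data, variables):
    """Fraction of observations correctly predicted by the rule that maps each
    combination of the chosen variables to its most common colour."""
    groups = {}
    for location, season, colour in data:
        key = tuple({"location": location, "season": season}[v] for v in variables)
        groups.setdefault(key, []).append(colour)
    correct = sum(Counter(colours).most_common(1)[0][1] for colours in groups.values())
    return correct / len(data)

print(explained(observations, ["location"]))            # 1.0 - location alone explains everything
print(explained(observations, ["location", "season"]))  # 1.0 - season adds nothing, so the Razor drops it
```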

Thus science aims to explain, rather than simplify. Ockham’s Razor is really about how to prefer an explanation, rather than about the most simplistic explanation. Sometimes the best explanation is very complicated; the point is that it is no more complicated than it needs to be to do the explaining. Many ‘conspiracy theories’ fall foul of the Razor for this reason: they introduce extra variables without improving the explanatory power – that is, the non-conspiracy hypothesis can explain all the evidence.

Things get interesting when a contradictory result is found by a new experiment. Does it really falsify the hypothesis, or should we modify the hypothesis? Should hypotheses be ‘backward modified’ like this to explain new data? Doesn’t this contradict everything I’ve just said? How the hell does science really work? This will be a tale for another day, where we meet people like Thomas Kuhn and Imre Lakatos, and encounter the anarchist, Paul Feyerabend.

This article first appeared in print in my column in Woroni, the student newspaper of The Australian National University, No. 8, Vol 65, 23 July 2013.

“What do we want? Scientific Certainty! When do we want it? Within a certain timeframe!”

The public, the media and especially politicians like to make a big thing about scientific uncertainty. For scientists, it’s just a fact of life. So what is this ‘uncertainty’ and how does this affect our lives?

We scientists perform research so that we can understand the world around us. To do so, we use various scientific and statistical techniques, and especially where the latter are concerned, these result in ‘measures of confidence’ in the data (and thus in the conclusions drawn from it). It means that we present data with ‘error bars’, which are designed to show a range of values within which the ‘reality’ may lie. These error bars represent upper and lower limits that are determined on the basis of our confidence in the results. This is largely a statistical calculation, and it results in mind-bending statements such as “plus or minus 6% with 95% confidence”. What does this all mean?

Three concepts: Confidence, Error and Likelihood.

Imagine this scenario: you decide to determine whether the morning light is a result of the rising of the Sun in the morning (hear me out, this is going to be scientific!).

You’ve noticed that it seems to get quite light at around about the same time as when the Sun rises, but you’re not sure that it’s actually related to the Sun rising (stay with me!). So you hypothesise that the morning light is due to the Sun rising. To test this, you take a series of measurements over numerous days – the amount of light, the time of day, and the position of the Sun with respect to the horizon.

Your data looks a bit like a curve when you plot it – that is to say, there is no definite point at which dark becomes light (anyone who’s been up before it’s light will know this, but for the benefit of an undergraduate audience…).

So you do the statistics on it (yes, there is a point to paying attention to those stats classes!). This shows that there is a correlation between the position of the Sun and the amount of light (durrr, I know…), but wait! There is variation in the data. Not every day is the same! How could this be? Well, it could be that your instrument is near some artificial light sources, it could be that the very light of God is shining upon your scientific research (hey, appealing to all audiences here). How do you decide?

To the rescue – the null hypothesis!

For this you decide to generate a completely random version of the sunlight data (even your phone could do that these days). And then you compare, statistically, the random set to the experimental set. Sure enough, it tells you that only a small percentage of the data could be explained by the random data. The rest could be considered to be explainable by the hypothesis (that the amount of light is a result of the position of the Sun).
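A rough sketch of that null-hypothesis comparison, with invented numbers standing in for the real measurements: shuffle the sun-position data so any genuine relationship is destroyed, and see what correlations chance alone can produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented morning measurements: sun elevation (degrees) and light level (arbitrary units)
sun_elevation = np.linspace(-10, 30, 200)                  # the suspected driver
light_level = 5 * sun_elevation + rng.normal(0, 20, 200)   # light tracks the sun, plus noise

# Correlation for the real pairing
observed_r = np.corrcoef(sun_elevation, light_level)[0, 1]

# Null hypothesis: no relationship. Shuffle the sun data many times and
# record the correlations that arise purely by chance.
null_r = np.array([
    np.corrcoef(rng.permutation(sun_elevation), light_level)[0, 1]
    for _ in range(1000)
])

print(f"observed correlation: {observed_r:.2f}")                   # close to 1
print(f"largest chance correlation: {np.abs(null_r).max():.2f}")   # much smaller
```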

Now, just say you decide you want to know what 95% of the data is saying. It is telling you that the light patterns match to the Sun position patterns to within, say, plus or minus 2 minutes every day. That is to say, the middle 95% of the light data matches within the same times plus or minus 2 minutes every day. What have you learnt? Well, you’ve probably confirmed that the position of the Sun is the dominant factor in the amount of light in a given place at a particular time of day (yes, yes, assuming you are outside, etc). That is, your 95% Confidence Interval.
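To make the “plus or minus 2 minutes, 95% of the time” idea concrete, here is a tiny sketch using invented daily timing differences; taking the middle 95% of the data is just a percentile calculation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: minutes between 'it gets light' and sunrise, over a year of mornings
differences = rng.normal(loc=0.0, scale=1.0, size=365)

# The middle 95% of the data: chop 2.5% off each tail
low, high = np.percentile(differences, [2.5, 97.5])
print(f"on 95% of days, the light arrives within {low:+.1f} to {high:+.1f} minutes of sunrise")
```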

But why all the scary stats and numbers? Why should we be only 95% confident of this match, plus or minus 2 minutes? Well, because we have measured things in the real world, with human-made devices and their associated problems; nothing is infallible. But also because there might actually be other factors at work – street lights, machine error, etc. But if we take that null hypothesis test we did before, we’ll see a pattern. In the above example, had we taken the middle 99% of data, we may have had a result that was plus or minus 30 minutes in the time data. That’s starting to sound a bit dodgy. Had we taken the middle 68% of the data, we may have been within plus or minus a few seconds, but that would have left nearly a third of the data unexplained. What’s going on here?

Well, fortunately, these numbers I’ve been picking relate to ‘standard deviations’ (SDs), a highly statistical term that essentially means the amount to which the data show ‘weirdness’. A small SD means the data is pretty tight – it’s all showing the one thing. One full SD covers around 68% of the data, which we’ve agreed is a pretty poor test of the data. Two SDs, however, covers about 95% of the data, “almost all” in most people’s parlance. Three SDs puts you above 99%, which is ridiculously definite!
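Those percentages fall straight out of the normal distribution; assuming the data is normally distributed, a quick check looks like this.

```python
from scipy.stats import norm

# Fraction of a normal distribution lying within k standard deviations of the mean
for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} SD: {coverage:.1%}")   # ~68.3%, ~95.4%, ~99.7%
```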

Imagine a diagram of confidence versus error; the Y-axis shows error, measured as a percentage deviation (that is, how much it differs from the average), while the X-axis shows the confidence level, measured in those standard deviations. Remember, we choose our confidence level, then see what the error level is. Choose your confidence interval, see where your error margins plot, and you will get an idea of how strong your result is: the likelihood that you have made an observation of reality, that your science has revealed a ‘truth’ about the world around us.
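If you want to draw that diagram for yourself, a rough sketch (reusing the invented timing data from above, not any real measurements) might look like this.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

rng = np.random.default_rng(1)
differences = rng.normal(0.0, 1.0, 365)   # the invented timing data again

sds = np.array([1, 2, 3])
coverage = norm.cdf(sds) - norm.cdf(-sds)   # ~68%, ~95%, ~99.7% of the data

# Error margin = half-width of the interval that holds that fraction of the data
half_widths = [np.percentile(np.abs(differences - differences.mean()), c * 100) for c in coverage]

plt.plot(sds, half_widths, marker="o")
plt.xlabel("Confidence level (standard deviations)")
plt.ylabel("Error margin (minutes, plus or minus)")
plt.title("Higher confidence means wider error bars")
plt.show()
```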

So, those studies that have low error margins at high levels of confidence, those are the ones we can be pretty darn certain are likely to represent the real world. The ‘Nobel Committee’ area of certainty represents experiments that start to demonstrate ‘theory’ – that summit of science where things are considered to be the closest thing that science has to ‘fact’.

Examples of things that fall into that ‘Nobel’ area: gravity being responsible for the apple falling from the tree; the Sun rising in the east and causing ‘daytime’; human influence on climate causing global warming. Yes, I said it. Ask the climate scientists – this is where the data lies.

We’re never certain, we’re just certain within certain error bounds, at a confidence level of X.

[Figure: error margin versus confidence level, with the ‘Nobel Committee’ area of certainty at low error and high confidence.]

 

This article published at Woroni, the student newspaper of The Australian National University: http://www.woroni.com.au/features/scientific-uncertainty-a-certain-certainty/

A rough version of a talk I’m doing at uni. Thought I’d try out recording it. Thoughts?

The journey continues. After reading “What Is This Thing Called Science?” by A. F. Chalmers, I got a rush of blood to the head and decided to take the plunge. It is fascinating stuff, and what is particularly interesting to me is that, as a geologist, I was born just after probably the most significant revolution in geology of all time – the plate tectonics revolution. It is an amazing story of the progress of science, one that should be told and analysed more (and if I have the time, I will!)

So now, the history and philosophy of science is firmly on my agenda. Here are the recent additions:

Karl Popper, The Logic of Scientific Discovery


Thomas Kuhn, The Structure of Scientific Revolutions


Paul Feyerabend, Against Method

The astute reader will note the lack of Lakatos here. My mistake, let me get through these three first…

Also, this may take me a while – lacking a formal education in philosophy, I find some of these folks can be quite abstruse… but you get there in the end…

In the coming month I will be producing a short film profile of Gary Cass, a scientific researcher in the soil science/agriculture section of the Faculty of Natural and Agricultural Sciences at the University of Western Australia where I am studying a Masters in Science Communication.

Red Wine Dress, Micro'be. Photo Ray Scott, courtesy of Bioalloy.org

He is famous for the Red Wine Dress. He used to work in a vineyard, and he noticed a thin film of slime that developed on red wine when Acetobacter infected it and turned it to vinegar (a wine-maker’s worst nightmare). Being an artistic person, he wondered whether it would be useful as a fabric. The film was in fact made of threads of nanofibre-scale cellulose, the ‘poo’ of the bacteria. So he got together with an artist and developed the world’s first “Red Wine Dress”. As creative as that was, what he’s realised is that the same cellulose fabric is potentially useful in other applications. He’s now involved in further research into these materials.

The great thing is that all you need is wine, sugar and the bacteria to produce it. It can even be used to produce biofuels. In other words, we could have a multi-use technology – wine, fabric and fuel all from the one crop. It’s far more land efficient than sugar cane, for instance. The spooky part is what a colleague of his is doing in the States – he’s taken genes from the Acetobacter and put them in cyanobacteria, so now these little bugs photosynthesise to produce the same cellulose. All they need is water, sunshine and carbon dioxide!

I spoke to him yesterday and he is passionate about creativity in science. One of the things he does is teach at a school here, Shenton College. It’s a program he developed where he gets the kids (year 11s) to learn earth history, biology and genetics using artistic methods. So, for instance, one kid coded a musical score from his basic DNA sequence. Another group of girls put the process of abiogenesis to dance! The reaction has been very positive and he’s now getting international attention for his approach. He thinks that creativity is an essential part of scientific progress (that really shouldn’t come as a shock to anyone, but it does challenge traditional ideas) and that for too long science education has stifled it. Art is a natural medium to reintroduce it, and the strong boundary between art and science has been unnecessarily created. He struggles somewhat with the question of whether he’s an artist or a scientist! He did agree, however, that it really depends on the work he’s doing – when testing hypotheses, he’s a scientist; when developing creative ideas, he’s more of an artist.

My film will be a profile of him with a focus on the Shenton College program, with some background about the Red Wine Dress.

A few links about him:

http://www.news.uwa.edu.au/business-briefing/grow-your-own-dress-uwa

and here’s a little film about an exhibition with him: http://www.youtube.com/watch?v=f-F2RD1KZT4

his website: http://bioalloy.org/o/ and particularly the dresses: http://bioalloy.org/o/projects/micro-be.html and the evolution pages: http://bioalloy.org/o/projects/bioalloyevolution.html

I am interested in the information that is lost as scientists proceed from experiment to publication. The real factors that slip through the cracks of expediency. What is more important in the communication of research, the method, or the factoid results that come from it? Are we too trusting in the scientific method? Has peer-review become a substitute for a wider interrogation of method?

These are just a few questions going through my mind as I read “Simplification in Scientific Work: An Example from Neuroscience Research” – a 1983 article by the late Susan Leigh Star. I was particularly struck by an early observation in the article that “published scientific conclusions tend to present results as faits accomplis, without mention of production of decision-making processes.” I am not sure that this is so true today, but I am intrigued by the possibility that it is exactly that loss of information (as research is presented with a higher degree of ‘granularity’) that opens a door for skepticism in the wider community. When a large body of research by multiple scientific schools tends to agree on a matter, there is sometimes an impression given that they are all doing exactly the same experiments. Whilst the broad methods are the same, of course expedient decisions are made and this causes subtle differences. These are not always thoroughly explained, even if they are justified. I can’t help but think that something in this is relevant to the skeptical program in climate change. Is this what lets in the calls of “conspiracy”?

More to read, more to do. I have a few other things on my plate, but this is an intriguing line of research.

When you think of a scientist, you probably imagine a person dressed in a white lab coat, wearing thick glasses and adorned with white, straggly hair. Perhaps you think of Albert Einstein. The reality is quite different; scientists, overall, do not conform to this stereotype. They do not have a particular uniform, and they do not wear a badge. What makes someone a scientist?

The answer is in the way they think. They employ “the scientific method”. But what is the scientific method? The concept really boils down to a way of formulating ideas and testing them; a way of explaining the world through systematic observation and hypothesis testing. In short, a way of telling reality from fantasy.

One of the great frustrations in science is getting good data. Collecting it yourself can be a boring, long-winded and seemingly pointless exercise, especially if you are collecting data on multiple variables when you know you’ll only use a few (a common thing in geology). Getting legacy data from others can be even harder. Incomplete data sets, different files, wrong formats, wrong headings, etc… These are all issues we face. However, having complete data is golden – you don’t want your work usurped by another on the basis that they had more complete data and so could see the real picture.

So it seems in the climate change debate, we now see a real problem emerge. One that actually does cause a few problems for the climatologists who have provided the evidence for “AGW”. New Scientist recently published a piece correctly (in my opinion) highlighting this as a significant concern, but one with some seemingly intractable barriers to resolution. Large chunks of important data are sitting with commercial rights within the vaults of institutes around the world. Governments would pay penalties for their general release. This is not good for the science and only fuels speculations from the deniers. It is indeed a pity that the deniers can’t get their hands on it because then they could do the same tests and come to conclusions that add to the debate. However, all this should not be mistaken as a conspiracy – it is normal in many scientific fields to have data sets locked up under commercial arrangements (or government legislation). Science has worked around this for years and continues to do so. Climate science itself has worked successfully under this regime too. Perhaps this is just another storm in a teacup.

We’ve had government bailouts for banks; perhaps it’s time for governments to put some money and legislation behind freeing up these data sets completely. Pay off the commercial interests, legislate for data freedom. It would be a nice shot in the arm for a needlessly troubled science. I suspect only the deniers have anything to fear.

There seems to be a common thread amongst sceptics out there that science is done via something that looks a little like the Council of Nicaea. That is to say, that a committee of scientists decides what is “doctrine” before instructing publishers what to print. There is confusion between the way the legal system (or political system) works and the way science works.

Let’s have a look at some typical tasks in a scientist’s professional life:

1. Data collection. This can be the longest and hardest (and most boring) phase. This is where hours are spent over test-tubes, or, in my case, hours in the hot sun staring closely at rocks. Whilst you may be thinking about the end-game in this phase, the task is usually so routine that bias hardly exists (if it does, it is because the method itself is biased, or you’re just sloppy). Actually, there will be mistakes, but random mistakes tend to cancel out in the final analysis.

2. Hypothesis generation. I put this after data collection to bait some people, but actually, it has to be said that hypotheses are generated throughout the scientific process. The important thing is that you are only testing the original hypothesis whilst conducting an experiment designed to test that hypothesis. Other ones must wait for other experiments. There is no harm in “hypothesis-driven research” – this is what science is. However, this is different to biased research driven towards a pre-determined conclusion. Note the difference – a hypothesis is actually tested, a pre-determined conclusion is circular.

3. Data analysis. Here comes the statistics. So you have the data, and you see patterns. Are they significant? This is a technical, statistical question that determines whether you can use your data (gathered in 1. above) to test the hypothesis (2. above). If there is no significant result, then there is no support for the hypothesis from your results. THIS DOES NOT MEAN IT IS DISPROVEN. It is more like an absence of evidence, which, as the saying goes, is not evidence of absence. If the results show a statistically significant result, then you can compare it with your hypothesis. Now a hypothesis can be disproved – proposing that the sky looks blue and finding it to look green would be an example. Unfortunately the opposite does not apply. If your result concurs with your hypothesis, it lends support to it, but does not prove it. It can never prove it, due to a quirk of inductive logic: no matter how many positive examples you show in support of a proposition, since the set of possible examples is infinite, you cannot rule out a counter-example emerging next. This is quite different from the deductive logic of mathematics, where 2 + 2 = 4 as a result of the system itself.
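As a toy illustration of that step (made-up measurements and the conventional 0.05 threshold, just to show the shape of the reasoning, not anyone’s actual experiment):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

# Made-up measurements: say, grain sizes (mm) from two rock units we hypothesise are different
unit_a = rng.normal(2.0, 0.5, 30)
unit_b = rng.normal(2.4, 0.5, 30)

stat, p_value = ttest_ind(unit_a, unit_b)
if p_value < 0.05:
    # A significant result lends support to the hypothesis; it does not prove it
    print(f"p = {p_value:.3f}: difference unlikely to be chance; the hypothesis is supported, not proven")
else:
    # No significant result is an absence of evidence, not a disproof
    print(f"p = {p_value:.3f}: no significant difference; the hypothesis is unsupported by this data, not disproven")
```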

To make a long story short, the last juicy step is publication.

Now you run into trouble. You’ve done your experiment, and supported your brilliant earth-shattering hypothesis. Will anyone believe you?

To find out, you detail your method (and the back story – why you felt it worthy of research) and your results and a bit of discussion on what it all means. Then you send it for peer review. This is a blind (well semi-blind – sometimes people work out who the reviewers are) process where your reviewer doesn’t know who wrote the paper and is asked to appraise the science, comment, and put their opinion on whether it is fit to publish. Most papers fail this test on the first pass, and the majority never make it to publication. What tends to define success is that the paper details a properly conducted line of research taking into account previous work in a similar field. Failure in peer-review doesn’t mean that there is a conspiracy against you – it usually means your paper is either not relevant to the journal in question, or that you need to write up your science better. Without peer review, this statement cannot be made with any certainty about a paper.

Also, consider that how the media treats science is not the same as the science itself. Science is only balanced in its reporting insofar as it “objectively” reports the outcomes of research and the opinions of researchers. So 90% of scientists might agree with a broad-based position, but it only takes one from the 10% to balance a journalist’s report – giving a 50/50 impression. Note also the diversity of opinion that will lie within the 90% who agree – these people do not speak to a common mantra, they merely assent to certain generalisations.

So next time you see controversy about methods and “conspiracies” to promote one “side” of an argument over another, consider the above and consider that most scientists are too busy with the steps involved to also hold some sort of cabinet meeting on how to bend the entire scientific community. After all, that would be like herding cats.
