Is the term ‘approaching significance’ cheating?

Those of you who know me well will know this is a topic I could argue about for hours and hours; however, in the interest of being a good scientist, I shall look at it from both sides. Overall, though, I feel that ‘approaching significance’ is a term that should not be used in scientific work.

According to Cumming (2010)*, research is considered significant if p < .05, and seen as ‘approaching significance’ if p falls between .05 and .10. However, he also pointed out that, where possible, exact p-values should be reported, as this gives readers the ability to judge the significance of a result for themselves. Other sources** agree with this and advise that p-values greater than .05 but less than .10 should still be discussed, as they are seen as approaching significance and an effect could be present.

I can completely understand this logic. I am a relatively logical person, and what is being suggested makes sense. Researchers spend a very long time planning and conducting research, and if, once the analysis of the data comes through, your significance level is something like p = .057, you would be very frustrated. Your results would be telling you that chance could account for what you have found, although a p-value between .05 and .10 would suggest that chance is only a small part of this result.
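
To make the exact-p-value point concrete, here is a minimal sketch in Python (the data are invented, so the printed numbers are purely illustrative) of reporting the precise value rather than just ‘significant’ or ‘not significant’:

```python
# A minimal sketch of reporting an exact p-value; the data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two hypothetical groups with a smallish true difference between them.
control = rng.normal(loc=100, scale=15, size=30)
treatment = rng.normal(loc=107, scale=15, size=30)

t, p = stats.ttest_ind(treatment, control)

# Report the exact value and let the reader judge it against any cut-off.
print(f"t = {t:.2f}, p = {p:.3f}")
if p < .05:
    print("significant at the .05 level")
elif p < .10:
    print("between .05 and .10 -- 'approaching significance'?")
else:
    print("not significant even at .10")
```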

Furthermore, there could be a number of factors affecting the significance level of your study, and maybe if one element were different then significance would be found. For example, your sample might not be truly representative of the population, and maybe if you had chosen a more representative sample then significance would have been found. But this idea then brings in the ethics of manipulating the data, and of adding participants to try and find an effect.

On the other hand, the term ‘approaching significance’ suggests that there is a chance that the results found might not be as due to chance as the alpha level suggests. This means that people may conclude that an effect might possibly be there, creating a Type I error, where an effect is thought to be present when it actually is not. This is a big enough issue when talking about results that are significant, without bringing approaching significance into it.
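
The Type I error worry is easy to demonstrate with a quick simulation (a sketch in Python; the sample sizes and number of simulated studies are arbitrary choices of mine, not from any source). When two groups are drawn from the same population there is no effect to find, so every ‘significant’ result is a false positive; roughly 5% of such studies slip under p < .05, but roughly 10% slip under p < .10, doubling the false-positive rate:

```python
# Simulating the Type I error rate under the null hypothesis: both groups
# come from the same population, so any 'effect' found is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 10_000, 30

false_pos_05 = false_pos_10 = 0
for _ in range(n_studies):
    a = rng.normal(size=n_per_group)  # no true difference between groups
    b = rng.normal(size=n_per_group)
    p = stats.ttest_ind(a, b).pvalue
    false_pos_05 += p < .05
    false_pos_10 += p < .10

print(f"p < .05 in {false_pos_05 / n_studies:.1%} of null studies")  # ~5%
print(f"p < .10 in {false_pos_10 / n_studies:.1%} of null studies")  # ~10%
```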

So, to conclude, I feel that yes, using the term ‘approaching significance’ is cheating, because the researcher is claiming an effect may be there if further investigation is undertaken, when the stats you have been given suggest otherwise!

* http://www.stat.auckland.ac.nz/~iase/publications/icots8/ICOTS8_8J4_CUMMING.pdf
** http://psych.hanover.edu/classes/psy220/resultdisc.htm

TA – Comments Week 8


Is there anything that can’t be measured by psychologists?

Why is the “file drawer problem” a problem?

The main differences between a case study and single case designs.

Also, I cannot find the direct link to my last comment; however, here is the blog it was commented on, followed by the comment itself.

Should statistics be written in layman’s terms?

I have always been against writing stats in layman’s terms until I read your blog. You have managed to completely persuade me, especially when it comes to companies that I now see as big evil dog-like creatures, digging holes in the ground to stop us knowing the full extent of their research.

However, I feel that a certain amount of jargon is necessary, because it makes it easier for academics to explain the same phenomena to one another. Furthermore, when writing in layman’s terms, where do we draw the line? What is considered acceptable, and what is too scientific for the general public?

Thanks 😀

Is it possible for a researcher to be truly objective when conducting their experiments?

Experimenter influence is something that has been brought up a lot as a criticism of psychology; however, we are not the only science affected by experimenter bias. In fact, I feel that we are one of the few sciences that actually control for it.

First of all, I feel it is necessary to address what experimenter bias actually is. It is the idea that the person conducting the study has preconceived ideas about its outcome, and this in turn influences the way the research is carried out and any errors that may occur (allpsych*).

As psychologists we are very conscious of the fact that we are working with very complex living humans, compared to some of the other sciences that do not rely on information from people. You are probably thinking, ‘what does the fact that we are working with humans have to do with experimenter bias?’ Well, this is where our rigid experimental controls come into play. When working with people, there are a lot of precautions taken so as not to openly or covertly influence the participants in our studies, such as the double-blind experiment.
Double-blind experiments are ones in which neither the participants nor the researcher know who is in the control condition and who is in the experimental group**. Sounds like a fantastic idea in theory, doesn’t it? All experiments should be conducted like this, so that there is no possible influence the experimenter can have on the participants. However, in practice this is a lot more difficult to achieve, because sometimes there are different procedures between the control and experimental groups. An example of this might be an imitation experiment, where children in the control group do not see the target action whereas those in the experimental group do; it is therefore impossible for the researcher not to know which group a child is in, based on the procedure alone. Another issue with double-blind experiments is the fact that there is not always a separate control group. For example, in a repeated measures study the same participants take part in every condition, and therefore experimenter influence may be carried across the conditions.

So, what does this tell us about researchers being objective? It shows that even though there are measures in place to prevent experimenter influence on participants, intended or not, there are times when complete objectivity is simply not possible to achieve, because researchers have preconceived ideas about the outcome of the experiment.

* http://allpsych.com/dictionary/e.html
** http://www.wisegeek.com/what-is-a-double-blind-test.htm

Should the media be allowed to interpret research findings?

The idea for this blog came to me when I was reading an article about the Daily Mail being completely useless (a regular read, I know). Basically, they have managed to set a new record for the most inaccurate interpretations of research findings. There is a whole scale system, which is pretty awesome actually, but that is a completely different tangent that I shall not be exploring at this point. All of this makes me conclude that the media should not be allowed to interpret research findings, and I’m gonna tell you why…
The article* clearly highlights the dangers of misinterpretation of data, exploring the real research behind Daily Mail headlines such as ‘Just one can of diet fizzy drink can increase risk of heart attack or stroke’. Articles such as this can create mass panic over many people’s current habits, when actually the research conducted shows that drinking excessive amounts of fizzy drink can lead to an increased risk of heart attack. This shows that the media are capable of creating widespread panic by presenting false conclusions to the world.

You may be thinking, okay, but to be fair people should probably be careful about fizzy drink consumption anyway, so it can’t be that big a deal. And you may be right; this is the Daily Mail, and nearly everyone realises that they have a tendency to exaggerate, right? However, this isn’t the only example of media misinterpretation.

In 2006, Wright, Bradley, Sheldon and Lilford** looked into an episode of BBC’s Panorama, in which the programme tarnished various surgeons’ reputations by making allegations of neglect. All of the information that started this came from a report that looked at the patients of 138 Yorkshire surgeons, finding that one particular surgeon’s patients had a post-surgery life expectancy 5 years shorter than average. However, after the programme was aired, further analysis of the report was done, and it found that the 5-year difference was not significantly different from the other surgeons’ figures, and could therefore be attributed to chance. This example clearly shows that the media should not be allowed to interpret findings, because they managed to ruin one man’s career by not looking closely enough at the report they based a television programme on.
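
As a rough illustration of how a ‘5 years shorter’ figure can still be consistent with chance (all numbers below are invented for illustration; they are not the report’s actual data), consider that with 138 surgeons somebody has to come out worst, and the worst of 138 values is expected to sit well below the group average even when every surgeon is equally good:

```python
# A hypothetical sketch: with 138 equally good surgeons, how far below the
# group average does the worst-looking surgeon land by chance alone?
# All numbers are invented; they are not the Yorkshire report's data.
import numpy as np

rng = np.random.default_rng(2006)

# Hypothetical mean post-surgery survival (years) for 138 surgeons,
# all drawn from the same distribution, i.e. no real skill differences.
surgeon_means = rng.normal(loc=12.0, scale=3.0, size=138)

worst = surgeon_means.min()
rest = np.delete(surgeon_means, surgeon_means.argmin())

gap = rest.mean() - worst
z = (worst - rest.mean()) / rest.std(ddof=1)
print(f"worst surgeon: {worst:.1f} years vs group mean {rest.mean():.1f} years")
print(f"gap: {gap:.1f} years, z = {z:.2f}")
# The minimum of 138 draws typically lands 2.5-3 SDs below the mean, so a
# several-year gap can appear without any real difference between surgeons.
```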

Looking at the evidence, a clear case for not letting the media interpret research findings is beginning to form. However, in the western world there will always be the issue of freedom of speech. There have been varying debates over the years about the extent to which people and the media are allowed freedom of speech. In fact, a Danish newspaper provoked this exact argument in 2006*** when it printed a controversial cartoon. The conclusion of that argument was that free speech should be allowed, but that with potentially offensive material caution should be taken and people should engage in self-censorship. So if the media have been given freedom of speech over something as controversial as an insulting cartoon, then surely just passing on the findings of a research paper must be allowed, even if those interpretations are incorrect.

So, to conclude, I feel that the media present many errors that should be avoided, but according to the western ideal of free speech, there is nothing that can be done to change this.

* http://www.thenewjournalist.co.uk/2012/02/12/just-one-copy-of-the-daily-mail-could-ruin-your-life/

** http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(06)68497-3/fulltext

*** http://www.guardian.co.uk/media/2006/feb/13/mondaymediasection7

Should Researchers Accept Funding from Large Companies?

This is an interesting topic, and one that I’m sure has crossed nearly every researcher’s mind at some point in their career: ‘is it truly ethical of me to accept funding from this company, and will it affect my results?’…Well, there is no true answer to this. There have been theories about funding bias, a subset of experimenter bias in which the person or company funding the research can influence the findings; however, researchers claim that they do not set out to find specific results, and that the person funding the research therefore wouldn’t affect them.

There are thought to be many parts of funding bias that could affect the results of research. Firstly, human nature is thought to influence even the most ethical of researchers. This idea suggests that even if a researcher takes all precautions to keep the results from being influenced, on a subconscious level results are altered. This links nicely into the next element of funding bias, the predetermined conclusion. This is the idea that the results of a study can be changed by the researcher unconsciously selecting and removing participants in order to fit a predetermined conclusion, which usually fits the answer the funding company is looking for. Finally, there is also the idea of publication bias. Anders Sandberg* suggests that funding bias might be caused by publication bias, because it is easier to publish positive results than negative ones. However, just because these theories exist doesn’t mean that they are correct, or that these biases actually occur.

There have been numerous research studies that have looked into the idea of funding bias. Turner and Spilich (1996)** found that studies funded by tobacco companies showed nicotine to have a greater effect on cognitive performance than studies conducted by researchers not funded by a tobacco company. There is also more recent evidence: in 2006 a meta-analysis found that studies funded exclusively by the mobile phone industry were least likely to report significant results when looking at health risks related to mobile phone use (Huss, Egger, Hug et al.***).

It’s all well and good showing that research can be affected by the people funding it, but is it feasible to just stop accepting funding altogether? Definitely not. Most research wouldn’t have any chance of being conducted without funding, many researchers would be out of jobs, and new discoveries would not be made, discoveries that can change lives. But that still doesn’t make this easy.

So, to conclude, funding bias does occur, and it can affect the outcome of results; but at the same time, funding is absolutely necessary, and therefore researchers should still accept it, as the benefits outweigh the costs.

References:

* http://www.fhi.ox.ac.uk/our_staff/research/anders_sandberg
** http://www.ingentaconnect.com/content/carfax/cadd/1997/00000092/00000011/art00003
***http://ehp03.niehs.nih.gov/article/fetchArticle.action?articleURI=info:doi/10.1289/ehp.9149

Have yourself an ethical Christmas

Now, ethics has been done to death…FACT – however, because this is my last blog before Christmas, I thought I would spice it up a bit with a Christmassy twist…

I have your attention now, don’t I? Are you on the edge of your seat thinking there is no way that ethics can be Christmassy – Becky, you’re crazy? Well, you would only be right about one of those assumptions. Stay tuned…

The Boring Stuff – what are ethics in research?

(See the BPS code of conduct: bps-conduct.pdf)

Ethics are the bane of all researchers’ existence: the rules set out by the BPS in order to make things a nightmare. Well, not quite. They are actually far more functional than that, and aren’t designed to make things difficult. Ethics are completely necessary in order to protect both participants and researchers.

I’m sure you all know this, but there are six basic ethical guidelines that have to be followed, for the exact reasons laid out above. These guidelines are: deception, right to withdraw, protection from harm, debriefing, informed consent and confidentiality.

Deception refers to not lying to the participants, really, and just making sure that you are not being a horrible meanie, like some people have been in the past (like Milgram, but he has been looked at ethically to death, so I’m not really gonna dig into it further, other than to say he was a bit of a meanie, but good stuff came out of it).

Making sure the participants have the right to withdraw is essential in all research, because it’s not good for either you (the researcher) or the participant if the participant wants to withdraw and doesn’t have the ability to; they may sabotage your results or just get really upset.

Protection from harm is the next ethical consideration to look at, and again is a really good thing to have, especially in today’s society, where people are suing each other left, right and centre. Not only does it cover the researcher’s back, it also means the participant won’t be harmed in the first place…kind of a win-win situation, don’t you think?

Let’s look at debriefing next – well, this is what happens at the end of a study, to let the participants know exactly what was going on in it. If I am being honest, I don’t think debriefs are always essential, but as a participant it is nice to know exactly what you have just taken part in. Unless you use deception, in which case you should debrief like a cat! I don’t know why a cat, but you know, cats do things well, I think, so why not =)

Now to tackle informed consent – people like to know what’s going on – and it’s also nice to have a piece of paper with the participant’s signature on it, just in case they decide they don’t like you and sue, saying you didn’t give them informed consent – well then, BOOM, it’s on the paper!

And finally there is confidentiality, which is also highly important, because people don’t want you going around being like ‘OH MY GOSH, MY PARTICIPANT JOHN DANIELS WHO LIVES DOWN THE ROAD MIGHT BE SCHIZOPHRENIC’ – people tell researchers things in confidence and expect you to keep it that way!

So, I think all the ethical guidelines outlined above are important and need to be considered when doing research. And now, as promised, a Christmassy ethical twist =)

The Christmassy Stuff 😀

Right, well, Santa is doing his yearly elf check, and he must abide by all of the BPS ethical codes, because what you don’t know about Santa is that when he’s not making toys at the North Pole, he is actually a psychologist, and is therefore likely to carry psychological ethical guidelines across into his day-to-day job…

So, while doing his checks, first and foremost Santa must have informed consent to keep records of the elves working in his factory, or one of them may lawyer up and sue Santa for holding illegitimate records of elves. At the same time, he must tell these elves that they do in fact have the right to stop working for Santa, without any repercussions (right to withdraw), as well as making sure the workshop is safe for them to work in (protection from harm), and that the elves know exactly what they have to do and what is going on in the workshop (no deception). It goes without saying that Santa keeps his elves’ confidentiality, because when not in the workshop the elves don’t want people knowing they work for Santa, as it causes unnecessary fame. And finally, after the year is over and he is coming round to his checks again, Santa must ensure he has explained exactly what has happened over the year in a way the elves can understand (debriefing).

Bet you learned a lot about ethics and Santa – Happy Holidays =)