Nicodimas, on 06 November 2014 - 01:09 AM, said:
I'm sure the vast majority of scientific studies are conducted in good faith, but selective reporting is the bane of modern day science.
Anyone care to explain this one in a little bit more detail... always been curious about how this works, ya know?
Mezla PigDog, on 30 October 2014 - 01:50 PM, said:
Wait... do we trust anything those companies do as they are for profit??
Always wonder about big pharma... you can never be too certain about what they do and why.
It is a genuine concern indeed. The problem is actually more profound and widespread than just big pharma, although with big pharma there is often deliberate intent behind it. And it's not really one problem, it is a range of them. First off, there is the openness-of-data issue: at the moment there is no international obligation to publish or make publicly available all the raw data from research studies and clinical patient trials. This means that pharmaceutical companies can either withhold data that puts their product in an unfavourable light and only publish that which makes the product look good, or they can cherry-pick the positive aspects of the data and put emphasis on those, while downplaying or outright discarding the negatives.
Besides cherry-picking, there is also the choice of controls in the study that can be manipulated. It is nice that a drug shows an effect compared to no drug, but if there are already 5 other drugs on the market treating the same disease, it is far more interesting to know whether your drug actually outperforms the others. Often, however, such comparisons are neglected.
A compounding factor is the way in which scientific journals choose their output. If you have 5 random scientists each sending in a manuscript describing tests on the same drug, and one of those publications shows really cool effects whereas the other four show little to no effect whatsoever, three guesses as to which publication the journal will run with. Although the lack of any statistical difference is in itself a scientifically valid result, as it tells you something about the behaviour of the experimental sample compared to a placebo, many journals will consider it uninteresting and will instead opt for something that sounds exciting.
So there already is an inherent bias in scientific publications, even without any potentially malicious manipulation from pharmaceutical companies or unethical scientists. That in a way is even more worrying than the 'nasty pharma guys hiding evidence' part, as it seems to be an ingrained problem in all of science. Our brains seem to be wired to respond favourably to changes from the norm, even though there is a reason that the norm is as it is. So experiments telling us that something does not cause a significant effect are often erroneously interpreted as a lack of result. Obviously it is a result, but as it doesn't show you something cool or different, it is dismissed in favour of something that does do something neat.
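If you want a feel for how damaging that publication filter is on its own, here's a rough toy simulation (plain Python with NumPy/SciPy; every number in it is invented for illustration, not taken from any real trial). It tests a drug with no real effect in a thousand small trials and only "publishes" the ones that come out positive and statistically significant:

```python
import numpy as np
from scipy import stats

# Toy simulation of publication bias: a drug with NO real effect is tested in
# many small trials, but only trials that come out positive AND significant
# (p < 0.05) get "published". All numbers here are made up for illustration.

rng = np.random.default_rng(42)
n_trials = 1000        # independent studies of the same (useless) drug
n_per_group = 30       # patients per arm in each study

all_effects, published_effects = [], []

for _ in range(n_trials):
    placebo = rng.normal(0.0, 1.0, n_per_group)    # true effect size = 0
    treated = rng.normal(0.0, 1.0, n_per_group)
    effect = treated.mean() - placebo.mean()
    _, p = stats.ttest_ind(treated, placebo)
    all_effects.append(effect)
    if p < 0.05 and effect > 0:                     # "exciting" result -> published
        published_effects.append(effect)

print(f"Mean effect across ALL trials:       {np.mean(all_effects):+.3f}")
print(f"Mean effect across PUBLISHED trials: {np.mean(published_effects):+.3f}")
print(f"Fraction of trials published:        {len(published_effects) / n_trials:.1%}")
```

The published trials are a tiny, heavily skewed slice of the evidence: the drug does nothing, yet every paper that makes it into print reports a sizeable benefit.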
In the past few years there has been a big drive from the international scientific community to tackle these issues. One of the things being campaigned for is complete openness of all clinical trial data. This means that for every clinical trial that is registered, at the end of the trial all the raw data needs to be made accessible to the scientific community as a whole for scrutiny. It is actually quite bizarre to think that this has not already been the case, but pharma has (obviously) been very reluctant to give such access. They are also trying to retrospectively retrieve clinical trial data for products that are already on the market.
Another approach is the Cochrane reviews. You may have already heard of these, but if not, they are basically large meta-studies of the available literature. Again, it seems very obvious when you mention it, but up until a few years ago it was never done in a structured fashion. Basically, what the Cochrane reviews do is collect all the available publications on a particular drug or treatment method, scrutinise those publications for scientific rigour, discard all those with flaws in the experimental design or statistical approach, and use the combined results of the remaining publications to determine the overall treatment effect. So suddenly they can find (and have found) that a treatment hailed as groundbreaking in a paper in a leading journal is largely ineffective, or even underperforms compared to other treatments, according to 4 or 5 papers in smaller, lower-impact (and often largely ignored) journals.
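The pooling step at the heart of such a meta-analysis is surprisingly simple. Here is a minimal sketch of a basic fixed-effect (inverse-variance weighted) pooling, again with invented numbers rather than anything from an actual Cochrane review:

```python
import math

# Minimal sketch of the pooling step in a meta-analysis: combine the effect
# estimates of several studies, weighting each one by the inverse of its
# variance (a basic fixed-effect model). Effect sizes and standard errors
# below are invented purely for illustration.

studies = [
    # (effect estimate, standard error) -- e.g. mean difference vs. control
    ( 0.80, 0.35),   # the small, "exciting" trial in the leading journal
    ( 0.05, 0.15),   # four larger trials that found little or no effect
    (-0.02, 0.12),
    ( 0.10, 0.18),
    ( 0.03, 0.14),
]

weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled effect: {pooled:+.3f} ± {1.96 * pooled_se:.3f} (95% CI)")
# The headline-grabbing 0.80 barely moves the pooled estimate, because its
# large standard error gives it little weight next to the bigger studies.
```

Because each study counts in proportion to how precise it is, one flashy but small and noisy trial barely shifts the pooled estimate once the larger, duller studies are included.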
I can definitely recommend reading some of Ben Goldacre's books on this, 'Bad Science' and 'Bad Pharma'. He explains these problems and their potential solutions (and the reasons why those solutions are not being implemented as we speak) in a very accessible way. He also has a website: www.badscience.net.