Global Research via plosone.org
1 May 2009
The image of scientists as objective seekers of truth is periodically jeopardized by the discovery of a major scientific fraud. Recent scandals like Hwang Woo-Suk’s fake stem-cell lines [1] or Jan Hendrik Schön’s duplicated graphs [2] showed how easy it can be for a scientist to publish fabricated data in the most prestigious journals, and how such fraud can waste financial and human resources and even endanger human health. How frequent is scientific fraud? The question is obviously crucial, yet the answer is a matter of great debate [3], [4].
A popular view propagated by the media [5] and by many scientists (e.g. [6]) sees fraudsters as just a “few bad apples” [7]. This pristine image of science is based on the theory that the scientific community is guided by norms including disinterestedness and organized scepticism, which are incompatible with misconduct [8], [9]. Increasing evidence, however, suggests that known frauds are just the “tip of the iceberg”, and that many cases are never discovered. The debate, therefore, has moved on to defining the forms, causes and frequency of scientific misconduct [4].
What constitutes scientific misconduct? Different definitions are adopted by different institutions, but they all agree that fabrication (invention of data or cases), falsification (wilful distortion of data or results) and plagiarism (copying of ideas, data, or words without attribution) are serious forms of scientific misconduct [7], [10]. Plagiarism is qualitatively different from the other two because it does not distort scientific knowledge, although it has important consequences for the careers of the people involved, and thus for the whole scientific enterprise [11].
There can be little doubt about the fraudulent nature of fabrication, but falsification is a more problematic category. Scientific results can be distorted in several ways, which can often be very subtle and/or elude researchers’ conscious control. Data, for example, can be “cooked” (a process which mathematician Charles Babbage in 1830 defined as “an art of various forms, the object of which is to give to ordinary observations the appearance and character of those of the highest degree of accuracy” [12]); it can be “mined” to find a statistically significant relationship that is then presented as the original target of the study; it can be selectively published only when it supports one’s expectations; conflicts of interest can go undisclosed; and so on [10], [11], [13], [14], [15]. Depending on factors specific to each case, these misbehaviours lie somewhere on a continuum between scientific fraud, bias, and simple carelessness, so their direct inclusion in the “falsification” category is debatable, although their negative impact on research can be dramatic [11], [14], [16]. Henceforth, these misbehaviours will be referred to as “questionable research practices” (QRP; for a technical definition of the term see [11]).
Ultimately, it is impossible to draw clear boundaries for scientific misconduct, just as it is impossible to give a universal definition of professional malpractice [10]. However, the intention to deceive is a key element: unintentional errors or honest differences in designing or interpreting research are currently not considered scientific misconduct [10].
To measure the frequency of misconduct, different approaches have been employed, and they have produced a corresponding variety of estimates. Based on the number of cases confirmed by the US government, fraud is documented in about 1 in every 100,000 scientists [11], or 1 in every 10,000 according to a different count [3]. Paper retractions from the PubMed library due to misconduct, on the other hand, have a frequency of 0.02%, which led to speculation that between 0.02% and 0.2% of papers in the literature are fraudulent [17].
Eight out of 800 papers submitted to The Journal of Cell Biology had digital images that had been improperly manipulated, suggesting a 1% frequency [11]. Finally, routine data audits conducted by the US Food and Drug Administration between 1977 and 1990 found deficiencies and flaws in 10–20% of studies, and led to 2% of clinical investigators being judged guilty of serious scientific misconduct [18].
All the above estimates are calculated on the number of frauds that have been discovered and have reached the public domain. This significantly underestimates the real frequency of misconduct, because data fabrication and falsification are rarely reported by whistleblowers (see Results), and are very hard to detect in the data [10]. Even when detected, misconduct is hard to prove, because the accused scientists could claim to have committed an innocent mistake. Distinguishing intentional bias from error is obviously difficult, particularly when the falsification has been subtle, or the original data destroyed. In many cases, therefore, only researchers know if they or their colleagues have wilfully distorted their data.
Over the years, a number of surveys have asked scientists directly about their behaviour.
However, these studies have used different methods and asked different questions, so their results have been deemed inconclusive and/or difficult to compare (e.g. [19], [20]). A non-systematic review based on survey and non-survey data estimated the frequency of “serious misconduct”, including plagiarism, at close to 1% [11].
This study provides the first systematic review and meta-analysis of survey data on scientific misconduct. Direct comparison between studies was made possible by calculating, for each survey question, the percentage of respondents who admitted or observed misconduct at least once, and by limiting the analysis to qualitatively similar forms of misconduct, specifically fabrication, falsification and any behaviour that can distort scientific data.
Meta-analysis yielded mean pooled estimates that are higher than most previous estimates. Meta-regression analysis identified key methodological variables that might affect the accuracy of results, and suggested that misconduct is reported more frequently in medical research.
[above text is the Introduction]
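To give a concrete sense of the pooling step the authors describe, here is a minimal sketch of a random-effects meta-analysis of per-survey admission rates on the logit scale, a standard way of combining proportions from studies of different sizes. The survey counts, variable names and the DerSimonian-Laird model used below are assumptions chosen for illustration; they are not the paper's actual data or code.

    import math

    # Hypothetical survey results: (respondents admitting misconduct at least once, sample size)
    surveys = [(4, 200), (7, 350), (2, 120), (11, 500)]

    # Logit-transform each admission rate and compute its approximate variance
    # (0.5 is added as a continuity correction so zero counts do not break the transform).
    effects = []
    for k, n in surveys:
        p = (k + 0.5) / (n + 1.0)
        y = math.log(p / (1 - p))                   # logit of the admission rate
        v = 1.0 / (k + 0.5) + 1.0 / (n - k + 0.5)   # approximate variance of the logit
        effects.append((y, v))

    # Fixed-effect (inverse-variance) pooling as a first step.
    w = [1.0 / v for _, v in effects]
    y_fixed = sum(wi * yi for wi, (yi, _) in zip(w, effects)) / sum(w)

    # DerSimonian-Laird estimate of the between-survey variance (tau^2).
    q = sum(wi * (yi - y_fixed) ** 2 for wi, (yi, _) in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)

    # Random-effects pooling, then back-transform to a percentage.
    w_re = [1.0 / (v + tau2) for _, v in effects]
    y_re = sum(wi * yi for wi, (yi, _) in zip(w_re, effects)) / sum(w_re)
    pooled_pct = 100.0 / (1.0 + math.exp(-y_re))
    print(f"Pooled admission rate: {pooled_pct:.2f}%")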
To read the complete article, click here.