If you are suffering from chronic pain, diabetes, heart problems or any other condition, you want to be confident that your doctor will offer you an effective treatment. You certainly don’t want to waste time or money on something that won’t work, or take something that could do you harm. The best source of information to guide treatment is medical research. But how do you know when that information is reliable and evidence-based? And how can you tell the difference between shoddy research findings and those that have merit? Research findings take a long journey before they reach publication.
Scientists design experiments and studies to investigate questions about treatment or prevention, following established scientific principles and standards. The findings are then submitted for publication in a research journal. Editors and other experts in the researchers’ field, called peer reviewers, make suggestions to improve the research. When the study is deemed acceptable, it is published as a research journal article. But a lot can go wrong on this long journey, and that can make a research journal article unreliable.
And peer review is not designed to catch fake or misleading data. While most research is conducted according to rigorous standards, studies with fake or fatally flawed findings are sometimes published in the scientific literature. Unreliable scientific studies can be hard to spot – whether by reviewers or the general public – but by asking the right questions, it can be done. It is hard to get an exact estimate of the number of fraudulent studies, in part because the publication process catches some of them before they ever appear in print.
One study of 526 patient trials in anesthesiology found that 8 per cent had fake data and 26 per cent were critically flawed. As a professor of medicine and public health, I have been studying bias in the design, conduct and publication of scientific research for 30 years. I’ve been developing ways to prevent and detect research integrity problems so that the best possible evidence can be synthesized and used for decisions about health.
Sleuthing out data that cannot be trusted, whether this is due to intentional fraud or just bad research practices, is key to using the most reliable evidence for decisions. The most reliable evidence of all comes when researchers pull the results of several studies together in what is known as a systematic review. Researchers who conduct systematic reviews identify, evaluate and summarize all studies on a particular topic.
They not only sift through and combine results on perhaps tens of thousands of patients, but they can also apply an extra filter to catch potentially fraudulent studies and ensure they do not feed into recommendations. This means that the most rigorous studies carry the most weight in a systematic review, while bad studies are excluded under strict inclusion and exclusion criteria applied by the reviewers. To better understand how systematic reviewers and other researchers can identify unreliable studies, my research team interviewed a group of 30 international experts from 12 countries.
They told us that a shoddy study can be hard to detect because, as one expert put it, it is “designed to pass muster on first glance.” As our recently published study reports, some studies look as though their data have been massaged, some are not as well designed as they claim to be, and some may even be completely fabricated. Our study provides some important ideas about how to spot medical research that is deeply flawed or fake and should not be trusted.
The experts we interviewed suggested some key questions that reviewers should ask about a study. For instance: Did it have ethics approval? Was the clinical trial registered? Do the results seem plausible? Was the study funded by an independent source rather than the company whose product is being tested? If the answer to any of these questions is no, then further investigation of the study is needed. In particular, my colleagues and I found that it’s possible for researchers who review and synthesize evidence to create a checklist of warning signs.
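To make that screening idea concrete, here is a minimal sketch in Python of the “if any answer is no, look closer” rule. It is not part of the published study; the question names and the example record are hypothetical, chosen only to illustrate the logic.

# Illustrative sketch only: a hypothetical first-pass screen of a study record,
# applying the rule that any "no" (or missing) answer flags the study for review.

SCREENING_QUESTIONS = [
    "has_ethics_approval",
    "trial_was_registered",
    "results_seem_plausible",
    "funding_is_independent",
]

def needs_further_investigation(study: dict) -> list:
    """Return the screening questions a study fails or leaves unanswered."""
    # Missing answers count as failures, since missing information is itself a warning sign.
    return [q for q in SCREENING_QUESTIONS if not study.get(q, False)]

example_study = {
    "has_ethics_approval": True,
    "trial_was_registered": False,   # no registration record found
    "results_seem_plausible": True,
    "funding_is_independent": True,
}

flags = needs_further_investigation(example_study)
if flags:
    print("Flag for closer review:", ", ".join(flags))
else:
    print("No red flags on first-pass screening.")

In a real systematic review, a flagged study would be examined more closely by the reviewers rather than being excluded automatically.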
These warning signs don’t categorically prove that research is fraudulent, but they do show researchers, as well as the general public, which studies need to be looked at more carefully. We used these warning signs to create a screening tool – a set of questions to ask about how a study is done and reported – that provides clues about whether a study is real or not. Warning signs include important information that’s missing, such as details of ethics approval or where the study was carried out, and data that seems too good to be true.
One example might be a study in which the number of patients exceeds the number of people with the disease in the whole country. It’s important to note that our new study does not mean all research can’t be trusted.

The Covid-19 pandemic offers examples of how systematic reviews ultimately filtered out fake research that had been published in the medical literature and disseminated by the media. Early in the pandemic, when the pace of medical research was accelerating, robust and well-run patient trials – and the systematic reviews that followed – helped the public learn which interventions worked well and which were not supported by science.
For example, ivermectin – an antiparasitic drug used in both human and veterinary medicine that was promoted by some, without evidence, as a treatment for Covid-19 – was widely embraced in some parts of the world. However, after ruling out fake or flawed studies, a systematic review of research on ivermectin found that it had “no beneficial effects for people with Covid-19.” On the other hand, a systematic review of corticosteroid drugs like dexamethasone found that they help prevent death when used as a treatment for Covid-19.

There are efforts underway across the globe to ensure that the highest standards of medical research are upheld.
Research funders are asking scientists to publish all of their data so it can be fully scrutinized, and medical journals that publish new studies are beginning to screen for suspect data. But everyone involved in research funding, production and publication should be aware that fake data and studies are out there. The screening tool proposed in our new research is designed for systematic reviewers of scientific studies, so a certain level of expertise is needed to apply it. However, using some of the questions from the tool, both researchers and the general public can be better equipped to read about the latest research with an informed and critical eye.