I’ve seen the “98% of studies were ignored!” one doing the rounds on social media. The editorial in the BMJ put it in much better terms:
“One emerging criticism of the Cass review is that it set the methodological bar too high for research to be included in its analysis and discarded too many studies on the basis of quality. In fact, the reality is different: studies in gender medicine fall woefully short in terms of methodological rigour; the methodological bar for gender medicine studies was set too low, generating research findings that are therefore hard to interpret.”
You can, of course. Statistics are not required to explain why a self-selecting Facebook poll is low quality while a multi-centre five-year study with follow-up and a comparator is of much higher quality.
Studies are also scored low on quality if, for example, they don’t control for important sociodemographic confounders. Studies that do control for these will have more reliable results.
You can read how the scoring works in supplementary material 1.
“They dismissed 98% of the data” remains a lie. Repeating it doesn’t change anything.
“You can, of course. Statistics are not required to explain why a self-selecting Facebook poll is low quality while a multi-centre five-year study with follow-up and a comparator is of much higher quality.”
That’s wrong when you are trying to be scientifically correct. A science paper without that math isn’t science, my dude. And comparing trans healthcare data to Facebook polls is ridiculous.
It’s remarkably common in systematic reviews; a feature, even. You give the impression that this is a new or foreign concept to you and that you are encountering these ideas for the first time.
Search PubMed, the BMJ, or the Cochrane Library for other systematic reviews using the Newcastle-Ottawa scale. You’ll trip over them.
One of the studies reviewed recruited patients over Facebook and polled them.
“They dismissed 98% of the data” remains a lie.
Again, I’ve written these reports. It is absolutely not common practice to exclude data without scientific reason and analysis. It is explicitly taught in college not to do it that way. And it is not scientific to do that without a statistical threshold and a confidence analysis of your reasoning.
I am forced to strongly doubt this, given your misunderstanding of the basic concepts of assessing methodological quality…
Certainly, you’ve never authored a systematic review for a reputable medical journal.
But don’t take my word for it…
https://handbook-5-1.cochrane.org/chapter_13/13_5_2_3_tools_for_assessing_methodological_quality_or_risk_of.htm
You mean such as using a method like the Newcastle-Ottawa scale to assess data quality?
If your college course covered systematic reviews and didn’t include a review of study assessment methods, ask for a refund.
Statistics are not required to assess that a study without a comparator is weaker than one with.
“They dismissed 98% of data” remains a lie.
The Newcastle method is not seen as a scientific basis for dismissal on its own.
98% of the data was dismissed from the synthesis and was not used to reach the conclusion that there wasn’t enough scientific evidence to support transition, when 98% of the science says that conclusion is wrong.
And every scientific paper is expected to be comprehensive on its subject matter and/or thesis.
It’s not used for “dismissal”; it’s used to score studies on their likelihood of bias. Studies without appropriate controls, for example, are more susceptible to bias than those with them.
Demonstrably false: only low-quality studies were excluded from the synthesis, and they account for fewer than half of the 103 reviewed. A lie is a lie no matter how often it is repeated.
That’s not what the conclusions say, for example:
And
“They dismissed 98% of data” remains a lie.
https://www.tandfonline.com/doi/full/10.1080/26895269.2024.2328249
That was published a month before Cass came out, so it has nothing to do with the two systematic reviews being discussed above. It doesn’t even mention them.
I’m uncertain what expertise a business graduate can bring to assessing the quality of a systematic review in medicine.
Readers are free to Google the author and make their own judgement about their objectivity on the subject matter.