Dumb "research" ?
I was amazed to come across a paper from the Urology Dept. at M.D. Anderson which pronounced:
"Our findings suggest that overweight and obese patients with renal cell carcinoma have a more favorable prognosis than patients with a normal BMI."
This paper, entitled "Prognostic value of body mass index in patients undergoing nephrectomy for localized renal tumors", appeared in 2004 and has been followed by others in various parts of the world trying to throw light on its mysterious finding. Since obesity is probably the highest risk factor predisposing to RCC, it seemed surprising that it could confer a survival benefit in RCC.
The conundrum appears to me easy to resolve once one realises that the 2004 paper did not show what the authors claimed, because they proceeded from a false assumption, or rather a false definition, from the very outset. They classified their patients into three groups by BMI and defined as "normal" any BMI below 25. Tell the hundreds of millions of starving people of South-East Asia and Africa that any BMI under 25 is "normal".
As soon as one puts together the basic facts that are known about the age distribution of RCC patients, the sarcopenia of ageing, and the complex relationship between BMI and both morbidity and mortality, the idiocy of the paper's study design is glaringly obvious. A "normal" group defined that way inevitably contains underweight and wasting patients, whose poor survival drags the group's outcomes down and makes the heavier groups look protective by comparison.
The abstract of the paper concluded with the breathlessly expressed optimism that "If others confirm our finding that a high BMI confers a survival advantage to patients undergoing nephrectomy, BMI may prove to be an important prognostic factor in renal cell carcinoma."
Unsurprisingly, subsequent studies have NOT confirmed their imaginary discovery. A paper from Heidelberg University in 2008, entitled "The influence of body mass index on the long-term survival of patients with renal cell carcinoma after tumour nephrectomy", followed the WHO classifications of BMI category and concluded: "While underweight patients had a significantly worse prognosis than those of normal weight, overweight or obese patients had a similar outcome to that of patients of normal weight." That is what the intelligent observer might expect.
It's a pity when a poorly thought-out piece of "work" gets published in a respectable journal and then spawns a number of 'me too' investigations before its inadequacy is demonstrated to the unwary. Here, for example, it led to a paper in Korea published as recently as 2010 reporting: "Our findings suggest that overweight and obese Korean patients with renal cell carcinoma have more favorable pathological features and a better prognosis than those with a normal body mass index." It's striking that the actual wording is virtually identical, mutatis mutandis, to that in the 2004 paper. Their BMI cut-offs were different (e.g. 'obese' = BMI over 27.5), reflecting the different body composition of Asian populations and their greater risk of degenerative diseases at lower obesity thresholds. However, the same fundamental error was made, with all BMIs below 23 (representing more than a third of their cohort) being treated as 'normal'.
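For the more technically minded, here is a minimal sketch in Python, using invented BMI values rather than any real patient data, of the classification point. Once every BMI below 25 (or below 23 in the Korean study) is labelled "normal", the underweight patients, who tend to fare worst, disappear into the "normal" group and flatter the comparison with the heavier categories.

```python
# A minimal sketch with invented BMI values (not real patient data) showing
# how the 2004 grouping differs from the standard WHO categories: everything
# below 25 collapses into "normal", underweight patients included.

def who_category(bmi):
    """Standard WHO BMI categories."""
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25:
        return "normal"
    elif bmi < 30:
        return "overweight"
    return "obese"

def all_under_25_is_normal(bmi):
    """The 2004 paper's grouping: any BMI below 25 counts as 'normal'."""
    if bmi < 25:
        return "normal"
    elif bmi < 30:
        return "overweight"
    return "obese"

# Hypothetical cohort, chosen only to illustrate the point.
cohort = [16.5, 17.8, 19.0, 22.3, 24.9, 26.4, 28.7, 31.5, 36.2]

for bmi in cohort:
    print(f"BMI {bmi:4.1f}  WHO: {who_category(bmi):11s}  "
          f"2004 grouping: {all_under_25_is_normal(bmi)}")
```

The first three "patients" in that made-up list are underweight by WHO standards, yet under the 2004 definition they sit in the "normal" group whose survival the heavier groups are then compared against.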
I've composed this message for two reasons. One is that it identifies totally spurious information that could be directly damaging to RCC patients. The other is the more general academic point that you need to bring your own brain to bear on what you are told, to try to sort the wheat from the chaff, and that you should always maintain a degree of skepticism when presented with "facts" that could have importance for you.
Comments
Brilliant research!
Tex, that is brilliant research! Of course I am a fat girl, so this sounds wonderful to me. This is also the only risk factor I had. I guess being fat got me in this mess and being fat will get me out. LOL. I hope everything is going well for you since your surgery. I will have a scan done tomorrow and get the results next week. I finish my second round of Sutent Sunday. Can't wait!!! I am messed up from my head to my toe. Until next time remember FAT GIRLS RULE!!!!
Nancy
Hey NanaLou
Just had to say, not only do you rule, you rock! So with that said, good luck on those upcoming scans. I hope to hear that you are kickin' RCC's butt.
Here's to Nancy
Hear, hear, Mike.
On the theme of the thread, the more academically minded may find the following piece shocking beyond belief.
http://www.guardian.co.uk/commentisfree/2011/sep/09/bad-science-research-error
The article is headed "The statistical error that just keeps on coming" and the strap-line is:
"The same statistical errors – namely, ignoring the "difference in differences" – are appearing throughout the most prestigious journals in neuroscience."
The guts of the argument are this: "academics in neuroscience papers routinely claim to have found a difference in response, in every field imaginable, with all kinds of stimuli and interventions: comparing younger versus older participants; in patients against normal volunteers; between different brain areas; and so on." False conclusions, from bum statistics, are endemic in the world of neuroscience; that's what the investigation has proved. The bottom line of the article is clear:
"These errors are appearing throughout the most prestigious journals for the field of neuroscience. How can we explain that? Analysing data correctly, to identify a "difference in differences", is a little tricksy, so thinking generously, we might suggest that researchers worry it's too longwinded for a paper, or too difficult for readers. Alternatively, less generously, we might decide it's too tricky for the researchers themselves.
"But the darkest thought of all is this: analysing a "difference in differences" properly is much less likely to give you a statistically significant result, and so it's much less likely to produce the kind of positive finding you need to look good on your CV, get claps at conferences, and feel good in your belly. Seriously: I hope this is all just incompetence."
It seems reasonable to assume (and my personal experience bears it out) that the same can be said of the whole of medicine. Statistics is not taught to med students as it is to psychology students, so most medics don't have the first clue. It's yet another factor, to add to the many, that must call into serious question the vast generality of "findings" from clinical trials. At the better end of the cline, the researchers pull in pro statisticians to keep them straight. However, it would be much more satisfactory if anyone going into medical research received a formal education in statistics. That way we would run fewer risks of all the specious "breakthroughs" and disappointing false dawns trumpeted in supposedly reputable papers of more dubious provenance.
Big Pharma, motivation, dementia, circumspection
There's an interesting and controversial figure who ranks high in the UK 'nutritionist' field, Patrick Holford, who is under public challenge from Ben Goldacre and merits a whole chapter of the latter's book "Bad Science". This morning I came across a piece of Holford's addressed to our Prime Minister, lobbying for funds for a modest enough investigation into the efficacy of B vitamins in preventing or slowing the progression of Alzheimer's. It's brief and can be found at
http://www.bluequest.co.uk/foodforthebrain/emails/enews_apr2012/
It contains the familiar and irrebuttable theme:
"The trouble is, to put it bluntly, that there appears to be a built-in bias against research funding for non-patentable, non-drug approaches, that might be seen to compete with lucrative drug approaches, despite the fact that the science stacks up extremely well and no dementia prevention drugs have yet been proven to work. Please fund this vital research which would prove, once and for all, that Alzheimer’s disease is preventable in those with raised homocysteine levels, and also give people an easy way to take positive prevention steps."
For all the value of the immense effort and funds being poured into research by the drug companies (and long may the GOOD research continue), it is undeniably the case that there are big interests vested in our NOT finding quick, cheap cures for ailments on which careers, and indeed whole companies, are built (cf. gastroenterology and antibiotics for Helicobacter pylori). What's more, there's a lot more kudos in attracting a huge research grant from a high-profile company which will trumpet the 'breakthrough' you'll duly announce (even if your 'work' is utter crap).
In decrying all of the waste and corruption inherent in the present drug research model, one doesn't have to throw the baby out with the bath-water. By the same token, it remains desirable to watch out for biased rubbishing of low-cost solid work that threatens to derail the gravy train.
Holford's campaign doesn't depend on his personal credibility. It's well rooted in the work of a man of impeccable credentials. The article linked below, in our MailOnline (not a distinguished title in the British press, but there's good and bad to be found in most places, hence the need for keeping one's critical faculties up to scratch), gives the nub of the case, more discreetly framed:
http://www.dailymail.co.uk/health/article-2116392/Dementia-Vitamin-B-supplement-tackles-disease-drug-industry-spending-billions.html
Lest anyone be in any doubt about the author's capacity, you can quickly see here why he should be listened to:
http://www.mrc.ox.ac.uk/staff/david.html
The bottom line, I suggest, is to be ruthlessly critical but to stay as open-minded as possible and receptive to new ideas, unless their provenance is patently contemptible.