A Real Problem: The Illusion of Evidence-Based Medicine

“The Illusion of Evidence-Based Medicine” investigates corruption in many areas of so-called evidence-based medicine. I found the authors’ examination of Randomized Controlled Trials (RCTs) to be the most telling and damning. But there is more, which I will touch on below.

Clearly, medical treatments should only be administered if they have been proven to be safe and effective.

That just makes sense.

It is an axiom of medical research that the best evidence of safety and effectiveness comes from a large, randomized, controlled, double-blinded trial.

Alas, there is a problem.

Many experts are reaching the same conclusion: large, randomized clinical trials are very expensive and often affordable only by the pharmaceutical industry.

So, the question must be asked: does the profit motive of Big Pharma undermine the reliability of RCTs?

Some of the top people in the medical profession believe that is the case. Stanford professor of medicine John Ioannidis put it this way: “The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.”

The former editor of the prestigious New England Journal of Medicine agreed. “It is simply no longer possible to believe much of the clinical research that is published."

Is medical science being compromised? And if so, how?

This question is the strength of Jon Jureidini and Leemon McHenry’s book.

The Actual Process of RCTs

First, let's talk about how RCTs are conducted.

An “investigator” - someone or some group who wishes to conduct a study - will contact a sponsor to fund the study. In drug trials, the sponsor is often a pharmaceutical company.

The investigator will propose a “study design” to the sponsor.

The design may be an RCT comparing the safety and efficacy of one drug against another drug or against a placebo. And it will have what are called “primary endpoints” and “secondary endpoints.”

The primary endpoints represent the hypotheses being tested. For instance, if a drug is designed to prevent allergy-related deaths then the primary endpoint might be: Did the drug succeed in preventing deaths when compared to the control group?

Questions like that are typical primary endpoints.

A secondary endpoint is a finding from the study which is collateral to the purpose of the study but may be of interest. For instance, a drug designed to prevent allergy-related deaths might also have a secondary endpoint such as “Was quality of life improved?”

But remember this: the study was designed to answer a binary question (death or survival). It was not designed to resolve an ambiguous, non-binary secondary endpoint like quality of life. Secondary endpoints may raise interesting issues, but they usually don’t prove anything.

This is important; we’ll come back to it later.

The actual conduct of the study is guided by rules called “protocols.” Protocols include:

  • Who qualifies to participate (selection criteria);
  • How many people will be part of the study;
  • How long the study will last;
  • How the drug will be given to patients and at what dosage;
  • What assessments will be conducted, when, and what data will be collected;
  • How the data will be reviewed and analyzed.

After the data is collected, it is collated and statistically analyzed according to the protocols. The result is distilled into a final clinical study report. That report becomes the basis for an article which, hopefully, will be published in a medical journal.

What could go wrong?

The answer to that question is: Just about everything.

Let’s look at each step of the RCT according to the authors.

Study Design

A study can be deliberately designed to succeed or fail - depending on the desired outcome.

For instance, if a drug is meant to look good in the trial, it can be tested against a drug known to be ineffective. Or it can be tested against another drug administered with a dose too low to have any effect.

Either way the drug meant to be promoted looks good by comparison.

If, on the other hand, a drug is meant to look bad, the dosage can be set so high as to increase adverse events. Or it can just as easily be set too low so that the drug has no real effect.

Either way, the drug can be demonized.

Other scenarios are easy to imagine. For instance, the time of administration of the drug is important.

Some drugs must be administered early in a disease process to be effective; flu medications are a typical example.

Thus, the timing of administration can be manipulated in a study to show purported effectiveness or ineffectiveness, depending upon the desired goal.

Medical professionals argue that there are other ways by which studies have been designed to come to a pre-determined conclusion. They submit that it can be done by locating a study in a locale where the availability of commonly consumed over-the-counter drugs will engineer the desired result. Or by the selective utilization of the criteria for inclusion or exclusion of participants.

In short, it is easy to imagine any number of ways that a study design can generate a preferred outcome.

Protocols, Data Collection and the Final Report

After the design of the study is approved, next comes the administration of the drug and the collection of the data. Once that data is collected it is collated into a final clinical study report.

That report can be made to actually hide the real results of the study. This can be done via the study’s rules or protocols. For instance, adverse events can be whitewashed out of a final clinical study report by a categorization protocol. Here is what that means:

The final clinical study report simply does not list all the adverse events shown by the raw data. Instead, the protocols of the study may require that adverse events be reported by way of pre-defined categories.

The book offers an example of how adverse events in the study of an antidepressant were hidden by “categorization.” Under the protocols of that study there was no category for suicidal thoughts. Thus, suicidal thoughts were, of necessity, reported under another category, “emotional lability.” This, in effect, disguised severe adverse events (suicidal thoughts) by reporting them under the seemingly benign category of emotional lability.

Another way to hide adverse events is by setting arbitrary reporting thresholds. For instance, the protocols may specify that adverse events will only be reported if they are found in more than 10% of patients. In that way, even very serious adverse events might not be reported at all since not enough of them occurred to meet the reporting threshold.

It gets worse.

Reporting thresholds can be coupled with categorization to further limit the reporting of adverse events. Suppose a study has no category for suicidal thoughts. In that case, suicidal thoughts might sometimes be reported under emotional lability and sometimes under “agitation.” By dispersing suicidal thoughts under two categories, neither category might reach the 10% threshold for reporting and, accordingly, the problem would not be reported at all.

This strategy can hide even dramatic signals regarding adverse events. Suppose there was a 19% rate of suicidal thoughts. That is a substantial signal indicating significant danger in the use of the drug, but if those reports of suicidal thoughts are spread equally between two separate categories, e.g. emotional lability and agitation (9.5% each) neither category ever reaches the 10% reporting threshold. Thus, the fact that 19% of test subjects reported suicidal thoughts is not reported at all. That effectively hides a very significant problem which, in reality, far exceeded the 10% threshold.
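The arithmetic above can be sketched in a few lines of code. This is a hypothetical illustration of the mechanism the book describes, with invented numbers (200 patients, 38 events, a 10% threshold) rather than figures from any actual trial:

```python
# Hypothetical illustration (numbers invented, not from any real trial):
# a 19% adverse-event signal split across two protocol categories,
# each of which then falls below a 10% reporting threshold.

REPORTING_THRESHOLD = 0.10  # only categories above 10% of patients are reported

def reported_categories(event_counts: dict, n_patients: int) -> dict:
    """Return only the categories whose rate exceeds the reporting threshold."""
    rates = {cat: count / n_patients for cat, count in event_counts.items()}
    return {cat: rate for cat, rate in rates.items() if rate > REPORTING_THRESHOLD}

n = 200  # patients in the arm

# Raw signal: 38/200 = 19% of patients reported suicidal thoughts.
honest = {"suicidal thoughts": 38}

# The same 38 events, dispersed across two seemingly benign
# categories at 9.5% each.
dispersed = {"emotional lability": 19, "agitation": 19}

print(reported_categories(honest, n))     # the 19% signal clears the threshold
print(reported_categories(dispersed, n))  # empty dict: nothing gets reported
```

Same raw data, same patients, same harm; only the labels differ, yet one report shows a glaring safety signal and the other shows nothing at all.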

Non-reporting can also be facilitated if a clinician unilaterally fails to report the adverse event at all. The clinician may simply decide that an adverse event is not related to the drug being tested and therefore does not have to be placed in any category whatsoever.

In these ways, adverse events can simply be whitewashed out of a study altogether.

What About Honesty?

Now, at this point you might say, "well, wait a minute, some of these authors of the articles generated by the study must be honest and certainly some of those honest authors will object to the deliberate mischaracterization or hiding of relevant data."

That is a legitimate objection. But prepare to be shocked.

Oftentimes, the authors of these articles are not even aware of the actual data at all.

Yes, you read that correctly.

How can that happen? Like this: The study may have been conducted by a contract research organization, and the article itself written up by professional medical ghostwriters.

The actual authors may be barely involved in the study at all.

One medical ghostwriting company advertisement described the process as follows: “The first step is to choose the target journal best suited to the manuscript’s content, thus avoiding the possibility of manuscript rejection. We will then analyze the data and write the manuscript, recruit a suitable well-recognized expert to lend his/her name as author of the document, and secure his/her approval of its content.”

That really happens.

But, you might object “well, so what, the authors can see the actual raw data. And some wouldn’t sign on to something that misrepresents that data. Right?”

Not so.

First, remember that the original raw data is compiled into a final clinical study report. And remember that this final clinical study report is generated after the protocols have sanitized the raw data. In reviewing the clinical study report, the authors are not seeing the raw data at all.

And now for the clincher…

The authors only get to see the final clinical study report. They don’t get to see the raw data at all. According to Jureidini and McHenry, the raw data itself is considered to be “owned” by the pharmaceutical companies. It can only be released to the purported authors with the consent of the pharmaceutical company.

What? Would researchers actually sign off on a study without seeing the data?

Yes. It happens.

In the Surgisphere scandal, eminent scientists unknowingly signed off on a study based upon clinical findings that apparently never existed in the first place.

In short, the authors of a paper might not know if the study had been handled correctly. They might not know if the conclusions of the study are consistent with the actual data. They might not even know if the study was conducted at all. (NOTE: the doctors involved in the Surgisphere article withdrew their endorsement of the study once it became known that there were problems.)

But, yes, it gets even worse.

Jureidini and McHenry relate the story of a researcher who did indeed have access to the original data. She found out that the drug being tested was unsafe and ineffective. Being honest and dedicated she complained. So, the Pharma company simply withdrew funding and terminated the study. But the researcher, Nancy Olivieri, decided to go public and publish her findings. Her actions came with a cost. After Olivieri published, she was sued by the drug manufacturer for an alleged violation of a confidentiality agreement she had signed. It took many years for the legal issues to resolve.

So, here is Jureidini and McHenry’s take on published RCTs.

Even though an article may list a dozen or more authors, the “authors’” involvement in the study may be minimal. They may have read the article, offered some minor editing, but that is it. They may have no idea if the study is actually supported by the raw data. They may have no idea if the study was even conducted.

But, on the other hand, if a researcher actually obtains information about the real data, as Nancy Olivieri did, they may find themselves mired in a serious legal and ethical dilemma. And should they choose honesty and ethics, it can involve them in a multi-year lawsuit.

And, yet, there is more.

Suppose that the results shown by the actual raw data are so horribly awful that even a dedicated ghostwriter cannot possibly spin them in a way that saves the day.

That happens. The following email from a ghostwriter referred to a manuscript commissioned in relation to a treatment for panic disorder. The ghostwriter noted, “There are some data that no amount of spin will fix …”

Stop!

Fixing data with “spin”? Isn’t this supposed to be science? Well, maybe not.

What Problems?

In any event, problems like horrible data can still be circumvented by Big Pharma.

Here is how this is done, according to Jureidini and McHenry.

Suppose that the primary endpoints show the drug is not safe. Further suppose the primary endpoints show that the drug is not effective. What is done then?

Sometimes the response is simply to drop the primary endpoints from the final article altogether.

But if that is done, then what is left to be published in the medical journal?

Well, at that point, carefully selected positive secondary endpoints are touted as proving efficacy and/or safety. And that happens even though secondary endpoints are not necessarily dispositive of anything in the first place.

Whew!

So, how many RCTs suffer from these flaws? No one knows for sure. “The prevalence of ghostwriting for the medical journals is unknown precisely because ghostwriting is designed to be untraceable,” wrote Jureidini and McHenry.

And that is just the lowdown on RCTs. There is much more in this book.

And There's More

For instance, pharmaceutical companies have thinly disguised marketing plans for drugs which include presentations at (and sponsoring of) medical conferences, participation in (and sponsoring of) continuing medical education, as well as sponsoring of consumer organizations likely to favor the prescription of a drug.

Yes, this gets worse, too.

Even formal, published drug recommendation guidelines are often influenced by pharmaceutical companies. The book notes that authors of clinical practice guidelines have extensive conflicts of interest with the pharmaceutical industry. In one study, of the 192 authors of these guidelines surveyed, 87% had interactions with industry, 58% received financial support and 38% had been employees or consultants of industry, report Jureidini and McHenry.

And pharmaceutical companies also influence government regulatory authorities, universities and medical journals.

By way of example, medical journals receive substantial advertising and other revenues from pharmaceutical companies.

The authors state that many journals could not even survive without that income. The book notes multiple instances where medical journals were confronted with “undeniable evidence of fraud,” but the journals nonetheless refused to retract industry studies.

But all the above is just an overview. It is just an outline. There is a lot more in the book including failed attempts at reform, suggestions for effective reform and a philosophical discussion of the difference between actual science, marketing and pseudo-science.

So, take a look.

And the next time someone tells you to “just follow the science,” loan them a copy of this book. Then tell them to read this book “because while I would love to ‘follow the science’, you know, somehow I just can’t seem to find it.”

This is a companion discussion topic for the original entry at https://peakprosperity.com/a-real-problem-the-illusion-of-evidence-based-medicine/

Great Write Up Mike

Anyone who followed Chris through the Covid circus knows how flawed the research side of medicine has become. This book is a great exclamation point on something we have come to realize. I trust the medical profession as much as I trust Dominion voting machines.

21 Likes

Love This Book Summary Addition!

I was going to title my post “Love This!” but actually I am horrified, although not surprised, that this is going on in our scientific community.

10 Likes

I’ve Seen This Before.

Suppose that the primary endpoints show the drug is not safe. Further suppose the primary endpoints show that the drug is not effective. What is done then?
Sometimes the response is to simply to drop the primary endpoints from the final article altogether.
But if that is done, then what is left to be published in the medical journal?
Well, at that point, carefully selected positive secondary endpoints are touted as proving efficacy and/or safety. And that happens even though secondary endpoints are not necessarily dispositive of anything in the first place
The study that got remdeathisnear an EUA did exactly this. They published the primary endpoint anyway because they were able to manipulate the data enough to show a small, nonsignificant death benefit. They were also able to whitewash the excess of kidney failure that showed up in the study. While I'm at it, the TOGETHER Ivermectin trial took dosing manipulation to the highest of levels:

  • A marginally low dose, given for too few days;
  • Instructions to take it on an empty stomach to minimize absorption and push the absorbed dose way down into the “too low” range;
  • Capping the dose at a given weight, reducing the dose for the significant number of overweight or obese participants who were therefore most at risk;
  • Running the trial in a place where the drug was available over the counter and widely used for Covid, while making sure the placebo looked different from the version sold over the counter;
  • Giving it late, when it was less effective;
  • For good measure, allowing the primary endpoint of hospitalization to be counted any time after the first dose, while ignoring hospitalizations within 24 hours for the other drugs they tested.
18 Likes

Amen!

This is such an important topic.
And, it gets even worse. If the sponsors want to show a failure, they simply underpower the study so that it finds a p-value > 0.05, then state “no effect seen.” If the sponsors want to show success, they simply find endpoints with p < 0.05 and throw money at making the small effect look big.
But, of course that isn’t what the p-value is saying. This is such a problem that the ASA has a position statement (https://amstat.tandfonline.com/doi/full/10.1080/00031305.2016.1154108#.XKdq5OtKhTb ) And scores of real scientists have raised this flag. (e.g. https://www.nature.com/articles/d41586-019-00857-9?fbclid=IwAR1Nr7r5ULjtBaqiwY8abQcoTNKiRpV8pCUp_C71FWlbQGnxIXqtswArNCc). It is estimated that roughly 50% of the literature has incorrect conclusions based on this misconception.
Yet almost none of the MDs I know can correctly state what a p-value means, and most of them routinely say “no effect” or “does not work” from simply reading the p-value and the last sentence of the abstract. E.g., “Ivermectin doesn’t work - they ran XXX trial and this ‘proved’ it.” Uh, no. Please go back to school and learn the difference between clinical significance and statistical significance.
Then there is the hyper-focus on “blinded, randomized” trials. There are MDs who treat RCTs almost like a religion. They see that a study title says RCT, and bow. If not, they call it quackery. As if the title implies that there can be no other source of bias, and as if the letters “RCT” in the title ensured that blinding and randomization were done correctly (e.g., considering the appropriate starting population to randomize in the first place, accounting for dropouts, ways the blinding might not work, whether data was collected properly, etc.). Randomization and blinding are just ways to reduce study bias. They are important tools, but FAR from the only consideration in evaluating data. Similarly, a lack of blinding or poor randomization does not necessarily imply “no evidence.” It means you must consider the potential for bias and how it might affect what was observed.
Note that this isn’t at all what the original intent of EBM ever was. Sackett himself wrote, “It’s about integrating individual clinical expertise and the best external evidence.” https://www.cebma.org/wp-content/uploads/Sackett-Evidence-Based-Medicine.pdf The whole point was to use ALL the external evidence, and to avoid the top-down “from the ivory towers” approach, instead starting from the questions presented by the patient in front of you. EBM’s original intent of training physicians in statistics was likely too optimistic in its goal, but it had good intentions. Now, it’s clear the term has been totally corrupted.
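The underpowering trick described in this comment can be demonstrated with a toy simulation. This is a sketch with invented numbers (a drug that really does cut an event rate from 20% to 10%), not a model of any particular trial; it simply shows how a small sample turns a genuine effect into “p > 0.05, no effect seen”:

```python
import math
import random

# Hypothetical sketch (all numbers invented): the same real effect -- a drug
# cutting an event rate from 20% to 10% -- usually "disappears" when the
# trial is underpowered, yielding p > 0.05 and a headline of "no effect seen".

def two_proportion_p(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value from a pooled two-proportion z-test."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0  # no events (or all events) in both arms: nothing to test
    z = (x1 / n1 - x2 / n2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability

def simulated_power(n_per_arm: int, p_control=0.20, p_drug=0.10, sims=2000) -> float:
    """Fraction of simulated trials in which the real effect reaches p < 0.05."""
    rng = random.Random(42)  # fixed seed for reproducibility
    hits = 0
    for _ in range(sims):
        x_control = sum(rng.random() < p_control for _ in range(n_per_arm))
        x_drug = sum(rng.random() < p_drug for _ in range(n_per_arm))
        if two_proportion_p(x_control, n_per_arm, x_drug, n_per_arm) < 0.05:
            hits += 1
    return hits / sims

print(f"power with  30 per arm: {simulated_power(30):.0%}")   # mostly misses the effect
print(f"power with 500 per arm: {simulated_power(500):.0%}")  # almost always finds it
```

The effect is identical in both cases; only the sample size changes. A sponsor who wants the small-trial answer just has to stop enrolling early.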

11 Likes

Flaws

The most basic of flaws comes before the distortions of randomization and “controls.” It is in the definitions and labeling (not even getting to exclusions, sampling and measurements). In my fellowship year (which objectively was very successful from a publishing perspective), I was trying to reproduce a technique from a “landmark” study to apply to another area. My chief technician randomly met, over drinks, the chief technician who had done the landmark work. The “secret sauce” was mentioned: the graphs in the “landmark” study were “representative,” not individual. There was a lack of institutional support for any further demonstration of what was essentially a fraud, and it did not make me interested in pursuing a career in academia.
The covid jab fiasco (Pfizer study), follows this path as well, change definitions, endpoints, inclusions, selections, outcomes, and structure (getting rid of the placebo group), add some actual fraud in some reporting and “presto” the desired result.

13 Likes

“remdeathisnear” - an all too accurate pun. Was tempted to put lol, but it is not funny in the least.

8 Likes

That really resonates with me. One of the conclusions that I have come to recently is that so-called “evidence-based medicine” is oversold. In a complex system it is very, very difficult to prove anything at all. But the “experts” nonetheless claim that they “know” the answers. This begets a new form of tyranny akin to the one practiced by medieval scholastics against an indoctrinated populace.

6 Likes

“science” And “life Or Death” For Sale

FTX Slush Fund Bankrolled Fake Studies To Hide Covid Therapeutics – [your]NEWS (yournews.com)
The Uni-parties are for sale to the highest bidders.

8 Likes

Great Review, Mike

Thanks for your work on this. One need only look at how Andrew Hill torpedoed the study of IVM done with Tess Lawrie to see how things really work.
The price paid goes beyond products that are neither safe nor effective, and the harms they cause. There is also the incalculable cost of the loss of trust, which we are now seeing play out in real time. The effects, sure to be wide-ranging in depth and scope, are still to be fully experienced.
This is critical info, a reality that people must be willing to open their minds to. But will they? I will not bet the farm, given wilful blindness seems to be endemic. Still, we have to keep trying…

9 Likes

Great Review

And the comments are excellent too.

7 Likes

Thank You For The Review Mike!

The unfortunate truth is this: If someone really wants to get under the covers of the pandemic narrative, they have to delve into this realm of debunking studies. There is no way around it. As well, some of the deception has occurred at a deeper level, leading to questions about how, for instance, mRNA transfection may have changed the tendency of PCR to produce false negatives.
One of the most egregious deceptions though IMO was the way that pharma hid the fantastically negative efficacy in the first two weeks after the first dose was given. We all heard the stories of waves of Covid-19 passing through nursing or care homes in the weeks after mass injections. Pharma simply binned folks as unvaccinated until weeks after the second dose, such that in a statistical view, these case counts made the efficacy of the injections look higher, not lower.
That was a nasty, nasty deception. The effect comes out clearly in this charted version of the SAIC dataset… wildly negative efficacy in the first weeks after “vaccination”… followed by… um… almost no discernible positive efficacy.
https://roundingtheearth.substack.com/p/the-saic-data-shows-zero-vaccine
https://peakprosperity.com/wp-content/uploads/2022/11/saic-data-shots-1668878664.5166-800x998.gif

8 Likes

What they did with Ivermectin completely shattered what little faith I still had in government institutions.
Everyone I know who took Ivermectin for Covid recovered quickly. It didn’t matter how sick they were. Everyone recovered quickly without exception - including myself.
This type of “information fraud” is, unfortunately, not limited to public health institutions. It is endemic to institutions throughout Western Culture.
I don’t see how a culture based upon systemic dishonesty can survive. I just don’t see how it can.

13 Likes

Thanks.
One of the many things in the book that didn’t make it into the review is how facts are “hidden in plain sight.”
davefairtex often points this out.
A lot of times the studies come with appendices. The appendices are often relied upon as justifying the conclusion offered in the study. But oftentimes, the actual data in the appendices contradicts the conclusion proffered by the study.

8 Likes

Right. IMO this is what “integrating clinical expertise” means in the article. The human body is more complicated than we can possibly even imagine. Every person has a unique set of genes and background exposure history that might have led to their presentation.
So they used to talk about ‘binding the evidence to the patient’ - meaning to see if the external study was close enough to fit the patient. The boards like to emphasize this in questions like “<some clinical scenario>. The best course of action to take next is <some choice list based on one study that had some conclusion you are supposed to know about>.” The wrong answers are usually reasonable things to do, but not the “best” answer because of some focus point.
This is way oversimplified. The external evidence is just used to formulate a better understanding of how you think things are operating and what is going on in the patient. It is to highlight where your feeble model might be wrong or incomplete. The goal of that understanding is to help with decisions about what might make things better and whether the risks of some action are less than the benefits. Sometimes actions are relatively benign, so they are ok to try even though there is a lot of uncertainty about benefits. Other times, even when benefits may be reasonably established, they entail lots of uncertainty about risks, downsides, or opportunity loss, so they are still not worth it.
E.g. patient circumstances matter - a lot. I remember one patient where I was dinged on “patient quality metrics” because I didn’t prescribe him a statin when “the evidence” supported that he should have been on this. But unseen to the algorithm generating the quality report, the patient was homeless, and had a major alcohol issue. So I spent the visit trying to get that patient into a shelter. That patient was going to die of alcohol abuse way before statins might have any effect - even if he could have afforded to take them and had motivation to take them (and even if the “evidence” on statins was correct).

5 Likes

I have come to the conclusion that doctoring is an art. There are good artists and bad artists. But doctoring is not the “one size fits all” approach offered by clowns like Fauci.

12 Likes

I thought it was pronounced rundeathisnear.

10 Likes

One More Issue - All Patients In A Demographic Are Alike

All of the above are, in my mind, fraud. There is another, more insidious issue with all medical treatments, at least partly driven by insurance attempts to use the cheapest, not the best treatment.
Say I do a test on some treatment and it’s wildly successful for 55% of the test population, but has no effect, or an adverse one, on the rest. Quite frankly, I don’t give a d…m. The ONLY salient question is why it didn’t work on the other 45%. We’ve gone a long way in the last half century from testing on middle-class white males to studies that include women and people of color, and they now consider age and sometimes pre-existing conditions.
However, other than pre-existing conditions, none of these demographic blobs are homogeneous in either genetics or environment. We’re beginning to see more breakdown in some cases, but seldom in treatments designed for wide distribution. If we really wanted to get some science back into the equation, we’d require researchers to then answer the question of what happened to the other 45%. Instead, they simply dump a drug on the market with NO good list of who shouldn’t get it based on pre-existing conditions (those patients were already excluded from the study). Instead we have a long list of adverse side effects that “might happen” and maybe requirements for monitoring. Too bad the vaccines didn’t have at least the monitoring requirement.
Well, they were moving at the “speed of science,” e.g. fraudulent, money-motivated science. Today’s science couldn’t possibly prevent polio or put a man on the moon. But not to worry, they can make billions from the metaverse [or an untested vaccine] and that’s what counts.

4 Likes

Excellent… One More Thing: NDAs…

Not sure if the book mentioned it, but this write up did not.
The doctors/researchers are OFTEN required to sign NDAs. And if the study goes bad and kills everyone, they are PREVENTED from letting the public know.
Find Dr. Diamond’s “Relative Risk” talk on YouTube.
He mentioned at an event I attended that the NDAs hide even more. I asked him this question “Does the trust we give a study that has a reported “positive” result get impacted at all, if there WERE 10 studies showing Harm? And does the fact that NDAs PREVENT anyone from knowing about the 10 Studies simply make most of these studies untrustworthy?” (He would not go that far)…
But you decide. If you knew that MOST cholesterol drugs harm 10 times the number of people they help… Would you still take them? (You have a 10% increased risk of Type II diabetes, and fewer than 1 person per 500 is spared a heart attack (the Number Needed to Treat).)
Would you risk 50 people becoming diabetic to prevent 1 heart attack?
BTW, Diabetes is the #1 risk factor for heart attacks…
Every drug should report NNT: the Number Needed to Treat for 1 person to see the benefit.
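The NNT arithmetic is simple: it is just one over the absolute risk difference between the two arms. The sketch below uses invented rates chosen to mirror the rough figures in this comment (an NNT of 500 for the benefit, 50 new diabetics per heart attack prevented); it assumes the “10%” is an absolute risk increase, which is only one possible reading of that number:

```python
# Hypothetical NNT/NNH arithmetic. All rates are invented to mirror the
# commenter's rough numbers; they are not taken from any actual statin trial.

def number_needed(rate_a: float, rate_b: float) -> float:
    """1 / |absolute risk difference|: NNT for a benefit, NNH for a harm."""
    return 1 / abs(rate_a - rate_b)

# Benefit: heart-attack rate falls from 2.0% to 1.8%.
# Absolute risk reduction = 0.2%, so NNT = 1 / 0.002 = 500.
nnt_benefit = number_needed(0.020, 0.018)

# Harm: new-onset diabetes rises by 10 percentage points (8% -> 18%).
# Absolute risk increase = 10%, so NNH (Number Needed to Harm) = 10.
nnh_harm = number_needed(0.18, 0.08)

print(f"NNT (treat to prevent 1 heart attack): {nnt_benefit:.0f}")
print(f"NNH (treat to cause 1 case of diabetes): {nnh_harm:.0f}")
print(f"diabetics per heart attack prevented: {nnt_benefit / nnh_harm:.0f}")
```

Under these assumed rates, treating 500 people prevents one heart attack and causes 50 cases of diabetes, which is exactly the trade-off the question above is asking about.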

7 Likes

That is one way to describe it. I prefer to describe it as “practicing medicine”. Been practicing for hundreds of years and continue to do so.

2 Likes