Psycho-Babble Medication | about biological treatments | Framed

Re: ADs found to offer little clinical benefit Netch

Posted by Larry Hoover on March 1, 2008, at 13:34:54

In reply to ADs found to offer little clinical benefit, posted by Netch on February 26, 2008, at 4:48:17

> /Netch

I don't know why you didn't post the whole article, including this: "Professor Kirsch says patients should not change their treatment without speaking to their doctor, but said other approaches such as physical exercise, psychoanalysis and self-help books, have been found to help."

Right. Kirsch is a psychologist. Hardly biased. Note how he totally ignores the additive effect of medication and psychotherapy, which together produce vastly superior response and remission rates compared with either modality alone.

Kirsch studied the identical dataset in 2002 (Prevention & Treatment: "The Emperor's New Drugs: An Analysis of Antidepressant Medication Data Submitted to the U.S. Food and Drug Administration"). His "findings" then were no different than they are now. How could they be? It's the same (incomplete) data! And the criticisms and limitations of his work have not changed, either. At least then he had the sense to express doubt about antidepressant clinical trial methodology, when he said: "If there is a powerful antidepressant effect, then it is being masked by a nonadditive placebo effect, in which case current clinical trial methodology may be inappropriate for evaluating these medications, and alternate methodology need to be developed." He was absolutely correct to raise this question. Returning to the old dataset without addressing this issue further only makes him a hypocrite, IMHO.

I have profound contempt for this latest work.

I can think of no valid reason to restrict the meta-analysis to research of such great age (11 to 20 years old), other than that the "results" were already known. There is a vast body of more competent, more recent, more extensive, and more relevant research which was totally ignored. The only redeeming feature is that at least he came up with the same statistics that he derived nearly six years before. However, his statistics had no external validity then, and they have none now.

External validity involves the generalizability of the findings of the study, outwards from the small sample of subjects studied, to the general population. For any antidepressant clinical trial (or collection of clinical trials, as in this meta-analysis) to have external validity, a number of assumptions must hold. Just two of these assumptions are that the subjects taking part in the study are typical of the population to which the findings are to be generalized, and that these subjects are treated in ways typical of treatment in the real world. Well, let's take a look at these factors.

Are the participants typical? No! It has been estimated that fewer than 1 in 10 of all persons who present in a doctor's office with symptoms of depression would be eligible to participate in an antidepressant clinical trial. Some estimates go as low as 1 in 30 eligible participants. Usually, exclusions are due to concurrent illness (having more than one medical issue), concurrent medication, or a history of other illness. So, right away, we've lost external validity between clinical trials and clinical practice. Most of us aren't like these people at all. But there's far more to consider.

Participants in clinical trials are self-selected. They consent; they're not compelled. I have never seen anyone try to determine what psychological variables self-selection bias might bring to a clinical trial. Were they motivated by free medication? Payment for participation? Why wouldn't they be concerned about the 50% possibility of getting an inactive medication, if they are truly as depressed as they say they are? I have other subject-related concerns, but I'll move on to methodology.

Are real people treated the way they are in a clinical trial? Again, no! No, they are not. How do you think the researchers obtain their Hamilton Depression scores, or their own subjective measures of antidepressant response? By paying attention to their subjects, frequently and extensively. I've taken part in a clinical trial (not antidepressant), and let me tell you, I can only wish for that level of supportive attention. I have a gem of a doctor, and I'm not putting him down to say he can't come close. Is it any surprise that depressed people might respond to intensive attention and concern? A little loving?

Another huge issue: the duration of the trials. Some were of only four weeks' duration. How does that compare with standard clinical practice? Even at six weeks, or eight weeks (the greatest duration in this dataset), are we to assume that all possible response had already occurred? All we know is that they stopped measuring response at those points in time.

Remember, Kirsch himself questioned the methodology. And I'm going to demonstrate why I think his just-published study verifies his 2002 thesis that this older methodology obscures the true drug response with placebo artefact. The word artefact arises from the same root meaning as artificial. Antidepressant clinical trials are wholly artificial environments, and we must be very careful in reaching conclusions from the data so obtained, most particularly conclusions about external validity.

Let's take a look at the recent Kirsch paper. It's here:

The entire paper is really heavy on the statistics, and I don't presume to understand all the niceties myself, even though I consider myself to be well-versed in stats. However, you don't need to be a statistician to grasp the issues I intend to bring out.

One of the first is this quotation, from the Results section: "...the mean change (in Hamilton Depression Scale scores) exhibited in trials provides a poor description of results..." Problem #1: All his conclusions are based on analyses of mean change. If the data are poor, so are the stats derived from them. Garbage in, garbage out.

He uses some fancy statistical methods to derive standardized and normalized values for the mean differences in Hamilton score. Whenever standardization or normalization is done, the effect is to reduce variability. In other words, the differences get smaller. That's a pretty good trick. Nonetheless, in every model he employed, antidepressants were found to be superior to placebo, p <0.001. (See Table 2.) That is statistical significance, but he wants to use what he calls clinical significance. More on that in a minute.
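If you've never seen a standardized difference before, here's a minimal sketch of how one is typically computed (the numbers are invented for illustration, not taken from Kirsch's data): the raw difference in mean Hamilton change is divided by a pooled standard deviation, so the result is expressed in SD units rather than HRSD points.

```python
from math import sqrt

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two trial arms."""
    return sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def standardized_difference(mean_drug, sd_drug, n_drug,
                            mean_placebo, sd_placebo, n_placebo):
    """Cohen's d: raw difference in mean HRSD change, in pooled-SD units."""
    return (mean_drug - mean_placebo) / pooled_sd(sd_drug, n_drug,
                                                  sd_placebo, n_placebo)

# Invented numbers: a 1.8-point raw HRSD advantage for drug, with
# typical trial-arm SDs around 8 points, shrinks to d of roughly 0.22.
d = standardized_difference(9.6, 8.0, 120, 7.8, 8.0, 120)
print(round(d, 2))  # 0.22
```

Notice how a difference that sounds respectable in raw Hamilton points becomes a small-looking number once it's divided through by the spread of the scores.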

He makes numerous references to NICE (National Institute for Clinical Excellence) Guideline 23, entitled "Depression: Management of depression in primary and secondary care". I had to look it up, as I'd never heard of it before. An excellent meta-analysis of depression treatments, by the way.

One of the reasons I was delayed in my response to this thread is that the NICE document is rather large, at 363 pages. Anyway, it is the source of the term "clinical significance", which Kirsch uses as his threshold of efficacy. I needed to know what he was talking about, before I could properly criticize his work.

The simplest way to summarize the meaning is to say that it represents the magnitude of difference in response between drug and placebo in a clinical trial that would show unequivocal benefit in clinical practice (i.e. the real world). Just for the record, NICE did the same sort of analyses on the same dataset as did Kirsch, and here's what they found:

There is strong evidence suggesting that there is a clinically significant difference favouring SSRIs over placebo on increasing the likelihood of patients achieving a 50% reduction in depression symptoms as measured by the HRSD (N = 1719; n = 3143; RR = 0.73; 95% CI, 0.69 to 0.78).
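For anyone wondering where a relative risk like that comes from, here's a hedged sketch with invented counts (not NICE's actual 2x2 tables). A value below 1 favouring SSRIs implies the "event" being counted is failure to achieve the 50% reduction, so the drug group fails less often:

```python
from math import sqrt, log, exp

def relative_risk(events_drug, n_drug, events_placebo, n_placebo):
    """Relative risk with a 95% CI via the standard log-RR normal approximation."""
    rr = (events_drug / n_drug) / (events_placebo / n_placebo)
    # Standard error of log(RR)
    se = sqrt(1/events_drug - 1/n_drug + 1/events_placebo - 1/n_placebo)
    lo = exp(log(rr) - 1.96 * se)
    hi = exp(log(rr) + 1.96 * se)
    return rr, lo, hi

# Invented counts: "events" = failure to achieve a 50% HRSD reduction,
# so RR < 1 favours the drug, matching the direction of NICE's RR = 0.73.
rr, lo, hi = relative_risk(events_drug=600, n_drug=1100,
                           events_placebo=820, n_placebo=1100)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The key point isn't the arithmetic; it's that on this analysis, the one NICE actually ran on response rates, the confidence interval sits well clear of 1, i.e. clear benefit.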


I can only believe Kirsch was already aware of this finding. However, he works with a different analysis of clinical significance, employing the Hamilton scale in a way he has already described as a poor descriptor of results. NICE already did that analysis, too, and they said:

There is evidence suggesting that there is a statistically significant difference favouring SSRIs over placebo on reducing depression symptoms as measured by the HRSD but the size of this difference is unlikely to be of clinical significance (N = 16; n = 2223; Random effects SMD = -0.34; 95% CI, -0.47 to -0.22).


My own interpretation is that the randomness in the data obscures the ability to measure a true difference between the two groups, but only when analyzed in this particular way. Talk about cherry-picking.

Now, Kirsch did produce some nice graphs, and I don't think you need any background in stats to understand them. I strongly recommend that you take a look at them.

Consider Figure 3:

Here, Kirsch presents the highly manipulated (standardized) difference score, d, as a function of depression severity. These are severely depressed subjects. Just from a quick look, would you rather be in the triangle group (drug), or the circles (placebo)? I pick triangle.

Notice also that the average drug effect line is horizontal. What I see is that this is the true drug effect, writ large. It is obscured by the placebo effect, which falls off as depression severity increases. The placebo effect obscures the drug effect, just as Kirsch himself postulated in 2002.

Now, here's a very illustrative figure, Figure 4:

First, the zero line means no difference (i.e. no superiority of either drug or placebo). Below zero is placebo superiority, whereas above it is drug superiority. What does the pattern tell you? Do we find just as many points below the zero line, and just as far from it, as we do above it?

Second, Kirsch calculated an average difference between drug and placebo of 1.8, favouring drugs. Take a piece of paper, and cover that part of the graph above 1.8 (just over halfway between zero and that green line at 3). Then, do the same, but cover all points below 1.8. Does the pattern look the same? Does it appear to you that 1.8 *is* the average value? Not even close, in my estimation. His derived statistic is not representative of the data points. The reason is that he normalized it (manipulated it) first.
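You can run that paper-covering exercise numerically, too. Given the per-trial drug-minus-placebo differences (the values below are hypothetical, not digitized from Figure 4), compare the mean against the counts on each side of it. If the points were distributed evenly around the average, roughly half would sit on each side:

```python
# Hypothetical per-trial HRSD difference scores (drug minus placebo) --
# NOT digitized from Kirsch's Figure 4, just an illustration of the check.
diffs = [-1.2, -0.5, 0.3, 0.8, 1.0, 1.1, 1.4, 1.6, 2.0, 2.4, 3.1, 4.5, 5.2]

mean_diff = sum(diffs) / len(diffs)
below = sum(1 for d in diffs if d < mean_diff)
above = sum(1 for d in diffs if d > mean_diff)

print(f"mean = {mean_diff:.2f}, {below} trials below it vs {above} above")
# When the counts (and the spread on each side) are lopsided like this,
# the mean is a poor one-number summary of the scatter.
```

A lopsided split is exactly the visual impression the paper-covering trick gives: a mean pulled around by a few extreme points is not representative of the bulk of the trials.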

The *only* evidence Kirsch supplied that suggests these ancient clinical trial data are generalizable to the population at large (i.e. that they have external validity) is that NICE came up with a statistic that might be used to do so. I don't believe that Kirsch met his burden.

I could continue to flog this horse.....For example, we know that placebo responders are far more likely to relapse, and to do so far sooner than do those continuing with antidepressant treatment. And I reiterate, Kirsch doesn't even mention the large superiority in response and remission seen with combined antidepressant and psychotherapy.

I think the lay press ought to get a smack upside the head for buying this recycled line without critical analysis, too. But that's another story.






poster:Larry Hoover thread:814746