Psycho-Babble Medication | about biological treatments | Framed

Re: Placebo Responders Dropped...abstracts

Posted by Larry Hoover on February 10, 2003, at 8:21:51

In reply to Placebo Responders Dropped? Larry Hoover, posted by fachad on February 9, 2003, at 18:35:23

> > >During trials, apparently they give placebos first, then eliminate those test subjects who are affected by the placebos, thus severely skewing the results.
> >
> > That's simply not true. That would not be double-blind.
> I'm sure I've read many studies where early responders, whether responding to the active agent, or the control, or the placebo, are dropped from the study.
> This is supposed to make the results more valid, by eliminating placebo responders so that all the results are from actual drug effects, vs. placebo response.

It took me a while to find the right keywords to use..."run-in period" turns out to be the winner.

It makes a great deal of difference how one defines the methodology. Employing a run-in period in your design is an a priori tool, i.e. a decision made before study data is collected. As such, it is presumed to be without bias. What I thought I was hearing about was a post hoc decision, one made after data collection to "weed out" subjects who dilute the treatment effect.

In medical circles, there is concern about whether run-ins are applied for the right reasons. Excluding non-compliers makes good sense, because you're only interested in people who can follow the protocol. However, I see no valid argument that the technique should be used to screen out placebo-responders, and I still believe that such a tactic would represent bias and invalidate the study outcome. Employing a run-in definitely distorts the information provided by the study: "Compared with results that would have been observed without the run-in period, the reported results overestimate the benefits and underestimate the risks of treatment, underestimate the number needed to treat, and yield a smaller P value." (from the first abstract)
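That "underestimate the number needed to treat" point can be made concrete with a toy calculation. All response rates below are invented for illustration; the point is only the direction of the distortion:

```python
# Hypothetical illustration (all rates invented) of the JAMA abstract's
# claim: a placebo run-in makes the reported NNT smaller (look better)
# than the NNT a clinician would see in unscreened patients.

def nnt(p_drug, p_placebo):
    # Number needed to treat = 1 / absolute risk reduction
    return 1.0 / (p_drug - p_placebo)

# Assume a population of: 30% placebo responders (respond in either arm),
# 20% drug-only responders, 50% non-responders.
placebo_resp, drug_only = 0.30, 0.20

# No run-in: drug arm responds at 50%, placebo arm at 30%.
nnt_full = nnt(placebo_resp + drug_only, placebo_resp)        # 5.0

# A placebo run-in removes the 30% placebo responders before
# randomization. Among the remaining 70%, drug response = 0.20/0.70
# and placebo response = 0, so the trial reports a rosier NNT.
nnt_run_in = nnt(drug_only / (1 - placebo_resp), 0.0)         # 3.5

print(f"NNT without run-in: {nnt_full:.1f}")   # 5.0
print(f"NNT after run-in:   {nnt_run_in:.1f}") # 3.5
```

The reported NNT of 3.5 is what the publication shows, but a doctor treating unscreened patients still needs to treat 5 to get one extra response.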

In one variant design, a run-in first screens out placebo-responders, and only the non-responders are then randomized to one of two active comparator treatments. I can accept *that* practise. Anyway, here are some abstracts. The third one addresses the need for placebo-controlled studies in general (and makes a very good point, I might add).

JAMA 1998 Jan 21;279(3):222-5

Comment in:
JAMA. 1998 May 20;279(19):1526-7.
JAMA. 1998 May 20;279(19):1526; discussion 1527.

Run-in periods in randomized trials: implications for the application of results in clinical practice.

Pablos-Mendez A, Barr RG, Shea S.

Division of General Medicine, College of Physicians and Surgeons, Columbia University, New York, NY 10032-3702, USA.

Prerandomization run-in periods are being used to select or exclude patients in an increasing number of clinical trials, but the implications of run-in periods for interpreting the results of clinical trials and applying these results in clinical practice have not been systematically examined. We analyzed illustrative examples of reports of clinical trials in which run-in periods were used to exclude noncompliant subjects, placebo responders, or subjects who could not tolerate or did not respond to active drug. The Physicians' Health Study exemplifies the use of a prerandomization run-in period to exclude subjects who are nonadherent, while recent trials of tacrine for Alzheimer disease and carvedilol for congestive heart failure typify the use of run-in periods to exclude patients who do not tolerate or do not respond to the study drug. The reported results of these studies are valid. However, because the reported results apply to subgroups of patients who cannot be defined readily based on demographic or clinical characteristics, the applicability of the results in clinical practice is diluted. Compared with results that would have been observed without the run-in period, the reported results overestimate the benefits and underestimate the risks of treatment, underestimate the number needed to treat, and yield a smaller P value. The Cardiac Arrhythmia Suppression Trial exemplifies the use of an active-drug run-in period that enhances clinical applicability by selecting a group of study subjects who closely resembled patients undergoing active clinical management for this problem. Run-in periods can dilute or enhance the clinical applicability of the results of a clinical trial, depending on the patient group to whom the results will be applied. Reports of clinical trials using run-in periods should indicate how this aspect of their design affects the application of the results to clinical practice.

Stat Med 1993 Jan 30;12(2):111-28

A comprehensive algorithm for determining whether a run-in strategy will be a cost-effective design modification in a randomized clinical trial.

Schechtman KB, Gordon ME.

Washington University School of Medicine, Division of Biostatistics, St. Louis, MO 63110.

In randomized clinical trials, poor compliance and treatment intolerance lead to reduced between-group differences, increased sample size requirements, and increased cost. A run-in strategy is intended to reduce these problems. In this paper, we develop a comprehensive set of measures specifically sensitive to the effect of a run-in on cost and sample size requirements, both before and after randomization. Using these measures, we describe a step-by-step algorithm through which one can estimate the cost-effectiveness of a potential run-in. Because the cost-effectiveness of a run-in is partly mediated by its effect on sample size, we begin by discussing the likely impact of a planned run-in on the required number of randomized, eligible, and screened subjects. Run-in strategies are most likely to be cost-effective when: (1) per patient costs during the post-randomization as compared to the screening period are high; (2) poor compliance is associated with a substantial reduction in response to treatment; (3) the number of screened patients needed to identify a single eligible patient is small; (4) the run-in is inexpensive; (5) for most patients, the run-in compliance status is maintained following randomization and, most importantly, (6) many subjects excluded by the run-in are treatment intolerant or non-compliant to the extent that we expect little or no treatment response. Our analysis suggests that conditions for the cost-effectiveness of run-in strategies are stringent. In particular, if the only purpose of a run-in is to exclude ordinary partial compliers, the run-in will frequently add to the cost of the trial. Often, the cost-effectiveness of a run-in requires that one can identify and exclude a substantial number of treatment intolerant or otherwise unresponsive subjects.
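A toy cost comparison (all figures below are invented, and the model is my own simplification of the trade-off this abstract describes) shows why the conditions are stringent: the run-in shrinks the randomized sample, but adds screening and run-in costs for every entrant:

```python
# Toy model (all costs and sample sizes invented) of the Schechtman &
# Gordon trade-off: a run-in reduces the number randomized but adds
# run-in costs and requires screening more candidates.
import math

def cost_no_runin(n_randomized, screen_ratio, c_screen, c_follow):
    # screen_ratio = patients screened per eligible patient
    return n_randomized * screen_ratio * c_screen + n_randomized * c_follow

def cost_with_runin(n_randomized, excl_frac, screen_ratio,
                    c_screen, c_runin, c_follow):
    # Must enter enough subjects into the run-in that, after excluding
    # a fraction excl_frac, the required randomized sample remains.
    n_entering = math.ceil(n_randomized / (1 - excl_frac))
    return (n_entering * screen_ratio * c_screen   # screening
            + n_entering * c_runin                 # run-in period
            + n_randomized * c_follow)             # post-randomization

# Invented scenario: without a run-in, non-compliance dilutes the effect,
# so 1000 subjects must be randomized; a run-in excluding 20% of
# entrants sharpens the effect enough to need only 700.
no_runin = cost_no_runin(1000, screen_ratio=3, c_screen=50, c_follow=2000)
with_runin = cost_with_runin(700, excl_frac=0.20, screen_ratio=3,
                             c_screen=50, c_runin=300, c_follow=2000)
print(no_runin, with_runin)  # 2150000 1793750
```

Here the run-in pays off only because post-randomization follow-up dominates the budget and the exclusions buy a large sample-size reduction; shrink either advantage (cheap follow-up, or exclusions that remove mere partial compliers with little effect on power) and the arithmetic flips, which is the abstract's point.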

Clin Ther 2001 Apr;23(4):596-603

Can placebo controls reduce the number of nonresponders in clinical trials? A power-analytic perspective.

Leon AC.

Department of Psychiatry, Weill Medical College of Cornell University, New York, New York 10021, USA.

BACKGROUND: There is ongoing debate regarding the ethics of placebo-controlled clinical trials when a moderately effective standard treatment exists. One aspect of the debate--the number of nonresponders--tends to be overlooked. A larger between-group effect size is expected in placebo-controlled trials than in trials with an active comparator. For that reason, substantially fewer subjects need to be enrolled in placebo-controlled trials; consequently, there tend to be far fewer nonresponders in placebo-controlled trials. OBJECTIVE: This analysis was undertaken to illustrate that the use of placebo as a control can reduce the number of subjects who are unnecessarily exposed to delayed treatment. METHODS: Statistical power analyses were used to estimate the sample size required to detect various population treatment differences and the resulting number of nonresponders for 2-tailed chi-square tests. RESULTS: Empiric evidence of the phenomenon is provided for a wide range of rates of response to placebo, investigational, and comparator treatments. For example, 24 subjects (ie, 12 per group) are needed to detect differences between placebo (10% response rate) and an investigational drug (70% response); 15 of these would not respond. In contrast, if the investigational drug (70% response) is initially compared with a standard therapy (60% response), 752 subjects would be required, 263 of whom would not respond. CONCLUSIONS: This paper shows empirically that placebo controls can reduce the number of nonresponders in a randomized controlled trial. The number of subjects who are exposed to unproven, albeit promising, investigational drugs should be kept to a minimum until placebo-controlled trials support their use.
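Leon's figures can be roughly reproduced with the standard two-proportion sample-size formula. The sketch below uses the usual z-test calculation with Fleiss's continuity correction, at alpha = 0.05 (two-tailed) and 80% power; these settings are my assumptions, and the paper's exact method may differ slightly, so the first figure comes out a subject or two off:

```python
# Two-proportion sample size per group (z-test formula with Fleiss's
# continuity correction), using only the standard library. Alpha and
# power are assumed values; Leon's paper may have used a slightly
# different convention.
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-tailed
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    n = ((z_a * math.sqrt(2 * pbar * (1 - pbar))
          + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    # Continuity correction for the chi-square test
    n_cc = (n / 4) * (1 + math.sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2
    return math.ceil(n_cc)

# Placebo (10% response) vs investigational drug (70% response):
print(n_per_group(0.10, 0.70))   # 13 per group (abstract reports 12)
# Active comparator (60%) vs investigational drug (70%):
print(n_per_group(0.60, 0.70))   # 376 per group -> 752 total, as in the abstract
```

The two-orders-of-magnitude gap in required sample size is the whole argument: the small expected difference against an active comparator forces hundreds of subjects, and hence hundreds of nonresponders, into the trial.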





poster:Larry Hoover thread:140316