`David L. DeMets
`
`Curt D. Furberg
`
`Fundamentals of
`Clinical Trials
`Third Edition
`
`Springer
`
`Sandoz Inc. IPR2016-00318
`Sandoz v. Eli Lilly, Exhibit 1095-0001
`
`
`
`L
`
`cal .4
`
`Heah
`
`burg
`
`Nati¢
`
`Bran~
`
`befol
`
`man~
`
`heart
`
`and |
`
`giver
`
`C
`
`Heall
`
`M.D.
`
`Trial,,
`
`Nati(
`
`istrat
`
`or ad
`
`clink
`
`in th
`
`cacy
`
`of th
`
`SOIl.
`
`desi~
`
`degv
`
`pletc
`
`and"
`
`Biolx
`
`the J
`
`clini
`
`DeM
`
`and
`
`labol
`
`desi~
`
`serv~
`
`He h
`
`mull
`
`Curt D. Furberg
Department of Public Health Sciences
`Wake Forest University
`Bowman Gray School of Medicine
`Winston-Salem, NC 27109
`USA
`
`Lawrence M. Friedman
`Division of Epidemiology and
`Clinical Applications
`National Heart, Lung, and
`Blood Institute
National Institutes of Health
Bethesda, MD 20892
`USA
`
`David L. DeMets
`Department of Biostatistics and
`Medical Informatics
`University of Wisconsin
`Madison, WI 53792
`USA
`
Library of Congress Cataloging-in-Publication Data
Friedman, Lawrence M., 1942-
Fundamentals of clinical trials / Lawrence M. Friedman, Curt D.
Furberg, David L. DeMets. -- 3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN 0-387-98586-7 (pbk. : alk. paper)
1. Clinical trials. I. Furberg, Curt. II. DeMets, David L.,
1944- III. Title.
[DNLM: 1. Clinical Trials. 2. Research Design. W 20.5 F911f 1998]
R853.C55F75 1998
615.5'072--dc21

98-26138
`
`This is a reprint of an edition published by Mosby.
`
`ISBN 0-387-98586-7
`
`Printed on acid-free paper.
`
`© 1998 Springer-Verlag New York, Inc.
All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New
York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly
analysis. Use in connection with any form of information storage and retrieval, electronic
adaptation, computer software, or by similar or dissimilar methodology now known or hereafter
developed is forbidden.
The use in this publication of trade names, trademarks, service marks and similar terms, even if
they are not identified as such, is not to be taken as an expression of opinion as to whether or not
they are subject to proprietary rights.
`
`Printed in the United States of America. (EB)
`
`15 14 13 12 11
`
`Springer-Verlag is a part of Springer Science+Business Media
`
`springeronline.com
`
`
CHAPTER 1
`
`Introduction to Clinical Trials
`
The evolution of the clinical trial dates from the eighteenth century.10 Lind, in
`
`his classical study on board the Salisbury, evaluated six treatments for scurvy in 12
`
`patients. One of the two who were given oranges and lemons recovered quickly and
`
`was fit for duty after 6 days. The second was the best recovered of the others and
`
`was assigned the role of nurse to the remaining 10 patients. Several other compara-
`
`tive studies were also conducted in the eighteenth and nineteenth centuries. The
`
comparison groups comprised literature controls, other historical controls, and concurrent controls.
`
`The concept of randomization was introduced by Fisher and applied in agricul-
`
`tural research in 1926.8 The first clinical trial that used a form of random assignment
`
`of subjects to study groups was reported in 1931 by Amberson et al.2 After careful
`
`matching of 24 patients with pulmonary tuberculosis into comparable groups of 12
`
`each, a flip of a coin determined which group received sanocrysin, a gold com-
`
`pound commonly used at that time. The British Medical Research Council trial of
`
`streptomycin in patients with tuberculosis, reported in 1948, was the first to use
`
`random numbers in the allocation to experimental and control groups.
`
The principle of blindness was also introduced in the trial by Amberson et al.2
`
`The patients were not aware of whether they received intravenous injections of
`
`sanocrysin or distilled water. In a trial of cold vaccines in 1938, Diehl et al.26 referred
`
`to the saline solution given to the subjects in the control group as a placebo.
`
`It is only in the past few decades that the clinical trial has emerged as the pre-
`
`ferred method in the evaluation of medical interventions. Techniques of implemen-
`
`tation and special methods of analysis have been developed during this period.
`
`Many of the principles have their origins in work by Hill.
`
`Because the authors of this book have all spent formative years at the National
`
`Institutes of Health (NIH), it is also pertinent to cite a series of papers that reviews
`
`the history of clinical trials development at the NIH.*
`
`The purpose of this chapter is to define clinical trials; review the need for them;
`
`and discuss timing, phasing, and ethics of clinical trials.
`
`*References 13, 36, 40, 43, 66
`
`
`
`
`2
`
`Fundamentals of Clinical Trials
`
`FUNDAMENTAL POINT
`
`A properly planned and executed clinical trial is a powerful experimental
`
`technique for assessing the effectiveness of an intervention.
`
WHAT IS A CLINICAL TRIAL?
`
`A clinical trial is defined as a prospective study comparing the effect and value
`
`of intervention(s) against a control in human beings. Note that a clinical trial is
`
`prospective, rather than retrospective. Study participants must be followed forward
`
`in time. They need not all be followed from an identical calendar date. In fact, this
`
`will occur only rarely. Each participant, however, must be followed from a well-
`
`defined point, which becomes time zero or baseline for the study. This contrasts
`
`with a case-control study, a type of retrospective study in which participants are
`
`selected on the basis of presence or absence of an event or condition of interest. By
`
`definition, such a study is not a clinical trial. People can also be identified from hos-
`
`pital records or other data sources and subsequent records can be assessed for evi-
`
`dence of new events. This is not considered to be a clinical trial since the partici-
`
`pants are not directly observed from the moment of initiation of the study and at
`
`least some of the follow-up data are retrospective.
`
`A clinical trial must employ one or more intervention techniques. These may be
`
`"prophylactic, diagnostic or therapeutic agents, devices, regimens, procedures, etc."62
`
`Intervention techniques should be applied to participants in a standard fashion in an
`
`effort to change some aspect of the participants. Follow-up of people over time
`
without active intervention may measure the natural history of a disease process,
`
`but it does not constitute a clinical trial. Without active intervention the study is
`
`observational because no experiment is being performed.
`
`A clinical trial must contain a control group against which the intervention
`
`group is compared. At baseline, the control group must be sufficiently similar in rel-
`
`evant respects to the intervention group so that differences in outcome may reason-
`
`ably be attributed to the action of the intervention. Methods for obtaining an appro-
`
`priate control group are discussed in Chapter 4. Most often a new intervention is
`
`compared with best current standard therapy. If no such standard exists, the people
`
`in the intervention group may be compared with people who are on no active inter-
`
`vention. "No active intervention" means that the participant may receive either a
`
`placebo or no intervention at all. Obviously, participants in all groups may be on a
`
`variety of additional therapies and regimens, so-called concomitant interventions,
`
`which may be either self-administered or prescribed by others (e.g., private physi-
`
`cians).
`
`For purposes of this book, only studies on human beings will be considered as
`
`clinical trials. Certainly, animals (or plants) may be studied using similar techniques.
`
`
`However, this book focuses on trials in people, and each clinical trial must therefore
`incorporate participant safety considerations into its basic design. Equally important
`is the need for, and responsibility of, the investigator to fully inform potential partic-
ipants about the trial.60
`Unlike animal studies, in clinical trials the investigator cannot dictate what an
`individual participant should do. He can only strongly encourage participants to
`avoid certain medications or procedures that might interfere with the trial. Since it
`may be impossible to have "pure" intervention and control groups, an investigator
`may not be able to compare interventions, but only intervention strategies. Strategies
`refer to attempts at getting all participants to comply to the best of their ability with
`their originally assigned intervention. When planning a trial, the investigator should
`recognize the difficulties inherent in studies with human subjects and attempt to esti-
`mate the magnitude of participants’ failure to comply strictly with the protocol.
`As discussed in Chapters 5 and 6, the ideal clinical trial is one that is randomized
`and double-blinded. Deviation from this standard has potential drawbacks that will
`be discussed in the relevant chapters. In some clinical trials compromise is unavoid-
`able, but often deficiencies can be prevented by adhering to fundamental features of
`design, conduct, and analysis.
`Several people distinguish between demonstrating efficacy of an intervention
`and effectiveness of an intervention. The former refers to what the intervention
`accomplishes in an ideal setting; the latter to what it accomplishes in actual prac-
`tice, taking into account incomplete compliance to protocol. As discussed in Chap-
`ter 16 and elsewhere, our preferred analytic approach emphasizes the importance
of the concept of effectiveness. Only in special circumstances will the focus of the
`clinical trial described in this book be on efficacy.
`
`CLINICAL TRIAL PHASES
`
`While we focus on the design and analysis of randomized trials comparing the
`effectiveness of one or more interventions with a control, several steps or phases of
`clinical research must occur before this comparison can be implemented.
`
`Phase I studies
`
`Although useful preclinical information may be obtained from in vitro studies or
animal models, early data must be obtained in humans. The first step, or phase, in
`developing a drug or a biologic is to understand how well it can be tolerated in a
small number of individuals. Although it does not meet our definition of a clinical
`trial, this phase is commonly called a phase I trial. People who participate in phase I
trials have typically already tried the existing standard interventions without improvement. Most phase I designs are relatively simple. One of the first steps in
`
`
`evaluating drugs is to estimate how large a dose can be given before unacceptable
`
`toxicity is experienced by patients.* This dose is usually referred to as the maximally
`
`tolerated dose, or MTD. Much of the literature has discussed how to extrapolate ani-
`
mal model data to the starting dose in humans74 or how to step up the dose levels to
`
achieve the MTD. As Storer and DeMets describe, there is a sparsity of phase I design literature, which is somewhat surprising since the goals are not dissimilar from those
`
`of bioassay methods for which a large literature exists.
`
In estimating the MTD in cancer drug development, the investigator usually starts with a very low dose and escalates the dose until a prespecified level of toxicity in patients is obtained. Typically, a small number of patients, usually three, are entered sequentially at a particular dose. If no specified level of toxicity is observed, the next predefined higher dose level is used. If unacceptable toxicity is observed in any of the three patients, an additional number of patients, usually three, are treated at the same dose. If no further toxicity is seen, the dose is escalated to the next higher dose. If additional unacceptable toxicity is observed, then the dose escalation is terminated and that dose, or perhaps the previous dose, is declared to be the MTD. This particular design assumes that the MTD occurs when approximately one third of the patients experience unacceptable toxicity. Variations of this design exist, but most are similar.
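The stepwise rule just described can be sketched as a small simulation. This is purely illustrative: the dose ladder, the true toxicity probabilities, and the function and variable names are our own assumptions, and a real phase I protocol would specify many more operational details.

```python
import random

def escalate(doses, tox_prob, rng):
    """Simulate the escalation rule described in the text: treat three
    patients at a dose; if any show unacceptable toxicity, treat three
    more at the same dose; if the second cohort also shows toxicity,
    stop and declare the previous dose the MTD."""
    mtd = None
    for dose, p in zip(doses, tox_prob):
        first = sum(rng.random() < p for _ in range(3))       # cohort of 3
        if first > 0:
            second = sum(rng.random() < p for _ in range(3))  # 3 more at same dose
            if second > 0:
                return mtd      # further toxicity: previous dose is the MTD
        mtd = dose              # dose tolerated; move up the ladder
    return mtd                  # never stopped; highest dose was tolerated

# Hypothetical ladder where toxicity is certain only at the fourth level.
declared = escalate([10, 20, 40, 80], [0.0, 0.0, 0.0, 1.0], random.Random(0))
# declared == 40: escalation stops at 80 and falls back to the prior dose.
too_toxic = escalate([10], [1.0], random.Random(1))
# too_toxic is None: even the lowest dose stopped escalation.
```

With extreme probabilities of 0 and 1 the outcome is deterministic, which makes the decision logic easy to check; with intermediate probabilities the declared MTD varies from run to run, which is exactly the behavior the more model-based designs discussed next try to tame.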
`
Some investigators82 have recently proposed more sophisticated designs that
`
`specify a sampling scheme for dose escalation and a statistical model for the estimate
`
`of the MTD and its standard error. The sampling scheme must be conservative in
`
`dose escalation so as not to overshoot the MTD by very much, but at the same time
`
`be efficient in the number of patients studied. Many of the proposed schemes use a
`
step-up/step-down approach, the simplest being to step up with a single patient until
`
`toxicity is first observed. Further increase or decrease in the dose level depends on
`
`whether or not toxicity is observed at a given dose. Dose escalation stops when the
`
`process seems to have converged around a particular dose level. Once the data are
`
generated, a dose-response model is fit to the data and estimates of the MTD can be
`
`obtained as a function of the specified probability of a toxic response.82
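That final model-fitting step can be illustrated with a toy example. Everything below is our own sketch, not any of the cited designs: the logistic model in log dose, the crude gradient-ascent fit, the invented dose and toxicity counts, and the function names are all assumptions, and a real analysis would use a proper maximum-likelihood routine with standard errors.

```python
import math

def fit_logistic(data, iters=20000, lr=0.02):
    """Crude maximum-likelihood fit of P(toxicity at log-dose x) =
    1 / (1 + exp(-(a + b*x))) by gradient ascent on the log-likelihood.
    data: list of (log_dose, n_patients, n_toxicities)."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        ga = gb = 0.0
        for x, n, tox in data:
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += tox - n * p        # d(log-likelihood)/da
            gb += (tox - n * p) * x  # d(log-likelihood)/db
        a += lr * ga
        b += lr * gb
    return a, b

def dose_for_probability(a, b, theta):
    """Dose (back on the original scale) at which the fitted curve
    predicts toxicity probability theta, e.g. theta = 1/3 for the MTD."""
    x = (math.log(theta / (1.0 - theta)) - a) / b
    return math.exp(x)

# Invented step-up/step-down results: (log dose, patients, toxicities).
data = [(math.log(1), 3, 0), (math.log(2), 3, 0),
        (math.log(4), 6, 1), (math.log(8), 6, 3)]
a, b = fit_logistic(data)
mtd = dose_for_probability(a, b, 1 / 3)   # dose with an estimated 1/3 toxicity rate
```

The point of the design choice is visible here: the MTD estimate uses information from every dose level tried, rather than only the level at which escalation happened to stop.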
`
`Phase II studies
`
`Once the MTD is established, the next goal is to evaluate whether the drug has
`
`any biologic activity or effect and to estimate the rate of adverse events. If the
`
`design of the phase I trial has not been adequate, the investigator may evaluate the
`
`drug for activity at too low or high a dose. Thus, the phase II design depends on the
`
quality and adequacy of the phase I study. The results of the phase II trial will, in

turn, be used to design the comparative phase III trial. The statistical literature for
`
`phase II trials is also quite limited.**
`
`*References 3, 16, 38, 82, 83, 96
`**References 23, 29, 35, 37, 45, 78, 94
`
`
One of the most commonly used phase II designs in cancer is based on the work of Gehan,35 which is a version of a two-stage design. In the first stage the investigator attempts to rule out drugs that have no or little biologic activity. For example, he may specify that a drug must have some minimal level of activity, say, in 20% of patients. If the estimated activity level is less than 20%, he chooses not to consider this drug further, at least not at that MTD. If the estimated activity level exceeds 20%, he will add more patients to get a better estimate of the response rate. A typical study for ruling out a 20% or lower response rate enters 14 patients. If no response is observed in the first 14 patients, the drug is considered not likely to have a 20% or higher activity level. That is, failure 14 times in a row would happen 5% of the time or less if the drug were truly effective 20% or more of the time. The number of additional patients added depends on the degree of precision desired, but ranges from 10 to 20. Thus a typical cancer phase II trial might include fewer than 30 patients to estimate the response rate. As is discussed in Chapter 7, the precision of the estimated response rate is important in the design of the comparative trial. In general, phase II trials are smaller than they ought to be.
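The 14-patient figure is simply the smallest n for which n consecutive failures would be that unlikely under a 20% true response rate. A quick calculation confirms it (the function name is ours, not Gehan's):

```python
import math

def stage_one_size(p_min=0.20, alpha=0.05):
    """Smallest n with (1 - p_min)**n <= alpha: if the true response
    rate is at least p_min, then n consecutive failures would occur
    with probability alpha or less, so the drug can be set aside."""
    return math.ceil(math.log(alpha) / math.log(1.0 - p_min))

n = stage_one_size()          # 14 patients for p_min = 20%, alpha = 5%
chance_all_fail = 0.8 ** 14   # about 0.044, i.e., just under 5%
```

The same formula shows why a higher target activity level shrinks the first stage: ruling out a 30% response rate at the same 5% level needs only 9 patients.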
Others52,82 have proposed phase II designs that have more stages or a sequential aspect. Some23,94 have considered hybrids of phase II and III designs to enhance efficiency. While these designs have desirable statistical properties, the most vulnerable aspect of phase II, as well as phase I studies, is the type of patients enrolled. Usually, patients entered in phase II trials have more exclusion criteria than those who will be considered in the phase III comparative trials. Furthermore, the outcome in the phase II trial (e.g., tumor response) may be different from that used in the definitive comparative trial (e.g., survival).
`
Phase III/IV trials
`
The phase III trial is the clinical trial defined above. It is generally designed to
`
`assess the effectiveness of the new intervention and thereby, its role in clinical
`
`practice. As noted, the intervention need not be a drug, but the term phase III trial
`
`is still commonly applied. The focus of this text is on phase III trials. However,
`
`many design assumptions for phase III trials depend on a series of phase I and II
`
`studies.
`
`Phase III trials of chronic conditions or diseases often have a short follow-up
`
`period for evaluation, relative to the time the intervention might be used in clinical
`
practice. In addition, they focus on effectiveness, but knowledge of safety is also
`
`necessary to evaluate fully the proper role of an intervention. A procedure or device
`
`may fail after a few years and have adverse sequelae for the patient. Thus long-term
`
`surveillance of an intervention believed to be effective in phase III trials is neces-
`
`sary. Such long-term studies, which do not involve control groups, are referred to as
`
`phase IV trials.
`
`
`WHY ARE CLINICAL TRIALS NEEDED?
`
`A clinical trial is the clearest method of determining whether an intervention has
`
`the postulated effect. Only seldom is a disease or condition so completely character-
`
`ized that people fully understand its natural history and can say, from a knowledge
`
`of pertinent variables, what the subsequent course of a group of patients will be.
`
`Even more rarely can a clinician predict with certainty the outcome in individual
`
`patients. By outcome is meant not simply that an individual will die, but when, and
`
under what circumstances; not simply that he will recover from a disease, but what
`
`complications of that disease he will suffer; not simply that some biologic variable
`
`has changed, but to what extent the change has occurred. Given the uncertain
`
`knowledge about disease course and the usual large variations in biologic measures,
`
`it is often difficult to say on the basis of uncontrolled clinical observation whether a
`
new treatment has made a difference to outcome, and if it has, what the magnitude
`
`is. A clinical trial offers the possibility of such judgment because there exists a con-
`
`trol group--which, ideally, is comparable to the intervention group in every way
`
`except for the intervention being studied.
`
The consequences of not conducting appropriate clinical trials at the proper time
`
`can be serious or costly. An example is the continued uncertainty as to the efficacy
`
`and safety of digitalis in congestive heart failure. Only recently, after the drug has
`
`been used for more than 200 years, has a large clinical trial evaluating the effect of
`
`digitalis on mortality been mounted.87 Intermittent positive pressure breathing
`
`became an established therapy for chronic obstructive pulmonary disease without
`
`good evidence of benefits. Much later, one trial suggested no major benefit from this
`
very expensive procedure.89 Similarly, high concentration of oxygen was used for
`
therapy in premature infants until a clinical trial demonstrated its harm. A clinical
`
trial can determine the incidence of adverse effects or complications of the intervention. Few interventions, if any, are entirely free of undesirable effects. However, drug
`
`toxicity might go unnoticed without the systematic follow-up measurements
`
`obtained in a clinical trial of sufficient size. The Cardiac Arrhythmia Suppression Trial
`
`documented that commonly used antiarrhythmic drugs were harmful in patients
`
`who had had a myocardial infarction and raised questions about routine use of an
`
`entire class of antiarrhythmic agents.28
`
In the final evaluation, an investigator must compare the benefit of an intervention with its other, possibly unwanted effects to decide whether, and under what circumstances, its use should be recommended. The cost implications of an intervention, particularly if there is limited benefit, must also be considered. Thrombolytic
`
`therapy has been repeatedly shown to be beneficial in acute myocardial infarction.
`
`The cost of different thrombolytic agents, however, varies several-fold. Are the added
`
benefits of the most expensive agents worth the extra cost? Such assessments are
`
`not statistical. They must rely on the judgment of the investigator and the physician.
`
`
`It has been argued, most commonly and most forcefully by those suffering from
`
`and interested in the acquired immunodeficiency syndrome (AIDS), that traditional
`clinical trials are not the sole legitimate way of determining whether interventions
`
are useful. This is undeniably true, and clinical trial researchers need to be willing to modify, when necessary, aspects of study design or management. If the
`
`patient community is unwilling to participate in clinical trials conducted along tradi-
`
`tional lines, or in ways that are scientifically pure, trials are not feasible and no infor-
`
`mation will be forthcoming. Investigators need to involve the relevant communities
`
`or populations at risk, even though this could lead to some compromises in design
`
`and scientific purity. Investigators need to decide when such compromises so invali-
`
`date the results that the study is not worth conducting. It should be noted that the
`
`rapidity with which trial results are demanded, the extent of community involve-
`
`ment, and the consequent effect on study design can change as knowledge of the
`
`disease increases, as at least partially effective therapy becomes available, and as
`
`understanding of the need for valid research designs, including clinical trials, devel-
`
`ops. This has happened to some extent with AIDS trials.
`
`Clinical trials are conducted because it is expected that they will influence
`
`practice.* It is undoubtedly true that the influence depends on numerous factors,
`
`including direction of the findings, means of dissemination of the results, and exis-
`
`tence of evidence from other relevant research. However, well-designed clinical
`
trials can certainly have pronounced effects on clinical practice.
`
`There is no such thing as a perfect study. A well thought-out, well-designed,
`
`appropriately conducted and analyzed clinical trial, however, is an effective tool.
`
While even well-designed clinical trials are not infallible, they can provide a sounder
`
`rationale for intervention than is obtainable by other methods of investigation. On
`
`the other hand, poorly designed and conducted trials can be misleading. Also, with-
`
`out supporting evidence, no single study ought to be definitive. When interpreting
`
`the results of a trial, consistency with data from laboratory, animal, epidemiologic,
`
`and other clinical research must be considered.
`
`PROBLEMS IN THE TIMING OF A TRIAL
`
`Once drugs and procedures of unproved clinical benefit have become part of gen-
`
`eral medical practice, performing an adequate clinical trial becomes difficult ethically
`
`and logistically. Some people advocate instituting clinical trials as early as possible in
`
the evaluation of new therapies.20 The trials, however, must be feasible. Assessing
`
`feasibility takes into account several factors. Before conducting a trial, an investigator
`
`needs to have the necessary knowledge and tools. He must know something about
`
`*References 4, 5, 33, 34, 51, 69, 75
`
`
`the safety of the intervention and what outcomes to assess and have the techniques to
`
do so. Well-run clinical trials of adequate magnitude are costly and should be done
`
`only when preliminary evidence of the efficacy of an intervention looks promising
`
`enough to warrant the effort and expenses involved.
`
Another aspect of timing is consideration of the relative stability of the intervention. If active research is likely to make the intended intervention outmoded in
`
`a short time, studying such an intervention may be inappropriate. This is particularly
`true in long-term clinical trials or studies that take many months to develop. One of
`
`the criticisms of trials of surgical interventions has been that surgical methods are
`
constantly being improved. Evaluating an operative technique of several years past,
`
when a study was initiated, may not reflect the current status of surgery.7,70
`
`These issues were raised in connection with the Veterans Administration study of
`
coronary artery bypass surgery. The trial showed that surgery was beneficial in subgroups of patients with left main coronary artery disease and three-vessel disease, but
`
not overall.25,59,85 Critics of the trial argued that when the trial was started, the surgical
`
`techniques were still evolving. Therefore, surgical mortality in the study did not reflect
`
what occurred in actual practice at the end of the long-term trial. In addition, there
`
were wide differences in surgical mortality between the cooperating clinics, which
`
may have been related to the experience of the surgeons. Defenders of the study maintained that the surgical mortality in the Veterans Administration hospitals was not very
`
different from the national experience at the time. In the Coronary Artery Surgery
`
Study,17 surgical mortality was lower than in the Veterans Administration trial, reflecting better technique. The control group mortality, however, was also lower.
`
Review articles show that surgical trials have been successfully undertaken.11,84
`
`While the best approach would be to postpone a trial until a procedure has reached
`
`a plateau and is unlikely to change greatly, such a postponement will probably
`
`mean waiting until the procedure has been widely accepted as efficacious for some
`
`indication, thus making it impossible to conduct the trial. However, as noted by
`
Chalmers and Sacks, allowing for improvements in operative techniques in a clinical trial is possible. As in all aspects of conducting a clinical trial, judgment must be
`
`used in determining the proper time to evaluate an intervention.
`
`ETHICS OF CLINICAL TRIALS
`
`People have debated the ethics of clinical trials for as long as they have been
`done. The arguments have changed over the years and perhaps become more
`sophisticated, but in general, they center around the issues of the physician’s obliga-
`tions to his patient vs. societal good, informed consent, randomization, and the use
`of placebo.* Studies that require ongoing intervention or studies that continue to
`
`*References 6, 9, 12, 15, 44, 49, 57, 67, 71, 72, 76, 80, 92, 93, 95
`
`
`enroll participants after trends in the data have appeared have raised some of the
controversy.57 The indicated references argue a number of these issues.
`We take the view that properly designed and conducted clinical trials are ethi-
`cal. A well-designed trial can answer important public health questions without
`impairing the welfare of individuals. There may, at times, be conflicts between a
`physician’s perception of what is good for his patient, and the needs of the trial. In
`such instances, the needs of the participants must predominate.
`Proper informed consent is essential. The requirements of the U.S. Department
of Health and Human Services are reasonable ones.63 Also pertinent are the International Ethical Guidelines for Biomedical Research Involving Human Subjects.54 Several investigators have shown that simply adhering to legal requirements does not ensure informed consent.41 In many clinical trial settings, though, true informed consent can be obtained. Sometimes, during a trial, important information derives from either other studies or the trial being conducted, which is relevant to the
`informed consent. In such cases, the investigator is obligated to update the consent
`form and notify current participants in an appropriate manner. A trial of antioxi-
`dants in Finnish male smokers indicated that beta carotene and vitamin E may have
been harmful with respect to cancer or cardiovascular disease, rather than beneficial. Because of those findings, investigators of other ongoing trials of antioxidants
`informed the participants of the results and the possible risks. Not only is it an ethi-
`cal stance, but a well-informed participant is usually a better trial participant. The
`situations where participant enrollment must be done immediately, in comatose
patients, or in highly stressful circumstances and where the prospective participants are minors or not fully competent to understand the study are more complicated
`and may not have optimal solutions.
The use of finder’s fees, that is, payment to physicians for referring participants to a clinical trial investigator, is inappropriate in that it might lead to undue pressure on a prospective participant. This differs from the common and accepted practice
`of paying investigators a certain amount for the effort of recruiting each enrolled
participant. Even this practice becomes questionable if the amount of the payment
`is so great as to induce the investigator to enroll inappropriate participants.
`Randomization has generally been more of a problem for physicians and investi-
gators than for participants. The objection to random assignment should only apply if the investigator believes that a preferred therapy exists. If that is the case, he
`should not participate in the trial. On the other hand, if he truly cannot say that one
`treatment is better than another, there should be no ethical problem with randomiza-
`tion. Such judgments regarding efficacy obviously vary among investigators. Because
`it may be unreasonable to expect that an individual investigator have no preference,
`not only at the start of a trial but during its conduct, the concept of "clinical
equipoise" has been proposed. In this concept, the presence of uncertainty as to
`the benefits or harm from an intervention among the expert medical community
`
`
rather than in the investigator, is justification for a clinical trial. Similarly, the use of
`
`a placebo is acceptable if there is no known best therapy and in other special cir-
`
`cumstances (e.g., the commonly used therapy is poorly tolerated).31 Of course, all
`
`participants must be told that there is a specified probability, for example, 50%, of
`
`their receiving placebo. The use of a placebo also does not imply that control group
`
`participants will receive no treatment. In many trials, the objective is to see whether
`
`a new intervention plus standard care is better or worse than a placebo plus stan-
`
dard care. In all trials, there is the ethical obligation to allow the best standard care
`
`to be used.
`
`The issue of how to handle accumulating data from an ongoing trial is a difficult
`
`one, and is discussed in Chapter 15. With advance understanding by both
`
`participants and investigators that they will not be told interim results, and that
`
there is a responsible data monitoring group, ethical concerns should be lessened, if
`
`not totally alleviated.
`There has been concern about falsification of data and entry of ineligible, or
`
`even phantom participants in clinical trials.1,6~ We