
Humans, Not Guinea Pigs

The New Physician April 2001
Deaths, violations, closures—what’s wrong with our clinical trials system? A lot, experts say. One suggested solution: Start treating human research subjects as “autonomous human beings,” not just bodies for experiments.

In 1796, Dr. Edward Jenner began what may have been the first clinical trial in the history of medicine. To test his theory about a vaccination for smallpox, the physician infected an 8-year-old volunteer, a boy named James Phipps, with live cowpox virus. Seven weeks later, he infected the child with live smallpox virus. As Jenner suspected, the milder cowpox virus provided the boy with immunity to the much more serious smallpox virus. After this triumph, Jenner conducted more tests on volunteers. His experiments were astoundingly successful, eventually leading to a vaccination for smallpox that all but eliminated a disease that had previously ravaged Europe and Asia, killing and disfiguring millions of people.

Looking at Jenner’s research from a 21st-century perspective, most modern scientists would agree that his method’s results and the risks he took to get them are impressive. We will never know what agonies of conscience Jenner suffered as he considered the potential dangers and benefits of testing his theory. But we do know that the dilemma he faced is one that all clinical researchers face in one way or another. Even in today’s much more controlled medical environments, experiments on humans are not without risks. The potential good for the individuals who participate in clinical trials must be carefully balanced against the possible risks to a few volunteers. Dealing with these dilemmas is not made any easier by the current climate of biomedical research. An overworked oversight system, complex financial arrangements, and ever more complex and intractable diseases make the challenging job of a clinical researcher even more difficult.


Ellen Roche was not sick when she enrolled in a study at Johns Hopkins Asthma and Allergy Center last year. A month after her enrollment, however, she was dead. The experiment in which she participated was designed to examine how healthy lungs keep airways open even when they are exposed to irritating substances. No new drugs or therapies were being tested in this trial. The drug Roche was given and that likely led to her death—hexamethonium bromide, a lung irritant—had been prescribed decades ago to treat hypertension and to reduce bleeding during surgery. Yet it was never approved by the Food and Drug Administration (FDA) to be inhaled, the route by which it was administered in Roche’s trial, and it isn’t currently approved by the FDA for use in humans at all.

No one knows why Roche, a healthy, 24-year-old lab technician, volunteered for what was supposed to be a low-risk experiment. It may have been simple curiosity, an altruistic desire to advance science and help others, or perhaps she needed the $365 she would have received if she had completed the study. Roche was not, however, desperate for a cure for asthma (she did not have the disorder), and in any case, she understood that the medication she received was not a therapy and that she would gain no health benefits by participating in the trial.

After months of investigation, it is still not clear exactly what went wrong in the Hopkins asthma study. Federal investigators allege that researchers overlooked data detailing the dangers of hexamethonium and that the institutional review board (IRB) failed to follow proper procedures. It is apparent, however, that Roche’s death came as a complete surprise to everyone involved.

And as a result of her death, the Department of Health and Human Services’ (HHS) Office of Human Research Protections (OHRP) ordered Hopkins to stop enrolling new participants for federally supported clinical trials; federally funded trials already under way could continue only if they were in the best interests of the individual research subjects. This suspension was later lifted.

The lab technician’s death is not the only incident in recent years to raise questions about safety procedures in clinical trials. Since 1998, concerns about the safety of human research subjects have halted hundreds of experiments. Perhaps the most disturbing case, as well as the most publicized, was in 1999—the death of Jesse Gelsinger, an 18-year-old volunteer in a gene therapy trial at the University of Pennsylvania.

Like Roche, Gelsinger was not sick when he entered the trial. He did, however, suffer from a rare genetic disorder, ornithine transcarbamylase (OTC) deficiency. The disease affects the body’s ability to break down ammonia and is almost always fatal; most children who are born with OTC deficiency die within their first year, and survival beyond the age of 5 is extremely rare. Gelsinger suffered from a milder form of the deficiency, and medication and a strict diet kept his condition under control. The trial was designed to help develop a therapy for babies with the disease. Ethicists had determined that parents of babies with OTC deficiency could not give truly informed consent for their children to participate in the study, since they may be unduly influenced by their children’s illness. Instead it was decided the experiment would be done on mothers who were carriers of the disease and adult males, like Gelsinger, who had a milder form of OTC deficiency.

The experiment entailed some risks, but Gelsinger was aware of this. He said that he was doing it for “the babies.” The teenager died of multiple organ failure after being injected with adenovirus vectors designed to replace the faulty genetic information with the proper instructions.

Gelsinger’s death, the first reported death in a gene therapy trial, was a tremendous blow not only, of course, to his family and friends, but also to gene therapy research. After his death, the University of Pennsylvania was forced to halt all genetic research involving human subjects—a major setback for the institution that leads the nation in genetic research.


The deaths of Roche and Gelsinger, as well as other recent clinical trials cases involving violations or errors, have provoked intense scrutiny of the U.S. clinical trials system and the procedures designed to ensure the safety of human research subjects—primarily those involving the OHRP and IRBs.

Dr. Greg Koski, the OHRP’s director since September 2000, calls the current clinical trials system “dysfunctional.” Other experts agree, saying the current system is in dire need of improvement, if not a total overhaul. Financial conflicts of interest, lack of full disclosure about the details of previous studies, and consent forms that are difficult to understand have all been cited as significant flaws.

For example, while Gelsinger knew that his participation in the study entailed some risk, he did not know the gene therapy he received had resulted in the deaths of some primates during the animal phase of the study. Nor did he know the study’s chief investigator owned stock in the company funding the research.

Concerns in relation to funding sources are common. In the past, government agencies sponsored the majority of medical research. Today, pharmaceutical companies and other private industries and foundations fund more than half of the research that is being conducted in the United States. Critics of the system say this can easily lead to conflicts of interest as well as restrictions on how information is shared among researchers within the academic community. The National Institutes of Health has expressed grave concern about the ability of private enterprise to protect academic freedom in scientific research, and to determine and enforce appropriate limits of financial interests.

Another major problem is the IRB system. IRBs are designed to scrutinize and approve every piece of proposed research that will involve human subjects. However, the recent explosion of biomedical research—an estimated 5,000 institutions conduct clinical trials—has resulted in IRBs that are so overworked that doing their jobs well is almost impossible.

“When Ellen Roche died, 2,500 studies were under review by the various review boards at Hopkins,” says Alan Milstein, an attorney who has filed numerous lawsuits on behalf of clinical trials volunteers, including representing Gelsinger’s father in his case against the University of Pennsylvania, which settled out of court. “At any given time, between 200,000 and 300,000 studies are being done that involve human subjects. With this much research going on, the oversight system simply can’t do what it is mandated to do,” he says.

Dr. John Zaia, chair of the IRB at City of Hope Cancer Center in Los Angeles, agrees. “IRBs are totally snowed under. The biggest problem is that IRBs don’t have the staffing to deal with the increased workload,” he says.

But even when the problems can be identified, correcting them is not easy. The biomedical research community is a huge conglomeration of academic medical centers, private research labs, government agencies and private foundations. Enforcement authority and regulations vary from institution to institution, and protocols and reporting guidelines often change depending on who is funding the research.

For example, in 1981, the HHS established a set of regulations that have since developed into what is now known as the Common Rule. These regulations are designed to oversee the protection of human research subjects and to detail the responsibilities of oversight committees such as IRBs. However, the rule applies only to federally funded research, and any changes to its provisions must be approved by as many as 17 federal agencies.

“When the [oversight] system gets too cumbersome, it stops functioning as a protective mechanism for either the researcher or the patients,” says Dr. Carla Falkson, a cancer researcher at the University of Alabama at Birmingham’s (UAB) Comprehensive Cancer Center.

The OHRP’s Koski has repeatedly stressed the need for open and honest cooperation between institutions (researchers, universities and their IRBs) and the government oversight offices. Under his watch, he says, the OHRP has been willing to use its authority to enforce regulations, but it can’t reform the system on its own. Institutions need to improve research protections voluntarily, he says. Some institutions are doing this. After the deaths of Roche and Gelsinger, Hopkins and the University of Pennsylvania increased the number of IRBs and changed the practices of the boards so they can more closely monitor research.

And in April 2001, a consortium of organizations (including the Association of American Medical Colleges, the Association of American Universities, and Public Responsibility in Medicine and Research) created the Association for the Accreditation of Human Research Protection Programs (AAHRPP—pronounced “a-harp”). Experts say AAHRPP’s approach uses site visits, rigorous performance standards and precise outcome measures to guide institutions toward making research programs safer. The goal is to get to the point where all research institutions seek AAHRPP accreditation. The association began accepting applications for accreditation in February.

One of the program’s strengths, says AAHRPP’s executive director, Marjorie Speers, is that it gets everyone from administrators and researchers to advocacy groups and patients involved in the protection system. “Medical research is safe now,” Speers says, “but it is essential that we restore and maintain the public’s faith in research. Accreditation will help do that.”


Like many ethics experts, Rebecca Dresser believes there are serious flaws in the clinical trials system but points to another area of grave concern—communication. Much of the problem with the system stems from volunteers’ unrealistic expectations of the biomedical process, says the professor of biomedical ethics at Washington University School of Law and author of When Science Offers Salvation: Patient Advocacy and Research Ethics.

“Researchers haven’t done a very good job of informed consent,” Dresser says. “People already have an impression when they walk into the researcher’s office—usually a positive impression—about the research. This may lead patients to not pay as much attention as they should to what the researcher is telling them about the trial.”

Volunteers aren’t the only ones at fault, though. “Researchers sometimes have an understandable reluctance to be brutally honest with people who are dying, who may be desperate for a cure,” she says. “This lack of brutal honesty may mean that some people who are used in trials don’t fully understand the chance of benefit.”

Ethicists call this “therapeutic misconception,” and researchers, when they acknowledge it at all, soon realize that it is the thorniest of issues they face when trying to justify the use of human subjects in medical experiments.

The fact is that most trials—especially phase one and phase two trials—offer participants very little chance of therapeutic benefit. Yet, few participants in clinical trials are doing it “for the babies”; most desperately seek a cure. Falkson describes the patients who volunteer for her studies as “people who’ve tried everything but don’t want to give up hope. They know there is only a 1 [percent] to 2 percent chance of a response [to the experimental therapy], but they are willing to take that chance.”

When asked if she is confident that her patients understand the possible risks and benefits of the experiment, Falkson nods sincerely. But she adds that “sometimes people don’t want to know. We mustn’t overestimate the ability of patients to comprehend the situation. Sometimes they are too emotionally involved with this to be rational. We have to handle them very gently.”

The therapeutic misconception arises when patients, and often the physicians who recommend them for trials, confuse “medicine” and “science.” In medicine, the goal is to alleviate suffering, perhaps to heal. In science, the goal is to advance knowledge with the prospect of eventually giving medicine better tools with which to pursue its goals. The role of patients in the clinical trials process is to volunteer their bodies to help researchers test theories so the scientific community can increase its knowledge. The role of researchers is, in part, to ensure the volunteers understand this. This is not an easy task, and this is why therapeutic misconception can be such a problem.

As Falkson pointed out, many volunteers are willing to try anything—to take any number of unknown risks for a minimal chance that this new therapy will cure their diseases or at least buy them more time. But here’s the dilemma: If human research subjects were not so desperate, and if they truly understood the odds, would they still volunteer? Probably not, Milstein says.

“If patients truly understood the benefits and risks, fewer people would volunteer for experiments. People volunteer because they think it is in their therapeutic best interest. No matter what the researcher says, the patient will believe that the ‘doctor’ has the patient’s best interest at heart. You don’t find a lot of altruists in cancer wards and children’s hospitals,” Milstein says.

Dresser agrees there’s a great disconnection between the research community and the average patient. “Our expectations for biomedical research are probably too great,” she says. “Research definitely provides benefits, but if you spend time around medical schools, you soon realize that medical research is a slow and incremental process. There are many dead ends.”

But despite the risky nature of the beast, if cures are to be found and advances are to be made, experiments have to be done, and human subjects are, at least at some point in the process, essential. Dr. David Curiel, director of UAB’s Gene Therapy Center, is adamant on this point. One of the reasons he came to UAB to conduct his cutting-edge genetic research is because the university has an effective “bench-to-bedside” program.

“A strong linkage between the basic scientists and the clinic scientists is ideal [for research],” he says. “We design something in the lab, and then we are able to put it into a trial right here. We get answers in the clinic that tell us what we need to do to fix the problems in the lab.” And that’s the crux of the issue, he says. Human subjects are needed to fix, adjust and refine the science long before the science is ready to cure anyone.

So, back to that old ethical dilemma: Saving untold millions of people from the horrors of smallpox required risking the lives of healthy volunteers. Did Jenner’s volunteers understand the nature of the risk? Perhaps. Was it worth it? It certainly seems so now. But these questions come up again and again, every day, in academic medical centers. And they aren’t any easier to answer now than they were 200 years ago. But one thing, critics say, is certain: A responsible approach to medicine, whether in treating patients or recruiting them for studies, requires being as honest as possible with patients, even when they don’t want to know or don’t want to understand.

“If honesty with subjects means that the pace of research is slowed, then that is the price we pay for truth,” Milstein says. “There are more important values than research, such as treating people as autonomous human beings and not as means to an end, putting their immediate safety and needs ahead of other, less tangible, concerns.”

This is not a new idea. It is, in fact, one of the concepts on which the practice of medicine was founded: primum non nocere. First do no harm.

Avery Hurt is a freelance writer based in Birmingham, Alabama.