
Sunday, July 1, 2018

Is waiting almost over with Geron?


Shares of Geron Corp. (NASDAQ:GERN), a clinical-stage oncology company, rose by as much as 14.9% around mid-June.
The spark? The drugmaker’s shares responded positively to the company’s oral presentation over the weekend at the European Hematology Association (EHA) meeting in Stockholm, Sweden. Specifically, Dr. David Steensma, of the Dana-Farber Cancer Institute, provided an update on the ongoing Part 1 portion of the combined Phase 2/3 trial known as “IMerge,” which is assessing the first-in-class telomerase inhibitor imetelstat in patients with lower-risk myelodysplastic syndromes (MDS).

This conference presentation provided investors with a few key insights into imetelstat’s emerging clinical profile. First off, the company reported that a respectable 34% (11 of 32) of patients achieved red blood cell transfusion independence (TI) of greater than two months. That’s noteworthy because TI is the study’s primary endpoint, and over a third of the patients in this trial appear to be benefiting from imetelstat treatment. Normally, that sort of positive response in an experimental cancer trial is enough to warrant a larger, randomized trial.
Dr. Steensma also noted that a small sub-set of patients have now gone without blood transfusions for over a year at this point following imetelstat treatment. Although this initial part of the study lacks a control arm to put this intriguing result into the proper statistical context, this lengthy period of TI is impressive nonetheless, and suggests that this drug might even be a functional cure for a select few patients.

Looking ahead, Geron said it plans on unveiling the data from the expanded portion of the trial focusing on patients naïve to lenalidomide and hypomethylating agents and who lacked a certain mutation known as del(5q) at a future medical conference. While these particular data were arguably the main reason investors were tuning into this EHA presentation, the fact that the company expects to present additional data from this trial later on is a noteworthy development in and of itself.
In short, Geron and its shareholders are currently waiting on the all-important continuation decision by its partner Johnson & Johnson (NYSE:JNJ) regarding imetelstat’s fate. This decision is expected to come by the end of the third quarter, but the next major hematology conference isn’t until December of this year. That’s not to say investors should read too much into this intriguing timeline, but this stated plan of action certainly implies that the two companies are indeed making plans that extend beyond the third quarter of this year.

Are the Eyes Windows to Early Dementia?


Retinal neurodegeneration was linked to cognitive decline, adding to a growing literature that suggests retinal structures may be biomarkers for dementia, according to results from two prospective studies.
A thinner retinal nerve fiber layer (RNFL) was tied to worse cognitive function in people without neurodegenerative disease and greater odds of future cognitive decline, reported Paul Foster, PhD, of University College London, and colleagues, for the U.K. Biobank Eye & Vision Consortium.
And among Rotterdam Study participants in the Netherlands, thinner RNFL was associated with increased risk of dementia, including Alzheimer’s disease, according to M. Kamran Ikram, MD, PhD, of Erasmus MC University Medical Centre.
Both studies, published in JAMA Neurology, used optical coherence tomography (OCT) scanning to assess eyes.
“Our primary motivation was to confirm if the RNFL-cognition association held true in the very early stages of cognitive decline,” Foster told MedPage Today. “Our results suggest it does. This is important because, between 2002 and 2012, 99% of clinical trials into treatments for Alzheimer’s disease failed. A probable reason for the high failure rate is that treatments are being tested on those who already have irreparable damage to the brain.”
The eye is an easily accessible outpouching of the brain, and retinal changes can parallel cortical findings, noted Christine Nguyen, PhD, of the University of Melbourne in Australia, who was not involved in either study. “These two studies show in large cohorts that the axonal layer of retinal ganglion cells in the eye appears to be a useful early marker of cognitive decline,” she said.
“OCT is increasingly widespread in optometric and ophthalmic clinics. It is relatively inexpensive, takes a few seconds to conduct, and is comfortable for the patient. As such, it has a high potential as a screening tool,” she told MedPage Today.
In the U.K. Biobank study, 32,038 people without a neurodegenerative disease were included at baseline testing; their average age was 56 and 53.6% were women. Those in the thinnest quintile of RNFL were 11% more likely to fail at least one cognitive test. When 1,251 people completed follow-up cognitive tests 3 years later, those with an RNFL thickness in the two thinnest quintiles were almost twice as likely to have at least one worse test score (quintile 1: OR 1.92; quintile 2: OR 2.08; P<0.001 for both).
“The size of our study gave us unparalleled statistical power, and we believe this now confirms the link between thinner RNFL and very early cognitive weaknesses, and subtle future changes, in the general population,” Foster said.
The Rotterdam Study also revealed eye-related associations in 3,289 people with an average age of 69, of whom 41 (1.2%) already had dementia at baseline. While thinner ganglion cell–inner plexiform layer (GC-IPL) was associated with prevalent dementia (OR per SD decrease in GC-IPL 1.37), RNFL was not.
Over an average follow-up of 4.5 years, 86 people (2.6%) developed dementia, mostly Alzheimer’s disease. Thinner RNFL at baseline was associated with an increased risk of developing dementia (HR per SD decrease in RNFL 1.44; similar for Alzheimer’s disease), but no link was seen between GC-IPL thickness and incident dementia (HR 1.13).
Brain scans and spinal taps show it may be possible to detect Alzheimer’s early, but “given these are costly and painful, doctors are understandably reluctant to use them widely,” Nguyen observed. “What’s needed is an early, easy and cheap method to detect Alzheimer’s disease.”
While OCT may help fill this gap, “the challenge with the technology and the marker to retinal nerve fiber layer thinning is the lack of specificity for Alzheimer’s disease, as other diseases such as glaucoma also exhibit these changes,” she pointed out.
Extrapolating these findings to clinical practice should be approached cautiously, Foster noted. “There may be a role for population-wide screening with either a broad or narrow focus,” he said. And cognitive decline is an emotive subject, he noted. “People who previously thought themselves healthy may feel labeled as ‘early dementia’ or ‘high risk for dementia,’ and this may have significant, adverse psychological impact.”
There’s no systemic disease that doesn’t have an eye sign, he added. “Retinal morphometry, either of vessels or neuroretina, can contribute to risk profiling for stroke, myocardial infarction, hypertension, diabetes, and dementia. These tests currently are limited to the research arena, but are likely to become available to general practitioners within 5 years.”
The UK Biobank analysis was supported by the Eranda Foundation via the International Glaucoma Association.
Foster disclosed relevant relationships with Allergan, Carl Zeiss, Google/DeepMind, and Santen, as well as support from Alcon and the Richard Desmond Charitable Trust/Fight for Sight. Co-authors disclosed multiple relevant relationships with industry.
The Rotterdam Study was funded by Erasmus Medical Center and Erasmus University, the Netherlands Organization for Health Research and Development, the Research Institute for Diseases in the Elderly, the Ministry of Education, Culture and Science, the Ministry for Health, Welfare and Sports, the European Commission, and the Municipality of Rotterdam.
Ikram and co-authors disclosed no relevant relationships with industry.
  • Reviewed by F. Perry Wilson, MD, MSCE, Assistant Professor, Section of Nephrology, Yale School of Medicine, and Dorothy Caputo, MA, BSN, RN, Nurse Planner

Comorbidities Can Help Predict Migraine Progression


People with migraine can be classified into eight clinically meaningful subgroups according to comorbidities, which may offer clues about headache causes and lead to individualized treatment, researchers reported here.
A modeling analysis showed that comorbidity patterns were correlated with a range of headache features. For example, people with respiratory and psychiatric comorbidities were most likely to experience photophobia and phonophobia, while those with cardiovascular comorbidities were least likely to experience nausea in conjunction with their headaches, according to Richard Lipton, MD, of Albert Einstein College of Medicine in New York City, and colleagues.
“We started with a set of questions,” Lipton said during the Harold G. Wolff award lecture at the American Headache Society (AHS) annual meeting. “Why is migraine so common? Why are the clinical features of migraine so variable from person to person? Why do some people respond well to one preventive treatment but not another? Why do we have to work so hard to find the right treatment for our patients? Why can’t we find genes that account for most people with common forms of migraine?”
“The answer to all these questions is that migraine is really more than one thing,” Lipton continued. “It becomes quite important to understand the heterogeneous nature of migraine and to find approaches for identifying natural subgroups of people with migraine — groups that are characterized by common risk factors, common biology, and if we’re lucky common treatment response.”
Lipton’s group used a statistical technique called latent class analysis to identify subgroups of migraine patients with shared features, focusing on comorbidities and concomitant conditions.
The analysis was based on data from CaMEO (Chronic Migraine Epidemiology and Outcomes), a prospective web-based survey that has collected data on symptoms, treatment, and quality of life from thousands of people with migraines in the U.S.
Out of a total population of nearly 13,000 respondents, those with no self-reported comorbidities were excluded, leaving a sample of 11,837 patients. The researchers initially looked at 62 comorbidities, but excluded those that were not useful in discriminating between groups, leaving 22.
After exploring a series of models, they found that a model with eight subgroups best fit the data. The researchers then determined whether various comorbidities, symptoms, and other headache features were more or less common than average in each subgroup. The eight classes were:
  • Class 1: high on many comorbidities
  • Class 2: highest on respiratory and psychiatric symptoms
  • Class 3: highest on respiratory and pain symptoms
  • Class 4: highest on respiratory comorbidities
  • Class 5: highest on psychiatric comorbidities
  • Class 6: highest on cardiovascular comorbidities
  • Class 7: highest on joint pain
  • Class 8: low on comorbidities
Each natural subgroup had a distinct demographic profile, and clinical characteristics varied across classes, Lipton reported. For example, the respiratory/psychiatric group included a higher proportion of women, the cardiovascular group included more men, and the psychiatric group was the youngest.
Patients who had many comorbidities were over four times more likely to have chronic migraine compared with people in the low comorbidities group (23.1% vs 4.8%). This group was also most likely to experience aura, allodynia, moderate to severe intensity pain, nausea, and worsening of pain with routine activities.
Those in the respiratory/psychiatric group were most likely to report pounding or throbbing headaches and were most likely to be bothered by light and sound. In contrast, the cardiovascular group was least likely to experience any of these symptoms.
Patients with many comorbidities were most likely to say they had severe migraine-related disability, or MIDAS grade IV (48.1%), followed by the respiratory/psychiatric (31.9%), respiratory/pain (28.6%), and psychiatric (26.1%) subgroups. Again, the cardiovascular group reported the least severe disability (14.1%).
The researchers then asked whether these comorbidity subgroups could be used to identify which patients were at greatest risk for progression from episodic migraine to chronic migraine.
“If you want to understand the mechanism of something, it’s very often useful to understand what it co-localizes with,” AHS scientific program committee chair Peter Goadsby, MD, PhD, told reporters during a media briefing ahead of the meeting.
“Those of you with family members with migraine, if I tell you people with more comorbidities are more likely to progress, maybe you’re not going to be surprised. If you start to understand the mechanisms of that interaction, you start to understand better how people with infrequent migraine develop frequent migraine,” he continued. “If you understand how they get there, you can understand how to lead them back.”
After excluding people who already had chronic migraine at baseline, Lipton’s group looked at an analysis sample of 8,658 patients.
Compared with the low comorbidities subgroup, those in the high comorbidities group were 5.3 times more likely to develop new-onset chronic migraine over the course of a year, after adjusting for demographic factors only, Lipton reported. When other migraine features such as symptom severity and medication overuse were added to the model, the effect was attenuated (3.0 times more likely), but comorbidity subgroup still predicted the risk of progression.
“The hope is that as we identify biologically homogeneous groups, we’ll be able to do for a larger fraction of people with migraine what’s been done for familial hemiplegic migraine — that this work can help us predict prognosis and treatment response and ultimately perhaps lead to more powerful clinical trials,” Lipton said.
“At the moment, the drugs we discover are the ones that are effective for large subgroups of migraine,” he added. “If there were a treatment that works for 10% or 15% of migraine, the way we do our trials right now, we’d completely miss those effects.”
These results represent a step toward more individualized treatment for people with migraine.
“We can see how this work can lead us into an era of precision medicine for migraine,” said AHS session moderator Todd Schwedt, MD, of the Mayo Clinic in Phoenix.
The CaMEO study was sponsored by Allergan.
Lipton disclosed support from and relevant relationships with several companies including Allergan, Amgen, Biohaven, Dr. Reddy’s, Eli Lilly, GlaxoSmithKline, Merck, and Teva.

Loeb’s Third Point urges Nestle to split into three units


Billionaire investor Daniel Loeb on Sunday raised the pressure on food group Nestle, urging it to split into three divisions and telling its board to be “sharper,” “bolder,” and “faster” in overhauling the company.

Loeb, whose $18 billion hedge fund Third Point has invested more than $3 billion in Nestle, said in a letter to the board that the company needed to act more quickly on an overhaul and suggested it should be divided into beverage, nutrition and grocery units.
Such a move would help “simplify (Nestle’s) overly complex organizational structure,” according to the letter, which was seen by Reuters on Sunday.
“This is a call for urgency – rather than incrementalism,” Loeb wrote.
The Financial Times first reported on Loeb’s letter.
Loeb, who has been watching Nestle from afar for roughly a year and has periodically praised the company’s relatively new chief executive’s moves, appears to be running out of patience.
The fund manager criticized the slow pace of Nestle’s sales growth, the decline in its stock price and the fact that it had not sold more pieces that did not fit into its “nutrition health and wellness” strategy.
Third Point published a 34-page presentation arguing that the company is not living up to its potential.
“Nestle’s insular, complacent, and bureaucratic organisation is overly complex, lethargic, and misses too many trends,” Loeb said in the letter.
Nestle hired Mark Schneider, a German, as chief executive officer in early 2017. Schneider became the company’s first non-Swiss CEO in nearly a century.
Over the years, Loeb has repeatedly taken on revered corporations, including Yahoo, Sony, Dow Chemical, and Sotheby’s, urging them to perform better and finding new chief executives for some of them.
Loeb doesn’t shy away from making specific recommendations and has consistently urged Nestle to sell businesses that do not fit its strategy, including its stake in cosmetics firm L’Oreal.

Acquisition of PillPack gives Amazon access to sensitive health data: WSJ


Amazon’s acquisition of online pharmacy startup PillPack will give the e-commerce giant insight into people’s prescriptions, putting it into the highly regulated realm of health information with more restrictions than it is accustomed to on data-mining, according to The Wall Street Journal.

FDA Eyes Introduction of Lower-Cost Drugs, Next-Gen Sequencing


The market for biosimilar introduction is “extremely unstable,” but the FDA is working to ease the launch of more affordable cancer medicines, both biosimilars and generics, with a strategy that includes an attempt to reduce anticompetitive behavior, according to Scott Gottlieb, MD, commissioner of the FDA.
Gottlieb discussed the competitive landscape for oncology agents and how it is affecting the introduction of new medicines in a keynote talk at the 2018 Community Oncology Conference that the Community Oncology Alliance held in National Harbor, Maryland, in April.
There are numerous approval and marketing challenges for makers of biosimilars who seek to challenge the dominance of branded drugs. Since September 2017, 2 biosimilars have received FDA approval specifically for treatment of cancer and 2 for conditions related to cancer. Gottlieb said that pace has been a pleasant surprise. “I always thought it was going to be a slower-developing area with more hurdles to get over, but we’re right on track,” he said.
Although the FDA has no regulatory power over the prices set by manufacturers, distributors, and retailers, Gottlieb said, the FDA’s commitment to encouraging biosimilar and generic competition in the cancer drug marketplace has the potential to lower costs and improve patient access to available cancer care.
“Drug treatment costs are only a fraction of total care oncology costs, but patients [with cancer] are disproportionately shouldering the cost of oncology medicines,” stated Gottlieb. “The perverse reality of the market today is that cancer treatment comes with its own financial toxicity.”
In tandem with his discussion on improving competition in the oncology drug marketplace, Gottlieb discussed new guidelines from the FDA on the development and review of next-generation sequencing (NGS) tests.1 These tests also should help to spur innovation and the introduction of life-extending oncologics, he said.
“The FDA recognizes the tremendous potential of NGS technology to guide and improve patient outcomes,” Gottlieb said. “And we’re developing a policy approach to keep the pace with fast-moving NGS technologies that give patients and clinicians confidence in these panels’ analytical and clinical validity.”
The first breakthrough-designated NGS-based diagnostic test, F1CDx (FoundationOne CDx), was approved in November 2017. The powerful diagnostic tool can detect mutations in 324 genes and 2 genomic signatures in any solid tumor type. In March, CMS followed up with a national coverage determination in favor of NGS panel testing, expanding coverage to FDA-approved tests for patients with relapsed, refractory, or stage III cancers in addition to stage IV cancers.
Large-panel NGS tests will make it “easier and less expensive to screen patients for tumor mutations with a single test and then efficiently match them with available clinical trials,” Gottlieb said. He predicted that the information gained through this process will illuminate pathways to more effective therapies and stimulate the development of new agents. “This will increase the productivity of drug development, as drugs can then be targeted at common biomarkers across numerous tumor types.”
Gottlieb said that the FDA is striving to make NGS guidelines and current and future regulations “as nimble and sophisticated as the science driving these technologies so that clinicians and patients have access to them as soon as possible, while still providing patients with the reasonable assurance of safety and effectiveness they expect.”
The first set of guidelines addresses the design, development, and analytical validation of NGS-based panels, which can be used when diagnosing individuals with suspected genetic diseases. “For oncology, germline cancer predisposition contributes to about 5% to 10% of observed cancers. For cancer predisposition syndromes, such as Lynch syndrome, interventions like colonoscopies may be able to improve expected survival by sharply decreasing the rates of colorectal cancer,” Gottlieb noted.
These recommendations also explain what the FDA would look for in a premarket submission to assess a prospective test’s analytical and clinical validity, as well as its accuracy in detecting the presence or absence of a particular genomic change, leading to what Gottlieb said would be consensus standards for future NGS-based tests.
The second set of guidelines recognizes the availability of public human genetic variant databases, which could help to confirm the clinical utility of in vitro diagnostic (IVD) tests, particularly during premarket review. These FDA-recognized databases could serve as sources of existing evidence for the claims, such as of the clinical significance of a germline or somatic mutation, that developers of IVD tests make in the review process.
An additional piece of draft guidance explains a process that oncology trial sponsors can use to establish the risk level associated with an investigational IVD to be used in a trial of an investigational cancer drug or biological product.2 According to the guidelines, investigational IVDs are categorized as a significant risk, nonsignificant risk, or exempt from review. If a device is determined to be a significant risk, meaning that it could pose a serious risk to the health or safety of a patient, an investigational device exemption may be needed. The level of risk may disqualify a study for streamlined submission.
“This guidance reduces burdens on sponsors and on FDA staff by outlining circumstances under which sponsors may be able to include information about an investigational IVD into the investigational new drug application submission to the FDA center responsible for the therapeutic product,” Gottlieb said.
With these new policies, the FDA hopes to provide test developers with a more efficient path to market by means of efficient and accurate testing. They will encourage the innovation and adaptation of tools that can increase the productivity of research and patient care.

References

  1. FDA finalizes guidances to accelerate the development of reliable, beneficial next generation sequencing-based tests [news release]. Silver Spring, MD: FDA; April 12, 2018. http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm604462.htm. Accessed April 12, 2018.
  2. Investigational in vitro diagnostics in oncology trials: streamlined submission process for study risk determination guidance for industry. FDA website. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM604441.pdf. Published 2018. Accessed May 17, 2018.

The Case Against Thrombolytic Therapy in Stroke


Lysing clots in brain arteries to reverse an acute ischemic stroke should work. Thrombolysis worked in myocardial infarction (MI).
For MI, 14 trials enrolling more than 140,000 patients proved that lytic therapy improved outcomes.[1] A 2014 Cochrane review of 27 stroke trials included about 11,000 patients—12-fold fewer.[2]
Before Gina Kolata, a health journalist at the New York Times, wrote a story about the decades-long debate between neurology leadership and emergency medicine specialists on the use of lytic therapy for stroke, I, like many cardiologists, accepted the experts’ view that tissue plasminogen activator (tPA) is beneficial and “should be given”[3] to eligible patients with stroke.
Kolata clearly sided with the neurologists. She also featured the oft-heard opinion from many in academic medicine that social media, blogs, and podcasts can be used in negative ways to slow uptake of beneficial therapy.
As an electrophysiologist, I often see patients with stroke—both acutely in the hospital and also in follow-up. I did not know about the tPA debate. So I studied the evidence.
What I found was shocking: The evidence for thrombolysis in acute stroke does not support its guideline recommendation. The resistance to thrombolysis promoted through social media channels and in emergency medicine literature is rational.
Let me explain the strong case against lytic therapy.

Dubious Benefits

Unlike thrombolysis for MI, no stroke trial has shown that lytic therapy lowers death rates. In fact, lytic therapy for stroke markedly increases early death and shows a trend toward higher overall mortality.
Neurologists measure the benefit of lytic therapy by estimating function or the ability to remain independent, a much more subjective endpoint than death. Trialists use multiple scales (such as the modified Rankin Scale [mRS]) to quantify functional capacity. Attempting to put numbers on qualitative outcomes is the first of many flaws in the lytic trials. Evidence from the neurology literature chronicles “potentially significant interobserver variability in these scales.”[4,5]
Ten randomized clinical trials assessed functional capacity in patients with acute ischemic stroke treated with thrombolysis or placebo. Two streptokinase trials[6,7] and two tPA trials[8,9] were stopped early because of harm or futility, and four trials showed no benefit of tPA on functional improvement.[10,11,12,13] That leaves two positive trials: NINDS (Part 2)[14] and ECASS III.[15] The first point to make concerns the distribution of these results. If you did 10 trials of an ineffective treatment, this pattern (most trials negative, with outliers showing both harm and benefit) is what you would observe in a normal distribution—or by chance alone.
The second point is that the two positive trials have significant flaws and biases toward tPA.

Flawed Trials in Favor

In the NINDS (Part 2) trial,[14] patients treated with tPA were at least 30% more likely to have minimal or no disability at 3 months compared with those treated with placebo. In the original New England Journal of Medicine publication, the authors listed median baseline National Institutes of Health Stroke Scale (NIHSS) scores of 14 and 15 in the respective treatment groups. This gave readers the impression that baseline characteristics were similar. But they were not.
Five years later, the same authors[16] published a subanalysis of NINDS that brought to light imbalances in the baseline characteristics of the two groups. For example, within the subgroup of patients treated between 91 and 180 minutes, 19% of those given tPA had NIHSS scores of 0 to 5 (mild) compared with only 4.2% of the placebo group. In addition, fewer patients in the tPA subgroup had severe strokes (18% vs 27% with NIHSS score > 20). Because the outcome of stroke depends greatly on the severity of the initial presentation, these imbalances bias the results in favor of tPA.[17]
Using patient-level data from the NINDS trial, which the National Institutes of Health (NIH) placed in the public domain, Jerome Hoffman and David Schriger, from the University of California, Los Angeles, published a reanalysis showing that baseline stroke severity and preexisting disability had a greater association with outcome than did the treatment provided.[18]
Their novel reanalysis centered on the concept of change in NIHSS score. Essentially, the original NINDS investigators reported where patients ended up at 90 days based on treatment assignment. What’s not reported is the change from baseline. If tPA is beneficial, people treated with the drug should end up with a significantly lower score than they started with. When Hoffman and Schriger charted the change in NIHSS, the tPA benefit was no longer evident.
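The logic of the change-from-baseline critique can be sketched with toy numbers. The scores below are hypothetical, not data from NINDS: if one arm happens to enroll milder strokes, it ends up with better 90-day scores even when both arms improve by exactly the same amount.

```python
# Hypothetical NIHSS scores; higher = more severe stroke.
tpa_baseline     = [6, 8, 10, 12]    # milder strokes at enrollment
placebo_baseline = [10, 12, 14, 16]  # more severe strokes at enrollment

IMPROVEMENT = 4  # assume identical improvement in both arms (no drug effect)
tpa_90d     = [s - IMPROVEMENT for s in tpa_baseline]
placebo_90d = [s - IMPROVEMENT for s in placebo_baseline]

def mean(xs):
    return sum(xs) / len(xs)

# Comparing only 90-day scores makes the tPA arm look better...
endpoint_gap = mean(placebo_90d) - mean(tpa_90d)

# ...but the change from baseline, the metric Hoffman and Schriger
# charted, is identical in both arms.
tpa_change     = mean(tpa_baseline) - mean(tpa_90d)
placebo_change = mean(placebo_baseline) - mean(placebo_90d)
```

Here the endpoint comparison shows a 4-point "advantage" for tPA that is entirely an artifact of the baseline imbalance, since both arms improved by the same 4 points.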
Their provocative analysis also refuted the time-is-brain hypothesis. When they analyzed the relationship in 90-day change in stroke scale by time to treatment, the purported advantage of early tPA disappeared. This finding supports the notion that the best predictor of stroke outcomes is the severity of stroke at presentation.
Post hoc analyses challenge Hoffman and Schriger’s conclusions and uphold the original findings of NINDS.[19,20,21] Crucially, none of these papers use patient-level data to measure the change in stroke score from baseline. Another post hoc study[22] by an independent group of authors, convened at the request of the NIH to ascertain whether the subgroup imbalance invalidated NINDS, backed up the main trial but added the caveat that their exploratory analysis “was not powered to detect subgroup treatment differences.”
The other positive trial, ECASS III,[15] also had statistically significant imbalances in baseline stroke severity that biased toward tPA: The placebo group had more severe strokes (higher NIHSS score) and nearly double the number of patients with previous strokes (14.1% vs 7.7%; P = .003).
The most significant flaw in ECASS III involved the subjective primary endpoint. Investigators used the mRS dichotomized at scores of 0 to 1 vs 2 to 6 for their primary endpoint. A favorable outcome (mRS score of 0 or 1) occurred in 52.4% of the tPA group vs 45.2% of the placebo group. The absolute difference of 7.2 percentage points just reached statistical significance at a level of P = .04. The problem is that the difference between a score of 1 (able to carry out all usual activities, despite some symptoms) and a score of 2 (able to look after own affairs without assistance, but unable to carry out all previous activities) is subjective. Yet in ECASS III, a score of 2 was lumped together with scores all the way up to 6 (death). If instead you compared those with scores of 0 to 2 with those with scores of 3 to 6, there was no significant difference between tPA and placebo.
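How marginal that P = .04 is can be checked with a back-of-the-envelope two-proportion z-test on the percentages quoted above. The group sizes below are approximations of the ECASS III arms, and this pooled-variance test is a simplification of the trial's actual analysis, so treat the result as illustrative only.

```python
import math

def two_prop_z_pvalue(p1, n1, p2, n2):
    """Two-sided z-test for a difference in two proportions (pooled SE)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# mRS 0-1 ("favorable") rates quoted above; group sizes are approximate.
p = two_prop_z_pvalue(0.524, 418, 0.452, 403)
```

The computed p-value lands just under 0.05, which underscores the article's point: move the dichotomization cut point, and this razor-thin significance disappears.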
The strongest argument against tPA in stroke came from the negative IST-3 trial[12]—the largest randomized, controlled trial of thrombolysis vs placebo, which included just over 3,000 patients. The primary outcome of being alive and independent at 6 months, as measured by the Oxford Handicap Score, was not statistically different between those in the tPA group and those in the placebo group (37% vs 35%, respectively; P = .18).

Certain Harm: Increased Risk for ICH

Every paper published on thrombolysis in stroke reports a higher rate of intracerebral hemorrhage (ICH) with lytic therapy. This statement comes from the American Heart Association/American Stroke Association guidelines[3] for the early management of acute ischemic stroke: “Treatment with intravenous rtPA is associated with increased rates of intracranial hemorrhage, which may be fatal.” A Cochrane systematic review found that thrombolytic therapy for stroke increased the risk for symptomatic ICH nearly fourfold (odds ratio [OR] 3.75; 95% confidence interval [CI] 3.11–4.51).[2] The risk was two- to sixfold higher in ECASS III and NINDS (Part 2).
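For readers unfamiliar with how a pooled figure like the Cochrane OR is built, here is the standard odds-ratio calculation from a single 2×2 table. The event counts below are hypothetical, chosen only so the result lands near the quoted OR of 3.75; they are not the Cochrane data, and a real meta-analysis would pool many such tables.

```python
import math

def odds_ratio_95ci(a, b, c, d):
    """OR and Wald 95% CI from a 2x2 table:
    a = ICH with lytic therapy,  b = no ICH with lytic therapy,
    c = ICH with control,        d = no ICH with control."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical counts: roughly 7.7% vs 2.2% symptomatic ICH.
or_, lo, hi = odds_ratio_95ci(77, 923, 22, 978)
```

Because the entire 95% interval sits well above 1.0, the excess hemorrhage risk is not plausibly a chance finding, which is why the author calls it "certain harm."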

Possible Risk for Increased Death

All thrombolytic trials show an increased risk of early death with tPA. Death rates tend to even out over time, but the cumulative mortality signal trends higher with thrombolysis.
A 2012 meta-analysis of tPA trials did not show a significant increase in overall mortality (OR, 1.06; 95% CI, 0.94 – 1.20; P = .33).[23] Australian authors, who were not involved in any of the thrombolytic trials, did a systematic review of thrombolysis in stroke and found a 17% higher rate of overall death with lytic therapy vs placebo when they included all thrombolytic trials (OR, 1.17; 95% CI, 1.06 – 1.30; P = .003). When they included only tPA trials, overall death did not differ (OR, 1.04; 95% CI, 0.92 – 1.18; P = .49).[24] This latter meta-analysis also reported a trend for decreasing mortality rates over the two decades that stroke trials have been done. Because tPA trials came after the streptokinase trials, it’s possible the improved mortality in tPA trials occurred not because of drug effects but because of overall improvements in stroke care.

External Validity of Thrombolytic Trials

Let’s say you disagree with this analysis. You ignore the trials stopped early for harm. You ignore the many negative trials. You ignore the higher rate of ICH and possible signal of increased death. And you focus only on the two positive trials, despite their flaws and subjective efficacy endpoint. You still have the problem of translating this evidence to real life.
Diagnosing stroke emergently isn’t always easy. A study from the 1990s found that stroke mimics, such as Todd’s paralysis, infection, or metabolic disorders, occurred in nearly one in five patients initially diagnosed with stroke.[25] Proponents might argue that the rate of misdiagnosis is lower now because of stroke centers. Maybe, but many patients still receive care outside of specialty centers. What’s more, any patient who receives tPA for a nonstroke gains no benefit and is exposed to a real possibility of harm.
In the tPA trials, strict criteria governed enrollment of patients, which is not feasible in regular practice. Cleveland Clinic researchers tallied results from every stroke patient receiving tPA at 29 hospitals in the local area and found that half of the 70 patients treated with tPA received it inappropriately, and that treated patients had a rate of ICH (15.7%) that greatly exceeded that seen in clinical trials.[26]
Supporters of tPA can point to observational studies reporting rates of ICH, early death, and functional independence similar to those in the clinical trials. These studies have significant limitations: In addition to being uncontrolled, they suffer from incomplete case ascertainment and likely selection bias. For example, one such Canadian registry study[27] is not applicable to US practice because the centralized infrastructure of Canada’s healthcare system allows for the natural development of expert stroke centers. Another registry study (SITS-MOST)[28] had voluntary participation of centers that “promised” to enroll all patients. Also, SITS-MOST represents an idealized situation in that it excluded all patients given tPA in violation of strict eligibility criteria.[29]

Conclusion

Here I echo Hoffman’s analysis[30] of lytic trials from almost two decades ago when he wrote that using a new therapy is reasonable for a condition if four criteria are met:
  1. Outcome is almost uniformly bad with standard therapy.
  2. The potential benefits of the new therapy are substantial.
  3. The proposed treatment is unlikely to cause harm.
  4. There is no reason to suspect results will be substantially worse in general practice.
None of these criteria is met for thrombolysis in stroke.
Although we have all seen a patient with stroke deficits improve after lytic therapy, the evidence from nearly 10,000 patients in trials shows that the odds of that patient getting better with placebo are similar (eg, spontaneous lysis), whereas the chances of that patient suffering ICH are two- to sixfold higher.
Thrombolytic proponents are wrong to recommend this therapy as something that “should be done.” The evidence for harm is greater than the evidence for benefit. You don’t have to be a neurologist to see that.
Finally, the fact that a therapy with clear harms and high costs became anointed as beneficial despite dubious evidence argues strongly for the kind of independent critical appraisal that the digital democracy now allows.