
Monday, January 6, 2020

Allergan and Ironwood Pharma settle Linzess patent dispute with Sandoz

Ironwood Pharmaceuticals (IRWD -2%) and licensee Allergan (AGN +0.1%) have settled their patent infringement litigation with Novartis’ (NVS -0.1%) Sandoz unit related to constipation med Linzess (linaclotide).
Under the terms of the settlement, Sandoz will have a non-exclusive license to market a generic version (145 mcg and 290 mcg) in the U.S. effective February 5, 2030, or earlier under certain circumstances.

Anika Therapeutics to buy two med firms for up to $195M

Anika Therapeutics (ANIK +4.6%) agrees to acquire Parcus Medical, a privately held sports medicine company, for an upfront payment of ~$35M and a contingent payment of $60M.
Concurrently, it agrees to acquire Arthrosurface, a privately held provider of joint surface and preservation solutions for active patients, for an upfront payment of $60M and an additional milestone payment of $40M.
For 2019, Parcus Medical and Arthrosurface are expected to generate revenues of ~$12M – $13M and ~$28M – $30M, respectively.
Anika anticipates both acquisitions will close in Q1 2020.

Catalyst Pharma up on bullish Firdapse forecast

Catalyst Pharmaceuticals (CPRX +5.4%) is up on below-average volume on the heels of its preliminary 2019 results and 2020 guidance.
Firdapse (amifampridine): Q4 2019 sales should be ~$30M. The total for the year should be ~$102M.
Quick assets at year-end should be ~$95M with no funded debt.
2020 outlook: Firdapse revenue for LEMS should be $135M – $155M. R&D and SG&A expenses should be ~$65M.
Topline data from a Phase 3 clinical trial evaluating Firdapse in patients with MuSK antibody-positive myasthenia gravis should be available in H1. If all goes well, the company expects to file a supplemental marketing application in the U.S. by year-end.

Hookipa Pharma up 2% on advancement of Gilead partnership

Thinly traded HOOKIPA Pharma (HOOK +2.1%) is up, albeit on a scant 38K shares, on the heels of its announcement that collaboration partner Gilead Sciences (GILD -0.2%) will advance its HBV and HIV vectors toward development, a decision that triggers another milestone payment under the HBV program.
Gilead has agreed to reserve manufacturing capacity for the vectors and has expanded the resources allocated to the partnership inked in June 2018.

Sunday, January 5, 2020

AI has come to medicine. Are patients being put at risk?

Health products powered by artificial intelligence are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.
IBM boasted that its AI could “outthink cancer.” Others say computer systems that read X-rays will make radiologists obsolete. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla.
“There’s nothing that I’ve seen in my 30-plus years studying medicine that could be as impactful and transformative” as AI, Topol said. Even the Food and Drug Administration ― which has approved more than 40 AI products in the last five years ― says “the potential of digital health is nothing short of revolutionary.”
Yet many health industry experts fear AI-based products won’t be able to match the hype. Some doctors and consumer advocates fear that the tech industry, which lives by the mantra “fail fast and fix it later,” is putting patients at risk ― and that regulators aren’t doing enough to keep consumers safe.
Andrew Ng (@AndrewYNg) tweeted: “Should radiologists be worried about their jobs? Breaking news: We can now diagnose pneumonia from chest X-rays better than radiologists. https://stanfordmlgroup.github.io/projects/chexnet/”
Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford’s Center for Biomedical Ethics.
Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain.
In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma ― an error that could have led doctors to deprive asthma patients of the extra care they need.
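This failure mode is easy to reproduce. Below is a minimal sketch (synthetic data and scikit-learn, not the actual system the researchers studied) of how a model trained only on recorded outcomes can conclude that asthma is protective, when in reality asthma patients simply received more aggressive care:

```python
# Hypothetical sketch: a model learns a misleading "asthma lowers pneumonia
# mortality" association because the training data cannot see treatment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.random(n) < 0.15       # 15% of patients have asthma
severity = rng.normal(0, 1, n)      # unobserved illness severity

# Asthma patients are routed to intensive care, which cuts their death risk;
# the dataset records only who died, not who received the extra care.
p_death = 1 / (1 + np.exp(-(severity - 1.0 - 2.0 * asthma)))
died = rng.random(n) < p_death

model = LogisticRegression().fit(asthma.reshape(-1, 1).astype(float), died)
print(f"learned asthma coefficient: {model.coef_[0][0]:+.2f}")  # negative

# A triage tool built on this model would rank asthma patients as low risk,
# deprioritizing exactly the patients whose low death rate depended on care.
```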
“It’s only a matter of time before something like this leads to a serious health problem,” said Dr. Steven Nissen, chairman of cardiology at the Cleveland Clinic.
Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is “nearly at the peak of inflated expectations,” concluded a July report from research company Gartner. “As the reality gets tested, there will likely be a rough slide into the trough of disillusionment.”
That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again,” acknowledges that many AI products are little more than hot air.
Experts such as Dr. Bob Kocher, a partner at the venture capital firm Venrock, are blunter. “Most AI products have little evidence to support them,” Kocher said. Some risks won’t become apparent until an AI system has been used by large numbers of patients. “We’re going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data,” Kocher said.
None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system ― which showed that colonoscopy with computer-aided diagnosis detected more small polyps than standard colonoscopy ― was published online in October.
Few tech start-ups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such “stealth research” ― described only in press releases or promotional events ― often overstates a company’s accomplishments.
And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software “may make patients into unwitting guinea pigs,” said Dr. Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.
AI systems that learn to recognize patterns in data are often described as “black boxes” because even their developers don’t know how they reached their conclusions. Given that AI is so new ― and many of its risks unknown ― the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.
Yet the majority of AI devices don’t require FDA approval. “None of the companies that I have invested in are covered by the FDA regulations,” Kocher said.
Legislation passed by Congress in 2016 ― and championed by the tech industry ― exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.
There’s been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.
The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices ― such as ones that help people count their daily steps ― need less scrutiny than ones that diagnose or treat disease.
Some software developers don’t bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.
Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. “It’s not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal,” said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and coauthor of the National Academy’s report. “That’s not how the U.S. economy works.”
But Oren Etzioni, chief executive at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.
“If failing fast means a whole bunch of people will die, I don’t think we want to fail fast,” Etzioni said. “Nobody is going to be happy, including investors, if people die or are severely hurt.”
The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the last decade.
Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market “moderate-risk” products with no clinical testing as long as they’re deemed similar to existing devices.
In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.
Instead, the FDA is using the process to greenlight AI devices.
Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said.
The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed “substantially equivalent” to products marketed before 1976.
AI products cleared by the FDA today are largely “locked,” so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA’s Center for Devices and Radiological Health. The FDA has not yet authorized “unlocked” AI devices, whose results could vary from month to month in ways that developers cannot predict.
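To make the distinction concrete, here is a minimal, hypothetical sketch (invented model, data, and retraining schedule, using scikit-learn) of a locked model frozen after clearance next to an unlocked one that keeps learning from post-market data:

```python
# Hypothetical sketch of "locked" vs. "unlocked" AI devices.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X0, y0 = rng.normal(size=(500, 4)), rng.integers(0, 2, 500)

# "Locked": trained once, then frozen. The same input always yields the
# same output for as long as the device is on the market.
locked = SGDClassifier(loss="log_loss", random_state=0).fit(X0, y0)

# "Unlocked": keeps updating on data seen after deployment, so its output
# for an identical patient can drift from month to month.
unlocked = SGDClassifier(loss="log_loss", random_state=0).fit(X0, y0)
for _ in range(12):  # e.g., monthly updates on new field data
    Xm, ym = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
    unlocked.partial_fit(Xm, ym)

patient = X0[:1]
print("locked:  ", locked.predict_proba(patient))    # stable
print("unlocked:", unlocked.predict_proba(patient))  # has drifted
```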
To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.
The FDA’s pilot “pre-certification” program, launched in 2017, is designed to “reduce the time and cost of market entry for software developers,” imposing the “least burdensome” system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.
Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products “is efficient and that it fosters, not impedes, innovation.”
Under the plan, the FDA would pre-certify companies that “demonstrate a culture of quality and organizational excellence,” which would allow them to provide less upfront data about devices.
Pre-certified companies could then release devices with a “streamlined” review ― or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products’ safety and reporting back to the FDA.
High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation.
But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. Johnson & Johnson, for example, has recalled hip implants and surgical mesh.
Some AI devices are more carefully tested than others. An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the test, sold as IDx-DR, right, said Dr. Michael Abramoff, the company’s founder and executive chairman.
IDx-DR is the first autonomous AI product ― one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma.
Yet some AI-based innovations intended to improve care have had the opposite effect.
A Canadian company, for example, developed AI software to predict a person’s risk of Alzheimer’s based on their speech. Predictions were more accurate for some patients than others. “Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment,” said coauthor Frank Rudzicz, an associate professor of computer science at the University of Toronto.
Doctors at New York’s Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital’s portable chest X-rays ― taken at a patient’s bedside ― and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it’s not surprising that these patients had a greater risk of lung infection.
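In machine learning terms, the model latched onto a site-specific shortcut rather than the disease. The sketch below (hypothetical features and numbers, scikit-learn) reproduces the pattern: a portable-film marker predicts pneumonia at the home hospital, and accuracy collapses at an external one where the marker no longer tracks illness:

```python
# Hypothetical sketch of shortcut learning on a site-specific confounder.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_site(n, p_portable_given_pneumonia):
    pneumonia = rng.random(n) < 0.3
    # "portable" stands in for scanner/film artifacts a model can detect
    portable = np.where(pneumonia,
                        rng.random(n) < p_portable_given_pneumonia,
                        rng.random(n) < 0.1)
    weak_signal = pneumonia + rng.normal(0, 2.0, n)  # true but noisy feature
    return np.column_stack([portable.astype(float), weak_signal]), pneumonia

# Home hospital: the sickest patients get bedside (portable) X-rays.
X_train, y_train = make_site(5000, p_portable_given_pneumonia=0.9)
# External hospital: portable machines are used regardless of illness.
X_test, y_test = make_site(5000, p_portable_given_pneumonia=0.1)

model = LogisticRegression().fit(X_train, y_train)
print("in-house accuracy:", model.score(X_train, y_train))  # looks strong
print("external accuracy:", model.score(X_test, y_test))    # flops
```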
DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a “game changer.” But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients’ kidney function didn’t improve, said Dr. Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of “overdiagnosis,” in which the AI system flagged borderline kidney issues that didn’t need treatment, Jha said.
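The arithmetic behind that trade-off is worth making explicit: two false alarms for every correct result means only one in three alerts is real.

```python
# Positive predictive value implied by "two false alarms per correct result".
true_alerts, false_alarms = 1, 2
ppv = true_alerts / (true_alerts + false_alarms)
print(f"precision of alerts: {ppv:.0%}")  # 33%: 2 of every 3 alerts are noise
```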
Google had no comment in response to Jha’s conclusions.

How Anxiety Traps Us, and How We Can Break Free

Zulfi, a coaching client of mine, is the envy of many of his peers. As the general manager of a large, successful business, his work is one of the CEO’s core priorities. He does good work and is well-loved by his team.
But Zulfi has a secret. He suffers from anxiety. It keeps him up at night, impacts his health, and takes a lot of time and energy to manage. When people praise Zulfi’s poise during a major customer presentation, they’re unaware that he survived the meeting by taking anti-anxiety medication. Zulfi handles two jobs each day: the one outlined in his job description and the other managing his anxiety.
It’s normal to occasionally experience anxiety, such as when we’re faced with a high-stakes meeting, a stressed-out boss, or a conflict with a colleague. But according to the National Institute of Mental Health, in any given year 19% of U.S. adults suffer from an anxiety disorder, and 31% will deal with one at least once in their lifetime.
Mental health experts postulate that, when anxious, we tend to get trapped in false or limited ways of thinking. These thought patterns create a debilitating negative spiral that can take over our lives by convincing us of impending doom and further exacerbating our sense of helplessness. Anxiety Canada, a website devoted to supporting people who suffer from anxiety, lists a number of these traps and thought patterns. Here are the ones my clients, usually senior executives, most commonly experience and the kinds of things they say when in the grip of a specific trap:
  • Catastrophizing: Imagining the worst possible outcome. “I will get fired if the presentation has any glitches.”
  • Mind reading: Imagining what others are thinking. “I know he doesn’t like working with me because he thinks I’m dumb.”
  • Fortune telling: Imagining what the future holds, but without data. “They will all hate me in the new group because I’m the only one who isn’t a physicist.”
  • Black-and-white thinking: Considering only two possible outcomes. “I’ll either hit a home run or get fired.”
  • Overgeneralizing: Painting all situations with a generalized outcome. “I presented to the CEO last year, and it didn’t go well. I never get things right or always fail when it comes to executive audiences.”
If one or more of these thinking traps has a hold on you, try these strategies I’ve used with my coaching clients to overcome them. While I’m not a psychologist or a medical professional, I do have experience helping my clients adjust their behaviors, change the way they think, and increase their effectiveness at work. These suggestions do not replace the need to consult mental health professionals for possible diagnosis and treatment for anxiety, but they can help you break your negative thought patterns, gain control over your anxiety, and allow you to listen to the chatter that really matters in your daily work.
Pause the pattern. Anxiety is often preceded by physical symptoms. Learn to recognize your physical cues of an impending attack: a churning stomach, sweaty palms, or flaring nostrils. These reactions are part of an amygdala hijack, causing your body to react with a fight-or-flight response instead of operating from your thinking brain. When you notice these reactions, consciously change your activities. Engage the thinking part of your brain, for instance, by doing math. But not something as simple as 2+2; try something that will challenge you enough to divert your brain away from your stressor.
Name the trap. Give your pattern a name, whether it is one of the traps listed above or something you come up with yourself. Naming converts the vague threat to something concrete. You regain power by realizing you’ve encountered it before — and survived. You can fine-tune your mitigation strategy based on the specific trap that’s ensnared you. Zulfi, for instance, had a better sense of the steps to take once he’d named his patterns and could distinguish between catastrophizing, mind reading, and fortune telling.
Separate FUD from fact. Create a two-column list. On one side list all your fears, uncertainties, and doubts, or FUD. The second column is for verified facts. Being able to compare the two can quell your fears and bring you back to reality.
For example, when Zulfi indulged in a mix of fortune telling and catastrophizing, he told himself, “Our key strategy is going to fail, and we’ll soon be out of business because our competitor moves faster, and our subsidiaries are located in places of political turmoil.” Entries in his FUD column included: Our competitor will out-innovate us and be faster to market; geopolitical events will spin out of control; we’ll have a great recession; and our best employees will burn out. Entries in his facts column included: We’ve beaten the competition to market the last three times; only one of our 16 subsidiaries is in a politically unstable situation; economic indices are stable; and employee attrition is at an all-time low. Seeing the facts next to his fears helped Zulfi tone down his concerns. If you find your FUD column to be much longer than the facts, get others involved. Reach out to someone you trust, and ask them for their point of view. They may also be able to point to some realistic facts to offset your anxieties.
Tell more stories. We make assumptions, jump to conclusions, and tell ourselves stories all the time. Storytelling helps us get through life more efficiently, but it can also be limiting. When we’re anxious, we tend not only to believe our own stories, we believe the most extreme and negative forms of them.
Instead of curbing this reflexive habit, indulge it. Compose three separate stories and ensure they’re very different from each other. For example, when a client’s manager asked him to increase his technical depth, the initial assumption he made — in other words, the story he made up and told himself — was that his manager was dissatisfied with his performance. When I pushed him further, he developed two companion stories: “My manager wants me to showcase my technical depth further to have an even bigger impact in the group,” and “My manager wants my skills to be more easily transferrable, so as I become more senior, I have more places in the company where I can move for my next role.” Expanding the stories you tell yourself about a specific situation shows you there are multiple possibilities, many of them more positive than your initial hypothesis.
Walk your talk. Ask yourself what you’d advise others to do. When my clients are anxious, I ask them what counsel they would give a friend or team member in a similar situation. People who felt clueless a moment before are immediately able to provide sound guidance. If you find yourself saying, “I feel stuck,” “I don’t know what to do,” or “There’s no way out,” ask yourself, “If a colleague came to me with my predicament, what would I tell them?” This pause allows you to become more objective and loosen the thinking trap that has you in its hold.
All of these strategies can help, but in the moment of panic, plans are hard to remember, much less execute. Write these tactics down and take them to your high-risk meetings. When you notice that familiar change in your heart rate or dryness in your throat, glance at your note and try one of these strategies to calm yourself.
After 10 months following these strategies, Zulfi started to notice changes. His anxiety attacks were less frequent, his self-talk changed from self-criticism to self-compassion, and he had more energy to focus on his day job.
It’s human to experience fear, self-doubt, and confusion. In the right dose these feelings can be helpful — they keep us vigilant, engaged, and productive. But when anxieties overburden our brains and undermine performance, it’s time to consciously choose the strategies that put us in charge of our internal dialogue and tune in to the chatter that matters.

Engrams emerging as the basic unit of memory

Above: Memory engram cells labeled green and red in the prefrontal cortex of a mouse. Image credit: Takashi Kitamura/MIT Picower Institute
Though scientist Richard Semon introduced the concept of the “engram” 115 years ago to posit a neural basis for memory, direct evidence for engrams has only begun to accumulate recently as sophisticated technologies and methods have become available. In a new review in Science, Professors Susumu Tonegawa of The Picower Institute for Learning and Memory at MIT and Sheena Josselyn of the Hospital for Sick Children (SickKids) and the University of Toronto describe the rapid progress they and colleagues have been making over the last dozen years in identifying, characterizing and even manipulating engrams, as well as the major outstanding questions of the field.
Experiments in rodents have revealed that engrams exist as multiscale networks of neurons. An experience becomes stored as a potentially retrievable memory in the brain when excited neurons in a brain region such as the hippocampus or amygdala become recruited into a local ensemble. These ensembles combine with others in other regions, such as the cortex, into an “engram complex.” Crucial to this process of linking engram cells is the ability of neurons to forge new circuit connections, via processes known as “synaptic plasticity” and “dendritic spine formation.” Importantly, experiments show that the memory initially stored across an engram complex can be retrieved by its reactivation but may also persist “silently” even when memories cannot be naturally recalled, for instance in mouse models used to study memory disorders such as early stage Alzheimer’s disease.
“More than 100 years ago Semon put forth a law of engraphy,” wrote Josselyn, Senior Scientist at SickKids, Professor of Psychology and Physiology at the University of Toronto and Senior Fellow in the Brain, Mind & Consciousness Program at the Canadian Institute for Advanced Research (CIFAR), and Tonegawa, Picower Professor of Biology and Neuroscience at the RIKEN-MIT Laboratory for Neural Circuit Genetics at MIT and Investigator of the Howard Hughes Medical Institute. “Combining these theoretical ideas with the new tools that allow researchers to image and manipulate engrams at the level of cell ensembles facilitated many important insights into memory function.”
“For instance, evidence indicates that both increased intrinsic excitability and synaptic plasticity work hand in hand to form engrams and that these processes may also be important in memory linking, memory retrieval, and memory consolidation.”
For as much as the field has learned, Josselyn and Tonegawa wrote, there are still important unanswered questions and untapped potential applications: How do engrams change over time? How can engrams and memories be studied more directly in humans? And can knowledge about biological engrams inspire advances in artificial intelligence, which in turn could feed back new insights into the workings of engrams?


More information: Sheena A. Josselyn et al., “Memory engrams: Recalling the past and imagining the future,” Science (2020). DOI: 10.1126/science.aaw4325