
Monday, September 23, 2024

'White House, Microsoft, SAS, Foley Hoag & Astellas Offer AI Guidance to Investors'

Understanding the Current State of AI

As recently as five years ago, a conversation about AI would have been difficult to have with senior leadership in the life sciences industry. Now that AI has been made broadly available to consumers, adoption has skyrocketed, and the technology is a natural part of strategies into 2030. However, there is significant risk in misunderstanding what this technology can and cannot do, a risk compounded by the legal landscape and the prospect of future technological advancements such as quantum computing. As investors consider investing not only in AI but also in companies adopting AI, it is imperative that they understand the current state of AI in the life sciences industry as well as strategies for the future.

Many of these issues apply across the industry, but the focus here is bioprocessing. In late August, Cambridge Healthtech Institute (CHI) hosted a panel during the Bioprocessing Venture, Innovation & Partnering Conference that spotlighted AI innovation in bioprocessing and drug development. Moderated by Lori Ellis, head of insights at BioSpace, the AI & Technology Panel: Navigating Innovation, Technological Advancements, and Evolving Regulations offered investors and senior leadership guidance on the current state of AI and a glimpse into the future. This article grew out of that panel discussion and covers the current state of AI in bioprocessing.

Understanding AI and POC Purgatory

The current explosion of interest in AI is largely due to generative AI (GenAI). What is often overlooked is that AI technology has been around for decades; GenAI is just one member of the AI family.

[Photo: Mike Walker, executive director, life sciences supply chain, Microsoft. Credit: Cambridge Healthtech Institute]

Because the industry has more data than ever before, along with the storage and computational power to process it, it can do some powerful things, noted Mike Walker, executive director of life sciences supply chain at Microsoft. Yet, he said, companies are still struggling to adopt AI. To guide their portfolio companies, as well as make informed future investment decisions, investors need an understanding of where AI stands in the industry.

“While there is an enormous amount of willingness and even execution within the life sciences world, both med tech and pharma,” Walker explained, “what I find most is a lot of proof of concept (POC) purgatory, meaning someone will have a great idea and the company does a POC. Next thing you know, you’ve got a pile of POCs, and no one knows what to do with them.” Across the life sciences and healthcare industries, not just in bioprocessing, many paper processes still exist. As the industry starts to digitize those processes, a greater understanding of the different forms of AI is needed.

Data Context and Trust

With the increased amount of data come increased questions and concerns. Data questions will always be present, as data quality is an essential part of any conversation about AI.

“Context is everything with data,” Walker explained. “What does this data mean? Should I even use this data? Can I trust this data?” Trust here is a loaded word as it could mean a myriad of things. He further emphasized, “It could mean quality. It could mean I don’t have the proper contextualization around that data and it’s going to give me a false result because it’s out of context.”

While data biases and issues have been discussed often, it is wise for both investors and senior leadership to keep focusing on data: false results can lead to recalls or process failures down the line.
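One way to operationalize Walker's three questions is with automated pre-flight checks on incoming data. The sketch below is purely illustrative; the dataset, column names and thresholds are hypothetical assumptions, not anything presented on the panel.

import pandas as pd

# Hypothetical bioprocessing records; all names and limits are illustrative.
df = pd.DataFrame({
    "batch_id":   ["B1", "B2", "B3", "B4"],
    "titer_g_l":  [2.1, 250.0, 1.8, 2.4],      # grams per liter
    "instrument": ["HPLC-01", "HPLC-01", None, "HPLC-02"],
    "calibrated": [True, True, True, False],
})

issues = []
# "Can I trust this data?": plausibility check (250 g/L is not a credible titer).
issues.append(df[(df["titer_g_l"] <= 0) | (df["titer_g_l"] > 50)])
# "What does this data mean?": provenance check (no recorded instrument, no context).
issues.append(df[df["instrument"].isna()])
# Out-of-context data: valid-looking numbers from an uncalibrated instrument.
issues.append(df[~df["calibrated"]])

flagged = pd.concat(issues).drop_duplicates(subset="batch_id")
print(flagged)  # records to review before feeding them to any AI model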

AI Risk Profiles and Management

Sarah Glaven, principal assistant director, biotechnology and biomanufacturing at the White House Office of Science and Technology Policy, highlighted that the way AI is impacting the bioeconomy portfolio is “as vast as the number of sectors that are part of the bioeconomy.” These include everything from health to agriculture, climate, economic security, national security and industrial chemicals. The Executive Order on Advancing Biotechnology and Biomanufacturing Innovation for a Sustainable, Safe, and Secure American Bioeconomy (EO) touches the entire space. Glaven noted that there are commonalities in both risk and AI adoption across the different industries. “The EO is very much focused on protection, security and making sure that the benefits of AI outweigh the risks to the American people. That is the perspective right now in the White House.”

[Photo: Sarah Glaven, principal assistant director, biotechnology and biomanufacturing, White House Office of Science and Technology Policy. Credit: Cambridge Healthtech Institute]

Walker suggested that risk profiles should be assigned to different types of AI. “You can’t treat machine learning the same as generative AI. Very simply, machine learning is making a conclusion based on existing data. Generative AI is creating synthetic data, data that never existed to begin with, that’s been interpreted by a large language model.” As a result, GenAI is susceptible to hallucinations, which present real challenges. “While there is an enormous amount of opportunity, there is also an enormous amount of risk that we need to proactively and deliberately manage,” Walker said.
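As a rough illustration of why these two risk profiles differ, here is a minimal Python sketch. It is entirely illustrative; the data and model choices are assumptions, not anything shown on the panel. A classic machine learning model only draws conclusions from the data it was trained on, while a generative step fabricates records that never existed and therefore needs its own validation.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Machine learning in Walker's sense: draw a conclusion from existing data.
X_train = rng.normal(size=(200, 2))                  # e.g., two process parameters
y_train = (X_train.sum(axis=1) > 0).astype(int)      # e.g., a pass/fail label
clf = LogisticRegression().fit(X_train, y_train)
print("ML conclusion:", clf.predict([[0.5, -0.1]]))  # bounded by the training data

# Generative modeling in Walker's sense: create synthetic data that never
# existed. This toy generator samples new "batches" from a fitted normal
# distribution; real GenAI (an LLM) can likewise produce plausible but
# unverified output (hallucinations), so it carries a different risk profile.
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
synthetic = rng.normal(mu, sigma, size=(5, 2))
print("Synthetic samples:\n", synthetic)             # must be validated before use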

Risk management is a standard business practice. Colin Zick, a partner at Foley Hoag LLP, explained that managing AI risk is, from a process standpoint, no different from managing any other risk. From Zick’s perspective, countering the idea that AI risk is different from other risk is key. “You need to make sure that people understand what you are doing, the limits, what insurance is needed and define the risk,” he said. Ultimately, it is important to have a clear definition of the risks and the comfort level of management for those risks.

As with anything, AI risk management must be kept in context with AI adoption. Sherrine Eid, global head, real world evidence & epidemiology at SAS Institute, Inc., stressed that companies should keep it simple when it comes to AI adoption. “If you have this elaborate large language model (LLM) that’s so cool and you think you’re just going to get more funding, but it’s harder to interpret, what have you done?” Eid asked the audience. If a company extends the FDA approval cycle as a result of this LLM, what has it truly accomplished?

Nagisa Sakurai, PhD, a senior investment manager at Astellas Pharma - Astellas Venture Management, echoed the other panelists’ sentiments. Internally, the company is evaluating AI technologies. While its leaders recognize that AI technologies offer significant leverage throughout drug and medical device development and commercialization, they are not sure how much to trust them, even when the FDA has approved a technology. “From an investors’ perspective, we do not know how much risk we should take,” Sakurai said. “While we aren’t ignoring AI, we are still assessing how much we should invest in it.”

Regulatory and Legislative Impact on AI

[Photo: Colin Zick, partner, Foley Hoag LLP. Credit: Cambridge Healthtech Institute]

While other countries are now starting to create their own AI regulations, the EU AI Act laid the global foundation for AI regulation, and it is expected to develop as the technology evolves. Zick expects a Brussels effect, with the EU AI Act adopted as a de facto standard: “The regulations are tough, so the idea is that if we can meet that, then we’re probably good everywhere else, which is what we saw with GDPR.” He further noted that because the federal government in the U.S. could not get comparable online privacy legislation passed, states such as California adopted GDPR models for data privacy.

However, because of the United States’ push for innovation and the money potentially involved in AI, he foresees more institutional pushback in this case. “There’s no single industry opponent to privacy, and it’s hard to push back at the state level on legislation,” Zick said, but by comparison, “There’s way more money involved in AI, and the arguments about stifling innovation have started already.” With the federal government already pushing back on state legislators and governors trying to pass laws, he expects a muted Brussels effect around AI.

Two other legal issues currently affecting AI, and the industry as a whole, are the Supreme Court’s 2024 rulings in the Loper Bright and Corner Post cases. Loper Bright removed courts’ deference to regulatory agencies’ interpretations of statutes, while Corner Post changed when the statute of limitations begins for a company or organization to file a case claiming injury from a federal regulation. The consequences of these two cases have already started to arise, as people have figured out ways to go back and challenge FDA approvals and cases that were decided based on deference.

Zick and his firm have been monitoring the situation. “We are already seeing a constant, continual filing of cases challenging federal regulation,” he said. “Within the past week, I know another very long, complicated case was filed regarding the FDA regulation of laboratory tests.” Zick anticipates that there will be more uncertainty about regulatory interpretation and enforcement, which will slow down regulatory guidance drafts.

[Photo: Nagisa Sakurai, senior investment manager, Astellas Pharma - Astellas Venture Management. Credit: Cambridge Healthtech Institute]

Glaven expressed concerns about what losing the ability to interpret nuance will do to the market. “What it’s going to do is put an additional burden on the regulatory agencies, who are already pretty bandwidth strapped in terms of workforce and folks that really can spend the time to accelerate the regulatory process. I get concerned that it’s going to hold back our ability to bring products to market.” The Biden administration is focused on making sure this does not become a barrier for drug development, but it is a huge challenge.

That said, the Biden-Harris administration has emphasized through the Office of Science and Technology Policy that it wants collaboration with industry. Glaven stated, “We want to hear from the stakeholder community to make sure that policy is being developed in a way that’s consistent with the needs of the private sector.” This includes comments on funding, de-risking certain aspects of regulation or setting specific standards for industry to follow.

Understanding and Preparing for Cybersecurity Risks

It is commonly known that AI can be used to amplify cyberattacks, such as denial-of-service and phishing attacks. Beyond that, AI is being used in creative ways to mislead humans.

From a business perspective, there are many opportunities for abuse, such as the filing of patents by AI. Since there are many ways that AI can be used to attack a business, risk mitigation strategies based on “what if?” scenarios become imperative. Based on the capabilities a particular AI brings, it is necessary to look carefully at the different scenarios a business must plan for.

Walker illustrated the point using quantum computing as an example: “The Chinese are using quantum annealing to simulate quantum computing,” he said. “It used to take an hour and a half to compute, but now takes less than a second to compute. When you start to think about this model, you have to think in terms of what it would mean if this was applied to attack my business.”

Regarding healthcare information, HIPAA has language built into it which states that companies must prepare for emergencies. It is a statutory obligation when dealing with any type of protected health information. The complicated part, Zick pointed out, is that “HIPAA security doesn’t explain what exactly should be done to address the issue.”

[Photo: Sherrine Eid, global head, real world evidence & epidemiology, SAS Institute, Inc. Credit: Cambridge Healthtech Institute]

Because companies must have a solution, Zick suggested that they learn basic data hygiene practices and how to respond in emergencies. “Without this understanding, you can’t even start to figure out, what do the bad guys have?” This is the fundamental question that needs to be answered for companies to respond in a timely manner, he said.
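As one hedged illustration of what answering “what do the bad guys have?” requires in practice, the sketch below builds a simple inventory of files and their hashes so that, after an incident, a company can scope what was exposed or altered. The directory path and tagging convention are hypothetical assumptions; real programs would rely on proper data classification and security tooling.

import hashlib
import json
from pathlib import Path

# How sensitive files are labeled in this hypothetical setup.
SENSITIVE_TAGS = {"phi", "patient", "clinical"}

def inventory(root: str) -> list[dict]:
    records = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        records.append({
            "path": str(path),
            "sha256": digest,  # lets you detect tampering after an incident
            "sensitive": any(t in path.name.lower() for t in SENSITIVE_TAGS),
        })
    return records

if __name__ == "__main__":
    # After a breach, diff the current inventory against the last known-good
    # snapshot to determine which sensitive records may be in attackers' hands.
    print(json.dumps(inventory("./data"), indent=2))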

For Eid, these risks lead to liability concerns. She reminded the audience that companies are investing in startups and other entities “because they can’t assume all of the liability. There are times where SAS as a software company and a platform will invest in part of the liability, but will never take on all of the liability.”

This frank and open discussion took place during an exclusive event for investors and senior leadership within the industry who are focused on advancements in bioprocessing. The second part of this series discusses future strategies for AI adoption, communication and evolving with trends.

Disclaimer: The statements made by the panelists during the discussion do not necessarily represent the beliefs of their companies or organizations.

https://www.biospace.com/business/the-white-house-microsoft-sas-foley-hoag-astellas-offer-ai-guidance-to-investors-part-one-the-present
