
COVER STORY

Generative AI Scrambles the MPL Puzzle

Offering Both Promise and Peril, the Rise of Generative AI Creates Challenges and Opportunities for Providers, Testing MPL Models

By Amy Buttell


Amid the generative AI boom, the number of potential healthcare applications is staggering. Generative AI has the potential to improve medical diagnoses, increase healthcare quality, reduce repetitive tasks, optimize resources, and more. What is far from settled, however, is how it will affect medical professional liability.

Although relatively rare compared to other types of property and casualty events, medical professional liability (MPL) claims still affect nearly one-third of physicians at some point in their careers. Overall, MPL costs are estimated at $60 billion a year, or approximately 2% to 3% of annual healthcare spending. Only a small percentage of all malpractice claims are litigated in court; of those, 80% to 85% are decided in favor of the defendant.

MPL stakeholders, including healthcare organizations, providers, and MPL insurers, wonder how generative AI will change today’s medical liability landscape. Just as there are potential benefits, there are many potential risks, some of which we covered in the first part of this series, What MPL Stakeholders Need to Know About Generative AI.

“There are risks of using generative AI and not using generative AI,” said Catherine Gaulton, CEO of HIROC, an MPL insurer in Canada. “We know there are definite benefits in terms of enhancing patient safety. However, used incorrectly, or used without appropriate considerations, it could be dangerous to patient safety. It’s all about the how, not about the if.”

For David Eaton, an attorney specializing in medical professional liability with Hagwood and Tipton, the potential to take the human approach out of medicine and over-rely on technology is a concern. “Anytime we get into algorithms, data, and decision making, there’s the potential for problems in how that artificial intelligence is applied,” he said. “In this case, the possibility for errors is magnified, especially if there is a lack of the human element; training and the users’ understanding of the software will be very important.”

The bottom-line questions we’ll explore in this article are whether healthcare organizations, providers, and patients can benefit from generative AI without increasing MPL risk, and what the potential risks of generative AI are today. We consulted a number of MPL stakeholders for this series on the implications of generative AI for the MPL industry, including Gaulton of HIROC; Emma Parfitt, director of professional services and general counsel at MDDUS in the UK; Rodger Hagen, partner at MeagherGeer in Minneapolis, MN; Chris Burkle, counsel at MeagherGeer in Minneapolis, MN; Larry Van Horn, founder and CEO of Preverity in Nashville, TN; and David Eaton, shareholder at Hagwood and Tipton Law Firm in Jackson, MS.

Standard of Care Issues

A major emerging MPL issue in the debate over generative AI’s place in healthcare concerns the standard of care, because “standard of care” is a legal term, not a medical one. “Basically, it refers to the degree of care a prudent and reasonable person would exercise under the circumstances,” according to an article published in Innovations in Clinical Neuroscience. “State legislatures, administrative agencies, and courts define the legal degree of care required, so the exact legal standard varies by state. The vast majority of states follow the national standard, such as this from the Connecticut Code: ‘that level of care, skill, and treatment, which, in light of all relevant surrounding circumstances, is recognized as acceptable and appropriate by reasonably prudent similar healthcare providers.’”

Essentially, for an MPL defendant to prevail in court, they must show that they met the standard of care. The open question is if, when, and how generative AI will figure into that standard. “I think this will be used in whatever way the plaintiff’s bar deems the most expedient way to use it,” Hagen said. “When this question is asked in a deposition, whatever answer is given will be argued as having been the wrong thing to do.” Hagen’s point is that plaintiff’s lawyers will likely play both sides of the issue: providers who use generative AI may be targeted for using it, while providers who don’t may be targeted for not using it.

Parfitt noted, “The issue around standard of care is going to be important—what would a ‘reasonable’ doctor have done in the same position? Another question will be whether the generative AI in a particular hospital was of an appropriate standard—was it tested appropriately and so on, will it provide the same standard of care, and was it reasonable to rely on it because others would have done so in the same situation?”

Another risk that occurs as techniques and technologies advance is the potential for providers to get left behind, Gaulton said, noting, “There could be a situation where someone who is tech averse is not using it. But frankly that risk is no different than clinicians now who refuse to stay on top of or don’t have mechanisms to stay on top of best practices.”

Potential MPL Generative AI Risks

Burkle classified the risks of generative AI as falling into four categories: cybersecurity, privacy, incorrect or potentially harmful patient care and administrative output, and informed consent.

Cybersecurity risk: Hackers can potentially “poison” the information in a large language model used in healthcare organizations, creating misinformation that healthcare professionals may act on, leading to negative outcomes and potential claims. “We know that cybersecurity breaches keep increasing every year,” Burkle said. “In this case, nefarious actors could poison that data and hold that over healthcare providers, seeking large ransoms.”

Privacy risk: Healthcare data is protected by a number of regulations, including HIPAA. Training a generative AI large language model requires a vast amount of underlying data, and that data can expose private patient information during training and deployment. Research studies show that generative AI tools can re-identify individuals and associate them with their healthcare data even when that information has been anonymized and scrubbed.
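To make the re-identification risk concrete, the sketch below shows a classic linkage attack: joining “anonymized” clinical records to a public roster on quasi-identifiers such as ZIP code, birth year, and sex. All records and names here are invented for illustration; this is not drawn from any real dataset.

```python
# Hypothetical linkage (re-identification) attack. The "anonymized"
# records have names removed but keep quasi-identifiers intact.
anonymized_records = [
    # (zip_code, birth_year, sex, diagnosis)
    ("02139", 1968, "F", "type 2 diabetes"),
    ("02139", 1990, "M", "asthma"),
]

# A public dataset (e.g., a voter roll) sharing the same quasi-identifiers.
public_roster = [
    # (name, zip_code, birth_year, sex)
    ("Jane Doe", "02139", 1968, "F"),
    ("John Roe", "02139", 1990, "M"),
]

# Joining the two datasets on the quasi-identifiers links each
# "anonymized" diagnosis back to a named individual.
for zip_code, birth_year, sex, diagnosis in anonymized_records:
    for name, r_zip, r_year, r_sex in public_roster:
        if (zip_code, birth_year, sex) == (r_zip, r_year, r_sex):
            print(f"{name} -> {diagnosis}")
```

Scrubbing direct identifiers alone does not defeat this kind of attack, and a generative model trained on such records can memorize and emit the same quasi-identifier combinations.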

Incorrect or potentially harmful patient care or administrative output: Biased outputs can create risk by mistakenly extrapolating symptoms and treatments from one group of patients to another, Burkle said. For example, he noted, data on diabetic retinopathy diagnosis and treatment obtained from US patients was used to diagnose and treat patients in India, where it wasn’t effective. Healthcare biases that may occur in generative AI include historical, representative, measurement, aggregation, evaluation, and deployment bias.
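One common safeguard against the evaluation and deployment biases Burkle describes is to report a model’s performance separately for each patient population before reusing it on a new one. Here is a minimal sketch with invented predictions and labels; the cohort names are hypothetical:

```python
from collections import defaultdict

# Hypothetical (prediction, true_label, patient_group) triples.
results = [
    (1, 1, "US cohort"), (0, 0, "US cohort"),
    (1, 1, "US cohort"), (1, 0, "US cohort"),
    (1, 0, "India cohort"), (0, 1, "India cohort"),
    (0, 0, "India cohort"), (1, 0, "India cohort"),
]

# Tally accuracy per group rather than pooling all patients together.
correct, total = defaultdict(int), defaultdict(int)
for prediction, label, group in results:
    total[group] += 1
    correct[group] += prediction == label

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
```

A pooled accuracy figure (50% here) could look unremarkable even while one cohort fails badly, which is exactly the failure mode in the diabetic retinopathy example.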

Informed consent risk: Patients need to be informed when generative AI is used in some way to make decisions about their care. If that doesn’t happen, and something goes wrong and harm results, the provider and healthcare organization are exposed to claims that the patient never received full and complete disclosure as part of informed consent.

Other potential risks include:

Group litigation risk: Generative AI may make it far easier for plaintiff’s attorneys to locate records for patients who may go on to have MPL claims, and then to generate those claims, potentially flooding insurers. “Plaintiff’s attorneys may not need the same number of patient testimonies to produce claims; instead, they could potentially use generative AI,” said Parfitt. “This is something to seriously consider in terms of an increase in claims notifications and how to tackle the potential flood of increased claims and allegations.”

Utilization of free or unauthorized apps: Within healthcare organizations, some providers are already using free versions of ChatGPT to help with decision making. That is a risk because the free version of ChatGPT is trained only on information through 2021, and its output is not guaranteed to be accurate or free from bias. “The world of leadership and governance is thinking that none of this has happened yet, while physicians and nurses are pulling up free versions on their phones for help in decision making,” said Gaulton.

These risks, and others that we aren’t yet aware of, may lead to liability for hospitals and providers. “The hospital, being the purchaser of a generative AI large language model, may run into vicarious liability issues,” said Burkle. “Certainly, the clinicians themselves who used the generative AI in the process of providing healthcare will run into that issue.”

Establishing Policies

One of the most effective methods to counter generative AI risk in MPL is for healthcare organizations to establish policies around the use of generative AI in clinical care. “We’ll need to establish regulations around using it, not just in healthcare facilities but also in the claims arena,” said Parfitt.

“Clearly, there will be a need for providers, healthcare organizations, and insurers to work together to form teams to oversee generative AI and make sure that it is being applied correctly and catching any flaws or potential problems on the front end,” said Eaton.

Gaulton agreed, saying, “You need an efficient decision-making process around how a generative AI tool is first implemented and how it is evaluated over time in a healthcare setting, which is a lot of work.” HIROC leaders highlighted many of these issues in an article for the Canadian College of Health Leaders’ journal, Healthcare Management Forum: “Preparing for the Future: How Organizations Can Prepare Boards, Leaders, and Risk Managers for Artificial Intelligence.” The article lays out overall principles and key questions for healthcare organization boards of directors and risk managers to consider in five areas:

  • Ethical risks: Clearly define the value proposition of AI systems
  • Governance risks: Establish comprehensive governance and oversight
  • Performance risks: Apply rigorous methods in building AI systems
  • Implementation risks: Apply change management processes
  • Security risks: Create strict privacy and security protocols

You can create a similar set of guidelines for your members or insured healthcare organizations and providers to use as they seek to adopt generative artificial intelligence to improve organizational efficiencies, patient outcomes, and patient safety.
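As a starting point, such guidelines can be as simple as a structured checklist that a review board signs off on before a tool goes live. Below is a minimal sketch built on the five HIROC risk areas above; the specific questions and the sign-off logic are illustrative assumptions, not taken from the article:

```python
# Illustrative pre-deployment review checklist based on the five HIROC
# risk areas. The questions themselves are hypothetical examples.
CHECKLIST = {
    "Ethical": "Is the value proposition of the AI system clearly defined?",
    "Governance": "Is comprehensive governance and oversight in place?",
    "Performance": "Was the system built and validated with rigorous methods?",
    "Implementation": "Is a change-management process being followed?",
    "Security": "Are strict privacy and security protocols in place?",
}

def review(signoffs: dict[str, bool]) -> bool:
    """Print the status of each risk area; approve only if all pass."""
    for area, question in CHECKLIST.items():
        status = "PASS" if signoffs.get(area, False) else "OPEN"
        print(f"[{status}] {area} risk: {question}")
    return all(signoffs.get(area, False) for area in CHECKLIST)

# Example: the security review is still outstanding, so deployment is blocked.
approved = review({"Ethical": True, "Governance": True,
                   "Performance": True, "Implementation": True,
                   "Security": False})
print("Cleared for deployment:", approved)
```

The point is less the code than the discipline: every risk area gets an explicit owner and an explicit answer before a generative AI tool touches patient care.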

Potential Regulation

Because generative AI is so new in healthcare, regulatory approaches are in flux. In the US, a proposed bill, the Algorithmic Accountability Act of 2023, would require companies to assess the impacts of the AI systems they use or sell and would create transparency around how and when such systems are used.

The US National Institute of Standards and Technology is developing a framework for generative AI risk based on the success of the NIST AI Risk Management Framework. An article in the New England Journal of Medicine urged the Biden Administration to develop a healthcare-specific framework “that includes a list of hazards that should be assessed before each piece of healthcare GAI functionality is deployed.”

The American Medical Association reported working closely with the US Food and Drug Administration to support a potential regulatory framework for the use of generative AI in healthcare and medical devices. The FDA’s Center for Devices and Radiological Health “is considering a total product lifecycle-based regulatory framework for these technologies that would allow for modifications to be made from real-world learnings and adaptation, while ensuring that the safety and effectiveness of the software as a medical device are maintained.”

Van Horn said that FDA regulation will provide an important safeguard for these technologies. “There are areas where generative AI is going to support healthcare delivery, but it is not going to ever be by itself,” he added. “The provider will always be the responsible party. If you go down a full generative AI route, that technology will have to be an FDA-approved product. If it is approved, it will be for a narrow use case around a specific disease condition.

“I think it’s important to put the issue in context and be measured about how it is likely to play out,” he continued. “The big issues will be first, how is it being trained and what is the foundation of the information, and second, how are we going to evaluate and FDA approve this generative AI technology in the context of healthcare.”

In the UK, the Medicines and Healthcare products Regulatory Agency is the dedicated authority for setting guidelines and principles; it has already presented a roadmap, the Software and AI as a Medical Device Change Programme Roadmap. In the EU, the AI Act will apply alongside, and overlap with, sector regulations such as the Medical Devices Regulation and the In Vitro Diagnostic Medical Devices Regulation. In Australia, the Australian Medical Association has called for national regulations around the use of generative AI in medicine and health.

Impacts from Litigation and Jury-related Issues

The use of generative AI by providers and healthcare organizations will, at some point, affect MPL litigation and juries. As Hagen pointed out, providers and healthcare organizations risk being caught in the middle: plaintiffs’ attorneys may argue, on one hand, that a provider consulted generative AI in the course of care and should not have; or, on the other, that a provider did not consult it and should have.

In terms of the jury pool, the majority of Americans are uncomfortable with the concept of generative AI and AI in general being used by their own healthcare provider for diagnosis and treatment, according to the Pew Research Center. Specifically, “a majority of the public is unconvinced that the use of AI in health and medicine would improve health outcomes.”

Eaton is concerned that any perceived disassociation in the patient-provider relationship, stemming from over-reliance on technology, would be poorly received by a jury. “There are already too many nuclear verdicts that are not based upon evidence, then add in the potential for errors related to generative AI, the potential exposure and amount of damages that might result could make things worse and nuclear verdicts more prevalent,” he said.

The evolving nature of generative AI and its risks within healthcare create the potential for a major reassessment of risk by MPL insurers and reinsurers, one that could increase rates and affect policy terms, especially in the absence of regulation. As adoption becomes more widespread (at present the technology is mainly employed in narrow, experimental use cases), it could bring about a larger reckoning with the risk by a wide variety of healthcare stakeholders.

Up Next

Our third and final article in this series, which will be released on Wed., Oct. 25, will focus on the ways that healthcare insurers can use generative AI within their operations.

Amy Buttell is Editor of Inside Medical Liability Online.