In regulating AI in medicine, the FDA must exercise caution in implementing a principles-based framework.
Artificial intelligence (AI) has the potential to make healthcare more effective and efficient in the United States and beyond, if developed responsibly and regulated appropriately. The Food and Drug Administration (FDA), which reviews most medical AI, has so far approved AI only in a "locked" or "frozen" form, meaning it can no longer learn and adapt as it interacts with patients and providers. This important guarantee ensures that AI does not become less safe or effective over time, but it also limits some of AI's potential benefits.
To promote the benefits while managing the risks of AI, the FDA has proposed a new program to oversee "unlocked" AI products that can learn and change over time.
The FDA proposal calls for a complex and collaborative approach to regulation that looks quite similar to the way financial regulators in the United Kingdom operated before the 2007-2008 global financial crisis. UK financial regulators used a principles-based framework, which establishes broad principles instead of detailed rules.
This style of regulation failed during the financial crisis, but that doesn't mean the FDA's AI proposal is doomed. It does mean, however, that regulators and Congress should learn a few lessons from where things went wrong during the financial crash, including ensuring that regulators remain independent of the companies they oversee and putting fairness at the very heart of the regulatory equation.
Taking a step back, AI promises breakthroughs in several areas of medicine. For example, providers can use AI to diagnose patients by analyzing their medical images or even photos of their skin. Recent evidence suggests AI can make diagnoses as accurately as, or even more accurately than, trained radiologists or dermatologists, so using software could reduce the cost and time of getting a diagnosis in a routine case.
Despite its potential, medical AI carries risks that have yet to be managed. Unfortunately, it is not always easy to determine how an AI system arrives at a diagnosis or medical recommendation, because AI often cannot explain its decision-making.
AI software doesn't "know" what a human body or a disease is; instead, it recognizes patterns in the pictures, words, or numbers it "sees". This limitation raises real questions about whether an AI system is making the correct diagnosis or recommendation and how it came to that conclusion.
Big problems can result from AI systems that analyze biased data. The American medical system has a long history of racism and other forms of marginalization, which can be reflected in medical data. When AI systems learn from biased data, they can contribute to worse health outcomes for marginalized patient groups. Regulation to prevent these unfair outcomes is essential.
Many applications of medical AI would fall under the FDA's authority to regulate medical devices. The 21st Century Cures Act excludes certain types of "low-risk" AI from FDA review, such as algorithms designed to help physicians. But many AI products still must pass FDA review, so the agency must take their risks seriously.
In April 2019, the FDA released a white paper describing an innovative regulatory plan for unlocked AI software. The FDA's AI proposal builds on the agency's earlier idea of "pre-certification," under which it would regulate developers as a whole, rather than their individual software products, using general principles such as "clinical responsibility" and "proactive culture."
The 2019 AI proposal adds to this framework by asking developers to describe how they expect their software to change over time and how they will manage the risks associated with those changes. The FDA, with the developers' help, would then monitor real-world clinical performance and could require additional regulatory review if the software changes too much.
The net effect is a regulatory system in which the FDA approves a software developer's plans for self-regulating its AI's development, then uses the pre-certification principles to assess regulatory outcomes and determine whether they are consistent with public policy objectives. This type of system can be called principles-based regulation, the approach UK financial regulators used before the global financial crisis and that others continue to use today.
Earlier in 2021, the FDA announced plans to move the proposal forward and respond to stakeholder comments. If the FDA continues to pursue its proposed principles-based plan, there are important lessons to be learned from the global financial crisis.
First, although regulators can learn and adapt to new and complicated situations or technologies by working with the companies they oversee, regulators must always maintain their independence from these companies.
An essential part of this lesson is that regulators must have a large enough budget to oversee the industry effectively. Regulators need sufficient resources to supervise companies and to develop their own internal expertise in the technologies and markets they regulate, so that they do not depend too heavily on companies for expertise or expose themselves to the risk of regulatory capture.
Second, regulatory failures often hurt marginalized groups the most, as the financial crash showed once again. Already, peer-reviewed studies have shown instances of algorithmic bias leading to worse medical care for Black patients, meaning AI could be more or less safe and effective for different groups of patients. Failure to regulate this aspect of AI could result in unacceptable and unfair harm to health.
To address these issues, policymakers should consider at least two actions. The FDA should ask Congress for a long-term budget increase when the agency requests new legislation to implement the AI plan, and Congress should be prepared to grant that request. In addition, "health equity" should be adopted as a stand-alone principle or outcome by which the agency measures companies' real-world performance and results. These and other changes to the FDA's AI proposal could allow regulators to better protect all patients.
Congress and civil society groups should also monitor this complex area of policy and regulation to ensure that AI in medicine makes society healthier, safer, and more equitable.