Featured topic and speakers
Join us for a discussion about the ongoing development of augmented intelligence (AI) policy in health care. AI continues to be top of mind for policymakers interested in supporting innovation and the promised benefits of AI while also addressing ongoing concerns around patient safety, the need to ensure accuracy of AI systems, and the importance of keeping a physician in the loop in the delivery of health care. During the webinar, you will hear about the ongoing development of AI policy from a national perspective, learn about the evolving federal landscape, and hear how state lawmakers are addressing AI in health care, particularly related to transparency and payer use of AI.
Host
- Bruce A. Scott, MD, president, AMA
Speakers
- Jared Augenstein, senior managing director, Manatt Health
- Emily Carroll, JD, senior attorney, Advocacy Resource Center, AMA
- Shannon Curtis, JD, assistant director, Federal Affairs, AMA
- Kim Horvath, JD, senior attorney, Advocacy Resource Center, AMA
Transcript
Dr. Scott: Hello and thank you for joining us this afternoon for our latest in the AMA Advocacy Insights webinar series. I'm Dr. Bruce Scott, the president of the American Medical Association and an otolaryngologist in private practice here in Louisville, Kentucky. It's great to be with all of you today as we discuss this important issue of the ongoing developments and challenges of AI policy in health care.
As I've talked across the country to physicians, and as we've seen reflected in the studies that the AMA has done, physicians are increasingly excited about the potential for AI in health care. But at the same time, there are concerns—concerns about liability, regulation, transparency and patient privacy.
Now, it's important from the very beginning to recognize that at the AMA, we like to refer to AI not as artificial intelligence, but rather augmented intelligence, in order to emphasize the human component of this new resource and technology, so that patients know that, whatever the future holds, there will be a physician at the heart of their health care and the decisions.
The American Medical Association House of Delegates, back in 2018, developed a number of principles on AI in order to guide the technology's future development. We tweaked those in 2023 and made major revisions in 2024, emphasizing that physicians should be involved in the development and the implementation of AI technology, so that, in effect, we will know that it will work at the bedside, in our clinics, in our offices, in the hospitals or the emergency departments, wherever it is that we practice.
This new policy also emphasizes the need for government regulation. Voluntary standards are not going to be enough. We need to make sure that the principles of AI implementation are regulated. And as we move forward, we have engaged with the administration and with legislators in this regard. One example of this is transparency, so that physicians know exactly what they're getting into when they agree to utilize a particular new technology in AI.
As you can imagine, AI has been top of mind for policymakers in Washington, DC, and for administrators within health care as well. So for the next hour, we're going to be discussing the ongoing challenges in the development of AI policy from a national perspective, and we're going to check in on what's happening in the state landscape as well.
Today, we're joined by four experts in this critical subject—Jared Augenstein, who is the senior managing director at Manatt Health; Emily Carroll, JD, who's a senior attorney at the AMA Advocacy Resource Center; Shannon Curtis, JD, who is the assistant director of federal affairs at the AMA; and finally, Kim Horvath, JD, senior attorney at the AMA Advocacy Resource Center.
I want to make sure that we have enough time at the end for your questions. And you will have an opportunity to submit your questions later via the chat. But let's get started. Let's kick it off with Jared. Jared, can you give us kind of a state of the art, if you will, the state of play on this issue? What are the various players right now in this space thinking about AI policy?
Augenstein: Sure. Thanks, Dr. Scott. And thanks to the AMA for organizing this webinar. So as you mentioned, AI is quickly getting integrated into many aspects of health care. And state policy makers are grappling with—state and federal policy makers, I should say, are grappling with how to balance the innovation, and the rapidly advancing technology and the promised benefits of AI, with concerns about accuracy, about bias, privacy and other factors.
And we're really at an incredible time in terms of the technology advancement. About half of physicians are already using AI in some way in their practices. More than a thousand AI-enabled medical devices have been cleared by the FDA to date. And this administration, at the federal level in particular, is very positive on the potential for AI.
And just today, there's an RFI released related to the use of health technology. And we're starting to see more supportive statements about the use of AI in federal regulation and other sort of subregulatory guidance. With respect to states, because there really hasn't been that much activity, especially from a legislative perspective at the federal level, the states have really started to step in.
And so there are more than 250 health-AI-related bills that have been introduced across 34 states this year so far, in 2025. Those bills track broadly across four categories. The first is transparency. So these are laws which are typically outlining disclosure or information requirements between those who develop AI systems, those who deploy them and the end user.
The second category is consumer protection laws, focused on ensuring that AI systems don't unfairly discriminate, that they abide by disclosure requirements that are put in place, and that there's a way for end users of AI systems to contest AI decisions. The third is around payer use of AI. And we've seen a lot of activity in that space. And I know that Kim and Emily are going to speak a little bit about that later. But these laws generally establish when payers may use AI tools to support clinical decision making, such as utilization management, and what oversight measures are necessary.
And then finally, we've seen some states introduce laws related to AI in clinical contexts. And these laws address clinicians' use of AI tools—physicians and clinicians more broadly, in some cases. And we've seen a lot, just in the past few weeks, for instance, on the use of mental health chatbots, which have been sort of in the popular press, have become very widely used. And we're starting to see states begin to regulate the use of those tools as well.
There are really three states that have been in the lead and have passed meaningful legislation. I'll just say Colorado, Utah and California. I think we're going to spend some time digging into each of those. So I won't do that now. But really, it's been, from a stakeholder perspective, fascinating, because a lot of the physician community understandably has concerns about the use of AI in clinical contexts.
Dr. Scott, as you outlined in your opening comments, you have an empowered, I would say, technology community, especially the big tech community, but even small tech, as it were, that is concerned about overregulation and the potential for stifling innovation. And you have policymakers who are generally not that well educated on these issues.
These are really technical issues. These are really complicated issues. And I think it's hard to stay on top of just how fast the technology is evolving. It's hard for policymakers to stay on top of that. They're trying. But it's just ... the rate at which the technology is changing is sort of unprecedented. And so that's leading, in my view, at least, to a tremendous amount of confusion and uncertainty from a policy landscape.
Dr. Scott: So there is a lot going on. That's very obvious. Over these last 18 months or so—let's turn to Shannon. Tell us about, on a federal level, activities last year, way back in 2024. What were some of the things that you were following last year?
Curtis: 2024 seems like a lifetime ago, at this point in time, with all the changes we're seeing. But I think, as Jared mentioned, states have really started picking up the slack here in the absence of a lot of federal action. So we haven't seen anything huge, groundbreaking, that would really shift the paradigm on AI regulation. But we did have a few important movements, mostly by federal agencies, that we were watching last year.
Way back in early 2024, the Office of the National Coordinator for Health IT, which has gone through several iterations and name changes, and may change again going forward—but the staff in that office did finalize their HTI-1 rule that included some very important new transparency requirements for certain algorithms within EHR systems or otherwise certified HIT.
So we were very supportive of those efforts—again, really the first federal effort at mandating any kind of transparency from an EHR vendor, which we thought was a really critical step towards what has been a high priority for us: transparency requirements through federal regulation, whether it's at ONC, FDA or for other types of AI.
So we did also see the Centers for Medicare and Medicaid Services take some important steps and be a little bit more vocal about payer use of AI. Early in the year, they did put out a memo clarifying that payers that were using algorithms in things like pre-authorization, claims review, and determination couldn't use those algorithms to deviate from coverage standards within the program and that they couldn't make decisions based just on large data sets, that they had to consider individual circumstances of the patient going forward.
However, that was just in a memo. It was guidance. There's no formal law specific to AI requiring that of MA plans right now. And we did see some proposals later in the year from CMS that really would have touched on MA plans' use of algorithms in ways that might have perpetuated bias or discrimination, and that proposed some new transparency and disclosure requirements too. That obviously was in the Biden administration. And the future of those types of proposals, I think, is a little bit uncertain with the new administration.
And really importantly for physicians directly, we saw the Office for Civil Rights within HHS finalize and provide some more guidance around the Section 1557 nondiscrimination rule and a provision within that rule that would essentially create new liabilities for physicians regarding use of clinical algorithms that may ultimately result in discriminatory harms towards patients.
So it was something that we were very concerned about in creating new liabilities for physicians. What was finalized was ultimately a little bit lighter touch. But it does require physicians to use reasonable efforts to identify potential discrimination within an algorithm, or discriminatory attributes or inputs into an algorithm, and then mitigate harm from those algorithms as best they can.
And then, most importantly, it wasn't really, technically, last year. It was at the very beginning of 2025. But since it was the Biden administration, I'll lump it in with the last year efforts. FDA did release a very long-awaited, very comprehensive guidance for device manufacturers that are manufacturing AI when they're going through the premarket submission review process.
It was the first time we'd seen any guidance from FDA that was recommending how companies should approach describing intended use of a product, what their performance validation should look like, and how that should be communicated—things like user interface design, cybersecurity, and really important for us, new recommendations for how these products should be labeled.
That will directly impact physicians going forward. That is the step from device manufacturers that's going to help provide physicians much-needed transparency and communicate information that they really need to know to understand these products and to make good choices about using them in their practice. So we were strongly encouraging of that draft guidance and hope they do move towards finalization with this new administration. I imagine that they probably will, but please don't hold me to that, because these days, you never quite know what's going to happen. So—
Dr. Scott: The one thing we can count on in Washington, DC, is that there's nothing we can count on being consistent. So who knows? It may change tomorrow. Kim, let's switch over to the state side. And review for us the activities from 2024, as Shannon would say, way back last year, from a state perspective.
Horvath: Yeah. Thanks so much, Dr. Scott. It does seem like a long time ago. Last year really was the first year that we saw a lot of state-level activity on AI, particularly related to the health care space, and on AI in general. Let me back up a little bit. Most of the bills that we saw last year were really focused on states creating task forces or work groups to study the use of AI, to look at whether they had existing laws in place that already apply to AI, and to determine whether there is a need for more legislation in the state in this space.
The states were also really looking at what parameters need to be put in place for state agencies that might be using AI. So that was the bulk of the bills that we saw introduced last year and passed last year. However, we also saw a number of bills on transparency, including bills that establish transparency requirements between those who develop AI and those who deploy AI, as well as transparency between those using AI and patients.
Most of these bills focused on prohibiting algorithmic discrimination. And a couple of these bills did pass. Notably, Colorado passed legislation last year that continues to be a focus for lots of other states and policymakers, who are watching the evolution of this bill very closely. Governor Polis did sign the bill into law last year, but did so with reservations, and wanted lawmakers to come back and amend portions of the bill, as he thought that there needed to be some changes, and he provided some direction.
There was a group that got together of stakeholders. They did convene over the last several months to work through some of those issues. A bill was introduced to provide some of those fixes. But that did not go as planned. And the law is now set to go into effect in February. There is a continued push to have that implementation date extended so that there is an opportunity for potential legislation next year to address some of the concerns that were raised by industry and some business groups, and others, even some consumers.
California also passed a law related to health plan use of AI in making medical necessity determinations. That is something Emily will talk about, I'm sure, a little bit more, but it was a first-in-kind bill at the state level. And finally, we saw a number of bills focused on disclosure to patients when generative AI is used. California and Utah passed bills in this space that would require disclosure to patients if generative AI is used and a qualified physician or another health care professional didn't review the information first, so there are a few caveats there. But those were also some bills that we saw move last year and actually get enacted last year in this space.
Dr. Scott: Well, let's move all the way forward to 2025. And Emily, let's let you chime in, and then Kim, if you have something to add as well, about what's happening. This time, let's talk about the state level first. And then we'll hear about the federal level in just a moment, because I'm interested in both. But let's stay with the states. And let's move to 2025.
Carroll: Great. Thanks, Dr. Scott. I can start off, and then I'll let Kim finish up. But one issue that I think has gained a lot of traction, and we're seeing a lot of bills this year, and Kim just mentioned this, is this idea of health plan use of AI, and particularly in terms of clinical decision making. Our survey data broadly shows that physicians are excited and positive about efficiencies and benefits associated with AI. But one area where we're seeing a lot of concern is AI use by health plans to deny care.
For example, in our recent prior authorization survey, 61% of the responding physicians said they are concerned that AI is already increasing or will increase prior authorization denial rates. So there's been a lot of response to that, I think, in the states and among advocates.
I'm guessing most folks have seen all the investigative reporting on the automated clinical decision making tools used by some of the big payers to systematically deny claims. And I think that initiated a lot of the legislative and regulatory action we've been seeing.
It's not a new concept for medical societies and the AMA to want to have a qualified physician making medical necessity determinations, especially when they're leading to a denial. You know this well, Dr. Scott. And our policy and our model legislation have always required a physician of the same specialty, and licensed in the same state, to be the one making those denials on the plan side. But we haven't always necessarily applied that to the AI space.
Last year, as Kim mentioned, California passed legislation to specifically apply those kind of qualified reviewer standards to the use of AI by health plans to make medical necessity determinations. And we've seen a ton of states attempt to go down that road this year. Not a lot has passed. And I can talk about that more later. But it's certainly an area where I think some legislators are seeing as low-hanging fruit for legislation this year. And Kim, I know you've been tracking other bills as well.
Horvath: Yeah. Thanks, Emily. Like, there are just a couple of other things I would mention, tracking what we saw last year as well. A lot more activity, again, on task forces. And I think, here, it just really speaks to state legislators. They're interested in this issue. They want to work on AI. They see a need for some state parameters to be put in place. But there is a continued push and pull between the fear of overregulating and potentially hindering innovation.
Like, I think everybody has said this. There's a lot of promise here. There's a lot of excitement about what AI can do, and how it can benefit patients, and help ease the burden on physicians in a lot of respects as well. But there is still this fear of overregulating. And then there's also, I think, a recognition that there is the need for some guardrails in place, some protections to put in place, and particularly when we're talking about AI in health care, right?
There's a reason why health care is regulated, right? We're talking about the safety of patients. And I think that's one of the reasons why we're seeing this continued concern or continued interest in having task forces to really dive into and study the issues. And one of the other reasons I think we continue to see interest in these task forces is there is an understanding that there are a lot of existing laws that are already in place that might not say the words, "artificial intelligence" or "augmented intelligence," but they do apply—consumer protection laws, privacy laws, professional licensing laws.
So there is kind of a, let's look at what we might already have in place and where we need to bolster state efforts in one area or another. Or do we need something specific to address AI in this space? I'll just note that the California Attorney General issued a policy bulletin on—or an opinion, I should say, on this very issue, kind of pointing out the laws that California has in place that AI is regulated by.
Other top issues that we're seeing at the state level this year—we continue to see a ton of bills on transparency. A lot of these are modeled off the Colorado legislation that passed last year. We're also seeing a ton of anti-discrimination bills. Some of these are tied to the transparency bills, bills that prohibit algorithmic discrimination, and may specifically require governance policies, or validation, or ongoing testing to make sure that the AI tool in itself is not producing discriminatory results, that any application of the tool is not discriminatory, and that over time—we know that AI tools can drift—there is not an unintentional kind of discriminatory application of these tools.
Jared mentioned a number of bills that we're seeing on the clinical use of AI, including around mental health chatbots. And those are something that we're keeping a very close eye on as well. And I think I will stop there.
Dr. Scott: Well, Shannon, get ready, because I'm already seeing questions about prior authorizations, since you mentioned it. And one person, I think, jokingly wanted to know how we make sure the chatbot is of the same specialty as the doctor who's asking for the prior authorization. But we'll get into that in some of the questions. Shannon, do you want to add anything from a federal level in terms of 2025?
Curtis: Sure, to the extent that any of us can right now. But I'm sure, as anybody can guess, on the federal level, we're operating in just an incredibly unsettled environment right now that is really lacking any clear direction that at least has been made public and lacking any kind of consistency or clarity going forward. We don't really know exactly what we're going to get. But I think we are starting to get some hints about where we may be headed with the new administration.
To start off, the new Trump administration, one of the very first actions on the first day was to revoke the very sweeping Biden executive order on artificial intelligence that really spoke to some goals and actions for almost every department and agency across the federal government, including health care. So that was wiped off the books and replaced with an executive order that a new AI task force needed to study current policies and regulations and recommend changes somewhat quickly into this new administration.
So we've seen several RFIs being issued that, from what I understand, have gotten into the thousands or tens of thousands of responses to how the government should be looking to regulate or deregulate AI going forward. So I think what we're starting to pick up on is that we're clearly moving towards an administration, and potentially a Congress, that is much more interested in deregulation than looking at higher levels of regulation or more appropriate regulation in the AI space.
We've got a lot of interest in using AI among the agencies to make tasks more efficient or replace certain functions with algorithms that may be able to do it better, faster, stronger—remains to be seen. But I think some of the most concerning developments that we've seen have just happened, actually, over the weekend—concerning in the way that we might not look at a lot of regulation, but might relieve some of our state staff of a lot of their work on AI for the next 10 years.
But in the recent House budget bill that was released overnight Sunday and into Monday, there was a really alarming proposal within a portion of that bill that would, among other things, put a moratorium on any state regulation of AI for the next 10 years. So if that were to pass, no states would be able to pass new AI laws or new AI regs for 10 years from enactment of that act.
So we were very, very concerned to see that, given the fact that the federal government has been so slow in moving forward on any more appropriate regulatory schemes for AI, and particularly for health care AI. It's probably the most concerning when we're talking about AI tools that can impact the ultimate health and care of our patients at the end of the day.
So no further guardrails. We're seeing an effort to completely deregulate anything at the state level, and it's really written in a way that makes it very clear the goal at the federal level is going to be to remove a lot of legal protections, consumer protections, around AI. And that was deeply concerning for us.
I think there are going to be some challenges to that language surviving the budget reconciliation process in the Senate. So I'm not sure that we'll see it finalized. I think there's likely some constitutional issues that it raises as well. But it should clearly tell us where the priorities are of the Trump administration. I think it's pretty clear that this is being driven by the administration. If it doesn't make it through the budget process, I have no doubt that we will see this come up again somewhere else.
So we're very concerned about watching that play out. But we've got a big tech community that is absolutely cheering this from the sidelines. And another concerning tidbit that we saw over the weekend—the Trump administration summarily fired the head of the U.S. Copyright Office. And you might ask me why I'm telling you about the Copyright Office or why that matters.
However, we were just talking about this a little bit earlier as well. The Copyright Office just released a long-awaited report on copyright in AI that did raise a number of legal issues and challenges, and stated that that area of law is quite unsettled. Our big tech companies do not want to see training data potentially run into copyright issues. They want to be able to have free, unfettered access to train their algorithms on even copyrighted information.
Some nuance, a little bit of a niche issue there. But it should give you a hint into where this administration is going in thinking they want to allow the big tech companies free, unfettered access to everything that they want and with minimal regulatory protections, so a little bit of a concerning environment that's going to require a lot of work and a lot, I think, of collaboration, with maybe some even unlikely bedfellows, but that might want to move in the same direction on the federal level going forward.
Dr. Scott: Wow. There's a lot to be concerned about there. So let me punt it over to Jared for a second. And Jared, hearing all that, and what you know, what keeps you up at night in terms of these regulations and lack of regulations in this changing environment?
Augenstein: Yeah, it's been something different each night for the past few nights, given all of the activity, as Shannon mentioned. I think the macro issue that keeps me up at night is the balance of regulation and innovation. And I feel that we were in a place in 2023 and 2024 where there was a lot of learning happening about how the technology was advancing, what the risks were, putting in place guardrails, and a desire to—amongst stakeholders who had different points of view about that tension, to work together to identify a reasonable path forward.
I think we had that with the Colorado AI Act, which was imperfect. And even in Governor Polis's signing statement, as Kim mentioned, there were concerns about how the bill would be implemented. And there was a lot of work that was happening behind the scenes to try and improve on the bill, which ultimately failed last week. And the bill is, as of now, going to go into effect as written early next year.
I feel that where we're at now is a much more aggressive sort of positioning from the stakeholders who are interested in a deregulatory agenda. As Shannon mentioned, a 10-year moratorium on any state-based legislation seems extreme, to say the least. And so I worry about our ability to come to consensus and put even basic guardrails in place.
We saw it with what happened in Virginia. Virginia was one of 18 states that had a sort of similar bill to the Colorado AI Act. It imposed certain requirements on developers, deployers, and end users of AI tools. The bill passed, but was ultimately vetoed by the governor because of—there was significant pushback, primarily from the tech community, about the potentially stifling effect it would have on innovation.
In the veto letter, Governor Youngkin wrote that he was vetoing the bill because it would, quote unquote, "establish a burdensome AI regulatory framework." And so I think we're starting to see the power dynamic, if you will, sort of shift. And so I—that concerns me, given just how much is unknown and the real risks that there are from using these tools, in many cases, without some guardrails around protecting consumers, promoting transparency and ensuring they're not used to ill effect.
Dr. Scott: In my opening remarks, I mentioned physicians' concerns about transparency, and liability and application within their practices. It sounds like—I'm hearing an echo that it sounds like their concerns were justified. Let's focus specifically on physicians for a moment. And Kim and maybe Shannon, if you all want to chime in, in terms of state and federal, from a physician perspective, from an impact on our practices, what do you think that we should be—AMA should be most closely tracking right now?
Curtis: I can start off on that one, I guess, from the federal level. I think, as you mentioned, transparency has been something that's been really key for us. It's something that we've been searching for for a long time and asking for stronger federal mandates around AI transparency.
And for us, that actually really goes two ways, as far as the physician is concerned. First, we really, really need to see mandated transparency requirements for developers of AI, for clinical AI applications, even potentially for administrative uses of AI. Physicians need to start understanding what they need to know about an AI tool. And they need that information to be communicated by the developer so they can understand the tool as best they can and make the best decisions about engaging with an AI tool, to make sure it's the right tool at the right time for the right patient.
They need to understand potential limitations. We need to have that kind of clinical validation and performance validation information. All of that transparency is really, really, really important for physicians, not only to ensure their patients are getting the best possible care, but also from a liability perspective. If you make a mistake with a medical device or another type of software application, saying "I didn't know" is not going to be very helpful in pleading yourself out of a medical malpractice case, at the end of the day.
So from that aspect, transparency about what you're using is really, really important going forward. We've also thought it's very important, frankly, from an ethical perspective, that patients understand when they're engaging with AI. If they're engaging with an email, or a chatbot, or things of that nature, we think, from an ethical perspective, that they need to understand that a response or a message they've gotten has been generated by AI on behalf of their physician.
And if there's, again, a chatbot or some other type of function that they're engaging with, they need to understand that it's not a person on the other end. And doing this, we think, helps to enhance trust. At the end of the day, if your patient gets an email from your doctor, they can probably tell if it's not their doctor, and it doesn't sound like them. It doesn't sound like what they're used to dealing with.
So we think that it will help enhance trust and is the ethical and right thing to do, that patients know when they're directly engaging with an AI algorithm versus a human on the other end. So that's been a big federal focus of ours, is pushing towards those transparency mandates. But I'll let Kim talk to the state side.
Horvath: Yeah. So on the state side, we issued a policy issue brief at the end of last year, kind of homing in on what we thought the policy priorities were at the state level related to AI in health care, both where the states are uniquely focused and where there seems to be an interest in regulating.
And those areas are transparency, payer and health plan use of AI, and liability. And those are the three areas that we continue to focus on. We continue to see state legislation in those areas. On liability, we're not necessarily seeing legislation. But it's definitely top of mind for physicians, as Shannon mentioned, and I think it feeds into why we need the transparency requirements.
And then I'll finally just make a plug, since we talked about this—the importance of states having a role here. States pass a lot of bills in a lot of different areas, a lot more than you see at the federal level. They move much faster. And they're often viewed as laboratories for these potential policy solutions. And a lot of things that trickle up to the federal level start at the state level.
And while maybe that isn't always as evident to our federal congressional colleagues, some of those ideas do stem from the state level, where there's an opportunity to hash out some of the issues before maybe applying them at a broader, higher level. So it is important to keep the state work here moving forward.
Dr. Scott: So we've heard some negatives. I'm going to try to shift and be a little positive here. I'll turn to Emily. Emily, is there a bright spot in all of this, in this regulation? And particularly, I'm seeing a lot of questions in the chat about payer use of AI. What are the bright points in terms of policy development in that area?
Carroll: Well, I think, in terms of bright spots, I think that while we are recognizing the efficiencies that AI can bring to physician and health plans' interactions, it's really important that the states are considering what role AI should play when it comes to such consequential decisions as determining whether a patient can access care.
And to me, so far, it seems the states are generally being really thoughtful and measured in their approaches. For example, if you consider that AI can sometimes get us to yes faster, then maybe that's a great thing. But if an AI tool is getting us more nos and is increasing biases and utilization management requirements, maybe that's a good place for legislators and regulators to step in.
We've seen a bunch of payer-related activities, beyond just that idea of a qualified physician making the medical necessity determination. And I think some of those are really interesting and fantastic too. There are states like Washington that had language this year that aims to ensure that AI, when it's being used for prior authorization determination—or medical necessity determinations—is not basing the decision on a group data set—Shannon mentioned this earlier—but rather, the individual's clinical history and medical record.
We're seeing a couple of states, like California and Washington, that are aiming to increase transparency of AI use by payers and requiring public disclosure of how AI is being used to manage claims and coverage, echoing an idea that we have in a lot of our prior authorization efforts: public reporting of that data. We're seeing some states work AI into that prior authorization and utilization management public reporting.
And then we're seeing a number of anti-discrimination bills targeting health plans' use of AI. New York has some language, and a couple of other states. And then I'm seeing a couple of states, like Illinois, really work to boost the authority of insurance commissioners to regulate health plan use of AI through the use of investigations, or market conduct exams, or audits, and things like that. So to Kim's point, we're seeing a lot of space where the states are jumping in and figuring out what might work best in terms of balancing that regulation with innovation.
Dr. Scott: I will tell you that I had an opportunity to be on a panel with different representatives from insurance companies. And one was from a company that we believe is using AI to just more rapidly deny more and more prior authorizations. That's not the reform we're looking for. Meanwhile, the other individual—and I will give them a shout out. They were from one of the Blue Cross Blue Shield plans, which actually says they were using AI specifically to approve prior authorizations.
And then there was some question of the hypocrisy of that. And my point was, well, you've already had a human being, a physician, who's recommended the course of treatment that you're approving. So in my mind, that's OK. The problem is when you have a bot basically disagreeing with a professional who's recommended a course of treatment.
So it's good to see that there is at least some positive on the state level. Shannon, anything to add from a positive perspective on the federal level?
Curtis: I don't know that I have anything additional to add, necessarily, on that front. I think we're all, on the payer elements in particular, in a little bit of a wait-and-see mode about where we go forward. I know there's a really strong interest up at CMS about engaging more with AI within the agency.
So I don't know what that means. And we could be optimistic about the burden reduction that that could result in. It also could go in some directions that could be concerning, both on, I think, the claims review, claims determinations, even potentially on the fraud and abuse fronts. It could lead to some interesting outcomes there. So I think, on the federal front, we'll be really closely watching all of this, but not quite sure what to expect.
Dr. Scott: I will note that someone has put into the chat—one of the staff has put in a link to the survey that Emily was talking about for prior authorization, if anyone wants to use that link, any of you listening. Jared, you had something to add?
Augenstein: Yeah. I think it's just that we're seeing a little bit of pushback at the state level, at least with respect to some of the bigger Colorado AI bills, and we've even seen, on some of the payer use of AI bills, a lot of difficulty in getting the language of those bills right. And some of those bills require, basically, a human to be in the loop and a human to review whatever the AI bot had recommended.
And then, what does it mean to have a human in the loop? Like, are they reviewing the decision and saying, yes, no? And is that meaningful enough? And OK, you could say meaningful human review. And what does that mean? And so I think it's hard to legislate a lot of these things. I think that's the reality of where we're at.
And so that means it puts more pressure on getting this right at the institutional level, at health systems, or physician groups, or payers, in terms of payer use of AI. Having really strong institutional-level governance models in place becomes really important when there are basically no federal guardrails, outside of HTI-1, which is sort of limited, and then a really fragmented state picture, with only a handful of states that have really done anything meaningful.
And we see organizations struggling with that too, because they're operating in an information-poor environment. And then you get a lot of variability across physician groups, across health systems, across insurance companies, which makes it sort of challenging as well, just to know what is normative.
Dr. Scott: So I've got a very specific question for you, Jared. This is straightforward. Someone said, I believe that you said that there were three states that had leading policy regs. Maybe this wasn't you. I don't know—California, Colorado. And they didn't hear the third state.
Augenstein: Yeah, Utah. And Kim covered that bill after I commented.
Dr. Scott: So Utah. So another question that's in here that I think is interesting is about liability. And it was directed to Shannon. So we'll see if Shannon wants to handle this, or other panelists, feel free to jump in. It asks about algorithms creating the potential for a new standard of care, the use of AI potentially becoming the standard of care, and the impact on physician liability. So Shannon, do you want to take a stab at that?
Curtis: Sure. I could probably talk about this topic all day, so I'll try really hard not to. But the liability questions surrounding AI right now, I think, are really interesting, and something physicians in particular need to keep in mind and be pretty aware of as you move forward to potentially start engaging with some of these tools.
We're in a situation now where AI, I don't think, is generally considered the standard of care. Some specialties are probably closer to that becoming a truth than others. But I don't know that we're there yet anywhere. And until the use of AI becomes the standard of care within your specialty, or for that procedure or otherwise, liability is going to be a really interesting animal to tackle here.
I don't think that the AMA would agree with this. But there is somewhat of a prevailing sentiment as of right now that if you engage with AI as a physician, ultimately you are responsible for the outcomes of a chosen path of treatment, a diagnosis, et cetera, because you are the physician, and you are the ultimate arbiter of any decision making with this patient.
The AMA has long held that we think that the person, or the tool, or the developer that's best situated to manage or mitigate risks of poor performance of AI should be the one that holds the liability. If a physician did not know or did not have a reason to know that there could be a problem with an AI, but, say, it's FDA regulated, and they theoretically should have been able to trust it, saying that then they still are at fault for the ultimate decision, when they should have been able to rely on that AI, gets into a really complicated liability situation really quickly.
But there are a lot of folks out there that are promoting the idea that you will still ultimately, at this point in time, be liable for any mistakes that have happened from the AI that you end up relying on. So it's something to keep in mind. It makes transparency really, really important, so you can understand what type of tool you're engaging with. It makes this idea of starting to educate yourselves, and of specialties educating their members, about AI and how to evaluate an AI tool really important if you want to help start mitigating your risks on the liability front.
So we're at a really interesting, unsettled time, I think, for liability. None of these questions have really moved through the court system yet. I doubt they'll really be legislated or regulated at the end of the day. But I know it's top of mind for both physicians, for patients, for the developers. And it's probably going to get more complicated as you start moving towards more complicated AI tools as well. You get into this idea of really highly personalized clinical decision support.
And when you start legitimately engaging with shared decision making with a machine, where you don't necessarily know its full logic, you'll never quite understand how the algorithm works, it starts raising some very interesting questions, kind of along the lines of, who's liable for a self-driving car if it hits a pedestrian? We don't know yet. We don't have a lot of clarity there.
So be careful. Get the education that you need. Be mindful of the tools you're engaging with. Learn as much as you can before engaging with them. And you'll probably be OK. But keep abreast of developments here, because I think this is an unsettled and rapidly changing area. And sorry, that was a lot of information about liability. So—
Dr. Scott: Shannon, I think you just answered about seven or eight questions that I'm seeing here. One was about—the concern is what extent physicians have to use AI or become liable. Another one is about, what about the outcomes when you do choose to use AI? One, maybe from an attorney, that wants to know how to represent his clients when there is a lawsuit regarding AI. I mean, goodness gracious. I mean, clearly, liability is at the top of people's minds.
You mentioned transparency. And that's a word that's often thrown around. And Jared, let me ask you to clarify a little bit in terms of, what all do we mean by transparency? Transparency between the physician and the patient, between the developer of the AI and the physician? Tell me, what are we talking about, particularly in terms of legislation, around transparency?
Augenstein: Yeah. So unfortunately, all of the above. And it depends on the state. And it depends on the bill. But the way that I think about it is kind of like a layer cake, from the developer—so think of a technology company developing an AI tool—to a deployer, which is the organization that's deploying the AI, so think about a hospital system in the case of an imaging AI tool—to the user, which may be, in that instance, an individual radiologist.
So that's a little bit of an oversimplification—and then to the patient who may be receiving the information. And then there's—so there's that layer cake. And off to the side, there's the state. So in many cases, what you see—and there's all versions of these bills. There have been 29 states that have introduced some sort of transparency-type bill. There are over 100 bills that have been introduced in those states.
Most of those bills are focused on transparency to the end user. Like, when is AI being used? So you engage with a chatbot. And you want to know that the response you're getting and the tool that you're engaging with is not a human. It's an AI tool. About half of those bills were focused on transparency between the developer and the deployer of AI and the state. So this is where the state is trying to get its arms around, like, When is AI being used, in which situations? and trying to have more transparency around that.
And then the smallest number of bills, which I think is actually the most important in a health context, in a health care delivery context, is transparency between the developer of AI and the deployer of AI. And that sort of gets into some of the issues that Shannon had brought up around understanding how the model was trained, what populations—what types of individuals it was trained on, in which populations it works, in which populations it doesn't work, things to watch out for in terms of the outputs.
And so, unfortunately, transparency does mean transparency between many different types of stakeholders. And the Colorado bill that we've mentioned a lot, the Colorado AI Act, does have transparency requirements from developer to deployer and deployer to end user. And it sort of tracks through, which makes logical sense, if you think about it. Like, if you're the end user, the organization that's purchasing that tool, if you will, or implementing that tool, needs to tell you about the tool. And for the deployer to know that information, they need the information from the developer. And so I think we'll see more language and legislation that sort of has the transparency requirements flow through.
Dr. Scott: OK. Trying to move through a few more questions. We have pretty limited time. I don't know who wants this one. But someone wants to know about, what role does the AMA want the FDA to play in post-marketing surveillance of health AI tools? Whoever wants to chime in on that one, go for it.
Curtis: It's probably me. I usually handle our FDA regulatory work. FDA is ultimately going to have to be, I think, the entity that mandates a certain level of post-market surveillance on AI-enabled medical devices. Important to note that FDA's authority is, obviously, limited to oversight of FDA-regulated medical devices.
There are a number of AI tools that are going to fall outside FDA purview and will not be regulated by them at all. But the technologies that meet the definition of an FDA-regulated medical device will fall under their purview and do, I think, need very, very strict post-marketing surveillance requirements.
It's frankly—strong post-market surveillance is going to be one of the only ways that we're going to be able to ensure good, high-performing, high-quality AI, at the end of the day. It's going to probably be one of the only ways to identify things like algorithmic drift, which we know absolutely does happen over time with a continuously learning algorithm. How else are we supposed to identify that if we don't have strong post-market surveillance capabilities in place?
My take would be that they absolutely have to be mandated by FDA. FDA has not always traditionally been great on the post-market surveillance space. But I think there's a strong recognition across the FDA, across industry stakeholders, across physician stakeholders, that this is an absolutely critical element of AI adoption going forward, to build trust and ensure these tools work, and ensure that they keep performing at a high level without error, without drift, over time. So—
Dr. Scott: So I'm going to shift gears a little bit about something we haven't talked as much about, but is obviously important. And that's patient privacy. So the question is worded, if big tech is able to obtain as much information as they want, how would patient data be affected? Would HIPAA still protect patient data? How do you foresee the administration allowing access to data for AI development in the future? Who wants to tackle that one?
Curtis: Listen, I just want to be clear that I am actually not a data privacy expert or our HIPAA staffer. He is unfortunately not on this call. So I unfortunately can't speak to the specifics of HIPAA and data privacy, except on the terms that I think we're all well aware of there being significant patient data privacy concerns with the use of AI.
And it's a really, really challenging topic going forward, because there is some truth to the claim that in order for AI to be good, it needs good data to train itself on. Garbage in, garbage out with AI, to a certain extent. But you do need that data available to you to train a good algorithm at the end of the day. And these tech companies want access to large, large, large amounts of data.
But there is the other problem of, frankly, once data goes into an algorithm, or even like a ChatGPT prompt or things like that, it never really truly comes out of that algorithm or that training data set at the end of the day. And I think, for the most part, patient data can be re-identified.
So there are a lot of concerns here going forward. Again, I wish I was a HIPAA expert. Unfortunately, I'm not. But it's really, I think, going to be a challenging environment to balance big tech's desire for huge amounts of data with an administration that, I think, frankly, really wants to give them that, with these patient data privacy protections that we know have to be in place.
We've long pushed for more comprehensive federal data privacy legislation. I think there's a huge amount of the industry at large and other stakeholders that actually are, frankly, starting to echo that call pretty loudly. So we're hoping that's something that gets tied up into these larger conversations about AI regulation. But I think we're all a little nervous about where that might go at the end of the day in this new environment. It's a critical component of any conversations about AI going forward and something that has to be addressed if we're going to get our patients to trust us, and if adoption is going to be successful in a broader sense.
Dr. Scott: I'm going to give each of our other panelists an opportunity to make a 30-second closing comment, because we're almost out of time. Jared, any comment about anything we've talked about, or any message you want to leave our audience with?
Augenstein: I would just say, I think that state and federal regulators and policymakers need really good real-world examples of what's working, what's not working, what's concerning folks, and how to regulate the use of AI in a way that makes sense. And so I imagine the audience on this call is a mix of state society and specialty society representatives and physicians. And so all of you are really well positioned to help educate policymakers in terms of how to think about this space. And so I'd just encourage everyone to engage in that way. I think it'll lead us to a much better policy landscape a few years from now.
Dr. Scott: Kim, a closing comment?
Horvath: Yeah, sure. I'll just say that I think it's the importance of having physicians continue to be involved, both in terms of developing AI, to make sure that it's something that's actually useful when it gets into the system, and also in making sure that those who are developing AI are creating tools that will be most beneficial to patients and physicians and help streamline practices and such.
Dr. Scott: Emily, closing words?
Carroll: Real quickly, I'll just say, for the state advocates out there, don't count out your regulators. There's a lot of opportunity, I think, to consider AI—how AI can be regulated through existing laws that are already in place, and authorities that are already within your regulators that already exist. So be sure you're engaging with your insurance commissioners and others to get some of these goals accomplished.
Dr. Scott: Obviously, a lot of energy and interest. Thank you to our panelists. Thank you to all in the audience. We were not able to get to all the questions, but thank you for joining us for today's advocacy series. Please join us for future AMA Advocacy Insights webinars, when we will take you inside other important issues in medicine and health care. Thank you all for being here with us today. And goodbye.
Disclaimer: The viewpoints expressed in this video are those of the participants and do not necessarily reflect the views and policies of the AMA.