
Navigating the AI Frontier: Regulatory Challenges and Opportunities for State Licensing Boards 

State licensing boards need to be aware of the implications of AI across the regulatory landscape, as it will affect licensing operations, healthcare best practices, and more. This article summarizes the main topics and issues presented at the 2024 Annual Education Meeting by Frank Meyers.


Mr. Meyers opened with his background, highlighting more than ten years of experience in professional licensure regulation, with his most recent focus on the intersection of Artificial Intelligence (AI) and healthcare. He clarified that while he would be highlighting various articles, legislative bills, policies, and position statements, the views expressed were his own and not those of any organization, including FSBPT or FSMB.

Understanding the Impact and Evolution of AI Technologies

Mr. Meyers' presentation began in earnest with a discussion of the rapid transformation AI is driving across sectors such as healthcare, finance, and manufacturing. He explained that AI is an umbrella term for technologies that perform tasks typically requiring human intelligence, such as learning, reasoning, problem-solving, perception, and language understanding. Mr. Meyers categorized AI into several types, including Machine Learning, Generative AI, and Natural Language Processing (NLP). He elaborated that Machine Learning involves training algorithms on large datasets to recognize patterns and make predictions. Generative AI, such as ChatGPT, creates new content based on existing data, while NLP enables machines to understand and respond to human language.
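To make the Machine Learning category concrete, here is a toy sketch of the train-on-labeled-data, predict-on-new-cases loop Mr. Meyers described. It assumes the scikit-learn library and one of its bundled sample datasets, and is purely illustrative rather than anything shown in the presentation:

```python
# Toy illustration of "machine learning": an algorithm fit to labeled
# examples learns patterns it can apply to new, unseen cases.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # small labeled clinical dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)          # "training": fitting to known cases
print(model.score(X_test, y_test))   # "prediction": accuracy on unseen cases
```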

Next, Mr. Meyers highlighted the advancements in AI technology over the years. He noted that in 2016 AI could generate images, but the quality was low; by 2023, the technology had improved dramatically. On the text and language side, he explained that older NLP models were trained by ingesting sentences sequentially and had difficulty retaining the connections between words and concepts. With the advent of the Transformer architecture, which lets a model weigh every word in a passage against every other word at once, this limitation was eliminated, resulting in models that are far more useful and effective (e.g., ChatGPT, Claude, Llama).
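The core of that architecture is the attention operation. Below is a minimal NumPy sketch of scaled dot-product attention, the standard formulation from the Transformer literature; the tiny matrix sizes are chosen only for illustration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: every position attends to every
    other position at once, rather than reading the sequence word
    by word as older sequential models did."""
    d_k = Q.shape[-1]
    # Similarity of each query to every key, scaled for numerical stability
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of all value vectors, so connections
    # between distant words are preserved
    return weights @ V

# Toy example: a 4-token "sentence" with 8-dimensional embeddings
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)  # (4, 8): one context-aware vector per token
```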

Data Requirements and Use Cases

Mr. Meyers then discussed the extensive work required to make AI effective. He emphasized that models require enormous amounts of data, and that data must be reviewed and verified by humans. This intensive work is happening in many fields, including healthcare. For instance, Med-PaLM leverages data to help answer medical questions, and AI is used in pre-clinical drug research to identify ideal patients for trials. Chatbots can help patients get answers faster and can streamline the administrative tasks involved in insurance programs. Significant advancements have also been made in leveraging AI to review and analyze medical imaging.

Mr. Meyers provided some examples of AI already in use in healthcare, such as the Mayo Clinic's deployment of Google's AI chatbot. AI models designed to detect sepsis and similar conditions have also proven effective. However, Mr. Meyers cautioned that a model's accuracy and effectiveness can vary depending on the environment, as many variables are involved; each model needs to be fine-tuned to the appropriate environment and situation.

Mr. Meyers gave an overview of ambient listening tools such as DAX Copilot, which help clinicians create accurate medical notes from clinician-patient conversations. These models are trained on medical terminology and designed for this specific purpose, which can save clinicians significant time. However, they are very expensive, so cheaper third-party products are filling the void; because those products do not match the quality of the purpose-built tools, they should be used with caution.

Pros and Cons and the Hallucination Problem

Mr. Meyers discussed the pros and cons of custom GPTs, walking through an example custom GPT for writing SOAP notes. In particular, he demonstrated how this GPT fabricated information that was never provided (e.g., patient names), a failure known as a "hallucination." He explained that AI models are designed to produce an answer, and if proper guardrails are not put in place or quality training is not performed, they may simply "make up" answers because they are generating the most statistically likely output. Mr. Meyers stressed the importance of writing good prompts, reviewing all citations in AI responses, and proceeding with caution. He concluded this portion of his presentation by highlighting the need for creators of AI to build solid guardrails into their tools and for users to understand what those guardrails are.
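As one illustration of what a user-side guardrail can look like, the sketch below wraps a SOAP-note request in a system prompt that forbids invented details. It assumes the official OpenAI Python SDK and an API key in the environment; the prompt wording, function name, and model choice are illustrative assumptions, not tools demonstrated in the presentation:

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK

# Guardrail: instruct the model to use only the facts provided and to
# leave unknown fields blank instead of inventing them (e.g., names).
SYSTEM_PROMPT = (
    "You draft SOAP notes. Use ONLY the facts in the transcript below. "
    "If a detail (such as the patient's name) is not stated, write "
    "'[not provided]' rather than guessing."
)

def draft_soap_note(transcript: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
        temperature=0,  # lower randomness reduces fabricated detail
    )
    # The draft must still be reviewed by the clinician before filing.
    return response.choices[0].message.content
```

Even with such a guardrail, the output remains a probabilistic draft, which is why Mr. Meyers' advice to review every AI-generated detail still applies.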

Potential Regulatory Issues with AI

Mr. Meyers moved on to discuss Google's work streamlining healthcare prior authorization in the insurance space. He noted that California passed legislation ensuring that any coverage denial must be made by a healthcare professional, not by automation or AI. More jurisdictions will likely need to address this issue, as it is just one of the many regulatory questions AI introduces.

Regulators must also reexamine many regulatory questions long taken for granted. Mr. Meyers posed several: How do we define the practice of a profession? If a nurse oversees a physical therapy application that leverages AI, who is responsible for practicing physical therapy? The application is not licensed, and the nurse is not a licensed physical therapist. Do we need to rethink how we define the practice of various professions?

Mr. Meyers then mentioned the FDA's recent approval of LumineticsCore, an autonomous AI diagnostic system that analyzes retinal images and renders a diagnosis of diabetic retinopathy. The FDA also cleared Pearl, diagnostic software that helps dentists interpret radiographs and visualize dental problems more clearly. He pointed out that more of these tools are becoming available and that they have implications for the standard of care: at what point will it be below the standard of care not to use an evidence-based, proven AI model?

Existing AI Policies and Guidance

Mr. Meyers emphasized that while many AI companies have policies and principles, they cannot necessarily be trusted to regulate themselves; the stakes are too high, and AI will affect society profoundly. The United States currently has no overarching AI regulation. The EU has adopted rules governing the development of these tools, but they have little impact on the individual user. Ultimately, federal legislation will be needed to provide consistent nationwide standards; until then, a patchwork approach to AI regulation is expected. Utah, Colorado, and California have introduced relevant legislation, and more states are likely to follow soon.

Healthcare regulatory boards can participate in this process. Mr. Meyers mentioned that the Federation of State Medical Boards released “Navigating the Responsible and Ethical Incorporation of Artificial Intelligence into Clinical Practice” to help guide the discussion.

Mr. Meyers stressed the importance of regulators ensuring that professionals are held to a high standard. Professionals have an ethical responsibility to understand the tools they use and to ensure that the care they provide meets acceptable standards. Transparency is crucial when AI is used in high-risk settings like healthcare: patients and users deserve to know when they are interacting with AI rather than human professionals, and companies offering AI products must be transparent about the risks, limitations, and appropriate use of their tools.

Ethical principles should guide the use of AI in professional licensing. Mr. Meyers outlined these principles, including respect for patient autonomy, beneficence, non-maleficence, and justice. AI governance should focus on ensuring that AI tools are used responsibly and transparently, addressing potential biases in AI algorithms, and ensuring equitable access to AI benefits.

Mr. Meyers concluded his presentation by highlighting the potential of AI to revolutionize various industries, including healthcare, by automating tasks, improving efficiency, and enhancing patient care. However, the rapid advancement of AI technologies also poses significant regulatory challenges. Ensuring transparency, ethical use, and continuous education are crucial for the successful integration of AI into professional licensing. By addressing these challenges, state licensing boards can harness the potential of AI to improve public health and safety.


Frank Meyers

Deputy Legal Counsel, Federation of State Medical Boards (FSMB)

Frank Meyers is a seasoned attorney specializing in healthcare regulation and professional licensing, currently serving as Deputy Legal Counsel at the Federation of State Medical Boards (FSMB). His expertise extends to the intersection of law, healthcare, and technology, with a particular focus on the use of Artificial Intelligence (AI) in these fields.

His unique blend of legal expertise, deep understanding of healthcare regulations, and hands-on experience with AI tools makes him a sought-after thought leader. His commitment to the ethical use of AI by attorneys is reflected in his presentations and discussions, where he emphasizes the importance of competence, confidentiality, and truthfulness.