Health IT End-Users Alliance Responds to RFI on NIST’s Assignments Under Executive Order Concerning Artificial Intelligence

January 31, 2024

Dr. Laurie Locascio
Under Secretary of Commerce for Standards and Technology
Director of the National Institute of Standards and Technology (NIST)
Submitted electronically to www.regulations.gov

RE: NIST–2023–0309, Request for Information (RFI) Related to NIST’s Assignments Under Sections 4.1, 4.5 and 11 of the Executive Order Concerning Artificial Intelligence

Dear Dr. Locascio:

The Health IT End-Users (HITEU) Alliance appreciates the opportunity to provide the National Institute of Standards and Technology (NIST) with feedback on the Request for Information (RFI) Related to NIST’s Assignments Under Sections 4.1, 4.5 and 11 of the Executive Order Concerning Artificial Intelligence, as published in the December 21, 2023 Federal Register.

The HITEU Alliance brings together health information professionals, physicians, hospitals, and other front-line health care providers and organizations that use health IT in the provision of care. Our goal is to ensure that policy and standards development activities reflect the complex web of clinical and operational challenges facing those who use technology tools for care. By working collaboratively across settings of care, the Alliance focuses on priorities for how technology can best support clinical care and operations.

Our comments are grounded in the HITEU Alliance’s Consensus Statements on Data to Support Equity and Real-World Testing. They focus on:

  • Ensuring that end-users are sufficiently engaged in standards and guidance development, including for complex issues such as generative artificial intelligence (AI); and
  • Providing end-users with adequate tools and information to safely and effectively use AI and other technology for care.

Standards and Guidance Development Must Include End-Users

Under Executive Order 14110 – Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence – NIST has been tasked with the crucial role of establishing guidelines and best practices to support the development and deployment of safe, secure, and trustworthy AI systems. Among other things, NIST will develop a companion resource to its existing AI Risk Management Framework to address generative AI. NIST has also been tasked with providing guidance to a wide range of federal agencies on their own use of AI and on possible regulatory activities.

As a coalition of those who use technology to provide health care, the HITEU Alliance urges NIST to consider the unique attributes of the health care sector and actively engage the end-users of AI tools as you complete your work.

The development of new health IT can bring benefits to both clinical care and operations. Artificial intelligence is no different – research has shown that AI can increase the accuracy of imaging studies, improve the safety of drug pumps and other medical devices, support clinical diagnosis, and optimize the scheduling of operating rooms, among other things.

However, those who design these tools are generally not the ones who use them to provide care. Over the past decade, end-users have found that new technical approaches to gathering and sharing health information, once incorporated into regulatory requirements, are not sufficiently grounded in real-world experience and do not adequately consider the implementation pathway for health care. Issues include how new approaches work with already-deployed infrastructure; workflow constraints on adopting new technology (including limitations confronting small, solo, and rural medical clinics); technology costs; engaging with and educating patients on their role in using the technology; and how new tools and requirements will fit into the array of regulatory requirements that health IT end-users already face.

All of these real-world concerns are likely to also apply to AI tools that will be created and used in the years to come. In addition, we would note that AI tools may also impact the health care workforce by automating some tasks, creating new roles, and resulting in significant additional training needs.

The HITEU Alliance encourages NIST to include in its standards and guidelines both risk management approaches and transparency measures that support safe, equitable, and appropriate use of AI tools in health care.

Health care providers will need information from the developers of AI tools to ensure safe and appropriate use. For example, they will need to know whether a developer has undertaken efforts to ensure that an AI tool is accurate and safe for specific populations, and whether the tool has gone through the Food and Drug Administration's (FDA) approval process. They will also need information that explains how a model works.

Researchers and others have identified concerns that AI tools already in use in health care may embed bias or have other unintended clinical consequences. Greater transparency about the training data and algorithms used to create AI tools, as well as the results of analyses and evaluations of those tools, will help clinicians and other end-users assess whether and how to use them. The development of model cards or other standardized forms of transparency will be crucial for demonstrating the benefits and limits of a particular tool. For example, in a 2023 survey, nearly eight in ten physicians said that a summary of a tool's key points, purpose, capabilities, and limitations, along with examples of real-world scenarios, would be the most useful form of information about AI tools. Moreover, in health care, end-users will play a vital role in communicating and sharing information with patients. Guidelines for AI tool developers should therefore address both what the user of a tool needs to know and the information the user will then need to share with patients.

As noted in our Data to Support Equity Consensus Statement, achieving equity in health and health care is a key priority. Given the ways in which AI has been shown to replicate bias in the data used to train and deploy AI tools, we urge NIST to provide guidance on ways to detect and mitigate bias in AI tools, as well as transparency requirements for AI tool developers related to:

  • Disclosure of any known concerns and steps to mitigate bias;
  • Attributes of the data used to train a tool;
  • Specific populations for which a given tool may be appropriate or inappropriate;
  • Data use and protection policies; and
  • Updates over time on whether a tool is experiencing “drift” that may affect its accuracy.

The HITEU Alliance recommends that NIST actively engage end-users and other stakeholders to identify the most appropriate list of AI attributes to include in any transparency guidelines for health care. Working together, the clinical, operational, and technical communities can identify how best to balance usability and completeness in creating a “nutrition label” for AI tools used in caring for patients.
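
As a purely illustrative sketch – not part of the Alliance's formal recommendations – the transparency attributes listed above could be captured in a machine-readable model card along the following lines. The field names, structure, and summary function are hypothetical and would need to be defined with end-users and other stakeholders, as recommended above.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class AIModelCard:
        """Hypothetical 'nutrition label' reflecting the transparency
        attributes discussed above. Field names are illustrative only."""
        tool_name: str
        intended_use: str                       # purpose and clinical context
        known_bias_concerns: List[str]          # disclosed concerns
        bias_mitigation_steps: List[str]        # steps taken to mitigate bias
        training_data_attributes: str           # description of the data used to train the tool
        appropriate_populations: List[str]      # populations for which the tool is validated
        inappropriate_populations: List[str]    # populations for which use is not advised
        data_use_and_protection_policy: str
        fda_review_status: Optional[str] = None         # e.g., cleared, approved, or not reviewed
        drift_monitoring_notes: List[str] = field(default_factory=list)  # updates on model drift over time

    def summarize(card: AIModelCard) -> str:
        """Assemble a brief, clinician-readable summary of key points."""
        return (
            f"{card.tool_name}: {card.intended_use}\n"
            f"Validated for: {', '.join(card.appropriate_populations) or 'not specified'}\n"
            f"Known bias concerns: {', '.join(card.known_bias_concerns) or 'none disclosed'}\n"
            f"FDA review status: {card.fda_review_status or 'not reported'}"
        )

Whatever form such a label ultimately takes, it should balance usability for busy clinicians with completeness of the underlying disclosures.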

Conclusion

HITEU Alliance members appreciate the opportunity to comment on this RFI and would welcome the opportunity to collaborate with NIST to share the end-user perspective on IT tools used in health care as this important work continues.