President Biden has directed the Department of Health and Human Services (HHS) and other leading health agencies to develop a plan for regulating AI tools used extensively in hospitals, health insurance companies, and other healthcare organizations. As part of this directive, a safety program will be established to receive reports of AI-related hazards and unsafe practices and to develop remedies for them.

The directive is part of a broader executive order creating standards for the use of AI across the federal government. It seeks to balance managing the risks of AI with promoting innovation that benefits consumers: it supports efforts against harmful practices and discrimination, and it encourages grants and funding for AI-related research, such as drug discovery. It also requires any company developing a generative or foundation-model AI tool that could potentially harm public health to notify the government when training the model and to share the results of its safety tests.

HHS has been given 180 days to create a strategy for assessing the quality of AI tools before they are used in healthcare, including performance evaluation and maintenance standards for AI models.
[Source: STAT, October 30th, 2023]