By MATT O’BRIEN (AP Technology Writer)
The White House on Thursday introduced new rules requiring U.S. federal agencies to show that their artificial intelligence tools are not harming the public, or stop using them.
“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Vice President Kamala Harris said ahead of the announcement.
By December, each agency must have a set of specific protections that govern everything from facial recognition screenings at airports to AI tools used for controlling the electric grid or making decisions about mortgages and home insurance.
The new policy directive issued to agency heads by the White House’s Office of Management and Budget is part of a broader AI executive order signed by President Joe Biden in October.
While Biden’s broader order also attempts to safeguard the more sophisticated commercial AI systems made by leading technology companies, such as those powering generative AI chatbots, Thursday’s directive targets the AI tools that government agencies have used for years to help with decisions about immigration, housing, child welfare, and a range of other services.
For instance, Harris said, “If the Veterans Administration wishes to use AI in VA hospitals to assist doctors in diagnosing patients, they would first need to prove that AI does not produce racially biased diagnoses.”
According to a White House announcement, agencies that cannot implement the protections “must stop using the AI system, unless agency leaders can justify why doing so would increase overall risks to safety or rights, or create an unacceptable obstacle to crucial agency operations.”
The new policy also mandates two other “binding requirements,” Harris said. The first requires federal agencies to appoint a chief AI officer with the “experience, expertise, and authority” to oversee all AI technologies the agency uses. The second requires agencies to publish an annual inventory of their AI systems, along with an assessment of the potential risks they pose.
Some of the new rules exempt intelligence agencies and the Department of Defense, which is separately debating the use of autonomous weapons.
Shalanda Young, the director of the Office of Management and Budget, stated that the new requirements are also intended to reinforce the positive applications of AI by the U.S. government.
“When used and supervised responsibly, AI can assist agencies in reducing wait times for critical government services, improving accuracy, and expanding access to essential public services,” Young remarked.
Civil rights organizations commended the new oversight on Thursday; some have spent years pushing federal and local law enforcement agencies to curb the use of face recognition technology tied to wrongful arrests of Black men.
A September report from the U.S. Government Accountability Office, which reviewed seven federal law enforcement agencies including the FBI, found that they had collectively run more than 60,000 searches using face-scanning technology without first ensuring that staff were adequately trained in how the technology works and how to interpret its results.