The White House said Thursday it is requiring federal agencies using artificial intelligence to adopt "concrete safeguards" by Dec. 1 to protect Americans’ rights and ensure safety as the government expands AI use in a wide range of applications.
The Office of Management and Budget issued a directive to federal agencies to monitor, assess and test AI’s impacts "on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI." Agencies must also conduct risk assessments and set operational and governance metrics.
The White House said agencies "will be required to implement concrete safeguards when using AI in a way that could impact Americans' rights or safety" including detailed public disclosures so the public knows how and when artificial intelligence is being used by the government.
President Joe Biden signed an executive order in October invoking the Defense Production Act to require developers of AI systems posing risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before they are publicly released.
The White House on Thursday said new safeguards will ensure air travelers can opt out of Transportation Security Administration facial recognition use without being delayed in screening. When AI is used in federal healthcare to support diagnostic decisions, a human must oversee "the process to verify the tools’ results."
Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears that it could lead to job losses, upend elections and potentially overpower humans with catastrophic effects.
The White House is also requiring government agencies to release inventories of their AI use cases, report metrics about AI use, and release government-owned AI code, models and data where doing so does not pose risks.