Apple has joined several other tech companies in agreeing to abide by voluntary AI safeguards laid out by the Biden administration. Companies that sign the pledge commit to eight guidelines covering safety, security and social responsibility, including flagging societal risks such as bias; testing for vulnerabilities; watermarking AI-generated images and audio; and sharing trust and safety details with the government and other companies.
Amazon, Google, Microsoft and OpenAI were among the initial signatories of the pact, which the White House announced last July. The voluntary agreement is not enforceable and is set to lapse once Congress passes laws to regulate AI.
Since the guidelines were announced, Apple unveiled a suite of AI-powered features under the umbrella name of Apple Intelligence. The tools will work across the company’s key devices and are set to start rolling out in the coming months. As part of that push, Apple has teamed up with OpenAI to incorporate ChatGPT into Apple Intelligence. In joining the voluntary code of practice, Apple may be hoping to ward off regulatory scrutiny of its AI tools.
Although President Joe Biden has talked up the potential benefits of AI, he has also warned of the dangers the technology poses. His administration has been clear that it wants AI companies to develop their tech responsibly.
Meanwhile, the White House said in a statement that federal agencies have met all of the 270-day targets laid out in the sweeping Executive Order on AI that Biden issued last October. The EO covers issues such as safety and security measures, as well as reporting and data transparency requirements.