Eight more major tech firms involved in artificial intelligence (AI) development signed the White House’s AI safety pledge.
The White House announced on Sept. 12 that the firms had agreed to voluntarily follow standards for safety, security, and transparency in their use of AI.
Adobe, IBM, Palantir, Nvidia, Salesforce, Stability AI, Cohere, and Scale AI joined Amazon, Anthropic, Google, Inflection AI, Microsoft, and OpenAI, which signed the pledge in July.
The Biden administration initiated an industry-led effort on AI safeguards with tech companies during the summer.
All of the signatories have committed to AI testing and other security measures, but the commitments are voluntary, not regulations the government can enforce.
Potential Threats Concern Washington
The rapid advancements in AI have become a major concern in Washington since OpenAI released its ChatGPT chatbot last year.
AI is facing scrutiny from lawmakers over its potential threat to certain jobs, its ability to spread disinformation, its use in creating deepfakes, and the possibility that it could develop self-awareness.
Lawmakers and regulators are increasingly debating how to handle the technology.
The White House said those firms that joined the initiative agreed to ensure that AI products were safe before making them public, put security first, and earn the public’s trust.
In addition to voluntary commitments, the Biden administration is drafting an executive order with the same goals and encouraging legislative efforts in Congress to regulate AI.
“The President has been clear: harness the benefits of AI, manage the risks, and move fast—very fast,” Chief of Staff Jeff Zients said in a statement regarding the latest pledges. “And we are doing just that by partnering with the private sector and pulling every lever we have to get this done.”
The tech companies further agreed to share information on potential dangers from the technology and to develop mechanisms to let consumers know when content is generated by AI.
Congress Acts to Regulate AI
The move by the White House comes as Sen. Chuck Schumer (D-N.Y.) plans to host a number of tech companies for an AI forum on Sept. 13, Axios reported.
CEOs from a dozen of the world’s biggest tech companies, several lawmakers, labor officials, and nongovernmental organization representatives will join the senator for the event, which is expected to last six hours.
Bills pending in Congress to regulate AI include the Artificial Intelligence and Biosecurity Risk Assessment Act and the No Robot Bosses Act.
The No Robot Bosses Act, introduced by Sen. Bob Casey (D-Pa.), would bar employers from relying solely on algorithms, machine learning, and other AI tools to make employment decisions. The bipartisan Artificial Intelligence and Biosecurity Risk Assessment Act would require regulators to monitor the risks of technical advances in AI and how the technology could be used to develop lethal pathogens.
Under a separate proposal reportedly floated by Sen. John Thune (R-S.D.), the Commerce Department would be able to bring civil actions against any company whose noncompliance was discovered and not appropriately remedied.
On Sept. 12, Microsoft President Brad Smith and Nvidia’s chief scientist William Dally testified about AI regulations in front of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, led by Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.).
Big Tech Seeks Self-Regulation
The series of pledges shows a growing momentum by Big Tech firms to set voluntary industry standards before the government acts on its own.
Although the pledges are not legally binding, the companies agreed to ensure internal and external testing before releasing future products, label AI-generated content using watermarks or similar technology, and share information with the industry and the government about potential risks, biases, and vulnerabilities in their systems.
Adobe encouraged both the pledge's signers and companies that have yet to sign to support the FAIR Act, another proposed bill, which would ensure that celebrities and others retain the rights to their digital likenesses.
Adobe General Counsel Dana Rao told Axios that Adobe had been working on AI responsibility efforts for the past four years and has been a leader in the Content Authenticity Initiative, which identifies when content is created or edited using AI.
“I’m really excited to see the White House step in,” Rao said. “We need that momentum from the White House to really push these initiatives to where they need to be.”
Meanwhile, consumer advocacy groups and others are worried about the influential role of tech companies in discussions about AI regulations and their self-regulatory pledges.