Microsoft Joins Amazon, Google, OpenAI and Other Tech Giants in AI Safety Pledge

Over a dozen companies at the forefront of today's generative AI boom have agreed to a set of "AI safety commitments" as part of last week's AI Seoul Summit.

Microsoft, OpenAI, Amazon and Google are among the 16 organizations that have signaled their willingness to police their own procedures around AI development, with the goal of limiting AI misuse and promoting responsible deployments. The companies' participation marks a "world first," according to a press release from the U.K. government, which co-hosted the summit alongside the Republic of Korea.

The companies agreed to three main goals outlined in the "Frontier AI Safety Commitments" document, which, according to Microsoft President Brad Smith, is "an important acknowledgement of how safety frameworks must help to address risks that may emerge at the frontier of AI development, especially as their capabilities advance."

Under the document's first goal, organizations are asked to "effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems."

Regarding this goal, many of the signatories already have internal requirements meant to ensure the safety of their AI technologies. Microsoft, for example, abides by its Responsible AI Standard, developed in 2016, and its Copilot development process includes extensive red-teaming. Meta and others are independently exploring ways to "watermark" content created by their AI systems to limit misinformation, especially in light of this year's elections. And OpenAI, widely considered generative AI's bellwether, recently formed a new Safety and Security Committee, albeit only after disbanding its previous AI safety team.

Critically, however, a tenet of this first commitment is that organizations must agree to halt development or deployment of AI systems whose risks cannot be adequately mitigated.

Specifically, they must define "thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable," and "commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds."

The companies are tasked with defining these risk thresholds over the coming months, with the goal of publishing a formal safety framework in time for the AI Action Summit, to be held in France in February 2025.

The two other goals outlined in the document are:

  • Organisations are accountable for safely developing and deploying their frontier AI models and systems.
  • Organisations' approaches to frontier AI safety are appropriately transparent to external actors, including governments.

The document also lists several AI safety best practices that the signatories pledge to apply, if they haven't already. These include red-teaming, watermarking, incentivizing third-party testing, creating safeguards against insider threats, and more.

Said U.K. Prime Minister Rishi Sunak, "These commitments ensure the world's leading AI companies will provide transparency and accountability on their plans to develop safe AI." The pledges laid out in the document do not carry legal weight, however; in fact, they're described as "voluntary commitments."

Here is the full list of signatories:

  • Amazon
  • Anthropic
  • Cohere
  • Google/Google DeepMind
  • G42
  • IBM
  • Inflection AI
  • Meta
  • Microsoft
  • Mistral AI
  • Naver
  • OpenAI
  • Samsung Electronics
  • Technology Innovation Institute
  • xAI
  • Zhipu.ai

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
