Four major tech companies (Google, OpenAI, Microsoft, and Anthropic) have come together to form a new industry body designed to ensure the “safe and responsible development” of “frontier AI” models.
In response to growing calls for regulatory oversight, these tech firms have announced the formation of the “Frontier Model Forum”. The body will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, including by developing a public library of solutions to support industry best practices and standards.
The Forum aims to: advance AI safety research to promote responsible development of frontier models and minimise potential risks; identify safety best practices for frontier models; share knowledge with policymakers, academics, civil society and others to advance responsible AI development; and support efforts to leverage AI to address society’s biggest challenges.
Although the Frontier Model Forum currently has only four members, the collective said it is open to new members.
Qualifying organisations must be developing and deploying frontier AI models, as well as showing a “strong commitment to frontier model safety”.
“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone,” said Kent Walker, President, Global Affairs, Google & Alphabet.
Over the coming months, the Frontier Model Forum will establish an Advisory Board to help guide its strategy and priorities, representing a diversity of backgrounds and perspectives.
The founding companies will also establish key institutional arrangements, including a charter, governance and funding, with a working group and executive board to lead these efforts.
“We plan to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate,” the companies wrote in a joint statement on Wednesday.
The Forum will focus on three key areas over the coming year to support the safe and responsible development of frontier AI models: promoting knowledge sharing and best practices among industry, governments, civil society, and academia; supporting the AI safety ecosystem by identifying the most important open research questions on AI safety; and facilitating information sharing among companies and governments.