AI Safety Summit

It is very difficult to judge at this stage what the concrete outcomes of the AI Safety Summit will be. Like most industry practitioners, I wasn't invited to be in the room at Bletchley Park, but I did sit through the closing plenary livestream yesterday to see whether it would give us any clues. Here is my summary of what I heard...

Session Summaries

Whilst you have to feel for the session reps, given a few seconds onstage to sum up complex and presumably often diverging discussions, the ideas and topics they raised give us a clue as to what key people in international academia and governments are thinking.

Risks to Global Safety from Frontier AI Misuse

First up was François-Philippe Champagne, Canadian Minister of Innovation, summing up the session on AI misuse risks.

He talked about the three 'A's:

  1. Acknowledgment of AI Risks: There was a general consensus among the group that AI has the potential to empower malicious actors globally. They highlighted concerns similar to those around biological and chemical weapons and expressed worry about the impact of deepfakes on the roughly 50 elections taking place in 2024, affecting some 4 billion people. He stressed that the risks are real and that the technology can enable harmful actions.
  2. Action: there was wide agreement that both domestic and international action is necessary to address AI-related risks. He tentatively called for an IPCC-style panel on AI safety and for discussions around testing and ensuring the intrinsic safety of AI systems before deployment. He drew an analogy with the nuclear field, where manufacturers must ensure safety before use.
  3. Adaptation: he said the group recognised that AI's rapid evolution makes it challenging to predict its capabilities over the next five to ten years. Consequently, regulatory frameworks need to be adaptable, again mentioning the United Nations and other international bodies as potential avenues for addressing AI-related challenges.

Overall, as could be expected from a politician summarising a diverse group, lots of well-articulated great intentions and phrases like "act with confidence" and "seize the initiative", but no clear ideas about the regulatory direction of travel.

Risks of Unpredictable Advances in Frontier AI Capability

Next up, Yi Zeng of the Chinese Academy of Sciences, with a very articulate and nuanced summary on this topic. I understand inviting a Chinese delegation was controversial; this contribution showed why it was essential.

Again, three key themes:

  1. Current AI systems do not pose a significant risk of loss of control, as they require human prompting, fail to plan over time towards goals, and have limited real-world action capabilities. However, it was acknowledged that future AI models are likely to improve in these respects, although the case for severe loss of control in the future has not been fully established.
  2. Pressing pause on frontier development: sounds good in theory, but the difficulty arises because, while we can rely on responsible developers to comply, there are few levers to pull to affect irresponsible developers. Bad actors can go ahead and further develop frontier models without our knowledge, so that they, not responsible actors, end up leading the science. Instead, levers and incentives need to be addressed through actionable steps...
  3. Actionable Steps: three no-brainer actions were identified: growing global expertise in AI safety and R&D, deepening collaborations in AI testing and auditing, and continuing multi-stakeholder exchanges to better understand and mitigate AI-related risks.

Risks from Loss of Control over Frontier AI

The next chair's name wasn't announced and I didn't recognise him, but he summed up the session on risks of loss of control. Let me know if you recognise him and I will give due credit.

  1. Unexpected nature of both advances and failures: he stressed the importance of continuing efforts in AI safety despite our inability to make accurate predictions, highlighting the beauty of the ongoing discussion.
  2. Open Source Challenges: this one is really interesting. They acknowledged the role of open source in getting computing to where it is today, but questioned whether it is safe to allow open source AI going forward, as it would give equal access for both beneficial and abusive uses. [my own comment: this is the germ of a very ugly and dangerous idea!]
  3. Domestic and International Collaborations: The group advocated for both domestic and international collaborations in AI safety. They noted that the UK and the US are establishing their AI safety institutes, but suggested that every country should have its own AI safety initiative, and emphasised the need for a global AI safety network, stressing the importance of addressing AI challenges not just among advanced countries but also in the rest of the world. They highlighted the need for global cooperation on AI safety issues through the UN platform, calling out the ITU as having a role.

[Opinion: personally I found the ideas of banning open source AI models, and of letting the ITU anywhere near anything we don't want strangled at birth, scary. The session members who contributed those ideas are misguided (or worse).]

Risks from the Integration of Frontier AI into Society

Next up, Marietje Schaake, who was an excellent choice to chair the session on Societal Risks. As could be expected, some really good, balanced points, very eloquently put:

  1. Existential Risks: The societal risks of deploying existing frontier AI models badly are existential in their own right. The risks to democracy, human rights, civil rights and fairness, together with the risks of economic inequality, unequal access to healthcare, and global inequality, have the ability to destabilise society.
  2. Leveraging Existing Laws: Importance of using existing laws effectively to address AI-related challenges. This includes clarifying how existing rules apply to AI, such as in the context of privacy, intellectual property rights, and liability.
  3. Technical Evaluations: The need for more comprehensive and high-quality technical evaluations of AI systems, defining concrete and societal metrics and ensuring that evaluations are context-specific and continuous throughout AI product life cycles.
  4. Investing in Research: Investment in basic research was deemed essential to better understand how AI systems work and how governments use them. This knowledge can help governments become better leaders in AI utilization.
  5. Opportunities of AI: The group also acknowledged the significant opportunities AI presents. AI can be a powerful tool to solve major problems, strengthen democracy, process vast amounts of information, combat climate change, and address societal bias.
  6. Citizen Inclusion: The importance of including citizens, especially young people, in AI governance was emphasized. Governments were encouraged to create AI advisory bodies that include a random sample of citizens, not just experts. This approach recognizes the value of diverse perspectives in decision-making.

Absolutely solid and comprehensive commentary on societal risks.

What should Frontier AI developers do to scale responsibly?

Session led by Rebecca Finlay, CEO of Partnership on AI.

  1. Building Government Capacity: The leaders from government emphasized the importance of AI and the need to balance innovation with the establishment of necessary guardrails. They recognized that governments might move more slowly than technology, but discussed ways to build capacity within government.
  2. Collaboration and Shared Resources: The summit was seen as an example of leaders coming together to learn from each other and develop shared resources. The focus was on sharing what works and what doesn't, as well as discussing international conversations about standards, interoperability, and research support.
  3. Regulation and Innovation: There was a view that regulation and innovation can go hand in hand. Instead of being seen as binary choices, both were considered essential: regulation can drive innovation, and innovation can inform regulation. [comment: to me this seems counter-intuitive; I can't think of a single example of regulation that has driven genuine innovation, and many, many counter-examples - wish I had been in that room]. Various regulatory proposals were discussed, including product safety laws, liability approaches, and sandboxing.
  4. Deeper Understanding of AI: It was emphasized that policymakers should move beyond viewing AI as a single entity and delve deeper into understanding its capabilities, domains, and potential risks and harms across different models and approaches.
  5. Role of Safety Institutes: Safety institutes were suggested as a means to conduct specific work to inform regulation and action related to AI.
  6. Multi-Stakeholder Communities: There was a strong belief in the importance of creating multi-stakeholder communities of action. Governments have trust, and communities can play a vital role in building trust, ensuring citizen protection, and promoting education, skill development, and digital literacy to bridge the digital divide.

What should the International Community do in relation to the risk and opportunities of AI?

Next up, in a demonstration of the diversity of organisations present, Tino Cuéllar, President of the Carnegie Endowment for International Peace. A very slick, energetic presentation of the session...

  1. Emphasis on Values: Participants recognized the importance of starting with the right values when considering international collaboration. These values included principles such as digital solidarity, respect for different countries and their approaches, inclusivity, and awareness of the evolving risks associated with AI, both in existing models and frontier models.
  2. Implementable Action: The discussion highlighted the need for not just values and principles but also implementable and realistic actions. These actions could take place at the local and national levels but should include mechanisms for verification and oversight to ensure compliance with agreed-upon principles.
  3. Concrete Steps for the Future: The group discussed concrete actions to be taken in the next 12 months. This included a shared understanding of the capabilities of frontier AI models, the potential establishment of an international panel on AI safety, coordinated research efforts, and national collaborations to balance AI benefits and risks. The importance of aligning these efforts with existing processes like the G7 and OECD was emphasized.

A motherhood-and-apple-pie summary of the importance of aligning values, implementing actionable measures, and fostering international collaboration to harness the benefits of AI while mitigating its risks on a global scale. Touted as having solved all the problems; certainly lots of good words, but I'm not sure about timely actions.

What should the Scientific Community do in relation to the risk and opportunities of AI?

Last up, UK Government Chief Scientific Adviser Angela McLean, summing up the Scientific Community group:

  1. Need for Engineered Safe Models: Necessity of developing AI models with new architectures designed to be safe from the outset, importance of learning from safety engineering disciplines and incorporating features like kill switches.
  2. Modesty and Uncertainty: In the quest for safer AI models, it is crucial to remain humble and recognize the prevalence of uncertainty in AI research and development.
  3. Handling Existing Models: Discussed the challenge of dealing with the AI models currently in use, stressing the importance of understanding the associated risks and involving multiple actors in the evaluation and decision-making processes.
  4. Expanding Conversations: Broadening the conversation within the scientific community about AI model evaluations and the values used in those evaluations. Vendors should bear the burden of proof for safety.
  5. Open Research Questions: Need to compile a list of open research questions related to AI safety, with the intention of gathering input from various stakeholders.
  6. Diverse Methodologies: Importance of drawing on various research methodologies to address AI safety comprehensively, recognizing both technical and social questions.
  7. Inclusivity: Inclusivity emerged as a recurring theme, with the team stressing that AI discussions should involve a wide range of voices, including the public. They highlighted the need for geographical and linguistic inclusivity and the importance of hearing from the public rather than just consulting them.
  8. Avoiding Concentration of Power: the need to avoid power concentrating in the hands of a few individuals or organizations in the AI domain, learning from the lessons of internet development.
  9. Public Engagement: Finding ways to genuinely listen to the public, valuing their input and diverse perspectives in addressing AI-related questions.

All good stuff; I especially liked the emphasis on inclusivity and on learning from the lessons of internet development.

Questions/Statements

There were then a bunch of soundbites from the floor. I'm not going to attribute each one, as this write-up is long enough as it is, but a few themes stood out:

  • Technology regulation is fundamentally about people, including those who create and regulate technology and those whose lives are impacted by it. We need to understand how technology affects people's lives, acknowledging that social values, risks, and harms must be considered in the development and deployment of technology. Regulation backed by law and informed by evidence and tools on the ground is essential, as are incentives for mitigation. We need to learn from domains like air travel, where effective regulation and mitigation measures have been successfully implemented.
  • Every country has, and necessarily will have, its own approach to AI regulation, but we need to establish common denominators.
  • AI encapsulates the values of its creators, the ways they view the world and the training datasets they choose. In regulating AI, we are regulating values.
  • Open source: mostly supportive statements about the role of open technologies, but definitely some voices expressing concern about this model going forward.

Summary

I was, and still am, slightly sceptical about the likelihood of useful actionable outcomes from this event. Its focus on government regulation and frontier models meant that one or two large corporates and self-promoting academics were over-represented, while innovative startup practitioners and application builders were entirely unrepresented (except for government contractors).

Actually, there were some really diverse and interesting threads in this plenary. Lots of people around this particular table understand the nuanced issues around AI and the limited ability of governments to pull all the right levers.

Slightly concerning, though, are those injecting the idea that open source is a danger in this context, rather than the healthier view that it is a transparency tool that solves, not creates, issues. Looking at the attendee list, it is pretty easy to imagine where these comments came from: quite a lot of them need a moat. There is going to be a lot of work needed to make sure the superficially attractive idea of closed, easy-to-regulate silos does not gain traction.

Overall, some really good exposition though. Certainly many positive acknowledgements of the opportunities as well as risks.