Hey 👋
I’m Oliver Patel, author and creator of Enterprise AI Governance.
This free newsletter delivers practical, actionable and timely insights for AI governance professionals.
My goal is simple: to empower you to understand, implement and master AI governance.
For more frequent updates, follow me on LinkedIn.
As many of us wind down for the holiday season, it is a fitting opportunity to reflect on a bumper year for AI governance.
This week’s newsletter is a comprehensive review of the year, including:
✅ 8 key themes which defined the year
✅ Cheat Sheet: AI Governance in 2024
✅ The ultimate AI governance timeline: 90+ key milestones in 2024
⚠️ For a better experience, please view this post and the full timeline directly on the post’s webpage.
I suspect many of us are feeling a bit like this right now…
This is also how I felt when I launched Enterprise AI Governance last week.
To be honest, I was overwhelmed and humbled by the positive response. Over 550 people have subscribed in the first few days. Thank you to all!
I guess this means that I now actually have to write the newsletter 😅
The next edition, which will feature my predictions and tips for 2025, will be sent on Monday 6th January.
In the meantime, I will be taking my mum to Dubai and Abu Dhabi for the holidays. If anyone is there and up for a coffee meeting, drop me a message ☕🇦🇪
Happy Holidays to you and your loved ones!
8 key themes which defined 2024
AI safety takes centre stage. AI Safety Institutes have been established in several countries, including the U.S., UK, Canada and Japan. November saw the launch of the International Network of AI Safety Institutes. This represents a shift away from the focus on high-level values and principles, which dominated the past few years, towards more practical and operational aspects of AI governance, such as model evaluation, red teaming, benchmarks and anticipating frontier capabilities.
AI governance goes global. Although the EU, U.S. and China are the biggest players, AI governance has gone truly global in 2024. Every part of the world has introduced new policies, laws and standards. My timeline below features examples from the Association of Southeast Asian Nations (ASEAN), the African Union, Saudi Arabia, Brazil and Australia, to name but a few.
Generative AI policies and guidance. The risks of generative AI, such as copyright infringement, inaccurate content and harmful deepfakes, have dominated the conversation. In response, many international organisations, countries, businesses and public sector bodies have published generative AI policies and guidelines. Educational and media institutions have been particularly active.
Foundation models under the microscope. In 2024, AI adoption and deployment often means leveraging foundation models via APIs. In response, safety and governance work has increasingly focused on the risks of foundation models and the appropriate guardrails and mitigations. For example, the U.S. and UK AI Safety Institutes partnered with OpenAI and Anthropic to conduct detailed evaluations of their models. AI labs have also published increasingly detailed overviews of their model safety and risk mitigation efforts.
EU AI Act compliance looms. Once the EU AI Act entered into force in August, attention immediately turned to implementation. With the initial compliance deadlines looming, many organisations signed up to the EU AI Pact, to signal their commitment to responsible AI. In parallel, several initiatives are underway to clarify and augment aspects of the AI Act, including the drafting of the General Purpose AI Code of Practice and the consultation on prohibited AI.
China’s ascendance in technical standards development. China’s stated goal is to be an AI standard-setter, not a standard-taker, and it is pursuing this goal in a committed way. The National Information Security Standardisation Technical Committee (TC260) has been extremely active, announcing plans for 50+ AI standards by 2026 and releasing several this year, including on generative AI safety.
Deepfakes on trial. There has been justified public and civil society outcry regarding the use of AI to create and distribute harmful deepfakes, such as non-consensual pornography and CSAM. In response to these horrendous practices, several jurisdictions have proposed and passed bespoke laws, criminalising such acts. There have even been prosecutions in some countries, like the UK and South Korea.
The professionalisation of AI governance. It has been a breakout year for the AI governance profession. What was an esoteric field of research only a few years ago has now become a booming industry. According to the IAPP, over 60% of large corporates surveyed have established or are building a dedicated AI governance function. Also, more than 10,000 people have signed up for the IAPP’s AIGP training.
Cheat Sheet of the Week: AI Governance in 2024
The ultimate AI governance timeline: key milestones in 2024
January
🇬🇧 UK Government publishes Generative AI framework for HM Government.
🌐 World Health Organisation (WHO) releases AI ethics and governance guidance for large multi-modal models.
🇺🇸 NIST publishes Adversarial Machine Learning taxonomy.
🇸🇦 Saudi Data and AI Authority publishes Generative AI Guidelines for the Public and for Government entities.
February
🌐 ASEAN publishes Guide on AI Governance and Ethics.
🇬🇧 UK Government publishes formal response to AI Regulation White Paper consultation.
🇯🇵 Japan establishes AI Safety Institute.
🇨🇳 China’s TC260 releases Technical Document on Basic Safety Requirements for Generative Artificial Intelligence Services, to support regulatory compliance and standardisation.
March
🇮🇳 Government of India issues Generative AI Advisory for firms.
🌐 OECD publishes explanatory guidance on the updated definition of an AI system.
🇪🇺 European Parliament approves EU AI Act.
🌐 UN General Assembly adopts AI resolution.
🇺🇸 Utah enacts SB 149, regulating the use of AI in various sectors.
🇺🇸 Office of Management and Budget (OMB) issues government-wide policy on AI, requiring federal agencies to take action to manage AI risks.
April
🇺🇸 9 U.S. federal departments and agencies publish Joint Statement on enforcement of civil rights, fair competition, consumer protection and equal opportunity laws in automated systems.
🇪🇺 🇺🇸 EU-US Trade and Technology Council agrees Joint Statement on AI.
May
🌐 OECD updates the AI Principles.
🌐 World leaders sign the Seoul Declaration for Safe, Innovative and Inclusive AI, at the AI Seoul Summit 2024.
🌐 The International Scientific Report on the Safety of Advanced AI is published, as agreed at the AI Safety Summit in 2023.
🇺🇸 Colorado enacts SB 205, which regulates the use of high-risk AI.
🇪🇺 Council approves the EU AI Act.
🇺🇸 NIST launches the Assessing Risks and Impacts of AI (ARIA) programme.
🇸🇬 Singapore AI Verify Foundation publishes the Model AI Governance Framework for Generative AI.
🇪🇺 European Commission establishes the AI Office.
June
🇪🇺 European Data Protection Supervisor (EDPS) publishes guidelines on generative AI.
🇫🇷 CNIL, France’s data protection authority, publishes recommendations on AI development.
🇷🇺 Russia passes Bill No. 512628-8, which strengthens the AI liability regime.
July
🇨🇳 China’s Ministry of Industry and Information Technology (MIIT) releases guidelines and plans for building a comprehensive system of 50+ AI standards by 2026.
🌐 NATO releases updated AI strategy.
🇪🇺 EU AI Act published in the Official Journal of the EU.
🇬🇧 UK introduces AI legislation plans in the King’s Speech.
🌍 African Union publishes the Continental AI Strategy.
🇳🇿 New Zealand Government publishes high level approach to AI regulation.
🇦🇪 UAE publishes Charter for the Development and Use of AI.
🌐 OECD launches public consultation on AI risk thresholds.
🇰🇷 South Korea’s Personal Information Protection Commission (PIPC) publishes standard on AI and public data processing.
🇺🇸 NIST publishes Generative AI Profile and Secure Software Development Practices for Generative AI, to support implementation of the NIST AI Risk Management Framework.
August
🇪🇺 EU AI Act enters into force.
🇺🇸 Illinois enacts HB 3773, which regulates the use of AI in employment.
🌐 17 countries in Latin America and the Caribbean sign the Cartagena Declaration, committing to responsible AI.
🇦🇺 Australia passes the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, which criminalises creating non-consensual sexually explicit deepfakes.
September
🇦🇪 UAE publishes International Stance on AI policy.
🇦🇺 Australian Government publishes Voluntary AI Safety Standard, outlining 10 AI guardrails.
🇦🇺 Australian Government launches consultation on proposals for mandatory guardrails for AI in high-risk settings.
🌐 The Council of Europe Framework Convention on Artificial Intelligence and human rights, democracy and the rule of law opens for signature.
🇪🇺 First meeting of the EU AI Board held, following EU AI Act entry into force.
🇬🇧 🇺🇸 🇨🇦 UK, U.S. and Canadian nuclear regulators outline principles for AI deployment in the nuclear sector.
🇸🇦 Saudi Data and AI Authority publishes Deepfake Guidelines.
🇨🇳 China publishes the AI Safety Governance Framework.
🌐 G20 publishes Digital Economy Ministerial Declaration on AI for inclusive sustainable development and inequality reduction.
🇲🇾 Government of Malaysia publishes National Guidelines on AI Governance and Ethics.
🇯🇵 Japan AI Safety Institute publishes two guidance documents: i) Red Teaming Methodology on AI Safety and ii) Evaluation Perspectives on AI Safety.
🇪🇺 100+ companies sign the EU AI Pact pledge.
🇰🇷 South Korea passes law to criminalise sexually explicit deepfakes.
🇳🇱 Dutch Data Protection Authority launches public consultation on prohibited AI systems (manipulative and exploitative AI).
🇺🇸 California Governor approves a suite of new state AI laws.
October
🇺🇸 OMB releases memorandum on responsible procurement of AI in government.
🌐 Leaders adopt the Montevideo Declaration on AI, following the Ministerial Summit on the Ethics of AI in Latin America and the Caribbean.
🇸🇬 Cyber Security Agency of Singapore publishes Guidelines on Securing AI Systems.
🇪🇺 European Commission seeks feedback on establishment of Scientific Panel of Independent Experts.
🇸🇬 Singapore passes the Elections (Integrity of Online Advertising) (Amendment) Bill, which restricts the use of deepfakes in elections.
🇺🇸 White House publishes National Security Memorandum on AI and associated Framework for AI Governance and Risk Management in National Security, applicable to federal agencies.
🌐 17 national data protection authorities sign joint statement on data scraping and the protection of privacy, focusing on AI development.
🇺🇸 U.S. formally implements Executive Order 14105, to restrict investment in certain advanced Chinese AI systems.
🇹🇭 The Government of Thailand publishes Generative AI Governance Guidelines, to support businesses.
🇳🇱 Dutch Data Protection Authority launches public consultation on prohibited AI systems (emotion recognition).
November
🇬🇧 UK Government releases AI Management Essentials tool for consultation and use.
🇨🇦 Canada establishes AI Safety Institute.
🇪🇺 European Commission launches EU AI Act consultation, covering both prohibited AI and AI system definition.
🇪🇺 European Parliament establishes Working Group on EU AI Act implementation.
🇪🇺 First draft of the General Purpose AI Code of Practice published by the European Commission.
🌐 G20 Rio de Janeiro Leaders’ Declaration published, following the Brazil 2024 meeting. G20 leaders reaffirm commitment to trustworthy AI.
🇺🇸 🇬🇧 U.S. and UK AI Safety Institutes publish safety evaluation findings for Anthropic’s Claude 3.5 Sonnet model.
🌐 Launch of the International Network of AI Safety Institutes and publication of Mission Statement.
🇰🇷 South Korea establishes AI Safety Institute.
December
🇸🇬 Monetary Authority of Singapore publishes AI Model Risk Management review and best practices, focusing on financial services firms.
🇪🇺 The updated EU Product Liability Directive enters into force, expanding the definition of a product to include AI (and software more broadly).
🇧🇷 Brazil’s Senate votes to approve the ‘AI Bill’ (Bill 2338/2023). It must now be approved by the Chamber of Deputies.
🇨🇳 China’s MIIT establishes AI Standardisation Technology Committee, with over 40 members, including leading tech companies.
🇦🇺 Australian Government publishes v2 of Voluntary AI Safety Standard for consultation.
🇺🇸 Bipartisan House Task Force report on AI published.
🇬🇧 UK launches AI and copyright consultation, with new text and data mining licensing proposals.
🇪🇺 European Data Protection Board (EDPB) publishes opinion on AI models and personal data.
🇪🇺 Second draft of the General Purpose AI Code of Practice published by the European Commission.
🇺🇸 🇬🇧 U.S. and UK AI Safety Institutes publish safety evaluation findings for OpenAI's o1 model.
🇳🇱 Dutch Data Protection Authority launches public consultation on prohibited AI systems (social scoring).