Global and United States AI regulatory landscape
The current global AI regulatory landscape is marked by diverse approaches and emerging trends. Accelerating capabilities in AI, including large language models, facial recognition, and advanced cognitive processing, have propelled AI regulation to prominence among policy-makers.
Europe has been the frontrunner in this journey towards AI regulation. The EU AI Act has made significant progress towards becoming law, receiving unanimous approval from EU member states on February 2, 2024. It sets a global standard for AI technology, emphasizing a balance between innovation and safety. The Act introduces a nuanced regulatory framework for artificial intelligence, categorizing AI systems by risk level to ensure appropriate oversight. Systems posing an “unacceptable risk,” such as those that perform cognitive manipulation, implement social scoring, or use biometric identification to categorize people based on protected traits, are banned outright, with narrow exceptions for law enforcement under stringent conditions. “High-risk” AI systems, which affect safety or fundamental rights, are subject to strict assessment and registration requirements across a wide range of applications, from critical infrastructure management and education to assistance in legal interpretation and law enforcement. Meanwhile, “general purpose and generative AI,” such as ChatGPT, must adhere to transparency directives, including disclosing that content is AI-generated, implementing measures against the production of illegal and toxic content, and publishing summaries of the copyrighted data used for training. Systems deemed “limited risk,” including applications built on image, audio, or video generation models, must comply with minimal transparency requirements that enable users to make informed decisions. This stratified approach aims to balance the innovation potential of AI with necessary safeguards against its potential harms (https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence).
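To make this stratified model concrete, the following minimal Python sketch encodes the four risk tiers described above, together with a few illustrative classifications. The tier names, example systems, and one-line obligation summaries are simplifications for illustration only, not legal definitions drawn from the Act itself:

```python
from enum import Enum

class EUAIActRiskTier(Enum):
    """Simplified view of the risk tiers described in the EU AI Act."""
    UNACCEPTABLE = "unacceptable"        # banned outright, narrow exceptions
    HIGH = "high"                        # strict assessment and registration
    GENERAL_PURPOSE = "general purpose"  # transparency directives
    LIMITED = "limited"                  # minimal transparency requirements

# Hypothetical example classifications, for illustration only
EXAMPLE_SYSTEMS = {
    "social scoring platform": EUAIActRiskTier.UNACCEPTABLE,
    "critical infrastructure controller": EUAIActRiskTier.HIGH,
    "general-purpose chatbot": EUAIActRiskTier.GENERAL_PURPOSE,
    "image generation app": EUAIActRiskTier.LIMITED,
}

def obligations(tier: EUAIActRiskTier) -> str:
    """One-line obligation summary per tier (simplified, not legal text)."""
    return {
        EUAIActRiskTier.UNACCEPTABLE: "Prohibited; narrow law-enforcement exceptions",
        EUAIActRiskTier.HIGH: "Strict assessment and registration before deployment",
        EUAIActRiskTier.GENERAL_PURPOSE: "Disclose AI-generated content; publish training-data summaries",
        EUAIActRiskTier.LIMITED: "Minimal transparency so users can make informed decisions",
    }[tier]

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} risk -> {obligations(tier)}")
```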
Conversely, India initially opted against regulating AI, focusing instead on policy and infrastructure to foster AI growth, but later considered a regulatory framework addressing algorithmic bias and copyright. The US has not moved towards comprehensive federal AI legislation, but agencies such as the National Institute of Standards and Technology (NIST), the Federal Trade Commission (FTC), and the Food and Drug Administration (FDA) have issued regulatory responses to public concerns over AI technologies.
Regulatory frameworks are developing globally to balance AI’s benefits against its risks. EY’s analysis of eight jurisdictions (Canada, China, the EU, Japan, Korea, Singapore, the UK, and the US) reflects a variety of regulatory approaches. The rules and policy initiatives examined were drawn from the OECD AI Policy Observatory.
The OECD (Organisation for Economic Co-operation and Development) is an international organization comprising 38 member countries, established to promote economic progress and world trade by offering a platform for democratic, market-economy nations to discuss policies, share experiences, and co-ordinate on global issues.
According to this research from Ernst & Young, released in September 2023, five common regulatory trends have emerged globally:
- Alignment with key AI principles: The AI regulations and guidance evaluated align with the key AI principles of respect for human rights, sustainability, transparency, and robust risk management, as established by the OECD and supported by the G20. The Group of Twenty (G20) is an international forum of 19 countries and the European Union that addresses global economic issues and represents the world’s major economies.
- Risk-based approach: These jurisdictions adopt a risk-based approach to AI regulation, meaning they customize their AI rules based on the perceived risks AI poses to fundamental values such as privacy, non-discrimination, transparency, and security.
- Sector-specific and sector-agnostic rules: Given the diverse applications of AI, certain jurisdictions are emphasizing sector-specific regulations alongside more general, sector-agnostic rules.
- Digital priority areas: Jurisdictions are also developing AI-specific rules within other digital priority areas such as cybersecurity, data privacy, and intellectual property rights, with the European Union leading the way in adopting a comprehensive strategy.
- Collaboration between the private sector and policy-makers: Numerous jurisdictions employ regulatory sandboxes that allow the private sector to collaborate with policy-makers in crafting rules that ensure safe, ethical AI while addressing the need for closer oversight of higher-risk AI innovations.