Global Tracker

Instrument
Jurisdiction
Continent/Region
Enforcement Status
Brief Summary
European Union
Europe
Binding
The EU AI Act is the world’s first comprehensive law to regulate artificial intelligence, creating a risk-based framework that governs how AI can be developed, deployed, and used. It aims to ensure safety, protect fundamental rights, and foster trustworthy innovation across the EU.
1. Four-fold risk-based categorization of AI (unacceptable risk, high risk, limited risk and minimal risk) with prohibition of AI with “unacceptable” risk.
2. The main safety obligations (risk identification and mitigation) fall on the developer of the AI system – especially in the case of high-risk AI.
3. Users – persons who deploy AI for professional purposes – have certain obligations as well, especially in the case of high-risk AI.
4. Jurisdiction extends to any developer or user whose AI enters the EU market, even if the developer or user is located outside EU territory.
5. Prohibited AI systems include: social scoring (akin to China’s Social Credit System); biometric categorisation that infers sensitive attributes (except where lawfully mandated); and subliminal, manipulative, or deceptive techniques that distort behavior and impair informed decision-making, among others.
6. High-risk AI must follow strict obligations, including risk management, high-quality training data, detailed technical documentation, record-keeping, human oversight, and cybersecurity safeguards.
7. All AI systems interacting with humans, generating synthetic content, or using biometric/emotion recognition must clearly disclose their nature. Deepfakes and AI-generated material must be labelled so that they are detectable and traceable as artificial in origin.
8. The Act introduces rules for GPAI and large foundation models, especially those with systemic risk (e.g., extremely compute-intensive training). Providers must conduct risk assessments, maintain technical documentation, and grant regulator access to safeguard against misuse.
9. The Act creates a multi-tiered governance system: the EU AI Office (for central oversight), the European AI Board (to coordinate member states), and expert/scientific panels. This ensures consistent supervision, guidance, and enforcement across the Union.
10. Violations invite heavy penalties: up to €35M or 7% of global turnover for using prohibited AI, €15M or 3% for breaching high-risk/GPAI duties, and €7.5M or 1% for supplying false information. This strong deterrence underlines the seriousness of compliance.
11. To encourage safe innovation, the Act provides regulatory sandboxes where AI can be tested under supervision. Special measures reduce compliance burdens for SMEs and start-ups, including simplified procedures and lighter fees.
12. Draft Guidance on Incident Reporting has also been introduced under Article 73 of the EU AI Act – now open for stakeholder feedback – under which providers of high-risk AI systems will be required to report serious incidents to national authorities.

As of now, there is debate over delaying the application of the Act so that businesses and other entities operating AI can adapt their operations to meet compliance requirements.
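The penalty tiers in point 10 above lend themselves to a short worked sketch. This is an illustration only, not legal advice: the tier keys and function name are hypothetical, but the rule that the applicable cap is the higher of the fixed amount and the turnover percentage follows Article 99 of the Act.

```python
# Illustrative only: EU AI Act fine caps (Art. 99) are the HIGHER of a
# fixed amount and a share of worldwide annual turnover. The tier keys
# and function name are hypothetical labels for the three tiers above.
TIERS = {
    "prohibited_ai": (35_000_000, 0.07),      # prohibited AI practices
    "high_risk_or_gpai": (15_000_000, 0.03),  # high-risk / GPAI duties
    "false_information": (7_500_000, 0.01),   # supplying false information
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Maximum possible fine for a violation in the given tier."""
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# A firm with a €2bn worldwide turnover using prohibited AI:
# 7% of €2bn (€140M) exceeds the €35M floor, so the cap is €140M.
print(max_fine("prohibited_ai", 2_000_000_000))  # → 140000000.0
```

For smaller firms the fixed amount dominates: 1% of a €100M turnover is €1M, so the cap for supplying false information stays at €7.5M.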
European Union
Europe
Non-binding
The EU released the AI Continent Action Plan with the objective of integrating AI into the EU economy and governance – effectively making the EU an “AI Continent,” specifically by encouraging the “open innovation” model of the EU. It introduces 5 focus areas:
1. Computing infrastructure: build a two-tier AI infrastructure: AI Factories (regional hubs for startups, SMEs, and researchers) and AI Gigafactories (massive, CERN-like computing centres for cutting-edge frontier AI).
2. Data Union: create a unified market for data to ensure high data quality for AI training – Data Governance Act is an established base.
3. Incorporation of AI: stimulate development of AI and incorporate into strategic sectors. The Apply AI Strategy (currently underway) and European Digital Innovation Hubs will interact and encourage such development.
4. AI education, training, and research: attract more women to AI, raise awareness of AI among wider society and public administration, and attract and retain AI talent from outside the EU.
5. Ensure smooth rollout of the AI Act across the economy, supported by the AI Act Service Desk, regulatory sandboxes, and guidance tailored to SMEs and startups.
Italy
Europe
Binding
Italy is the first European country to pass sweeping AI regulations of its own. Importantly, this regulation aims to be entirely compliant with the EU AI Act. It:
1. Establishes principles for the development of AI – specifically that development of AI must not impact the constitutional rights and safeguards afforded to all citizens, and that training and development of AI should include risk identification and mitigation strategies.

2. It provides for certain sector-specific rules:
a. Healthcare: Among other provisions, the Act states that AI can aid prevention, diagnosis, treatment, and disability inclusion but cannot replace medical decision-making; data and systems must be reliable and regularly updated.
b. Labour: AI should improve working conditions, protect worker dignity, and avoid discrimination; employers must inform workers about AI use.
c. Security and Defense: National security and defense AI applications are excluded from the law’s scope but must still respect constitutional rights.

3. A national AI strategy, updated at least every two years, will guide AI policy. The Italian Digital Agency (AgID) will handle innovation, accreditation, and conformity checks, while the National Cybersecurity Agency (ACN) will oversee inspections, sanctions, and cybersecurity aspects. Market supervision remains with the Bank of Italy, CONSOB[1], and IVASS[2] for financial sectors. A coordination committee at the Presidency of the Council ensures alignment among agencies.

4. The law also brings in novel penal provisions for misuse of AI:
a. Criminal penalties: New prison terms of 1–5 years for unlawfully distributing harmful AI-generated content, especially deepfakes. Crimes such as fraud, identity theft, and market manipulation committed with AI attract aggravated penalties of 2–7 years and multi-million-euro fines. Using AI in any crime is now a statutory aggravating circumstance, raising sentences by up to one-third.
b. Civil liability: Civil and criminal liability is tied to the level of control over the AI system, with sanctions for failing to adopt adequate safeguards. Victims benefit from an eased burden of proof in AI-related damages claims.[3]
c. Copyright: Copyright is limited to AI-assisted works with genuine human input, and unauthorized text/data mining of copyrighted content with AI is a punishable offense.

5. Minors under age 14 require parental consent to access AI services, and all AI service providers must implement mandatory age verification.
United Kingdom
Europe
Non-binding
The Playbook outlines a set of 10 principles for the safe use of AI, and goes on to introduce policies guiding such use in compliance with the UK GDPR, the Data Protection Act, and other relevant laws.
1.    The principles:
a. Know AI and its limitations – Understand what AI can and cannot do, and be aware of risks like inaccuracy or bias.
b. Use AI lawfully, ethically, and responsibly – Ensure compliance with laws, ethical norms, and responsible practices throughout the project.
c. Use AI securely – Build and deploy AI in line with cyber security standards to resist attacks and safeguard data.
d. Maintain meaningful human control – Keep humans involved at critical stages, especially in high-risk or impactful decisions.
e. Manage the full AI life cycle – Plan for adoption, monitoring, updates, and safe decommissioning of AI systems.
f. Use the right tool for the job – Apply AI only when it is the most appropriate and proportionate solution.
g. Be open and collaborative – Share knowledge, code, and practices across government and with civil society.
h. Work with commercial colleagues from the start – Involve procurement and commercial experts early to ensure responsible and effective market engagement.
i. Ensure skills and expertise – Equip teams with the technical, ethical, and strategic skills needed to use AI safely.
j. Align with organisational policies and assurance – Follow both these principles and your organisation’s governance and assurance frameworks.
 
2.    Government Departments must set up AI governance boards or integrate AI representation into existing boards. Ethics committees with external input are encouraged, alongside maintaining an inventory of AI systems. Risk management, audit trails, and quality assurance frameworks are mandatory for accountability. Departments must also ensure that procurement complies with the Digital, Data and Tech Playbook.
 
3.    Departments must apply the Algorithmic Transparency Recording Standard (ATRS) to disclose AI use in decision-making. Public communication should clearly identify when automated responses are provided. Explainability requirements mean users must be able to understand how outputs are generated.
 
4.    Policy requires mechanisms for individuals to challenge or appeal AI-driven outcomes. AI must never be the sole decision-maker in high-risk areas (health, safety, rights). Systems should allow for human intervention and escalation where harm could occur.
 
5.    AI deployment must comply with the Government Cyber Security Strategy and Secure by Design principles. Departments must manage risks like prompt injection, hallucinations, and adversarial attacks. Validation checks, filtering, and red-team testing are required safeguards.
 
6.    Civil servants are entitled to structured AI training, with at least 5 learning days annually. Policy mandates upskilling across all levels: beginners, policy staff, data professionals, digital specialists, and leaders. AI literacy must extend beyond technical teams to decision-makers.
United States of America
North America
Non-binding
1. Safe and effective systems to be ensured with adequate pre-deployment testing, risk assessments, and ongoing monitoring.
2. Algorithmic discrimination on basis of protected characteristics to be prevented or rectified – proactive equity assessments, representative data use, and regular disparity testing suggested.
3. Data collected to follow principles of data minimization, purpose limitation, and meaningful consent. This includes plain-language explainers covering how data is used and what its impacts are, notice of system use, and accessible explanations of decisions.
4. Human review and remedies to be available as a fallback in case AI systems do not function as expected or as required.

United States of America
North America
Non-binding
1.     Based on 3 pillars: (i) Innovation; (ii) AI Infrastructure; and (iii) International AI Diplomacy.
 
Pillar 1: The following is recommended:
a.      Remove red tape, promote open-source/open-weight AI[1], and safeguard free speech in AI systems.
b.      Build sandboxes for AI testing, speed up adoption in key sectors (healthcare, energy, defense), and prepare workers with AI skills, retraining, and education.
c.      Invest in AI-enabled science, next-gen manufacturing, world-class datasets, and frontier AI research (interpretability, robustness, evaluations).
d.      Expand AI use in government and military workflows, while protecting innovations and addressing risks like deepfakes.
 
Pillar 2: The following is recommended:
a.      Streamline permits for data centers, chip factories, and energy projects; modernize and expand the electric grid to meet AI demand.
b.      Restore U.S. semiconductor manufacturing and build high-security data centers for sensitive government and military AI use.
c.      Train and upskill workers in AI infrastructure jobs (electricians, technicians, engineers) through apprenticeships, technical education, and partnerships.
d.      Strengthen cybersecurity of critical infrastructure, promote secure-by-design AI, and enhance AI incident response across government and industry.
 
Pillar 3: The following is recommended:
a.      Export the American AI technology stack to allies, ensuring global adoption of U.S. standards.
b.      Resist Chinese influence in international AI governance bodies and close loopholes in semiconductor export controls.
c.      Lead in assessing national security risks from frontier AI models (cyber, bio, nuclear threats) and invest in AI-enabled biosecurity safeguards.
 


[1] Open-source AI provides full access to a model's architecture, training code, and data, enabling transparency and community-driven improvements. Open-weight AI, in contrast, shares only the model's learned parameters (weights), which are essential but do not reveal the complete training process or data.
Canada
North America
Other
The AIDA is reflected in Part III of Bill C-27 – introduced to the Canadian Parliament in 2022 and currently under review at the House of Commons. The following components are important:
1.    High-impact systems:
a.      Defined as AI systems with a high potential to cause harm or biased outputs. Operators of high-impact systems are required to perform frequent risk assessments, adopt mitigation strategies, and so on. “Harms” include not just physical, psychological, and economic harms to individuals, but also systemic harms such as discrimination against marginalized communities. Firms must proactively assess and mitigate bias based on the grounds in the Canadian Human Rights Act.
b.      Plain-language descriptions of these systems, including intended use, outputs, and mitigation measures, must be published on publicly accessible websites, and disclosure to the AI minister is mandatory in case the system causes or is likely to cause material harm.
 
2.    Transparency and data: When AI systems use anonymised data[1] for training or any other purposes, the operator (or the appropriate entity) is to disclose anonymisation measures.
 
3.    Distinct Lifecycle Obligations: The Act regulates specific activities in the AI creation lifecycle from system designing to developing, making available for use, and operating high-impact AI systems. Each activity carries distinct obligations, such as risk assessment, documentation, transparency, monitoring, and human oversight.
 
4.    Administrative Structure: A two-fold structure, with the Minister of Innovation, Science, and Industry overseeing administration and enforcement, and a new AI and Data Commissioner supporting enforcement, coordinating with regulators, and acting as a centre of expertise.
 
5.    Enforcement: Initially, enforcement will focus on education and voluntary compliance before moving toward stricter penalties. However, the Act does envision a three-fold penalty structure based on the nature of the violation:
 
a.      Administrative Monetary Penalties (AMPs), which are flexible fines for non-compliance.
b.      Regulatory Offences, which lead to prosecution when entities/operators seriously ignore obligations.
c.      Criminal Offences, which are reserved for malicious or reckless uses causing serious harm (e.g., fraud, using stolen personal data, deploying harmful AI knowingly).


[1] Data anonymization is the process of altering or removing personally identifiable information (PII) from datasets to protect the privacy of individuals, making it difficult or impossible to link the data back to a specific person.
China
Asia
Binding
The usage of AI algorithms on various platforms – whether browsers or social media – influences user behavior, which is widely regarded as undermining free consent. The relevant regulations have the following characteristics:
 
1.    Applies to all internet services in China that use algorithmic technologies such as personalized push, content ranking, filtering, or automated decision-making.
2.    Providers must not use algorithms to spread illegal or harmful content, fake news, or manipulate rankings and trending lists; algorithms should instead promote “mainstream values” and positive content.
3.    Providers are required to review and audit their algorithms, prevent addictive or manipulative designs, and ensure data security and ethical use of recommendation systems.
4.    Users must be informed about algorithmic recommendations and be given the option to turn them off, delete or adjust tags, and access non-personalized feeds. Special protections apply for minors, elderly users, workers, and consumers.
5.    Algorithms cannot be used to impose discriminatory pricing, interfere with other lawful services, or engage in monopoly and unfair competition.
 
China
Asia
Binding
Regulation of deep synthesis is the broad answer to the currently pervasive issue of deepfakes. The relevant regulations have the following characteristics:
 
1.    Applies to all internet services in China that use deep synthesis technologies, including AI-generated or edited text, audio, video, images, and virtual scenes. Covers service providers, technical supporters, and users.
2.    Prohibits use of deep synthesis to create or disseminate illegal or harmful information, false news, or content that endangers national security, damages national image, disrupts social order, or infringes lawful rights.
3.    Imposes primary responsibility on providers for information security, requiring governance systems for user registration, algorithm and ethics review, data protection, fraud prevention, and emergency response.
4.    Requires real-name authentication for users through phone numbers, ID credentials, or national identity systems, and restricts access for unauthenticated users.
5.    Regulates use of training data, mandating compliance with personal information protection rules and explicit consent for editing biometric identifiers such as faces or voices.
6.    Obligates providers to add technical identifiers (e.g., watermarks) to AI-generated content and to prominently label synthetic text, voices, faces, videos, and immersive scenes. Prohibits tampering with or removing such labels.
7.    Mandates content review of both inputs and outputs, creation of harmful content libraries, and timely action against violations, including deletion, restriction, rumor-refuting measures, and reporting to regulators.
8.    Requires clear complaint and appeal mechanisms, disclosure of processing rules, and timely responses to users.
9.    Services with public opinion or mobilization potential must be registered with the CAC and undergo security assessments. Regulators are authorised to suspend services, order rectification, impose fines, and pursue civil or criminal liability for violations.
 
China
Asia
Binding
These measures were introduced in the absence of a clear policy or standard for AI-generated content and services. Broadly, it:
1.    Applies to generative AI services offered to the public in China that generate text, images, audio, video or similar content; excludes purely internal R&D and non-public uses.
2.    Prohibits generating illegal or harmful content, requires prevention of algorithmic discrimination, respect for IP and trade secrets, protection of personal rights, and ensuring accuracy and transparency.
3.    Training data must be lawful, respect IP, and meet consent requirements for personal data; providers must ensure authenticity, accuracy, objectivity, diversity, and proper labeling standards.
4.    Providers are responsible as content producers and data processors: protect user inputs, minimize collection, avoid unlawful retention or disclosure, and provide user rights to access, correct or delete data.
5.    Generated images and videos must be clearly marked; providers must define service scope, prevent addiction among minors, and ensure service stability.
6.    Illegal content must be blocked or removed, with retraining to prevent recurrence; offending users may face restrictions or termination, and authorities must be notified.
7.    Services with public opinion or mobilization functions require security assessments and algorithm registration; regulators may inspect, demand disclosure, and impose penalties or suspend services for violations.
 
China
Asia
Binding
Referred to collectively as the “labelling rules,” these were introduced in China as an extension of AI policy requiring clear marking of AI-generated content (refer to the interim measures above). The following characteristics are seen:
1.     Two types of labels: explicit – i.e., visible/audible indicators in the content, and implicit – i.e., metadata[1] embedded within AI-generated content, containing essential details such as the service provider’s name and a content ID.
 
2.     Further, providers of online content distribution services (like social media platforms) are required to implement mechanisms to detect and reinforce AI content labeling, ensuring traceability by categorizing AI-generated content into three groups and embedding the relevant metadata:
a.      Confirmed AI-Generated Content:  If an implicit label is detected, distribution platforms should add a clear label indicating the content is AI-generated when distributing it.
b.      Possible AI-Generated Content:  If no implicit label is detected but the user reports the content as AI-generated, platforms should add a label reminding the public that the content is possibly AI-generated.
c.      Suspected AI-Generated Content:  If neither an implicit label is detected nor a user report suggests the content is AI-generated, but explicit labeling or other evidence indicates the content was generated through AI tools, platforms should label it as suspected AI-generated content.
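The three-way triage that distribution platforms must perform (points a–c above) can be sketched as a simple decision procedure. The function name, parameter names, and returned strings below are illustrative, not taken from the regulation's text.

```python
# Hypothetical sketch of the platform-side labelling triage described above.
# Inputs mirror the three signals the rules mention: an embedded implicit
# (metadata) label, a user's own declaration, and other evidence such as
# an explicit on-screen label. All names here are illustrative.
def triage_label(implicit_label_found: bool,
                 user_declared_ai: bool,
                 other_evidence_of_ai: bool) -> str:
    if implicit_label_found:
        return "confirmed AI-generated"   # add a clear AI-generated label
    if user_declared_ai:
        return "possibly AI-generated"    # remind the public it may be AI-made
    if other_evidence_of_ai:
        return "suspected AI-generated"   # label as suspected AI content
    return "no label required"

print(triage_label(True, False, False))   # → confirmed AI-generated
```

Note the ordering: a detected implicit label is authoritative, a user report is weaker, and circumstantial evidence weaker still, which is why the checks cascade rather than combine.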
 


[1] Essentially means “data about data” – contains information that helps to explain, organize, find, and manage other data sets. It includes details such as the author, creation date, file type, and size of a piece of data, making it easier for both humans and machines to understand and utilize information effectively.
Taiwan
Asia
Other
Given Taiwan’s eminence in semiconductor manufacturing,[1] it is no surprise that AI development in Taiwan is projected to be amongst the best in the world. The AI Basic Act looks to supplement this:
1.     Defines AI as autonomous systems using sensing and machine learning/algorithms to generate outputs, such as predictions, content, recommendations, or decisions that impact real or virtual environments.
 
2.     Embeds seven principles: sustainability, human autonomy, privacy and data governance, safety, transparency and explainability, fairness, and accountability, referencing OECD, EU, U.S., and Singapore frameworks.
 
3.     Directs the Ministry of Digital Affairs to establish a risk classification framework; regulators must adopt proportionate, risk-based rules, including restrictions or prohibitions for harmful applications.
 
4.     Imposes liability, remedies, and insurance mechanisms for high-risk AI deployments, while exempting academic or pre-market R&D from these obligations.
 
5.     4 areas of major concern:
a.      Innovative Collaboration and Talent Cultivation: Ensuring the resources and talent needed for AI. Includes creation of sandboxes, AI literacy in schools, state R&D in AI, etc.
b.      Risk management and application responsibility: Risks must be identified and managed before AI systems can be safely applied. Refer to point (3).
c.      Protection of rights and access to data: People's basic rights, such as privacy, cannot be compromised. Ensured through mandatory privacy by design framework, mandatory data minimization and purpose limitation as well as accuracy of datasets and extensive copyright protections.
d.      Regulatory Adaptation and Business Review: Policies and regulations must be agile to keep pace with AI development.
 


[1] Semiconductors are essential because they act as the "heart and brain" of all modern electronics, enabling the functionality of everything from smartphones and cars to medical devices and advanced AI systems.
African Union
Africa
Non-binding
The African AI Strategy is fairly unique in that the majority of its focus is on leveraging AI for human resource development. The focus is on the following sectors:
1.    Agriculture: Combined with geospatial technologies, AI enables precision farming, early-warning systems, and better climate forecasting. Pilot projects already help farmers diagnose crop diseases, predict market prices, and access financial services, and the strategy emphasizes knowledge-sharing and centres of excellence to scale these solutions
 
2.    Health: The strategy notes that the African health sector has had extensive use of AI solutions during the COVID-19 pandemic. Recommends introducing centres of excellence here as well. AI adoption in health is recommended as a tool for tackling Africa’s shortage of medical infrastructure and professionals by enabling telemedicine, early diagnosis, epidemic prediction, and personalized treatment. Tools like AI-driven diagnostics, drug discovery, and health data systems can strengthen disease surveillance and improve patient outcomes.
 
3.    Education: AI in education could improve teaching, learning, and access. It involves building AI competency frameworks for teachers and students, integrating AI into curricula, and validating educational AI tools. The strategy stresses expanding Technical and Vocational Education and Training (TVET), designating centres of excellence, and continuously reviewing AI’s long-term impacts on learning to ensure inclusive, locally relevant applications.
 
4.    Climate Change: The strategy recommends technologies similar to those used in agriculture – weather prediction, deforestation monitoring, ecological impact mapping, and early-warning systems for floods, droughts, and cyclones. Despite the urgent need,[1] uptake remains limited, so the strategy pushes for awareness, cross-border cooperation, and public–private partnerships.
 
Apart from this, the strategy also recommends creating a development-friendly environment so that private players can easily incorporate AI into their operations.
 


[1] Owing to climate change, tens of millions in Africa could be internally displaced by 2050, with estimates ranging from 70 to 88 million internal climate migrants in Sub-Saharan Africa alone, according to the Africa Climate Mobility Initiative.
Vietnam
Asia
Binding
Vietnam has become the first country in the world to enact a standalone law specifically dedicated to the digital technology industry, according to its Ministry of Science and Technology. The Act will be fully enforceable from 1 January 2026. Section 5 of the Law on Digital Technology Industry deals with Artificial Intelligence and establishes the following:
1.     It defines AI to be digital technologies that use data-driven algorithms to carry out tasks with varying degrees of autonomy and adaptability. These systems can generate outputs such as predictions, content or decisions, which can impact both physical and digital environments.
 
2.     It sets certain non-derogable principles which form the basis of AI operation – i.e., AI must be reliable, human-centric, and ethically operated. In light of this, Vietnam’s Communications Ministry is required to submit annual as well as five-year plans for the promulgation and development of AI.
 
3.     It prohibits marketing, developing or deploying certain AI activities:
a.      AI that distorts or manipulates individual behavior or impairs decision-making in any way. This includes systems which infer human emotions in workplaces and educational establishments.
b.      AI that exploits weaknesses due to social/political/economic/physical or mental conditions.
c.      AI that profiles individuals based on protected characteristics where this may cause adverse treatment in contexts unrelated to where the data was collected, or harm that is disproportionate to their actual behavior. Even collecting and storing such data without profiling is prohibited.
 
4.    Provides for risk-based classification, the metrics of which are to be established by the Ministry of Information and Communications.
 
5.    Mandatory labelling of artificially created or manipulated AI products in a machine-readable and detectable format.
 
Vietnam
Asia
Non-binding
Vietnam’s AI Strategy sets out ambitious goals to be achieved by 2030 – such as rising into the top 50 nations globally in the field of AI, creating at least one top-tier AI training/research institution in ASEAN, and universalizing AI skills among workers. To do so, it adopts a sectoral approach, with the relevant ministry tasked with policy enforcement:
1.     Vietnam will invest in AI for military modernization, including smart weapons, tactical planning systems, and automated defense responses. National centers for big data and high-performance computing will be set up by the Ministry of Defense and Public Security. AI will also enhance resilience in cyber, biological, and chemical warfare, alongside disaster prevention and rapid-response systems.
 
2.     The strategy requires AI and data science to be embedded in curricula from secondary schools to universities, supported by STEAM education for youth. Universities should expand graduate and postgraduate AI training, while virtual teachers, predictive career tools, and adaptive learning technologies will be piloted.
 
3.     AI adoption in factories recommended to improve automation and manufacturing efficiency. In e-commerce and trade, AI will forecast demand, optimize supply chains, automate negotiation and pricing, and personalize digital marketing.
 
4.     In the field of healthcare, AI will enhance telemedicine, assist doctors in remote diagnostics, and provide personalized treatments. Strategy calls for building open health datasets to train AI models, alongside fostering AI-driven drug research and medical device innovation.
 
5.     Further, AI will be used for environmental monitoring, climate change response, and pollution control. Open environmental datasets will support real-time tracking of land, water, and air quality.
 
The Strategy recommends incorporation of AI in other sectors as well, such as Banking & Finance, Tourism, Labor, etc.
 
South Korea
Asia
Binding
The Basic Act’s general direction is to promote the growth of Korea’s AI industry and competitiveness, while embedding ethics, safety, and human rights protections into every stage of AI development and deployment. The Act was enacted in January 2025 and will be enforceable from January 2026. It sets out the following:
1.  The law mandates a national AI Basic Plan, updated every three years, to guide research, regulation, and investment. A National AI Committee, chaired by the President, oversees strategic decisions, regulatory reform, and infrastructure development. This ensures coordinated national policy, future-focused planning, and adaptability to rapid AI evolution.
 
2.  Dedicated bodies like an AI Policy Center and an AI Safety Research Institute will lead policy development, monitor social impacts, and research safety standards. These institutions create a backbone for AI governance, ensuring that safety, ethics, and societal well-being are embedded in both policy and practice.
 
3.  The government will support R&D, commercialization, standardization, and learning data infrastructure, alongside initiatives for startups, SMEs, and convergence with other industries. Policies also support the creation of AI clusters, demonstration facilities, and data centres to strengthen domestic capabilities and global competitiveness.
 
4.  The Act formalizes AI ethics principles and encourages the creation of private ethics committees. It mandates transparency, risk management, and safety obligations for AI developers, especially for high-impact systems, and introduces impact assessments to safeguard fundamental rights. In fact, operators of high-impact AI must implement stringent measures: risk assessments, explainability protocols, user safeguards, and documentation. They must also notify users of AI-generated content and undergo verification and certification. This risk-based oversight ensures accountability where societal stakes are highest.
 
5.  The law applies extraterritorially to foreign AI services affecting Korean users, requiring non-resident companies to appoint a domestic representative. It empowers authorities to conduct investigations, impose fines, and order corrective actions.
 
6.  The Act also introduces detailed penal/administrative measures:
a.      Members of the National AI Committee or entrusted officials who leak confidential information face up to 3 years’ imprisonment or fines of up to 30 million won.
b.      AI operators may be fined up to 30 million won for (a) failing to notify users that a service/product is AI-based, (b) failing to appoint a domestic representative, or (c) ignoring corrective or cease orders.
c.      The Minister of Science and ICT can order AI business operators to cease, rectify, or remedy violations after an investigation.
d.      Authorities may require data submissions or enter business premises to check compliance with obligations on transparency, safety, and reliability.
 
Japan
Asia
Binding
The enactment of this act made Japan the second major economy in the Asia-Pacific (APAC) region to enact comprehensive AI legislation. Most provisions of the Act (except Chapters 3 and 4, and Articles 3 and 4 of its Supplementary Provisions) took effect on June 4, 2025. The following has been set out by the Act:
1.    The Act has been enacted with the objective of making Japan the most AI-friendly country by promoting AI research, development, and use as a driver of socio-economic growth. This is not a detailed policy but a fundamental law, laying down principles and policy direction rather than detailed compliance obligations.
 
2.    It sets out the following guiding principles:
a.      Alignment with national science and digital strategies.
b.      Promotion of AI as foundational to economy, society, and security.
c.      Comprehensive advancement from research to application.
d.      Transparency to protect rights and prevent misuse.
e.      International leadership in shaping global AI norms.
 
3.    It provides for certain “basic measures” as well as the relevant stakeholders responsible for enforcement:
a.      Promote R&D from basic to applied stages.
b.      Develop and share infrastructure: data sets, computing, equipment.
c.      Create guidelines consistent with international norms.
d.      Secure and train human resources across disciplines.
e.      Promote public education and awareness of AI.
f.       Monitor trends, analyze rights infringements, and provide policy guidance.
g.      Foster international cooperation and active norm-setting.
 
4.    The Act establishes the Artificial Intelligence Strategy Headquarters (AISH) under the Cabinet to promote AI research, development, and utilization in a systematic manner. The AISH is headed by the Prime Minister as Chief, assisted by the Chief Cabinet Secretary and the Minister of State for Artificial Intelligence Strategy as Deputy Directors, with all other Ministers of State serving as members. It is tasked with drafting and implementing a “Basic Plan for AI” – covering policy goals and measures in accordance with the Basic Principles and taking into account the basic measures set out above – and with coordinating important national measures and ensuring overall policy integration.