Balancing Risk and Innovation: AI Governance Strategies in South Korea, Japan, and Taiwan
As AI becomes a defining force in global innovation and economic competitiveness, governments are establishing regulatory frameworks to oversee its use. Three of East Asia's leading digital economies, South Korea, Japan, and Taiwan, are emerging as early movers in the development of AI laws, all aiming for innovation-friendly regimes. Each jurisdiction has taken a distinct approach: South Korea's AI Basic Act introduces an expansive risk-based regulatory regime; Japan's AI Promotion Act favors a more permissive, innovation-driven model; and Taiwan's draft AI Basic Law proposes a principles-based framework that may develop into a more risk-based approach. Together, these efforts offer a case study in the diversity of AI governance strategies and their effects on digital trade.
South Korea passed its AI Basic Act in January 2025. The Act, which introduces tiered obligations based on risk levels and applies to both AI developers and deployers, is among the most ambitious regulatory efforts outside of the EU. Under the Act, providers of "high-impact" AI services must notify users in advance and submit risk assessments and explainability documentation to government authorities before deployment. Because the Act was developed quickly and with limited stakeholder engagement, many operational details hinge on forthcoming implementing regulations. These rules could establish additional obligations for AI systems that surpass certain computing-power thresholds and for AI operators that meet certain definitions of business scale.
Japan followed with the passage of its AI Promotion Act in May 2025. In contrast to South Korea's approach, Japan's is incentive-driven, aiming to stimulate innovation through a light-touch regulatory framework. Rather than imposing sweeping new obligations, the Act defers to existing sector-specific regulations. It addresses concerns such as criminal misuse, data privacy violations, and copyright infringement by promoting transparency measures, but stops short of mandating hard compliance requirements.
Taiwan, for its part, is in the process of finalizing its own AI Basic Law. The draft legislation sets out principles centered on data governance, transparency, explainability, fairness, and non-discrimination. While the current version imposes only limited obligations, such as the labeling or disclosure of AI-generated content, it addresses high-risk AI through standards, verification mechanisms, testing frameworks, and liability guidelines. These efforts are to be guided by a risk-classification model developed in line with other international approaches, including the EU's AI Act.
With South Korea, Japan, and Taiwan each charting distinct regulatory paths, East Asia’s leading digital economies are poised to serve as a real-world testbed for how different approaches to AI governance affect innovation, investment, digital trade, and consumer welfare.
All three jurisdictions blend hard regulatory obligations with soft-law transparency- and incentive-based measures. Each advances a risk-based approach to AI governance, but defines and operationalizes risk through a different lens. Japan targets specific harms (criminal misuse, privacy violations, and copyright infringement) through existing legal frameworks. Taiwan adopts a broader model, aligned with the EU's, identifying high-risk sectors where AI could negatively impact public health, safety, fundamental rights, or the environment. South Korea takes the most expansive approach, combining a high-impact definition tied to potential harm to life, safety, or rights with a secondary, structural proxy based on the compute thresholds used to train AI systems.

While it is still too early to measure the full impact of these frameworks on innovation, some early insights are emerging. First, tiered risk models remain the most viable path forward, given the increasingly ubiquitous application of AI across diverse sectors and use cases. Second, aligning with existing sectoral rules, where possible, helps reduce compliance burdens and fosters innovation. Third, proxy metrics such as compute thresholds may not reliably capture actual risk. As the DeepSeek breakthrough illustrates, efficiency gains in model training have weakened the link between compute usage and system capability, underscoring the need for more nuanced risk indicators.
These approaches also carry important international implications, as AI regulations increasingly intersect with cross-border service provision. Avoiding regulatory fragmentation is key to ensuring the continued flow of AI-enabled services and digital trade writ large. Taiwan's draft legislation references alignment with international frameworks, including the EU's, as part of its risk management approach, while leaving the government leeway to adopt its own definitions of high-risk AI. Japan's law highlights the role of international norms in shaping responsible AI use, with a nod towards the Hiroshima AI Process International Code of Conduct, a voluntary framework developed under Japan's G7 Presidency in 2023 to guide the development and use of AI. South Korea's law similarly emphasizes the importance of international cooperation and of supporting access to overseas markets. In South Korea's case, however, these soft commitments to cross-border services sit uneasily alongside some of the law's stricter regulatory requirements. Specifically, the use of compute-based thresholds, local agent designation requirements, and the possibility of on-site enforcement raids have raised concerns about disproportionate impacts on foreign AI developers, particularly those based in the U.S. In response to such criticisms, lawmakers from Korea's ruling Democratic Party proposed legislation in April 2025 to delay certain provisions of the Act to allow for further assessment of their impact.

This case study underscores the importance of addressing cross-border implications in the design of AI governance frameworks, particularly amid the race to attract critical foreign investment. While alignment with international norms is frequently emphasized, equal attention must be paid to the practical impact of domestic requirements on foreign service providers, lest they create unintended barriers to investment and the cross-border delivery of services.
The varied approaches taken by South Korea, Japan, and Taiwan underscore the challenge of crafting AI governance frameworks that manage risk without disrupting digital trade. While all three emphasize risk management and international alignment, South Korea's broad and accelerated approach highlights the potential downsides of moving too quickly without fully developed implementing rules. Its sweeping obligations and structural thresholds risk placing undue burdens on foreign providers and may inadvertently disrupt digital trade. In contrast, Japan's more measured, incentive-based strategy demonstrates the advantages of building on existing legal frameworks to support innovation while addressing key risks. As governments around the world, including Taiwan, advance down their own AI regulatory paths, these early cases offer important lessons on the value of balanced approaches that uphold core principles for enabling trade in AI-enabled technologies.