- Introduction
On 14 January 2026, Taiwan promulgated the Artificial Intelligence Basic Act, a 20-article statute that establishes seven governance principles, creates a multi-agency coordination architecture, and sets a two-year window for sector-specific regulators to develop implementing rules. This is a significant legislative milestone for a jurisdiction long celebrated for civic technology innovation yet still navigating the construction of its digital legal infrastructure. But the Act’s real interest for European observers lies not in what it shares with the EU AI Act — the principles largely align with the OECD framework that also underpins the European approach — but in what it deliberately omits: penalties, individual rights, mandatory compliance procedures, and an independent oversight body. This is not a case of legislative oversight. It is a calculated design choice by a democratic polity, and one that presents a different logic of AI governance worth examining on its own terms. For a detailed account of the Act’s provisions and legislative history, see Chen (2026).
- Design Choices: Innovation First, Protection Deferred
The Act’s architecture reveals a structural asymmetry between its promotional and protective provisions. On the promotional side, the instruments are either immediately operative or can be readily deployed through existing institutional channels: Article 10 provides subsidies, tax incentives, and preferential financial measures; Article 11 establishes a statutory hierarchy under which, where AI-related regulatory interpretations conflict with existing laws, fostering innovation takes priority, provided the Act’s basic principles are satisfied; and Article 17 exempts R&D-phase activities from the high-risk liability framework. Regulatory sandboxes are authorised. An annual performance reporting obligation ensures accountability for promotional spending.
The protective side is a different story. Article 4’s seven principles — sustainability, human autonomy, privacy, security, transparency, fairness, accountability — articulate aspirations without establishing enforcement mechanisms. Article 5 requires labels and warnings for high-risk AI applications, but neither Article 5 nor any other provision defines what “high-risk” means; that task is delegated to the Ministry of Digital Affairs (MODA) under Article 16, and the risk classification framework does not yet exist. Article 17 directs the government to clarify liability attribution for high-risk AI, but provides no substantive content. No penalty provisions appear anywhere in the statute. No individual rights are conferred — though notably, Article 15 devotes three paragraphs to labour protections, requiring the government to safeguard workers’ rights, bridge AI-driven skills gaps, and provide employment counselling to displaced workers. This is unusual in comparative AI legislation, but the obligations are directed at the state, not at employers, and do not create individually enforceable entitlements.
The Judicial Reform Foundation characterised the Act as “a policy-declaration-filled industry development white paper rather than a framework law,” observing that while Taiwan’s Education Basic Act grants four rights and the Indigenous Peoples Basic Act grants two, the AI Basic Act grants citizens none. What remains is a statute whose promotional instruments can draw on existing policy mechanisms, while its protective apparatus depends entirely on institutional infrastructure — a risk classification framework, liability attribution rules, remedy mechanisms — that does not yet exist and may or may not materialise within the two-year deadline set by Article 18.
- Why Taiwan Chose Soft Law: The Political Economy of Restraint
From the outside, Taiwan’s choice may appear puzzling. Here is a jurisdiction with a vibrant constitutional court, an active civil society, and a well-documented history of rights-based advocacy — yet its legislature produced an AI governance framework that the Taiwan Association for Human Rights called “a castle in the air without stairs.” But the puzzle dissolves once one recognises that Taiwan has never had a high-density digital regulatory regime to begin with. Understanding the Act’s design requires attention to three contextual factors that are rarely visible in comparative law surveys.
The first is the broader landscape of digital regulation. Taiwan has maintained personal data protection legislation since 1995, but the framework has long faced criticism for insufficient safeguards. The legislature amended the Act in late 2025 to establish an independent Personal Data Protection Commission; however, the Commission’s enabling legislation remains pending and the body has not yet been formally constituted. Taiwan has also not obtained an adequacy decision under the EU’s GDPR. When the National Communications Commission attempted in 2022 to introduce a Digital Intermediary Services Act closely modelled on the EU’s Digital Services Act, the proposal was met with strong public opposition on free speech grounds and was ultimately shelved. Broadly speaking, digital legislation in Taiwan has consistently involved significant societal contestation, and the resulting regulatory density remains low by comparison with the EU. The AI Basic Act’s design reflects, rather than departs from, this pattern.
The second factor is the developmental state tradition. As in Japan and South Korea, Taiwan’s regulatory culture in the technology sector has historically been shaped by close government-industry coordination oriented toward industrial upgrading and export competitiveness. In this tradition, the state’s primary role is to facilitate innovation rather than to constrain it through rights-based regulation. The AI Basic Act’s innovation-first statutory hierarchy (Article 11), its generous R&D exemptions (Article 17), and its reliance on industry self-regulatory guidelines (Article 16) are best understood not as regulatory failures but as expressions of a governance logic in which market facilitation takes precedence over ex ante risk regulation. The promotional provisions are not incidental to the Act; they are its centre of gravity.
The third factor is geopolitical. Taiwan’s semiconductor dominance gives it a strategic interest in positioning itself as an AI-friendly jurisdiction. The Lowy Institute has framed Taiwan’s AI governance as extending the “silicon shield” concept: if Taiwan becomes indispensable not only for chips but also for AI infrastructure and governance, the costs of coercion rise. President Lai’s “AI Island” vision anchors the Act’s innovation-first orientation in this broader calculus. The promotional provisions are not merely industrial policy; they serve a national security function.
- Implications for Digital Constitutionalism
Taiwan’s AI Basic Act engages with many of the same normative concerns that animate the European digital constitutionalism discourse — fundamental rights, accountability, transparency — but arrives at a markedly different regulatory response. As the preceding analysis suggests, this divergence reflects not a normative rejection of rights-based governance but a specific configuration of institutional conditions: lower existing regulatory density in the digital sphere than in the EU, a developmental state tradition that privileges facilitation, and a geopolitical positioning that reinforces the innovation-first orientation. While the EU AI Act remains the most fully articulated expression of the rights-based approach, Taiwan presents a different logic operating under different constraints.
This does not mean the choice is unproblematic. Civil society critics are right that the Act confers no justiciable rights and that its protective provisions lack teeth. The next two years will be decisive. Whether sector-specific regulators produce meaningful implementing rules by the January 2028 deadline will determine whether Taiwan’s principles-based wager generates adaptive governance or merely defers difficult choices. It also remains to be seen how Taiwan’s courts will respond: the Constitutional Court has historically played an active role in rights adjudication, and the Act’s silence on individual rights does not foreclose judicial development of AI-related constitutional protections through existing fundamental rights doctrines. The judiciary may yet supply the normative content that the legislature chose not to.
As argued in this contribution, Taiwan’s innovation-first approach can be situated within the developmental state tradition, a framing that finds resonance across East Asian democracies where the state has historically played a facilitative role in technology governance. Yet this resonance should not obscure the differences. South Korea’s AI Basic Act (December 2024) comes closest to the European regulatory approach while retaining a pronounced promotional orientation, combining specific criteria for high-impact AI and administrative penalties with extensive state support for AI industry development. Japan’s AI Promotion Act (May 2025) is, as its title indicates, fundamentally a promotional statute: it contains no penalties and relies on voluntary cooperation and reputational mechanisms, channelling the state’s role almost entirely toward facilitation. Taiwan’s AI Basic Act occupies its own position — principally declaratory in character, it sets out governance principles and institutional architecture while deferring substantive regulation to future legislation. What these jurisdictions share is that all three are navigating AI governance within constitutional democratic frameworks. This shared foundation makes the East Asian experience directly relevant to the digital constitutionalism discourse, which has drawn predominantly on European regulatory developments. Engaging with these varied democratic trajectories would enrich our understanding of how constitutional commitments translate into regulatory design under different institutional conditions.
Taiwan’s story is still unfolding. Despite the considerable challenges that remain, its distinct developmental logic offers an invitation to observe how a different democratic experience shapes the governance of a transformative technology.