Securing Agency and Managing Trade-offs in the Age of AI: Strategic Choices for Asia-Pacific Middle Powers

2026-01-27
Siwei Huang on the policy challenges facing the region as the US, EU, and China forge different AI governance frameworks.

Author

Siwei Huang

Center for Asia-Pacific Resilience and Innovation

Biography

Siwei Huang is director of engagement at the Center for Asia-Pacific Resilience and Innovation (CAPRI), where he leads engagement initiatives with diverse stakeholders worldwide based on CAPRI’s interdisciplinary policy research. He also publishes interdisciplinary research and commentary in English and Chinese on energy policy, trade relationships, and supply chain management in East Asia; internationalization and health system policy in Taiwan; Taiwanese electoral politics; and the evolution of identity in Taiwan and Hong Kong. Previously, Siwei was a researcher at the Education University of Hong Kong. He holds a master’s in global political economy from the Chinese University of Hong Kong and a bachelor’s in applied economics from Hong Kong Baptist University.

Artificial intelligence (AI) leadership, in terms of both R&D advancement and setting the rules of AI adoption and utilization, is increasingly shaped not only by technological breakthroughs but by control over platforms, foundation models, and the governance frameworks that determine how AI is developed, deployed, and regulated. As AI systems diffuse rapidly across economies and societies, governance choices are becoming inseparable from questions of competitiveness, security, and legitimacy.

Globally, the AI development and governance landscape is consolidating around three distinct approaches: a market-led, firm-driven model centered on proprietary innovation in the US; a regulation-first strategy in the EU that seeks to exercise normative leadership through standard-setting and accountability; and a state-coordinated approach in China that combines rapid innovation, open-source experimentation, and centralized oversight. The middle powers of the Asia Pacific—notably Japan, South Korea, Taiwan, the ASEAN economies, and Australia—operate under markedly different constraints: They lack the hyperscale digital platforms of US technology firms, the supranational regulatory authority of the EU, and the strong state capacity of China. Yet these middle powers are deeply embedded in global AI supply chains and increasingly exposed to the economic, security, and societal consequences of AI deployment. For these economies, AI is not only a technological issue but also a test of institutional capacity.

The challenges posed by AI are forcing policymakers worldwide to revisit longstanding questions about how societies drive innovation while managing its consequences—but under conditions of unprecedented technological speed. How can Asia-Pacific middle powers carve out a distinct path that prioritizes application-driven innovation while creating flexible yet credible governance models capable of managing the trade-offs among growth, risk, and dependence? Diverging pathways toward AI resilience in the Asia Pacific can be understood along three interrelated dimensions: strategic goal setting, innovation ecosystems, and governance design. Rather than prescribing policy solutions to contentious debates, this analysis poses key questions that public- and private-sector stakeholders must confront as the global AI landscape evolves.

AI governance archetypes: the US, EU, and China

The emerging global AI order is not converging on a single governance framework. Instead, three archetypal approaches to aligning AI innovation strategies with broader political, economic, and strategic priorities have emerged. These models define the external environment within which Asia-Pacific middle powers must make their own choices.

The US approach is anchored in market leadership as well as firm-level innovation and control over proprietary technologies. AI development is driven primarily by large technology companies—such as OpenAI, Google, and Microsoft—with the computational scale, data access, and capital to build and deploy frontier models. Under the Trump administration, this orientation has been reinforced by renewed emphasis on deregulation, industrial competitiveness, and strategic rivalry with China.[i] Frontier AI models and infrastructure dominance—particularly in producing cutting-edge AI chips—are increasingly framed as strategic assets for sustaining US technological leadership and geopolitical influence.[ii]

By contrast, the EU has positioned itself as a regulatory leader. The AI Act and related initiatives emphasize risk classification, transparency, and accountability across the AI lifecycle.[iii] Rather than competing directly in developing scalable platforms or foundation models, the EU seeks to shape the global AI ecosystem by establishing statutory rules and standards that others outside the EU may adopt—or be required to comply with to access European markets. While the EU’s regulatory capacity and market size provide leverage in standard-setting, its AI innovation ecosystem remains comparatively fragmented and undercapitalized, casting doubt on the capacity of European firms—particularly start-ups and SMEs—to scale up and compete in a tightly regulated environment.[iv]

China’s approach combines rapid AI innovation with strong state oversight. Facing US-led export controls on advanced chips and driven by a strategic push for technological self-sufficiency, China has promoted open-source AI models and large-scale experimentation to accelerate the diffusion of AI. The emergence of cost-efficient training approaches showcased by DeepSeek and growing global adoption of Alibaba Cloud’s Qwen model underscore China’s ability to innovate under constraints and pursue efficiency-driven AI development.[v] Beijing recently signaled its preference for an experimental and targeted regulatory approach to manage AI development while ensuring tight political control and data security.[vi] Beijing has also sought to extend its influence internationally by advancing global AI governance proposals, including the Global AI Governance Action Plan, announced shortly after the White House unveiled America’s AI Action Plan.[vii] For China, innovation and regulation are complementary instruments of the state to secure advantages in both domestic governance and international agenda-setting.

These three models define the strategic landscape in which Asia-Pacific stakeholders operate, yet none of them can be replicated wholesale by Asia-Pacific middle powers.

Diversity among Asia-Pacific middle powers embracing AI

Rather than choosing among US, EU, or Chinese approaches, stakeholders in the region face a different challenge in AI advancement: how to make trade-offs that reconcile aspirations for national competitiveness with dependence on foreign technologies and the need to comply with—or influence—emerging international standards.

Strategic goal setting: What role do Asia-Pacific economies have in the global AI value chain?

The first question Asia-Pacific middle powers must address is what role they seek to play in the global AI ecosystem. Should they compete at the frontier of model development, exercise sector-specific leadership in regulation, or focus on AI integration and deployment? Japan’s national AI strategy as part of its Society 5.0 framework articulates long-term goals, emphasizing the integration of AI into manufacturing, healthcare, and public services.[viii] This reflects a strategic choice to leverage Japan’s existing industrial strengths rather than pursue scale-driven platform dominance. South Korea has similarly articulated ambitions to become a regional AI hub through coordinated investment in AI adoption in the semiconductor industry and cloud infrastructure.[ix] ASEAN economies, meanwhile, are emphasizing collective capacity-building in AI governance, responsible AI deployment, and digital transformation through region-wide guidance frameworks rather than competition at the technological frontier.[x]

Utilizing readily available AI models and cloud computing solutions from the leading global technology firms can significantly reduce the costs of research and infrastructure development. However, such reliance risks technological and platform dependency, raising concerns over data privacy, information security, service reliability in critical sectors, and contextual biases embedded in universal models.[xi] Given the rapid integration and deployment of AI in education, utility management, healthcare, and other sectors vital to society, how can middle powers adapt foreign AI solutions to maximize competitiveness and cost efficiency while managing long-term risks? Singapore’s decision to adopt and build on Alibaba Cloud’s Qwen architecture for its national AI initiatives—shifting away from Meta’s model family—illustrates pragmatic engagement with external ecosystems to support localized development. For Singapore, Qwen offers affordability and multilingual capability to meet the needs of Southeast Asian communities.[xii] Taiwan’s launch of a sovereign AI data center in late 2025 reflects a different strategy, aimed at preserving control over AI outcomes and supporting applications aligned with national priorities, such as promoting population health and preserving indigenous languages.[xiii]

Entrepreneurship and ecosystems: How can the region innovate without hyperscale platforms?

Most Asia-Pacific middle powers have not developed hyperscale AI platforms like those in the US or China. Consequently, the vitality of their AI ecosystems depends less on producing foundation models and more on enabling firms to innovate “on top of” current technologies and platforms. Start-ups and established firms alike tend to focus on vertical applications of AI that leverage industry-specific knowledge to provide tailored, context-sensitive solutions. Such a strategy emphasizes integrating AI into business operations, often manifesting as service-oriented innovation. Southeast Asia’s Grab offers a salient example. As a regional “superapp,” Grab has integrated AI extensively across its ride-hailing, food delivery, and digital payment services to optimize logistics, pricing, and customer engagement—demonstrating how application-driven AI can generate economic value without ownership of foundational models.[xiv]

Supporting these ecosystems in the Asia Pacific requires policy attention distinct from the debates over frontier AI benchmarking and efficiency that dominate discourse in the US and China. Asia-Pacific middle powers face questions of how to expand access to cross-border finance, national computing resources, data pools, and testing infrastructure. Public–private partnerships to develop shared tools and platforms that meet these needs can lower the barriers to innovation for start-ups, traditional industries, and the public sector. Concurrently, as AI is increasingly applied to functions such as language processing and automation, segments of the low-value-add labor force are likely to be displaced, even as new roles requiring effective use and management of AI systems emerge. Therefore, governments and the private sector must invest in AI literacy, workforce reskilling, and transition strategies to ensure that AI adoption translates into productivity and value gains rather than socioeconomic disruption.[xv]

Governance design: Should policymakers regulate for risk control or capability building?

As AI adoption accelerates, governance frameworks across the Asia Pacific remain diverse and largely experimental. Unlike the EU’s comprehensive AI Act, many governments rely on soft-law approaches that are not enforceable but offer flexibility and adaptability. Japan’s AI Promotion Act, enacted in 2025, takes a light-touch approach, defining core principles for AI governance without imposing rigid guardrails. Instead of pursuing comprehensive yet potentially premature regulation, such guidelines embed AI oversight within existing and industry-specific legal frameworks while paving the way for future legislative reform as societal expectations for AI governance and understanding of AI’s risks evolve.[xvi] South Korea’s Basic AI Act, which took effect in January 2026, marks the region’s first comprehensive legal framework for AI and makes South Korea the first country to fully operationalize such a framework, given that the EU has postponed its rules on high-risk AI until December 2027.[xvii] The Act introduces risk-tiered obligations and predeployment assessments for high-risk systems, signaling that future regulations will follow a more structured trajectory. However, its broad scope and lack of specifics on how regulators will evaluate risk have sparked concern among industry stakeholders.[xviii] Taiwan’s AI Basic Act similarly proposes a risk-based framework incorporating certification, testing, and disclosure mechanisms aligned with international norms of transparency and accountability, although the details of its enforceability have yet to be specified in supplementary regulations and guidelines.[xix]

Yet legislation alone does not guarantee governance capacity. Across the region, stakeholders are debating whether AI governance should primarily constrain AI development to manage its risks or enable responsible innovation. For firms applying frontier AI in their daily operations, regulatory clarity and predictability matter as much as flexibility. Across sectors, stakeholders are asking how regulators will differentiate high- and low-risk applications without discouraging experimentation, and how regulators can build capacity to learn and keep pace with fast-evolving AI practices while ensuring reliable enforcement. Australia’s recently released National AI Plan illustrates a balancing attempt: it prioritizes national AI capacity building over immediate legislative intervention in AI development. Central to the plan is the establishment of an AI Safety Institute as an analytical and advisory body to liaise between industry and regulators, monitor and test AI systems, and advise policymakers on risks while promoting responsible AI adoption.[xx]

Pursuing agency in a fragmented AI order

As global AI governance continues to fragment, Asia-Pacific middle powers must navigate the competing US, EU, and Chinese models. They cannot replicate American platform dominance, European-style supranational regulation, or China’s centralized control. Their strategic agency will instead depend on clarifying the purpose of AI governance, building the institutional capacity to implement it, and aligning governance goals with practices that facilitate entrepreneurship and AI adoption. For Asia-Pacific societies, the central issue is not which AI governance approach to follow but what trade-offs to make: how to pursue innovation while maintaining national autonomy, and how to ensure that the economic benefits of AI adoption are distributed widely enough across sectors to sustain social cohesion as AI rapidly reshapes the economy.

In a world where computing power and regulatory leverage are increasingly concentrated among a few global players, the long-term influence of Asia-Pacific middle powers will rest less on the scale of their AI models than on strategic coherence in AI policy—between goals and means, autonomy and interdependence, and competitive strength and governance capacity. Whether governance, entrepreneurship, and agency can become mutually reinforcing forces rather than competing priorities is the ultimate test of social resilience in the face of rapid technological change, and it will shape the Asia-Pacific region’s position in the global AI landscape in the decades ahead.


[i] “Winning the Race: America’s AI Action Plan,” White House, July 2025, https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.

[ii] Charles Wessner and Shruti Sharma, “The architecture of AI leadership: Enforcement, innovation, and global trust,” Center for Strategic and International Studies, November 6, 2025, https://www.csis.org/analysis/architecture-ai-leadership-enforcement-innovation-and-global-trust.

[iii] Nikolaj Munch Andersen, “Europe’s role in the global AI landscape,” in Global Innovation Reimagined, CAPRI, forthcoming.

[iv] Brooke Tanner and Andrew W. Wyckoff, “Making the case for a third AI technology stack,” The Brookings Institution, September 12, 2025, https://www.brookings.edu/articles/making-the-case-for-a-third-ai-technology-stack/; Marc Selgas Cors and Renata Thiébaut, “Artificial intelligence and the impact of the EU AI Act in business organizations,” AI Magazine 46, no. 4 (2025): e70039, https://doi.org/10.1002/aaai.70039.

[v] Yasir Atalan, “DeepSeek’s latest breakthrough is redefining AI race,” Center for Strategic and International Studies, February 3, 2025, https://www.csis.org/analysis/deepseeks-latest-breakthrough-redefining-ai-race; Rachel Cheung, “Cheap and open source, Chinese AI models are taking off,” The Wire China, November 9, 2025, https://www.thewirechina.com/2025/11/09/cheap-and-open-source-chinese-ai-models-are-taking-off/.

[vi] Ben Hu and Adam Wu, “China resets the path to comprehensive AI governance,” East Asia Forum, December 25, 2025, https://eastasiaforum.org/2025/12/25/china-resets-the-path-to-comprehensive-ai-governance/; Stu Woo, “China is worried AI threatens party rule—and is trying to tame it,” The Wall Street Journal, December 23, 2025, https://www.wsj.com/tech/ai/china-is-worried-ai-threatens-party-ruleand-is-trying-to-tame-it-bfdcda2d.

[vii] “Reading between the lines of the dueling US and Chinese AI action plans,” Atlantic Council, August 7, 2025, https://www.atlanticcouncil.org/blogs/new-atlanticist/reading-between-the-lines-of-the-dueling-us-and-chinese-ai-action-plans/.

[viii] Sun Ryung Park, “Less regulation, more innovation in Japan’s AI governance,” East Asia Forum, May 21, 2025, https://eastasiaforum.org/2025/05/21/less-regulation-more-innovation-in-japans-ai-governance/.

[ix] Ahn Sung-mi, “Why big tech is betting billions on South Korea’s AI future,” Korea Herald, November 10, 2025, https://www.koreaherald.com/article/10612790.

[x] Faiza Saleem, “Which way for ASEAN’s AI governance approach?” The Interpreter, September 23, 2025, https://www.lowyinstitute.org/the-interpreter/which-way-asean-s-ai-governance-approach.

[xi] Bill Whyman, “Sovereign Cloud–Sovereign AI Conundrum: Policy Actions to Achieve Prosperity and Security,” Center for Strategic and International Studies, December 4, 2025, https://www.csis.org/analysis/sovereign-cloud-sovereign-ai-conundrum-policy-actions-achieve-prosperity-and-security.

[xii] Ann Cao, “Singapore picks Alibaba’s Qwen to drive regional language model in big win for China tech,” South China Morning Post, November 25, 2025, https://www.scmp.com/tech/big-tech/article/3334098/singapore-picks-alibabas-qwen-drive-regional-language-model-big-win-china-tech.

[xiii] Lauly Li and Cheng Ting-Fang, “Taiwan opens sovereign AI data center with Nvidia-powered supercomputer,” Nikkei Asia, December 12, 2025, https://asia.nikkei.com/business/technology/taiwan-opens-sovereign-ai-data-center-with-nvidia-powered-supercomputer.

[xiv] Tsubasa Suruga, “Singapore’s Grab to work with OpenAI to boost app accessibility,” Nikkei Asia, May 30, 2024, https://asia.nikkei.com/business/technology/singapore-s-grab-to-work-with-openai-to-boost-app-accessibility.

[xv] Yu-Che Chen, “Forces and innovations in AI governance for public value creation,” in Global Innovation Reimagined, CAPRI, forthcoming.

[xvi] Hiroki Habuka, “Japan’s agile AI governance in action: Fostering a global nexus through pluralistic interoperability,” Center for Strategic and International Studies, October 9, 2025, https://www.csis.org/analysis/japans-agile-ai-governance-action-fostering-global-nexus-through-pluralistic.

[xvii] Seryon Lee, “An overview of Korea’s AI Framework: Main features and challenges,” The Korean Journal of International and Comparative Law 13, no. 2 (2025): 232–243; Kim Kang-han, “South Korea enacts world’s first comprehensive AI legal framework,” The Chosun Daily, January 22, 2026, https://www.chosun.com/english/industry-en/2026/01/22/PBTYDFZWWJFODKNQXCKTU7QQW4/.

[xviii] Sejin Kim and Hodan Omaar, “One law sets South Korea’s AI policy—and one weak link could break it,” Information Technology & Innovation Foundation, September 29, 2025, https://itif.org/publications/2025/09/29/one-law-sets-south-koreas-ai-policy-one-weak-link-could-break-it/.

[xix] Keoni Everington, “Taiwan passes AI Basic Act,” Taiwan News, December 23, 2025, https://www.taiwannews.com.tw/news/6270744.

[xx] Ian Gribble, “Australia bets on old laws to manage new AI risks,” The Interpreter, December 3, 2025, https://www.lowyinstitute.org/the-interpreter/australia-bets-old-laws-manage-new-ai-risks.

Global Innovation Reimagined

Global Innovation Reimagined showcases reflections and research on innovation in its many forms across Asia, North America, and Europe. The perspectives offered herein draw from discussions during the trilateral Reimagining Entrepreneurship and Innovation conference, hosted by CAPRI, CAPRI USA, the University of Virginia, and Copenhagen Business School from July 22 to 25, 2025.
