Cutting-edge Trends in AI Development, Governance Challenges, and Response Strategies by XUE Lan and WANG Jingyu
Effective AI governance approaches are needed as this emerging technology develops rapidly and the attendant risks grow.
Welcome to the 24th edition of our weekly newsletter! I recently participated in the Munich Security Conference (MSC) as a Munich Young Leader, and it was an incredibly rewarding experience. I had the opportunity to listen to numerous speeches and discussions, and we also took part in various policy debates. Yesterday, the South China Morning Post published a commentary I wrote on current transatlantic relations based on my experience at the MSC. Feel free to click the link if you're interested: https://www.scmp.com/opinion/world-opinion/article/3298942/if-trump-wants-peace-ukraine-he-must-reach-consensus-europe
ChinAffairs+ is a weekly newsletter that shares Chinese academic articles focused on topics such as China’s foreign policy, China-U.S. relations, China-European relations, and more. This newsletter was co-founded by me and my research assistant, ZHANG Xueyu. I am SUN Chenghao, a fellow with the Center for International Security and Strategy (CISS) at Tsinghua University, Council Member of The Chinese Association of American Studies and a visiting scholar at the Paul Tsai China Center of Yale Law School (fall semester 2024).
Through carefully selected Chinese academic articles, we aim to provide you with key insights into the issues that China’s academic and strategic communities are focused on. We will highlight why each article matters and the most important takeaways. Questions or criticisms may be addressed to sunchenghao@tsinghua.edu.cn
Today, we have selected an article written by XUE Lan and WANG Jingyu, which focuses on the cutting-edge trends in AI development, governance challenges, and response strategies.
Summary
The development of artificial intelligence (AI) has entered the era of large models. With the growing trend of accelerated advancement of foundational research and the rapid implementation of industry applications, positive and negative externalities of artificial intelligence are increasingly being unleashed. While AI generates significant economic and social development benefits, it simultaneously poses considerable security threats to individuals, nations, and humanity across multiple dimensions, including intrinsic risks, application risks, and economic and social risks.
Against this backdrop, challenges in AI governance have become more pronounced, including the misalignment between technological development and governance, information asymmetry between regulators and the regulated, disproportionate costs and benefits of risk prevention, lack of coordination in complex governance mechanisms, and geopolitical instability. In response, it is urgent to strengthen AI governance systems through increased investment in security, the improvement of regulatory frameworks, the encouragement of self-regulation, and enhanced international cooperation, so as to better address the risks and challenges posed by the rapid development of AI.
Why It Matters
Today, large models such as ChatGPT and Sora are making groundbreaking advances, signaling the rise of a new wave of the AI revolution. Discussions about AI approaching the "technological singularity" have intensified. Many researchers believe that the substantial increase in AI capabilities and autonomy will further amplify its technological externalities, bringing enormous opportunities to the world while also introducing unpredictable risks and complex challenges. At present, however, countries around the world have responded insufficiently to the potential risks posed by this emerging technology.
This paper establishes the logical framework of "development trends—benefits and risks—governance challenges—governance strategies", draws on the latest advancements in AI technological innovation and industrial applications to analyze the development benefits, security risks, and governance difficulties brought about by frontier AI, and offers policy recommendations for strengthening AI governance. It therefore contributes, both theoretically and practically, to the improvement of AI governance in China and around the globe.
Key Points
Cutting-edge Trends in the Development of Artificial Intelligence
In the 21st century, artificial intelligence (AI) has entered a new period of rapid development, marked by significant breakthroughs driven by advanced algorithms, the integration of multimodal big data, and the aggregation of extensive computational power. Some research suggests that AI has experienced two pivotal phases of development so far, one of which is the current era dominated by large models based on big data, with ChatGPT serving as the latest symbol of this trend. Against this backdrop, the pace of AI development has accelerated, with an overall trend towards the fast-track advancement of foundational research, and the rapid implementation of industrial applications.
Acceleration of Basic AI Research and Development
First, the number of AI-related publications continues to grow and machine learning has risen to prominence. Second, the number of AI patents has surged, experiencing explosive growth after 2018. Both China and the United States hold a dominant position in AI patenting, with China leading the global rankings since 2013. Third, the number of open-source AI projects has increased annually, witnessing explosive growth in 2023.
Acceleration of AI Industrial Applications
First, industry has become the primary driving force behind AI development, and the boundaries between academia, industry, and research are increasingly blurred. Second, the performance of AI technologies has dramatically improved, and their application scenarios have expanded into more fields. Third, commercial models for AI are becoming increasingly diverse. In particular, "Model as a Service" (MaaS) has gradually matured into the core of the AI industry ecosystem. Large-scale AI model providers at the foundational layer, small and medium tech enterprises at the intermediary layer, and traditional enterprises at the application layer each play their respective roles in facilitating the implementation of AI industry applications.
The Benefits and Risks of Artificial Intelligence Development
Development Benefits
First, the core industries related to AI have the potential to bring significant economic gains. The complex industrial chain and the high technological added value associated with AI give these industries vast market prospects. Second, AI exhibits a strong "leader effect" (头雁效应, tóuyàn xiàoyìng), which can empower numerous industries and lead to an overall leap in social productivity. Given the trends in AI development, driven by accelerated technological innovation and increased policy support, the empowerment effect of AI will be further enhanced. Third, AI can offer new solutions to global challenges such as climate change and biodiversity conservation, thereby contributing to sustainable development.
Security Risks
Some scholars, based on technological logic, categorize the risks associated with AI into three main categories including “risks of uncontrolled technology”, “risks of improper technological use”, and “risks related to social effects of technology”. Other scholars, from the governance perspective, classify AI risks into “technical risks”, which are inherent in AI technology, products, or services, and “business risks”, which arise from the associated systems, environments, and actors involved in the AI application process. While researchers from different fields may adopt different classification approaches, there is a broad consensus that AI development will pose substantial security threats to individuals, nations, and humanity from multiple dimensions, including the technology itself, its usage, and its social impact.
To begin with, AI technology carries intrinsic risks of hallucination, algorithmic discrimination, and even loss of technological control. First, existing studies suggest that current large-scale AI models lack mechanisms for distinguishing truth from falsehood, posing risks in applications that require high standards of content authenticity. Second, due to inherent human and data biases, AI faces the risk of algorithmic discrimination, which may exacerbate structural inequalities. Third, AI technologies fall short in transparency and interpretability. If AI systems develop self-preservation or self-improvement capabilities and operate beyond human control, a technological control failure may occur.
Moreover, the large-scale deployment of AI may lead to risks of misuse, abuse, and malicious use. As a crucial general-purpose and foundational tool, AI, when misused or abused, poses risks such as data breaches, deepfakes, low-cost forgery, terrorism, and the militarization of technology. These risks can significantly challenge global strategic stability.
Further, the disruptive impacts of AI may trigger labor market restructuring and increased social inequality. First, some studies argue that AI has the potential to replace human labor in entirely new ways, which could disrupt labor markets. Second, AI will lead to changes in market and skill structures. Without effective public policies, this could result in declining employment rates, job and wage polarization, and increasing inequality of income and wealth. Additionally, AI's role in societal transformation may give rise to a series of unforeseen conflicts and risks, potentially impacting economic security and social stability.
Difficulties and Challenges for AI Governance
Asynchrony between Technological Development and Governance Systems
Firstly, AI iterates faster than human expectations, and existing governance systems are unable to keep up with the development of the technology. Moore's Law, the classic standard for predicting computing power, applies to traditional computing systems, but the iteration speed of AI, particularly general AI systems, far surpasses it. In this context, it is challenging to overcome the high costs associated with the evolution of governance systems in areas such as society, economics, and law, and to synchronize the governance system with AI development.
Secondly, the emergent intelligence of AI technologies is highly unpredictable, making it difficult for governance systems to forecast accurately. On one hand, compared with traditional technological transformations, AI, particularly large models, has begun to demonstrate self-creativity, superhuman learning capabilities, and hyper-evolution. This technological potential makes it hard to predict the trajectory and breakthrough points of AI's development. On the other hand, the potential for future technologies to trigger fundamental and disruptive transformations in socio-economic structures presents an additional challenge. Governance systems need to identify emerging, unforeseeable conflicts in time and prepare for these risks in advance.
Information Asymmetry Between Regulators and Regulated Entities
Firstly, regulators find it difficult to accurately grasp the real-time dynamics of technological development. Generally, governments and the regulated parties need to design appropriate “principal-agent” structures to alleviate information asymmetry. However, the inherent complexity and “black-box” nature of AI innovation further widen the information gap, potentially creating a “mutual ignorance” scenario. This exacerbates the difficulty of communication and collaborative governance between regulators and the regulated.
Secondly, the regulated entities struggle to clearly understand the governance objectives of regulators. In traditional technological systems, the operational logic, functional performance, and application scenarios are relatively clear, and the risks and governance goals are generally well-defined. However, the rapid iteration and unpredictability of AI development mean that regulators must continuously explore, balance, and adjust governance objectives and mechanisms. On this basis, regulated entities face increasing difficulties in staying updated on the latest regulatory concerns and adapting to the governance demands of the AI era.
Disproportion of Risk Prevention Costs and Benefits
If AI were to fall outside of human control, it could even pose existential threats to humanity. However, the general availability and ubiquity of AI increase the potential risks, making risk prevention a costly endeavor. Mitigating these risks requires vast amounts of public resources, which could even come at the cost of curtailing AI development. Balancing the costs and benefits of risk prevention and balancing development and security remains a major challenge in AI governance.
Lack of Coordination in Complex Governance Mechanisms
AI governance involves a complex system that encompasses legal frameworks, industry standards, and international coordination, requiring joint participation of different stakeholders. Currently, various international actors—including organizations like the UN, multilateral cooperation mechanisms like the G7, major AI powers such as China and the U.S., and multinational corporations like Microsoft and Google—are deeply involved in AI governance. These participants differ in their governance philosophies, concerns, preferences, and capacities, leading to a fragmented landscape of international AI governance. This fragmentation results in governance deficits in terms of rationality, fairness, and effectiveness.
Instability of Geopolitical Environment
As a disruptive frontier technology, AI plays a crucial role in great power competition, and AI governance is becoming ever more complex. On one hand, international AI governance requires major countries that possess cutting-edge technologies to establish communication and coordination mechanisms and to play a more positive role in the international governance system through information exchange and self-regulation. On the other hand, AI has the potential to significantly tip the balance of national power. The geopolitical environment of strategic competition complicates coordination between major countries. Some nations even draw ideological lines or build exclusive blocs to create development barriers, and maliciously disrupt the global AI supply chain. This not only hampers the development of AI globally, but also negatively impacts AI governance.
Conclusion
From the strategic requirements of the 2017 "New Generation Artificial Intelligence Development Plan" to a series of laws, guidelines, management measures, and international initiatives after 2020, China's AI governance mechanism has been continuously improving. The framework of a distinctively Chinese AI governance system, characterized by a multi-stakeholder approach, multi-dimensional co-governance, diverse tools, and agile coordination, is beginning to take shape. China will further refine its AI governance system in the following ways.
First, China will increase investment in AI safety and accelerate the construction of AI safety governance capabilities. Second, it is also important to establish a prudent, inclusive, and tiered AI safety regulatory system, and to adopt agile governance to escape current governance dilemmas. Third, China will emphasize the central role of enterprise-driven technological innovation and encourage self-regulation at the industry and corporate levels. Fourth, enhancing AI governance collaboration within the UN framework and using capacity-building as a lever to promote standards and norms with broad international consensus can be helpful. Fifth, China will improve bilateral dialogue mechanisms and strengthen policy communication with major AI powers such as the U.S.
About the Author
Xue Lan 薛澜: Cheung Kong Chair Distinguished Professor, Dean of Schwarzman College, and Dean of the Institute for AI International Governance of Tsinghua University (I-AIIG). At Tsinghua University, he also serves as Deputy Director of the Strategic Research Institute for Engineering, Science and Technology, Director of the China Institute for S&T Policy, and Co-Director of the Global Institute for SDGs. His teaching and research interests include STI policy, crisis management, and global governance. From 2000 to 2018, he served as Associate Dean, Executive Associate Dean, and Dean of the School of Public Policy and Management at Tsinghua University.
He also serves as the Convener of the State Council Public Administration Disciplinary Review Committee, a member of the National Committee for Strategic Consultation and Comprehensive Review, the Chair of the National Expert Committee on AI Governance, a member of the Advisory Group of the STI Directorate of the OECD, an adjunct professor at Carnegie Mellon University, and a Non-Resident Senior Fellow of the Brookings Institution. He is a recipient of the Fudan Distinguished Contribution Award for Management Science, the Distinguished Contribution Award from the Chinese Association for Science of Science and S&T Policy, and the Second National Award for Excellence in Innovation.
Wang Jingyu 王净宇: Postdoctoral fellow at Schwarzman College, Tsinghua University, and assistant researcher at the Institute for AI International Governance of Tsinghua University.
About the Publication
The Chinese version of the article is published in Administrative Reform (《行政管理改革》), a professional journal in the field of administrative management, sponsored and supervised by the Central Party School of the Communist Party of China. Founded in 2009, it is currently a monthly publication. The journal is committed to publishing high-quality original research, reviews, and newsletters, focusing mainly on the reform practices and theoretical innovations of administrative management in China. It emphasizes applicability, advisory value, and policy relevance, providing intellectual support for the establishment of a sound socialist administrative management reform system with Chinese characteristics.