The AI regulatory conundrum: How to 'beat magic with magic'
Original source: China News Weekly
Author: Hallik
The process of global AI legislation has accelerated significantly, and regulations around the world are catching up with the evolution of AI.
On June 14, local time, the European Parliament voted 499 in favor, 28 against, with 93 abstentions, to pass its draft negotiating mandate on the Artificial Intelligence Act (AI Act). Under the EU legislative process, the European Parliament, EU member states, and the European Commission will now enter "tripartite negotiations" to settle the final terms of the bill.
The European Parliament said it is "ready for negotiations" to enact the first-ever artificial intelligence law. U.S. President Biden has signaled his intent to rein in AI, and some members of the U.S. Congress have submitted proposals for AI regulatory legislation. Senate Democratic leader Chuck Schumer has unveiled his "SAFE Innovation Framework" for AI and plans to enact a federal AI bill in just "a few months."
China's own legislation has also been put on the agenda: a draft artificial intelligence law is due to be submitted to the Standing Committee of the National People's Congress for deliberation within the year. On June 20, the first batch of domestic deep-synthesis service algorithm filings was also released, with 26 companies, including Baidu, Alibaba, and Tencent, and a total of 41 algorithms on the list.
Although China, the United States, and the European Union all advocate principled AI regulatory concepts such as accuracy, safety, and transparency, their specific ideas and methods differ considerably. Behind the push to enact comprehensive AI laws lies each jurisdiction's effort to export its own rules and seize the advantage of rule-making.
Some Chinese experts have called for rapidly developing legal regulation of artificial intelligence, but current practical problems cannot be ignored. There is also another important consideration: regulate, or develop? It is not a binary choice, but in the digital realm, balancing the two is not easy.
The EU sprints, China and the United States speed up
If all goes well, the draft passed by the European Parliament is expected to win final approval by the end of this year, making the EU home to the world's first comprehensive AI regulatory law.
"This draft will affect other countries that are on the sidelines to accelerate legislation. For a long time, whether artificial intelligence technology should be included in the scope of legal supervision has always been controversial. Looking at it now, after the "Artificial Intelligence Act" is implemented, relevant network platforms, such as business content and users Platforms that focus on information generation are bound to assume higher audit obligations." Zhao Jingwu, an associate professor at the Law School of Beihang University, told China News Weekly.
As part of its digital strategy, the European Union hopes to comprehensively regulate artificial intelligence through the "Artificial Intelligence Act", and the strategic calculations behind the move have also been laid on the table.
Peng Xiaoyan, executive director of Beijing Wanshang Tianqin (Hangzhou) Law Firm, told China News Weekly that the "Artificial Intelligence Act" applies not only within the EU but also to providers and users located outside the EU whose systems' output is used in the EU. The Act's jurisdictional reach has thus been greatly expanded, and a bid to preempt jurisdiction over data elements can be glimpsed behind it.
Jin Ling, deputy director and researcher at the European Institute of the China Institute of International Studies, wrote in the article "The World's First Artificial Intelligence Legislation: A Difficult Balance Between Innovation and Regulation" that the "Artificial Intelligence Act" highlights the moral dimension of EU AI governance and is another attempt by the EU to exercise its normative power, using the advantage of rules to compensate for its technical shortcomings. It reflects the EU's strategic intention to seize the moral high ground in artificial intelligence.
The Artificial Intelligence Act has been two years in the making. In April 2021, the European Commission put forward a legislative proposal built on a "risk classification" framework, which has since gone through several rounds of discussion and revision. After generative AI such as ChatGPT surged in popularity, EU lawmakers hurriedly added "patches".
One new change is that the latest draft strengthens transparency requirements for general-purpose AI. For example, generative AI built on foundation models must label generated content, help users distinguish deepfakes from real information, and ensure that illegal content is not generated. Foundation-model providers such as OpenAI and Google must also disclose details of any copyrighted data used to train their models.
In addition, real-time remote biometric identification in public places has been moved from the "high risk" tier to the "prohibited" tier; that is, AI must not be used for facial recognition in public places in EU countries.
The latest draft also raises the penalty cap for violations, from 30 million euros or 6% of the offending company's global turnover in the previous fiscal year to 40 million euros or 7% of its global annual turnover in the previous year. That is significantly higher than the General Data Protection Regulation, Europe's signature data-protection law, which carries fines of up to 4 percent of global revenue or 20 million euros.
Peng Xiaoyan told China News Weekly that the higher penalties reflect the EU authorities' determination and resolve to supervise artificial intelligence. For technology giants such as Google, Microsoft, and Apple, with hundreds of billions of dollars in revenue, fines for violating the "Artificial Intelligence Act" could reach tens of billions of dollars.
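For a rough sense of scale, here is a minimal arithmetic sketch of how a turnover-based cap of this kind grows with company size; the max_fine_eur helper and the 300-billion-euro turnover figure are hypothetical illustrations, not provisions of the Act or any company's reported accounts.

```python
# Minimal sketch: the draft caps fines at the greater of a fixed amount or a
# share of global annual turnover. Figures below are illustrative only.

def max_fine_eur(global_turnover_eur: float,
                 fixed_cap_eur: float = 40_000_000,  # latest draft: 40 million euros
                 pct_cap: float = 0.07) -> float:    # or 7% of global annual turnover
    """Return whichever cap is higher, as described in the draft."""
    return max(fixed_cap_eur, pct_cap * global_turnover_eur)

# A hypothetical giant with 300 billion euros in annual turnover:
print(f"{max_fine_eur(300e9):,.0f} EUR")  # 21,000,000,000 EUR -> tens of billions
```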
Across the ocean, as Washington grappled with calls from Musk and others for stronger AI controls, U.S. President Biden met a group of artificial intelligence experts and researchers in San Francisco on June 20 to discuss how to manage the risks of the new technology. Biden said at the time that while seizing AI's huge potential, it is necessary to manage the risks it poses to society, the economy, and national security.
Risk management has become a hot AI topic partly because the United States has neither adopted strict antitrust measures against AI technology nor introduced comprehensive AI regulation at the federal level.
The U.S. federal government's first formal foray into AI regulation came in January 2020, when it released the "Guidance for Regulation of Artificial Intelligence Applications" to guide regulatory and non-regulatory responses to emerging AI issues. The National Artificial Intelligence Initiative Act of 2020, which took effect in 2021, is more a policy blueprint for the AI field than an instrument of AI governance or strong regulation. Roughly a year later, the Blueprint for an AI Bill of Rights (the "Blueprint"), released by the White House in October 2022, provided a supporting framework for AI governance, but it is not official U.S. policy and is not binding.
The slow progress of AI legislation in the United States has drawn considerable criticism; many argue that the country has fallen behind the European Union and China in setting rules for the digital economy. Perhaps spurred by the EU's Artificial Intelligence Act nearing its final hurdle, however, the U.S. Congress has recently shown signs of accelerating.
On the day of Biden's AI meeting, Democratic Representatives Ted Lieu and Anna Eshoo and Republican Representative Ken Buck jointly introduced the National AI Commission Act. Meanwhile, Democratic Senator Brian Schatz is to introduce companion legislation in the Senate focused on AI regulation.
Under the bill, the AI commission would comprise 20 experts from government, industry, civil society, and computer science, who would review the United States' current approach to AI regulation and jointly develop a comprehensive regulatory framework.
"AI is doing amazing things in society. If left unchecked and regulated, it can cause significant harm. Congress must not stand idly by," Ted Liu said in a statement.
A day later, on June 21, Senate Democratic leader Chuck Schumer delivered a speech at the Center for Strategic and International Studies (CSIS), unveiling his "SAFE Innovation Framework" for AI: encouraging innovation while advancing security, accountability, foundations, and explainability, and echoing earlier plans including the Blueprint. He had first floated the framework in April but gave few details at the time.
Behind the framework lies Schumer's legislative strategy. In the speech, he said a federal artificial intelligence bill could be enacted in just "a few months." The U.S. legislative process, however, is cumbersome: a bill must not only be voted on by both the Senate and the House of Representatives but also pass through multiple rounds of hearings, which takes time.
To speed things up, Schumer plans, as part of the framework, to hold a series of AI Insight Forums starting in September, covering ten topics including innovation, intellectual property, national security, and privacy. He has said the forums will not replace congressional hearings on AI but will run in parallel, so that the legislature can produce policy on the technology in months rather than years. He predicted it may take until the fall "to start to see some concrete things" in U.S. AI legislation.
Although its progress has not caught up with the European Union's, China's own legislation is also on the agenda. In early June, the General Office of the State Council issued the "2023 Legislative Work Plan of the State Council", which stated that a draft artificial intelligence law would be prepared for submission to the Standing Committee of the National People's Congress for deliberation.
Under China's Legislation Law, after the State Council submits a draft law to the Standing Committee of the National People's Congress, the Council of Chairpersons decides whether to place it on the Standing Committee's agenda, or first refers it to the relevant special committee for deliberation and a report before placing it on the agenda. A draft generally goes through three readings before being put to a vote.
Since the beginning of this year, many countries have accelerated AI legislation. Peng Xiaoyan believes that this is the result of both competition and technological development.
"Data elements are increasingly becoming national strategic elements, and countries also hope to establish jurisdiction through legislation and seize the right to speak in artificial intelligence. At the same time, the iterative update of artificial intelligence technologies such as ChatGPT has allowed the society to see new hopes for the development of strong artificial intelligence. New The development of technology will inevitably bring about new social problems and social contradictions, which require regulatory intervention and adjustment, and the development of technology has promoted the update of legislation to some extent." Peng Xiaoyan said.
Far more divergence than convergence
China, the United States, and the European Union are the main driving forces for global AI development, but there are also some differences in AI legislation among the three.
The European Union's "Artificial Intelligence Act" divides the risks of artificial intelligence applications into four levels from the perspective of use and function. No matter how many rounds of revisions the draft has undergone, "risk classification" is still the core concept of the EU's AI governance.
In the latest draft, the European Parliament expanded the list of "unacceptable risk" systems to head off intrusive and discriminatory AI. Six categories are banned outright, including biometric identification in public spaces, emotion recognition, predictive policing (based on profiling, location, or past criminal behavior), and the untargeted scraping of facial images from the internet.
The second category comprises AI systems that negatively affect human safety or fundamental rights and are therefore deemed "high risk": for example, AI used in products such as aircraft, cars, and medical devices, as well as systems in eight specific fields, covering critical infrastructure, education and training, law enforcement, and more, which must be registered in an EU database. "High risk" systems may be authorized to enter the EU market only after meeting a set of requirements and obligations and undergoing a prior conformity assessment.
In addition, AI systems that influence voters and election results, as well as the recommendation systems of social media platforms with more than 45 million users, such as Facebook, Twitter, and Instagram, will also be placed on the high-risk list, in line with the EU's Digital Services Act.
At the bottom of the pyramid are AI systems posing limited risk and those posing minimal or no risk. The former carry specific transparency obligations and must inform users that they are interacting with an AI system; the latter, such as spam filters, face no mandatory rules and are essentially unregulated.
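To make the four-tier structure concrete, the sketch below restates the categories and examples described above as a simple Python mapping; the tier names and obligation summaries are paraphrases for illustration, not official EU terminology.

```python
from enum import Enum

class RiskTier(Enum):
    """Paraphrased four-tier structure of the draft Act (illustrative, not official)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "market entry only after meeting requirements and a prior conformity assessment"
    LIMITED = "transparency obligations, e.g. telling users they are interacting with AI"
    MINIMAL = "no mandatory obligations"

# Example mappings drawn from the article's description of the draft:
EXAMPLES = {
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "predictive policing based on profiling":              RiskTier.UNACCEPTABLE,
    "AI in medical devices or critical infrastructure":    RiskTier.HIGH,
    "recommender systems on platforms with 45M+ users":    RiskTier.HIGH,
    "chatbots and labeled generative-AI content":          RiskTier.LIMITED,
    "spam filters":                                        RiskTier.MINIMAL,
}

for application, tier in EXAMPLES.items():
    print(f"{application}: {tier.name} -> {tier.value}")
```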
Due to its strict regulatory provisions, the "Artificial Intelligence Act" is regarded by many industry insiders as having many sharp "teeth". However, the bill also attempts to strike a balance between strong regulation and innovation.
For example, the latest draft requires each member state to establish at least one "regulatory sandbox" that small and medium-sized enterprises and start-ups can use free of charge to test innovative AI systems, in a supervised, safe, and controlled setting, before putting them into use and until compliance requirements are met. The EU broadly believes the measure will let authorities track technological change in real time while helping AI companies keep innovating under lighter regulatory pressure.
Jin Ling noted in the aforementioned article that, on the one hand, the EU's upstream governance approach requires companies to bear more upfront costs, while on the other, the uncertainty of risk assessment dampens their enthusiasm to invest. So although the Commission has repeatedly emphasized that AI legislation will support innovation and growth in the European digital economy, realistic economic analysis does not seem to bear that out. The bill reflects an inherent, hard-to-balance tension within the EU between promoting innovation and protecting rights.
Like the European Union and China, the United States supports a risk-based approach to AI regulation and advocates accuracy, security, and transparency. In Zhao Jingwu's view, however, U.S. regulatory thinking centers on the use of AI to promote innovation and growth in the AI industry, and ultimately on maintaining American leadership and competitiveness.
"Different from the 'risk prevention and technology security' regulatory concept held by China and the EU, the United States focuses on commercial development. Both China and the EU focus on the security of artificial intelligence technology applications to prevent the abuse of artificial intelligence technology from infringing on individual rights, while the United States focuses on Industrial development is the focus of supervision." Zhao Jingwu said.
Studies have found that the legislation of the US Congress mainly focuses on encouraging and guiding the government to use artificial intelligence. For example, the U.S. Senate introduced an AI Innovation Act in 2021, requiring the U.S. Department of Defense to implement a pilot program to ensure it has access to the best AI and machine learning software capabilities.
In that speech, Schumer called innovation the North Star: his AI framework is meant to unleash AI's great potential and support U.S.-led innovation in the technology. The "Guidance for Regulation of Artificial Intelligence Applications" likewise states up front that technological advancement and innovation should continue to be promoted, and the ultimate goal of the National Artificial Intelligence Initiative Act of 2020 is to ensure that the United States retains its global lead in AI by increasing research investment and building up the workforce.
Peng Xiaoyan said that, judging from how its guiding rules are designed, AI development in the United States remains weakly supervised at the legislative and institutional levels, and society takes an open attitude, actively encouraging the innovation and expansion of AI technology.
Compared with the European Union, which has clearer investigative powers and comprehensive regulatory coverage, the United States has taken a decentralized approach to AI regulation, with individual states and agencies advancing AI governance to a more limited extent. As a result, national AI regulatory initiatives remain broad and principle-based.
For example, the "Blueprint" is a milestone event in the US artificial intelligence governance policy. It has formulated five basic principles including safe and effective systems, prevention of algorithmic discrimination, protection of data privacy, notification and explanation, and human participation in decision-making. There are no more detailed provisions.
Peng Xiaoyan believes that the "Blueprint" does not formulate specific implementation measures, but builds a basic framework for the development of artificial intelligence in a principled manner, aiming to guide the design, use and deployment of artificial intelligence systems.
"Standards like this are not mandatory. This is because the United States considers supporting the development of the artificial intelligence industry. At present, artificial intelligence is still in an emerging stage of development, and high-intensity supervision is bound to limit industrial development and innovation to a certain extent. Therefore, The United States maintains a relatively modest attitude in legislation," Peng Xiaoyan said.
"Without laws granting agencies new powers, they can regulate the use of artificial intelligence only with the powers they already have. On the other hand, AI-related ethical principles remain lightly regulated, and agencies can decide for themselves how, and with which powers, to regulate." According to Carnegie analyst Hadrien Pouget, this leaves federal agencies, led by the White House, at once constrained and free.
A governance philosophy dominated by utilization and innovation means the United States is unlikely to swing a heavy regulatory "fist". Alex Engler, a researcher at the Brookings Institution, a well-known American think tank, has pointed out that the European Union and the United States are taking different regulatory approaches to AI with social impact in areas such as education, finance, and employment.
On specific applications, the EU's "Artificial Intelligence Act" imposes transparency requirements on chatbots, while the United States has no federal rules. Facial recognition is deemed an "unacceptable risk" by the EU, whereas the United States publishes information through the National Institute of Standards and Technology (NIST) Face Recognition Vendor Test program but mandates no rules.
"The EU's regulatory reach not only covers a wider range of applications, but sets out more rules for these AI applications. Whereas the US approach is more narrowly limited to adapting current institutional regulators to try to govern AI, the scope of AI It’s also much more limited.” Alex Engler says there are far more divergences than convergences in AI risk management, despite broadly identical principles.
Summarizing the regulatory models of China, the European Union, and the United States, Zhao Jingwu finds that China regulates by application scenario, formulating special rules for scenarios such as facial recognition, deep synthesis, and algorithmic recommendation; the EU is guided by risk level, asking whether an AI application's risk is acceptable; and the United States judges the legality of AI applications within the framework of its existing legal system.
In addition, the United States has devoted more attention, and more funding, to artificial intelligence research. In early May, the White House announced an investment of about 140 million U.S. dollars to establish seven new national AI research institutes. Some researchers believe the move reflects a hope to understand AI better and thereby ease concerns arising in the regulatory process.
Peng Xiaoyan said that China has adopted measures that encourage the development of AI technology while regulating related fields in a limited way, guiding the technology's development through coordinated policies and management requirements.
China's legislation faces many practical problems
The European Union is racing to implement the world's first AI regulatory law. Zhao Jingwu told China News Weekly that several of its elements are worth referencing in China's own AI legislation: the EU's risk-tiered regulatory approach, the "general-purpose model" regulatory concept proposed in the "Artificial Intelligence Act", and the disclosure and data-copyright compliance obligations imposed on generative AI applications such as ChatGPT.
In fact, China's AI legislation has already begun. In 2017, the State Council issued the "New Generation Artificial Intelligence Development Plan", which proposed that by 2025 an initial system of AI laws, regulations, ethical norms, and policies should be in place, along with capabilities for AI security assessment and control.
At the local level, Shenzhen promulgated the "Shenzhen Special Economic Zone Artificial Intelligence Industry Promotion Regulations" in 2022, regarded as China's first special legislation for the AI industry. The Regulations call for improving supervisory mechanisms in the AI field to guard against the ethical, security, and compliance risks that AI products and services may pose.
At present, China's AI regulation is mainly driven jointly by several major ministries and commissions, which advance the regulation and development of AI in their respective fields. Normative documents such as the "Regulations on the Administration of Algorithm Recommendations for Internet Information Services", the "Regulations on the Administration of Deep Synthesis of Internet Information Services", and the "Administrative Measures for Generative Artificial Intelligence Services (Draft for Comment)" have been issued.
"From the perspective of historical management norms, my country's regulations on the field of artificial intelligence adopt measures to distinguish business fields and technical directions, and management norms tend to be decentralized. When the norms are introduced, they often have the characteristics of being timely. After the emergence of specific technologies Make special management regulations. The regulations are promulgated by the administrative department, focusing on supervision, and have not been upgraded to laws at the level of regulation.” Peng Xiaoyan said.
It is worth noting that on June 20, the first batch of domestic deep-synthesis service algorithm filings was released: 26 companies, including Baidu, Alibaba, Tencent, ByteDance, and Meituan, with a total of 41 algorithms on the list.
As AI legislation heats up, Chinese experts have begun calling for legal rules on artificial intelligence to be developed quickly. In Zhao Jingwu's view, however, while special AI legislation in China is feasible, it also faces many practical problems.
"The first is the problem of the system connection between legislative documents. The applicable relationship between artificial intelligence-specific legislation and other normative documents has not yet been resolved, especially the overlapping content of special legislation and current legislation that needs to be resolved urgently. The second is artificial intelligence technology. The speed of update and iteration is accelerating, and it is difficult to ensure the simultaneous development of law and technology; third, the regulatory rules for artificial intelligence lack integrity, and the regulatory rules for the three core elements of data, algorithms, and computing power are still in the exploratory stage; fourth, artificial intelligence legislation Whether the focus should be on security risk management or on industrial development is more controversial," said Zhao Jingwu.
Whether it is the EU's "Artificial Intelligence Act" or the regulations, initiatives, and plans of China, the United States, and other countries, all are attempts to build a comprehensive regulatory framework that ensures safety while creating better conditions for AI to develop.
Proceeding from this shared principle, Peng Xiaoyan told China News Weekly that China's AI law should above all be grounded in actively encouraging development and innovation, allowing AI to develop under regulation in a relatively open space while drawing a clear red line for its development.
"In addition, the legal issues in AI that everyone is now concerned about need to be addressed, including but not limited to prohibiting illegal AI-generated content, protecting AI data security, safeguarding AI ethics and safety, and preventing intellectual property infringement," Peng Xiaoyan said.
Zhao Jingwu believes that China should establish an artificial intelligence law oriented toward protecting industrial development.
"To a certain extent, the current legislation can basically meet the needs of artificial intelligence technology application supervision. Preventing technological risks and ensuring technological security are just the governance process, and its ultimate goal still needs to return to the development level of the artificial intelligence industry. After all, artificial intelligence laws are not restricting industries. development, but to guide and guarantee the sound development of related industries." Zhao Jingwu said.