In the fall and winter of 2025, a wave of global layoffs driven by artificial intelligence (AI) has quietly intensified. According to data released by Challenger, Gray & Christmas, as of September 2025, nearly 950,000 job cuts had been announced in the United States—marking the highest figure since the COVID-19 pandemic. In October alone, U.S. companies announced over 150,000 layoffs, a year-on-year increase of 175%. The hardest-hit sectors include technology, retail, logistics, and media. Amazon laid off approximately 14,000 employees; UPS announced plans to cut 48,000 positions; and companies such as Microsoft, Chegg, and Starbucks have also rolled out multiple rounds of layoffs.
What stands out about this layoff wave is that its driving force is no longer merely traditional cost-cutting. The rapid, large-scale adoption of AI technologies has prompted companies to undertake structural reorganizations. Many mid-skilled, and even some high-skilled, positions are being replaced at an unprecedented pace, particularly in management, customer service, content moderation, and entry-level analytics roles. The impact is both far-reaching and profound.
In the case of Amazon, the company has explicitly stated that it will leverage generative AI technologies to restructure its internal operations and streamline its organizational structure. Following the pilot launch of its warehouse AI scheduling program, Project Amelia, the number of logistics managers was reduced by 41%. Similarly, Salesforce saw its customer service team shrink by 51% after deploying Einstein Copilot, while J.P. Morgan reported an 80% drop in demand for junior analysts after adopting its large-model suite. These cases are far from isolated; they point to a broader trend in which AI is redefining job roles and reshaping how companies assess human value. This efficiency-driven restructuring logic, unfolding in the absence of robust policy constraints, is transforming the employment landscape at a pace never seen before.
This trend was anticipated in earlier ANBOUND research. In its report “China Should Adopt a Prudent Technology Policy in the Age of Artificial Intelligence”, ANBOUND pointed out that the greatest challenge posed by AI is not the uncontrollability of the technology itself, but its systemic reshaping of the employment structure. On one hand, AI has a strong substitution effect, putting a wide range of mid-skilled, and even some high-skilled, jobs at risk of automation. On the other hand, the new employment opportunities created by AI depend heavily on individuals’ capacity to learn and access resources, leading to clear class-based disparities. This technology-driven polarization of the job market is expected to significantly magnify social inequality, posing tangible risks to both economic sustainability and social stability. In light of AI’s profound impact on the labor market, policymakers must adopt a prudent and human-centered approach. Technological progress should ultimately serve humanity, not replace it.
Elon Musk has repeatedly spoken out in public, warning that AI could become “smarter than all of humanity combined”. He emphasized that AI must be “truthful, curious, and love humanity”. Across his posts on X, media interviews, and statements, Musk has called for a global pause on the technological arms race and for stronger ethical oversight of AI. He argues that establishing a transparent and accountable framework for AI development is an urgent priority to safeguard the common interests of humankind. Musk’s focus on the boundaries of AI governance aligns closely with ANBOUND’s earlier warnings about the risks of structural unemployment and rising social inequality caused by AI. It now appears that AI’s pace of replacing human roles may be even faster than previously anticipated. Against this backdrop, the creation of a governance framework grounded in ethical boundaries, policy guidance, and risk regulation has become an issue that can no longer be ignored.
In this regard, ANBOUND has proposed the concept of an “AI tax”, a systemic measure designed to address the wave of AI-driven layoffs. The “AI tax” would be levied on companies that significantly reduce their workforce as a result of large-scale AI adoption. The tax would be calculated based on the proportion of profits gained through AI-driven productivity improvements. Revenue from this special tax would then be redistributed through fiscal transfer mechanisms to fund retraining programs, basic livelihood support, and education and reskilling initiatives for individuals displaced by AI. The purpose of the AI tax is not to hinder technological advancement, but to introduce a corrective element into the market mechanism, thereby encouraging companies to balance their pursuit of efficiency with a stronger sense of social responsibility.
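To make the mechanism concrete, the following is a minimal sketch in Python of how such a levy might be computed. The tax rate, the method of attributing profit to AI-driven gains, and the workforce-reduction threshold are all hypothetical parameters chosen for illustration, not figures proposed by ANBOUND or drawn from any existing tax code.

```python
# Illustrative sketch of an AI tax calculation (all parameters hypothetical).
# Assumption: the levy applies only when AI-driven workforce reduction exceeds
# a threshold, and the base is the share of profit attributed to AI gains.

from dataclasses import dataclass


@dataclass
class CompanyYear:
    profit: float           # total annual profit
    ai_profit_share: float  # fraction of profit attributed to AI-driven gains (0-1)
    headcount_before: int   # workforce before AI adoption
    headcount_after: int    # workforce after AI adoption


def ai_tax(c: CompanyYear, rate: float = 0.10,
           reduction_threshold: float = 0.05) -> float:
    """Return a hypothetical AI tax liability for one fiscal year."""
    reduction = 1 - c.headcount_after / c.headcount_before
    if reduction <= reduction_threshold:
        return 0.0  # modest reductions fall below the levy's scope
    taxable_base = c.profit * c.ai_profit_share
    return taxable_base * rate


# Example: a firm that cut 20% of staff after AI adoption, with 40% of its
# profit attributed to AI-driven productivity gains, would owe 10% of that share.
example = CompanyYear(profit=1_000_000_000, ai_profit_share=0.4,
                      headcount_before=50_000, headcount_after=40_000)
print(f"AI tax owed: {ai_tax(example):,.0f}")  # 40,000,000
```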
At its core, the AI tax is a mechanism for the social redistribution of technological dividends. As AI develops rapidly, companies inevitably choose to replace human labor on a large scale to cut costs and boost profits, an outcome consistent with the logic of capital. However, the workers who are displaced face a harsh reality: disappearing jobs, outdated skills, and, in many cases, disrupted livelihoods. This rupture cannot be bridged by individual effort alone. If policy leaves the beneficiaries untouched while vulnerable groups bear the risks, the result will not only be unjust but could, in the long run, lead to severe social instability. The AI tax therefore serves as a key mechanism for balancing the tension between technological progress and social stability. It not only helps mitigate the structural unemployment risks brought about by AI but also establishes a cost boundary that encourages companies to adopt AI technologies more responsibly.
Similar taxation models can be found elsewhere. In addressing carbon emissions, for instance, many governments have embraced the principle of the carbon tax, requiring polluters to bear the external costs of their actions. By the same logic, the employment displacement caused by AI can be viewed as a form of “social pollution”: its external costs should not be borne solely by the unemployed, but shared by the direct beneficiaries of AI, namely the corporations that increase profits through workforce reductions. From the perspective of institutional evolution, the AI tax could become the first special tax in human history designed in response to a general-purpose technological revolution. Its significance extends beyond economic regulation: it also defines the ethical boundaries of technological progress and lays the groundwork for a new social contract between technology, capital, and society.
Of course, imposing an AI tax does not mean treating AI technology as an enemy. On the contrary, it represents an attempt to humanize AI through institutional design. The concept emphasizes that technological progress must take employment ethics into account, and that corporate profitability must also reflect its social consequences. Moreover, the AI tax is not intended as a purely punitive measure. It can be structured as a phased, proportional, or even refundable incentive mechanism. For instance, companies that pursue human–machine collaboration rather than outright human replacement could receive tax rebates for maintaining employment density. Similarly, firms that invest in retraining programs to help employees transition into AI-related roles could qualify for tax credits. This flexible design approach would allow the AI tax to function as both a regulatory and an incentive-based policy instrument, promoting a balance between technological efficiency and social responsibility.
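As a rough illustration of this incentive-based design, the sketch below extends the earlier calculation with a hypothetical employment-density rebate and a retraining tax credit. All rates and caps are assumptions made purely for exposition, not proposed policy figures.

```python
# Hypothetical rebate and credit adjustments layered on the gross AI tax.
# Rewards human-machine collaboration (employment density retained) and
# investment in retraining; every parameter here is an illustrative assumption.


def adjusted_ai_tax(gross_tax: float,
                    employment_density_kept: float,
                    retraining_spend: float,
                    rebate_rate: float = 0.5,
                    credit_rate: float = 0.25,
                    credit_cap_share: float = 0.5) -> float:
    """Apply an employment-density rebate and a capped retraining credit.

    employment_density_kept: fraction of pre-AI roles retained through
        human-machine collaboration rather than outright replacement (0-1).
    retraining_spend: spending on transitioning employees into AI-related roles.
    """
    rebate = gross_tax * rebate_rate * employment_density_kept
    credit = min(retraining_spend * credit_rate, gross_tax * credit_cap_share)
    return max(gross_tax - rebate - credit, 0.0)


# Example: a firm owing 40M gross that retained 80% of affected roles and
# spent 20M on retraining sees its liability fall to 19M.
print(f"Adjusted tax: {adjusted_ai_tax(40_000_000, 0.8, 20_000_000):,.0f}")
```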
More importantly, the establishment of an AI tax would compel companies to reconsider the relationship between technological investment and organizational structure. When automation is no longer a “zero-cost substitution”, and every round of workforce reduction entails a corresponding policy cost, businesses will naturally begin to weigh the long-term social implications of their technological upgrades. Compared with today’s trend of blindly pursuing AI, such institutional regulation would encourage capital to adopt a more rational and balanced approach to AI development. This, in turn, would enhance the overall efficiency and coherence of technological adoption across society, creating a more sustainable and socially attuned model of innovation.
As AI has become an irreversible force of our era, we neither can nor should attempt to halt the progress of technology itself. What we can, and must, do is guide that progress through institutional design, ensuring that it ultimately serves the public good. The AI tax embodies precisely this effort. It is a concrete and actionable policy instrument that responds to the real and pressing social challenges emerging behind technological advancement. More than a fiscal measure, it is an expression of human rationality and responsibility, and of our collective will to ensure that, even in the face of overwhelming technological power, progress remains aligned with humanity’s broader ethical and social interests.
Final analysis conclusion:
The AI tax is more than just a policy idea for managing technology. It is a way for society to protect itself during a time of rapid change. It tackles the real challenges of job displacement caused by AI and shows that public policy is starting to keep pace with technological innovation. In an uncertain future shaped by AI, this tax offers a practical path that balances efficiency with fairness.
______________
Chen Li is an Economic Research Fellow at ANBOUND, an independent think tank.
