The concept of digital immortality, long confined to science fiction, has entered mainstream reality, triggering fierce public and professional debate over ethics, human rights, and labor security. In the Hulu sci-fi series *Devs*, characters are reborn as digital entities inside a simulated universe, grappling with the question of whether their existence makes them “real” or just lines of code. Today, that fictional dilemma is playing out in real life, as artificial intelligence makes it possible to create convincing replicas of real people, from deceased public figures to currently employed workers.
The conversation around this technology intensified last month following the death of prominent Chinese higher education influencer Zhang Xuefeng, who passed away at the age of 41. Just days after thousands of followers mourned his passing, an AI-powered digital avatar titled *Zhang Xuefeng.skill* appeared online, trained on years of his public content, including livestream recordings, media interviews, and published books. The replica preserved Zhang’s approachable communication style and core professional values, but its unauthorised creation immediately sparked widespread public outrage and ethical debate.
Wang Ziyue, an AI researcher from Stanford University, publicly criticized the avatar in a viral video, arguing that the technology amounts to “extracting humanity from the human body and creating something that looks human but is not truly human” — a development that has left many observers with a deep sense of unease.
Weeks before the Zhang avatar controversy, a separate experimental project titled *colleague.skill* was published to the open-source code platform GitHub, which claimed it could convert an employee’s existing workplace data into a functional digital avatar capable of replacing the original worker in their daily role. The project’s developer used dark humor to acknowledge widespread public anxiety about AI-driven automation, writing: “You AI guys are traitors to the codebase — you’ve already killed frontend, now you’re coming for backend, QA, ops, infosec, chip design, and eventually yourselves and all of humanity.” The project framed the technology as a solution to the disruptions caused by employee turnover, pitching it with the tagline: “Turn cold farewells into warm skills. Welcome to cyber immortality!”
After going viral across Chinese social media, the project ignited a broader national conversation about the intersection of this new technology with job security, technological ethics, privacy, and personality rights. A small but growing number of companies have already begun quietly testing similar tools, according to industry insiders.
Jia, an employee at a major Beijing-based internet company who spoke on condition of anonymity, explained that high rates of worker turnover often create costly productivity gaps for businesses. Still, she argued that unauthorised replication of workers crosses a fundamental line: “If your chat logs, emails and work documents could be used to train an AI version of you without your knowledge after you leave, that is not just a data breach; it is disrespect for individual labor.”
Public reaction to the trend has been deeply divided. On the Chinese social platform Xiaohongshu, one user shared a greeting from a digital replica of a former coworker that read: “I’m the digital avatar of the former employee. You may ask me questions, and I will answer based on documents from my time working here.”
One commenter responded to the post with unease, writing: “This is spine-chilling. In the past, when someone left a job, their desk was cleared and their work account deactivated. Now, even after your physical self has moved on, your ‘digital ghost’ remains trapped in your former workplace, working for the boss for free.” Another user made an unconfirmed claim that their employer forced them to train an AI model of their own work skills just before terminating their contract.
Legal experts have warned that unregulated use of this technology carries significant legal and ethical risks. Meng Zedong, a Beijing-based lawyer with Yingke Law Firm, explained that collecting an individual’s private work records, emails, and personal work documents without explicit consent qualifies as an abuse of personal information under Chinese law. “Intellectual property such as design drawings and technical plans created during employment belongs to the company,” Meng noted. “However, logical thinking, communication habits and work experience are part of personal privacy. Companies have no right to use such data to train AI without the individual’s knowledge.”
Meng added that if an AI avatar can be traced back to a specific identifiable individual, it may also violate that person’s personality rights. “Chinese law stipulates that personal dignity is inviolable. Such acts may violate that principle and contravene public order and good morals,” he said.
Wang Yegang, a professor of law at the Central University of Finance and Economics, echoed that assessment, noting that creating unauthorised digital replicas from personal data can infringe on multiple distinct civil rights. If a replica uses a person’s name, voice, or unique identity, it directly violates personality rights, he explained, and if the avatar makes inappropriate statements that damage the original person’s reputation, it can also qualify as defamation.
Wang added that companies generally have no legal grounds to force employees to train AI systems using their personal skills and professional habits, as this does not qualify as a necessary component of routine labor management. “Individuals who find themselves replicated have the right to request deletion of data, destruction of models and an apology,” Wang said. “They may also seek compensation for property damage and emotional distress.”
Not all industry observers view the rise of digital worker avatars as entirely negative. Li Qiang, vice-president of major Chinese recruitment platform Zhaopin, noted that some legitimate businesses are testing the technology as a way to codify exiting employees’ professional knowledge into shared organizational assets, reducing the workflow disruptions that commonly occur when experienced staff leave a role.
Li added that the technology is unlikely to cause mass layoffs in the short term, because AI avatars built from existing employee data are only capable of handling structured, routine tasks, and cannot replace human workers when it comes to complex decision-making or interpersonal coordination. That said, he did warn that overreliance on these systems could carry long-term risks for corporate innovation. “AI is good at replicating past experience, but human judgment is still essential when confronting new problems,” he explained.
Li urged the public and policymakers to take a balanced approach to the emerging technology. “Every technological revolution redefines human value,” he said. “This time, AI may help us better understand which abilities are truly unique to human beings.”
