分类: technology

  • OpenAI boss ‘deeply sorry’ for not telling police of Tumbler Ridge suspect’s account

    The chief executive and co-founder of leading artificial intelligence developer OpenAI has issued a formal public apology to the small Canadian community of Tumbler Ridge, after the company faced widespread criticism for failing to notify law enforcement of a problematic ChatGPT account tied to the perpetrator of a deadly January mass shooting.

    In a personal letter released publicly Thursday, Sam Altman expressed deep regret that OpenAI did not alert Canadian police to the account, which the company banned six months before the attack for violating content policies. “The pain your community has endured is unimaginable,” Altman wrote in the correspondence addressed directly to Tumbler Ridge residents. “While I know that words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.” Altman, who is a parent to a young child, added, “I cannot imagine anything worse in this world than losing a child.”

    The shooting, carried out by 18-year-old Jesse Van Rootselaar, left eight people dead and nearly 30 others injured, making it one of the deadliest mass shootings in British Columbia’s history. Several of the victims were young secondary school students. Van Rootselaar died from a self-inflicted gunshot wound during the incident, law enforcement confirmed after the attack.

    In the weeks following the January shooting, OpenAI acknowledged that it had identified and banned Van Rootselaar’s ChatGPT account months before the attack over inappropriate usage. However, the company chose not to share the account information with police at the time, arguing that the activity on the account did not meet OpenAI’s internal threshold for a credible, imminent plan to inflict serious physical harm on others. Altman explained in his letter that he delayed the public apology out of respect for the community’s grieving process, noting that time was needed to allow residents to mourn before any public statement.

    An OpenAI spokesperson confirmed the authenticity of Altman’s letter to reporters, but declined to provide any additional comment beyond the content of the correspondence. The apology comes after the parents of a student who was severely wounded in the school attack filed a lawsuit against OpenAI. The lawsuit alleges that the company had clear, specific knowledge of the shooter’s long-term planning for a mass casualty event but failed to take any action to warn authorities or prevent the attack.

    This incident is not the only legal and regulatory scrutiny OpenAI is facing over connections between its AI chatbot and mass violent attacks. The company is already the subject of an active criminal investigation in Florida, tied to a 2025 shooting at Florida State University that left two people dead and multiple others injured. Authorities are probing the case after the suspect accused in that attack reportedly used ChatGPT to plan his assault.

    In response to growing pressure over AI safety protocols, OpenAI has committed to updating and strengthening its internal safety monitoring systems. In his letter, Altman reaffirmed the company’s commitment to collaboration, writing that OpenAI will continue working with all levels of government to put new safeguards in place that prevent a similar tragedy from occurring in the future.

  • China sends experimental satellites into orbit

    On April 24, 2026, China marked another key milestone in its space-based internet infrastructure development with the successful launch of a batch of experimental satellites from the Xichang Satellite Launch Center located in the southwestern province of Sichuan. According to the China Aerospace Science and Technology Corporation, the country’s top state-owned space contractor, the mission lifted off at 2:35 p.m. Beijing Time, with a veteran Long March 2D liquid-fuel carrier rocket delivering the Space-based Internet Technology Demonstrator series satellites into their pre-planned orbits without incident.

    This launch marks the ninth orbital deployment of satellites for the Space-based Internet Technology Demonstrator program, which began with its inaugural mission in July 2023. Among the new satellites placed into orbit is a platform developed by GalaxySpace, a leading private aerospace firm headquartered in Beijing. This spacecraft is designed to carry out technical trials of several next-generation satellite technologies, including broadband direct-to-device cellular communication, integrated space-ground network architecture, and other core enabling technologies for global satellite internet.

    The launch is part of China’s broader, ambitious plan to build a large-scale low-Earth orbit satellite mega-constellation, which will consist of approximately 13,000 individual satellites working together to deliver comprehensive global internet coverage to users across the planet. This infrastructure will help bridge the digital divide for remote and underserved regions that lack access to traditional terrestrial broadband networks.

    Produced by the Shanghai Academy of Spaceflight Technology, the Long March 2D rocket that carried out this mission is a proven workhorse of China’s launch fleet. Powered by liquid propellants, the rocket generates 300 metric tons of liftoff thrust, and is certified to deliver payloads of up to 1.2 tons into a 700-kilometer sun-synchronous orbit, making it well-suited for the deployment of this class of experimental communications satellites.

  • AI model Zeta to expand use of Tibetan language

    A groundbreaking new artificial intelligence model tailored specifically for the Tibetan language has cleared national regulatory approval and entered public pilot testing, marking a major milestone in expanding digital inclusion and technological development across China’s Tibetan-speaking regions. Developed by the State Key Laboratory of Tibetan Intelligence at Qinghai Normal University, the Zeta model — Qinghai’s first large-scale multimodal Tibetan language AI system — was officially unveiled in Beijing on April 22, 2026, opening new doors for innovation across sectors from cultural preservation to public services.

    Unlike earlier Tibetan language AI tools that were limited to single functions such as basic text translation or speech recognition, Zeta was built from the ground up to deliver comprehensive, full-spectrum language processing capabilities across all major forms of linguistic interaction. According to Dorlha, executive deputy director of the development laboratory, the model supports integrated listening, speaking, reading, writing and translation across the three primary regional Tibetan dialects: Amdo, U-Tsang and Kham.

    This broad capability set allows Zeta to tackle a wide range of specialized use cases that were out of reach for previous tools. Its core innovative functions include mixed-language document recognition, automated audiobook production, intelligent retrieval of ancient Tibetan literature, and real-time intelligent subtitle transcription. For industry-specific applications, the model also offers built-in features for digital broadcasting, agricultural information dissemination and tourist translation services, making it a flexible resource for public and private stakeholders across media, agriculture, tourism, healthcare, education and governance.

    To address longstanding technical barriers in Tibetan language AI development — most notably the historical lack of large-scale, high-quality training data — the Zeta development team assembled an expansive, diverse training corpus. The model’s dataset includes 150 gigabytes of curated high-quality Tibetan text, 87 million parallel multilingual sentence pairs across Tibetan, standard Chinese and English, and 30,000 hours of labeled multi-dialect Tibetan audio recordings. Zeta integrates all three languages into a unified multilingual framework, and pairs custom-developed algorithms with full compatibility for domestic AI infrastructure, delivering proven technical maturity and room for future expansion. It is available in three parameter configurations of 7 billion, 50 billion and 122 billion parameters to accommodate different use cases and computing environments, from mobile device deployment to large-scale server-side applications.

    Nyima Tashi, director of the State Key Laboratory of Tibetan Intelligence and a professor at Xizang University, emphasized that the launch of Zeta and its supporting applications will drive high-quality economic and social development across China’s Tibetan regions. Moving forward, the research team plans to continue expanding the model’s capabilities by opening its multimodal functions through public application programming interfaces, fostering deeper collaboration between academic institutions and private sector enterprises, and building a complete, self-sustaining ecosystem for Tibetan language AI innovation. The lab also plans to increase research investment, strengthen specialized talent training, and advance partnerships across industry, academia and research institutions to further refine the technology.

    Zeta’s launch comes just one month after the release of Deep-Zang, the first large Tibetan language model developed in the Xizang Autonomous Region, giving users across Tibetan-speaking regions a growing range of specialized AI tools to meet their needs. For Tibetan communities and users, the innovation carries far more meaning than just technological progress. Tenzin Palden, a Tibetan student studying at Shandong Agricultural University, noted that Zeta addresses long unmet needs for advanced Tibetan language digital tools, offering new hope for preserving Tibetan linguistic and cultural identity in an increasingly digital-first world.

    “By addressing historical challenges like limited datasets and diversity in Tibetan dialects, this innovation provides much-needed momentum for bridging the wisdom of Tibetan traditions with modern development,” Tenzin Palden said. “It is not just a technological achievement but also a reflection of the protection and transmission of ethnic culture.”

  • China’s DeepSeek rolls out a long-anticipated update of its AI model

    As competition in artificial intelligence between the United States and China reaches new levels of intensity, prominent Chinese AI startup DeepSeek rolled out previews of its highly anticipated next-generation V4 model lineup on Friday, marking another major milestone in China’s push to advance its domestic AI ecosystem independent of U.S. technology.

    The V4 release comes months after DeepSeek’s specialized R1 reasoning model upended global tech markets earlier this year, with the startup claiming it outperformed comparable U.S.-built models at a far lower cost. R1 quickly became a global symbol of China’s rapid progress in closing the AI gap with the United States, and expectations for V4 have been building among developers and users eager to compare its capabilities with leading models from U.S. industry leaders OpenAI, Anthropic, and Google. Some analysts had initially projected that V4 would launch more than two months earlier, timed to coincide with the Lunar New Year holiday.

    DeepSeek’s new V4 family includes two core open-source variants: the high-performance V4 Pro and the lightweight V4 Flash. The startup says the new models deliver sweeping upgrades across three key areas: general knowledge retention, logical reasoning, and agentic functionality — the ability for AI to complete complex, multi-step workflows and tasks without constant human input. One of the most notable shifts in the new lineup is its underlying hardware: unlike prior DeepSeek models that relied on U.S.-made chips from industry leader Nvidia, V4 is powered by chips developed by Chinese tech giant Huawei.

    In a statement accompanying the launch, DeepSeek shared internal benchmark results comparing V4 to top U.S. models. The company notes its top-tier V4 Pro Max delivers superior performance on standard reasoning tests compared to OpenAI’s recently released GPT-5.2 and Google’s Gemini 3.0-Pro, though it falls slightly short of OpenAI’s GPT-5.4 and Google’s Gemini 3.1-Pro. The V4 Pro, DeepSeek claims, outperforms Anthropic’s mid-tier Claude Sonnet 4.5 in agentic capabilities and comes close to matching Anthropic’s flagship Claude Opus 4.5. For everyday simple agent tasks, the more efficient V4 Flash matches the performance of V4 Pro, with reasoning capabilities that nearly equal its higher-end counterpart, per the company’s testing.

    The V4 launch came just hours after OpenAI introduced its own newest model, GPT-5.5, in a clear sign of the breakneck pace of competition in the global AI race. Both V4 variants also include a game-changing 1 million token context window — the measure of how much text and data an AI can process and retain in a single session. That marks an eightfold increase from the 128,000 token window supported by DeepSeek’s previous V3 model, released in late 2024, and enables the new models to handle far larger datasets, long documents, and complex extended conversations more effectively. The startup also emphasized the V4 lineup is designed to run far more efficiently than prior generations.
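    The context-window figures cited above check out in round numbers; a quick sanity check using only the values quoted in this article:

```python
# Context-window growth from DeepSeek V3 to V4, per the figures cited above:
# an eightfold jump from V3's 128,000-token window.
v3_window = 128_000          # tokens, V3 (late 2024)
v4_window = v3_window * 8    # the stated eightfold increase
print(f"{v4_window:,}")      # 1,024,000 tokens — "1 million" in round numbers
```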

    Unlike closed, proprietary models from leading U.S. AI developers, DeepSeek makes its core technology open source, allowing outside developers to modify, adapt, and build new tools on top of its models. The company also offers a free publicly accessible chatbot for web and mobile users, helping it gain a large global user base. A January report from Microsoft found DeepSeek usage has grown rapidly across many developing nations, especially in regions where Huawei smartphones dominate the consumer market.

    Huawei confirmed its compatibility with the new models in a separate statement Friday, noting its Ascend chip platform and supporting ecosystem work seamlessly with DeepSeek V4. Marina Zhang, an associate professor at the University of Technology Sydney, described the V4 rollout as a “pivotal milestone for China’s AI industry”, particularly amid rising global pressure for countries to build self-reliance in critical emerging technologies. She added that the partnership with Huawei demonstrates that a fully functional Chinese AI ecosystem independent of Nvidia’s market dominance is technically achievable, even as U.S.-China technological decoupling continues.

    Lian Jye Su, chief analyst at global technology research firm Omdia, concluded that “Based on the benchmark results, it does appear DeepSeek V4 is going to be very competitive against its U.S. rivals.”

    Not all industry observers are convinced the V4 represents a transformative leap forward, however. Ivan Su, senior equity analyst at investment research firm Morningstar, argued that while V4 is a solid, capable update to DeepSeek’s product lineup, it does not deliver the same level of groundbreaking innovation that R1 introduced earlier this year. He noted that domestic competition within China’s AI sector has intensified dramatically since R1’s launch, and that independent third-party testing is needed to verify DeepSeek’s own performance claims, since the startup’s internal comparisons cannot yet be confirmed by outside experts.

    The V4 launch comes amid ongoing friction between U.S. AI firms and Chinese developers over intellectual property. Earlier this year, Anthropic publicly accused DeepSeek and two other China-based AI labs of running “industrial-scale campaigns” to steal its technology via a process called knowledge distillation, a method that trains a smaller model by feeding it the outputs of a more powerful competitor model to replicate its capabilities. OpenAI made similar accusations in a letter to U.S. lawmakers, and this week Michael Kratsios, chief science and technology adviser to U.S. President Donald Trump, repeated the claims, accusing Chinese tech firms of distilling leading U.S. AI systems to “exploit American expertise and innovation.”

    The Chinese embassy in Washington has pushed back against these allegations, framing them as unjustified efforts by the United States to stifle competition from Chinese tech companies.

  • ‘Clearly me’: AI drama accused of stealing faces

    The rapid expansion of artificial intelligence has opened a new chapter of ethical and legal uncertainty, highlighted by a recent high-profile case in China’s booming microdrama industry, where two creators have accused a viral AI-generated series of stealing their likenesses without consent to portray villainous characters.

    Christine Li, a 26-year-old model and social media influencer based in Hangzhou, never auditioned for, nor agreed to appear in, the AI microdrama *The Peach Blossom Hairpin*. The show, which premiered last month on Hongguo — a leading short-form microdrama platform owned by ByteDance, the parent company of TikTok — gained significant traction before the controversy broke. Li only learned of her unauthorised appearance when fans reached out to alert her that the show’s lead antagonist was an obvious digital replica of her, created from public photos she had posted to her social media channels two years prior.

    What made the experience even more distressing for Li was the nature of the character: her digital replica was scripted to commit violence against other women and to abuse animals. “I was genuinely shocked. It was clearly me,” Li told Agence France-Presse in an interview. “I also felt a deep fear. I kept wondering what kind of person would do something like this.”

    Li is not alone in her experience. A male stylist specialising in traditional Chinese clothing and cosmetics, who requested the pseudonym Baicai to protect his privacy, also discovered his likeness had been stolen to play the role of Li’s character’s husband, another unsavoury, “sleazy” antagonist. Like Li, Baicai had shared public costume photos on Xiaohongshu, China’s Instagram-style social platform, which were used to generate his digital twin. Both individuals confirmed to AFP that their original photos bear an unmistakable resemblance to the characters featured in the series.

    Baicai shared the same concerns as Li, worrying that the negative portrayal could damage his personal reputation and harm future career opportunities. “There are probably plenty of cases with unknown victims,” he noted, pointing to the widespread lack of oversight for unauthorised AI deepfake use in the fast-growing sector.

    Microdramas, ultra-short online soap operas with episodes running just two to three minutes, have exploded in popularity across China and global markets in recent years. As of October 2024, Hongguo alone counted roughly 245 million monthly active users, hosting thousands of free AI-generated and live-action bite-sized shows. The industry has turned to AI as a low-cost tool to speed up production and cut expenses in the highly competitive, multi-billion-dollar market.

    However, the case has exposed critical gaps in content moderation and regulatory oversight. Even after the story gained public attention and sparked widespread outcry over AI ethics, AFP confirmed that *The Peach Blossom Hairpin* remained online for days before it was removed, with the unauthorised deepfake characters only quietly swapped out after public pressure grew.

    In early April, Hongguo released an initial statement confirming it had removed the series after finding producers violated platform rules and contractual agreements. In a follow-up statement released earlier this month, the platform said it would implement broader reforms to strengthen content review and creator authorisation protocols. It also noted that it had already removed 670 AI-generated microdramas that violated platform regulations, and would issue harsher penalties for repeat offenders. When contacted by AFP for comment, ByteDance directed reporters to the two existing Hongguo statements.

    Two Chinese companies are linked to the production of *The Peach Blossom Hairpin*: one is associated with a verified account on Douyin, the Chinese version of TikTok, that published the series, while the other is listed as the official producer on a Chinese government registration portal. AFP contacted both firms for comment but received no response. Li and her legal team are still working with Hongguo to confirm the exact identity of the responsible creator, a necessary step before moving forward with the planned lawsuit against both the producers and the platform.

    Current Chinese regulation places primary responsibility for screening potentially problematic content on hosting platforms, according to rules set by the National Radio and Television Administration. Platforms that fail to complete mandatory content reviews face forced removal of non-compliant content. If platforms are aware of intellectual property or rights infringement and fail to take action, affected individuals can report the issue to Chinese cyberspace regulators, who can levy administrative penalties, explained Zhao Zhanling, a partner at Beijing’s Javy Law Firm.

    Yijie Zhao, Li’s lawyer from Henan Huailv Law Firm, noted that using AI to feature an individual in a demeaning, negative role without explicit permission may violate both portrait rights and reputation rights under Chinese law. New national regulations that took effect this month require all AI-generated microdrama content to be officially registered and licensed, but legal experts note that bad actors can still avoid accountability by registering temporary shell companies or hosting content on overseas servers to hide their activity.

    In 2024, a Beijing court ordered a company to pay compensation and issue a public apology to a celebrity whose likeness had been used without permission to create an AI deepfake for inappropriate purposes, but lawyers note that plaintiffs who are not public figures with high commercial value often receive relatively low compensation for such violations. For Li, the damage extends far beyond financial compensation: she worries that her connection to the controversy will damage her reputation and cost her future modelling opportunities, leaving her permanently associated with the scandal.

    Baicai has not yet launched formal legal action, but he joins Li in calling for stronger regulatory and platform safeguards to prevent similar unauthorised deepfake misuse from happening to other people. The case has reignited global conversations around the risks of AI deepfake technology, which has already raised widespread concerns over job displacement for actors, as well as its misuse for scams, disinformation and non-consensual intimate content.

  • Trump administration vows crackdown on Chinese companies ‘exploiting’ AI models made in US

    As China rapidly closes the technological gap with the United States in global artificial intelligence development, the Trump administration has launched a new crackdown on what it frames as unfair exploitation of American AI innovation by foreign firms, with China positioned as the primary target of the policy push.

    In a formal memorandum released Thursday, Michael Kratsios, President Trump’s top science and technology adviser, leveled accusations that foreign entities—most headquartered in China—are running coordinated, industrial-scale campaigns to “distill” core capabilities from leading U.S.-built AI systems, effectively siphoning off American research and development work for their own gain. “Foreign actors are exploiting decades of American expertise and innovation to cut corners on their own AI development,” Kratsios wrote, outlining that the administration would partner with leading U.S. AI companies to map unauthorized extraction activity, reinforce defensive systems, and implement penalties against bad actors.

    The policy announcement lands amid a shifting global AI landscape: the White House has repeatedly framed AI dominance as a critical strategic priority, arguing U.S. leadership is necessary to set global technical norms and secure long-term economic and military advantages. However, a recent analysis from Stanford University’s Human-Centered AI Institute found that the performance gap between the world’s top U.S. and Chinese AI models has “effectively closed” in recent years, eroding the long-held American competitive edge.

    China’s embassy in Washington swiftly pushed back against the accusations, condemning what it called the United States’ “unjustified suppression of Chinese companies.” “China has always been committed to advancing global scientific and technological progress through open cooperation and healthy, fair competition,” embassy spokesperson Liu Pengyu said in a statement, adding that China prioritizes rigorous intellectual property protection for all innovators.

    Kratsios’ memo coincided with a key congressional development that same week: the U.S. House Foreign Affairs Committee gave unanimous, bipartisan backing to a new bill that would establish a formal government process to identify foreign actors that steal core technical details from closed-source, U.S.-owned AI models, and impose punitive measures including economic sanctions against offenders. The bill’s sponsor, Republican Representative Bill Huizenga of Michigan, framed model extraction attacks as a new front in Chinese economic aggression and intellectual property theft. “American AI models are delivering transformative new capabilities that will reshape our economy and national security,” Huizenga said. “It is absolutely critical that we block China from stealing these decades of technological advancement to boost their own strategic position.”

    Tensions over AI extraction first flared last year, when Chinese AI startup DeepSeek launched a high-performance large language model that could compete with products from top U.S. AI giants—at a fraction of their development cost, sending shockwaves through U.S. tech markets. David Sacks, who served as Trump’s AI and crypto advisor at the time, publicly claimed there was substantial evidence that DeepSeek had distilled proprietary knowledge from OpenAI’s leading models to build its own product. OpenAI, the developer behind ChatGPT, echoed these claims in a February letter to U.S. lawmakers, arguing that China should not be allowed to build what it called “autocratic AI” by “appropriating and repackaging American innovation.”

    Shortly after, Anthropic—creator of the popular Claude chatbot—accused DeepSeek and two other China-based AI research labs of running coordinated campaigns to illicitly extract Claude’s core capabilities to improve their own competing models via knowledge distillation, a technique that involves training a smaller, less advanced model on the output of a more powerful, cutting-edge system. While Anthropic acknowledged that distillation can be a legitimate, widely used method for AI training when done with permission, the company argued that it becomes unfair and illicit when competitors use the technique to gain powerful AI capabilities in a fraction of the time and at a tiny fraction of the cost required to develop leading models independently.
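    As a rough illustration of the mechanics being described — not any lab’s actual pipeline — knowledge distillation typically trains a student model to match the temperature-softened output distribution of a teacher. A minimal NumPy sketch of the distillation loss (the function names and example logits here are purely illustrative):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the student's softened outputs to the teacher's.

    Minimizing this pushes the student to reproduce the teacher's full
    output distribution, not just its single top prediction.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    return float(np.mean(kl))

# A student whose outputs already match the teacher incurs zero loss;
# a mismatched student incurs a strictly positive loss.
teacher = np.array([[4.0, 1.0, 0.5]])
matched = distillation_loss(teacher.copy(), teacher)
mismatched = distillation_loss(np.array([[0.5, 4.0, 1.0]]), teacher)
print(matched, mismatched)  # matched ≈ 0.0; mismatched > 0
```

    In a real training loop this loss would be backpropagated through the student; the dispute described above is over whether the “teacher” outputs were harvested from a competitor’s model without permission.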

    However, cross-border knowledge sharing in AI works in both directions. San Francisco-based startup Anysphere, maker of the widely used coding tool Cursor, recently confirmed that its latest flagship product is built on an open-source model developed by Chinese AI firm Moonshot AI, creator of the popular Kimi chatbot.

    Industry and policy experts note that enforcing new restrictions on unauthorized AI distillation will pose massive practical challenges. Kyle Chan, a Brookings Institution fellow based in Washington, D.C., and a leading expert on Chinese technology development, explained that distinguishing unauthorized extraction from the massive volume of legitimate, routine data requests to AI systems is comparable to “looking for needles in an enormous haystack.” That said, Chan added that coordinated information sharing between U.S. AI research labs could help mitigate the risk, and that the federal government can play a key facilitating role in aligning anti-distillation defenses across the private sector.

    While it remains unclear how far the bill will advance through the legislative process, Chan noted that the Trump administration may be hesitant to escalate tensions with Beijing ahead of a planned mid-May state visit by the U.S. president to China, creating uncertainty about how aggressively the new policy will be implemented.

    This reporting included contributions from Matt O’Brien, an AP technology writer based in Providence, Rhode Island.

  • China mass-produces chip-scale atomic clock with ultra-high precision

    China has marked a landmark breakthrough in quantum precision measurement and high-precision timekeeping technology, with the successful mass production of an ultra-compact, fingernail-sized chip-scale atomic clock boasting extraordinary accuracy: it deviates by just one second over 30,000 years of operation. This advancement delivers a robust, high-precision time foundation for critical national strategic sectors ranging from low-Earth-orbit satellites to underwater BeiDou navigation systems, cementing China’s position as a global leader in the field.

    Developed by the Satellite Navigation and Positioning Technology Research Center at Wuhan University in central China’s Hubei Province, and commercialized via spin-off enterprise Zhongke Taifeisi (Wuhan) Technology Co, the finished device occupies just 2.3 cubic centimeters — approximately one-seventh the volume of comparable atomic clock products manufactured in the United States, while delivering matching performance.

    “Time is a fundamental strategic resource. Nations that master the highest precision in timekeeping gain a decisive competitive edge across technology, economics, and even national defense,” explained Chen Jiehua, a professor at the Wuhan University research center and legal representative of Zhongke Taifeisi, in an interview with Hubei’s local newspaper Changjiang Daily. Chen, whose team has spent decades advancing the technology, emphasized the critical link between timing accuracy and navigation performance: “In navigation and positioning, time equals distance. A timing error of just one nanosecond — one billionth of a second — translates to a positioning deviation of 0.3 meters. Even the most accurate consumer timepieces drift by more than 10 seconds annually, which is why holding the ‘power of time’ in China’s own hands has been such a critical national priority.”
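    Chen’s “time equals distance” point is straightforward to verify: navigation signals travel at the speed of light, so a clock error converts directly into a ranging error. A back-of-envelope check of the figures quoted above:

```python
# Navigation signals propagate at the speed of light, so timing error
# converts directly into ranging error: distance = c * dt.
c = 299_792_458                      # speed of light, m/s
ns_error_m = c * 1e-9                # one nanosecond of clock error
print(round(ns_error_m, 3))          # ≈ 0.3 meters, as quoted above

# The clock's quoted stability — one second lost over 30,000 years —
# corresponds to a fractional frequency deviation of roughly 1e-12.
seconds_per_30k_years = 30_000 * 365.25 * 86_400
print(f"{1 / seconds_per_30k_years:.1e}")  # ≈ 1.1e-12
```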

    Unlike traditional timing solutions that rely on satellite calibration, chip-scale atomic clocks provide an independent, stable time reference in environments where satellite signals cannot reach or become compromised. These use cases include underwater exploration, underground infrastructure, deep space missions, and battlefields where global positioning signals are intentionally jammed.

    Traditional large atomic clocks operate by counting the stable frequency signals produced when microwave fields interact with atoms. However, the long wavelength of microwaves imposes hard limits on how small these devices can be made. Chip-scale atomic clocks take a different approach, using microwave-modulated lasers that can be guided through extremely compact spaces. This innovation allows the devices to deliver ultra-high precision while cutting both physical size and power consumption to a small fraction of those of legacy designs.

    Chen highlighted the enormous untapped market potential for the technology, noting the device’s combination of tiny form factor (just a few cubic centimeters) and low power draw (less than 200 milliwatts). For example, on the seabed where satellite signals cannot penetrate and solar power is unavailable, autonomous synchronization systems require both ultra-precise time references and long-duration low-power operation — a combination that makes the new chip-scale atomic clock an ideal core frequency source component.

    To date, Zhongke Taifeisi is the first and only Chinese company to achieve large-scale commercial production of chip-scale atomic clocks. The devices have already been successfully deployed in real-world use cases, including time synchronization systems for underwater BeiDou navigation, low-Earth-orbit satellites, and drone swarms. By the end of 2024, several hundred units had been sold, with sales continuing a steady upward trajectory through 2025.

    Gou Fei, a representative of Yangtze River Industry Group — which holds a more than 20% stake in Zhongke Taifeisi — noted that quantum technology is designated as a top strategic priority for China’s future industrial development, with quantum precision measurement standing out as a key subfield where chip-scale atomic clocks act as a core enabling device.

    “Professor Chen Jiehua’s team has developed the world’s smallest chip-scale atomic clock, and in doing so has completely broken the long-standing foreign technology monopoly in the sector,” Gou said. “The product delivers a comprehensive leap forward: it is smaller than competing alternatives, matches or outperforms them in functionality, and supports scalable mass production. This achievement places China at the cutting edge of the global quantum industry.”

    Despite this milestone, mass market adoption still faces hurdles: currently, production is constrained by the performance limitations and high cost of imported laser components. To address this gap, Gou noted that Yangtze River Industry Group will deploy its capital and industrial resources to help Zhongke Taifeisi achieve breakthroughs in key domestic component technologies, scale up automated production to bring down costs, and expand use cases across both military and civilian communications networks. The expansion will also strengthen Hubei’s already strong competitive position in the global quantum precision measurement sector.

    This breakthrough aligns directly with China’s 15th Five-Year Plan (2026-2030) for national economic and social development, which prioritizes achieving key technology breakthroughs in quantum precision measurement and positioning quantum technology as a core new growth driver for the national economy.

    Globally, the sector is also growing rapidly. According to QYResearch, a global industrial market research firm with dual headquarters in Beijing and Los Angeles, the global market for chip-scale atomic clocks hit 405 million yuan ($60 million) in sales last year, and is projected to grow to 737 million yuan by 2032, reflecting rising demand across defense, navigation, telecommunications and scientific research sectors worldwide.

  • Apple’s Tim Cook to step down as CEO

    Apple’s Tim Cook to step down as CEO

    SAN FRANCISCO — One of the most consequential leadership transitions in modern tech history is set to unfold at Apple this year: long-serving chief executive Tim Cook will step down from his top role this September, passing the reins to 22-year company veteran John Ternus as the Silicon Valley giant navigates a rapidly shifting global technology landscape reshaped by the artificial intelligence boom.

    The 65-year-old Cook, who has steered Apple for 15 years after taking over from the company’s iconic co-founder Steve Jobs when Jobs stepped down in 2011 amid failing health, will transition into the role of executive chairman of Apple’s board of directors after his exit from the CEO post. The long-awaited announcement, made public this Monday, puts to rest years of market and industry speculation about who would inherit the leadership of the world’s most valuable company.

    Cook first joined Apple back in 1998, working his way up through the executive ranks to become chief operating officer, where he oversaw the iPhone maker’s famously complex global supply chain and laid the operational groundwork for Apple’s explosive growth in the 2000s. When he stepped into the CEO role in 2011, Cook inherited a company at the peak of its early success, and over his 15-year tenure, he delivered transformative growth: he expanded Apple’s product portfolio far beyond its core iPhone line, and guided the company to a staggering market valuation of roughly $4 trillion in current share price terms.

    “It has been the greatest privilege of my life to be the CEO of Apple and to have been trusted to lead such an extraordinary company,” Cook said in an official statement announcing the transition.

    Arthur Levinson, Apple’s outgoing board chairman, lauded Cook’s remarkable tenure at the company’s helm, noting that “Tim’s unprecedented and outstanding leadership has transformed Apple into the world’s best company. His integrity and values are infused into everything Apple does.”

    Ternus, the incoming CEO, first joined Apple’s product design team back in 2001, working his way up to senior vice president of hardware engineering over the course of more than two decades at the company. He has been a core contributor to nearly all of Apple’s flagship product launches over that period, playing key roles in the development of iPhones, iPads, Apple Watch, and the modern line of Mac personal computers.

    For Ternus, the opportunity to lead Apple comes after a career shaped by the company’s two most recent leaders: “Having spent almost my entire career at Apple, I have been lucky to have worked under Steve Jobs and to have had Tim Cook as my mentor,” he said in the official announcement.

    The leadership transition comes at a pivotal moment for Apple, as the global tech industry races to integrate generative artificial intelligence into consumer products and services, putting new competitive pressure on established players to innovate or risk falling behind to faster-moving rivals. Ternus’ deep background in hardware development also signals that Apple will continue to tie its AI innovation to its core integrated product ecosystem, a strategy that has defined the company’s success for decades.

  • Chinese engineers plan to study building greenhouse on lunar surface

    Chinese engineers plan to study building greenhouse on lunar surface

    BEIJING, April 22 — In an announcement made at a Beijing press conference this week, a senior leader from China’s lunar exploration program has revealed that Chinese space engineers are set to launch preliminary research into constructing a functional greenhouse on the surface of the moon.

    Wang Qiong, senior space engineer and deputy chief designer of China’s groundbreaking Chang’e 6 mission at the China National Space Administration (CNSA) Lunar Exploration and Space Program Center, outlined that the initiative leverages cutting-edge lunar construction technologies to address one of the most persistent hazards of lunar exploration: the extreme environment of the lunar night. Spanning 14 Earth days, the lunar night sees temperatures plummet to as low as minus 200 degrees Celsius, creating life-threatening and equipment-damaging conditions for lunar rovers, robotic systems, and any future human expeditions. The proposed greenhouse would act as a temperature-controlled shelter, allowing robotic assets to survive the long, frigid dark period more reliably than existing power and thermal management systems.

    As China’s lunar exploration program shifts its long-term strategy from short-duration robotic missions to sustainable infrastructure that will support eventual human stays on the moon, this research fills a critical gap in current lunar habitat design, Wang noted. A functional lunar greenhouse could also lay early groundwork for testing in-situ resource utilization and closed-loop life support systems that will be essential for future crewed lunar bases.

    The announcement of the greenhouse research comes on the heels of a series of major scientific breakthroughs achieved by the Chang’e 6 mission, which made history as the first mission ever to return geological samples from the far side of the moon. In June 2024, the Chang’e 6 return capsule touched down in northern China, carrying 1,935.3 grams of far-side lunar material back to Earth. Analysis of these unprecedented samples has already allowed Chinese scientists to reconstruct, for the first time in global lunar science, the complete evolutionary geological history of the moon’s little-studied far side.

    Wang also highlighted the collaborative, open nature of China’s lunar exploration efforts, noting that the Chang’e 6 mission successfully carried international payloads from partner space agencies across the globe. The mission hosted a Pakistani CubeSat, plus three independent scientific instruments from France, the European Space Agency (ESA), and Italy. All international cooperative instruments have already returned data that exceeded pre-mission performance expectations, demonstrating the value of global collaboration in advancing deep space exploration.

    The plan to research a lunar greenhouse marks another step forward in China’s expanding lunar exploration roadmap, building on the historic success of Chang’e 6 to push the boundaries of what is possible for long-term lunar activity.

  • Most serious cyberattacks against the UK now from Russia, Iran and China, cyber chief will say

    Most serious cyberattacks against the UK now from Russia, Iran and China, cyber chief will say

    At the annual CyberUK conference hosted in Glasgow, Scotland, the leader of the United Kingdom’s top cyber defense body will deliver a stark wake-up call this Wednesday: the gravest cyber threats facing the nation today are not the work of criminal gangs, but of hostile state actors based in Russia, Iran, and China. Richard Horne, chief executive of the National Cyber Security Centre (NCSC) — a division of the UK’s signals intelligence agency GCHQ — will frame this growing threat against a backdrop of unprecedented geopolitical upheaval, arguing the world is now experiencing the most dramatic geopolitical shift seen in modern history. Previews of Horne’s speech, shared with journalists ahead of the event, emphasize that British private and public sector organizations cannot afford to delay upgrading their cyber defenses, as large-scale state-sponsored attacks could target the UK rapidly if the nation becomes entangled in a major international conflict.

    Horne’s warning aligns with a growing chorus of alarm across Europe, where Nordic and Central European nations have repeatedly flagged state-linked hacking campaigns targeting critical national infrastructure in recent months. Per Horne’s prepared remarks, the NCSC currently responds to roughly four nationally significant cyber incidents every week. While criminal activity, most notably ransomware attacks, remains the most common cyber challenge for UK entities, the most destructive and high-stakes threats stem from operations backed directly or indirectly by foreign governments.

    This characterization of an increasingly dangerous global security landscape echoes recent remarks from other top UK intelligence leaders. Back in December, Blaise Metreweli, head of the UK Secret Intelligence Service (MI6), noted that the international order is far more contested and dangerous than it has been in decades, with the UK now operating in a gray zone that falls somewhere between formal peace and open war. “Let’s be clear, cyberspace is part of that contest,” Horne will reiterate in his Glasgow address.

    Horne will outline distinct threat profiles for each of the three major hostile state actors. China’s intelligence and military apparatuses have demonstrated an eye-watering level of technical sophistication in their global cyber operations. Iran, he will add, is very likely using cyber tools to repress British dissidents and activists within the UK itself, targeting individuals the Iranian regime views as threats to its rule. As for Russia, Horne will note that the Kremlin has refined and tested its cyber tactics through its full-scale invasion of Ukraine, and is now deploying those battle-hardened techniques far beyond the Ukrainian battlefield, carrying out sustained hybrid cyber operations targeting the UK and the wider European continent.

    A core message of Horne’s speech is a call to action for British organizations: corporate and institutional leaders must study how cyber operations have been deployed in active conflict to build their own defensive resilience. Unlike ransomware attacks, which often can be resolved (at great cost) through payment of a ransom, large-scale state-sponsored cyberattacks in a conflict scenario leave no such exit. No amount of money will buy back access to hijacked systems or stolen data, Horne will stress, meaning every organization must map the full scope of its vulnerability and harden defenses before a crisis hits.

    Recent cyber incidents across Northern Europe back up the urgency of this warning. Last Friday, Swedish authorities confirmed that a pro-Russian hacking group with ties to Russian intelligence services was responsible for a cyberattack on a Swedish heating plant carried out last year. Carl-Oskar Bohlin, Sweden’s civil defense minister, drew a direct line between that incident and a coordinated series of attacks in Poland last December, which hit combined heat and power plants supplying nearly 500,000 customers alongside multiple wind and solar farms. Polish investigators later concluded the hackers behind that assault were directly linked to Russian intelligence services.

    Those attacks are not isolated. Norwegian authorities have tied an April 2025 hack that disrupted water flow from a Norwegian dam to Russian actors, while Danish officials confirmed a 2024 cyberattack on a Danish water utility that left hundreds of homes without water was also linked to the Kremlin. The Associated Press has tracked more than 155 disruptive incidents — including arson, sabotage, espionage, and cyberattacks — linked to Russia or its proxies by Western officials since Moscow launched its full-scale invasion of Ukraine in February 2022. Beyond critical infrastructure attacks, European officials have also linked Russian actors to a hack of German air traffic control systems, repeated attempts to compromise Signal and WhatsApp accounts belonging to European officials and journalists, and campaigns to exploit router security vulnerabilities to steal sensitive user data on behalf of Russian military intelligence.