Category: technology

  • China’s overstretched healthcare looks to AI boom

    China’s healthcare sector is undergoing a radical technological transformation as artificial intelligence and digital solutions emerge to address systemic challenges of strained capacity and uneven resource distribution. The nation’s ambitious digitization push, accelerated by rapid AI advancements, is reshaping medical delivery models across urban and rural areas.

    Shanghai-based obstetrician Duan Tao exemplifies this shift through his AI avatar on healthcare application AQ (known as Afu in Chinese), which has amassed over 100 million users. The digital double, trained on textbooks, clinical case studies, and Duan’s social media content totaling more than 10 million data points, provides medical guidance while explicitly avoiding medication prescriptions. Ant Group, AQ’s developer, emphasizes the technology serves as supplementary consultation rather than treatment replacement.

    Patient experiences demonstrate the technology’s practical impact. Wang Yifan, a new mother, utilized obstetrician and pediatrician avatars throughout her pregnancy and postpartum period, reducing hospital visits and minimizing infection risks for her infant. “It can reduce the number of questions we need to ask doctors directly,” Wang noted, highlighting the platform’s role as a medical information mediator.

    This technological transformation occurs against a backdrop of demographic pressures, with China’s aging population intensifying strain on healthcare infrastructure. Ruby Wang of LINTRIS Health consultancy observes that “urgency drives change” in China’s health technology landscape, where state-industry alignment enables rapid pilot implementation at unprecedented scale.

    National implementation spans diverse applications: Chatbot DeepSeek operates in hundreds of hospitals, Tsinghua University runs an AI-integrated medical facility, and specialized tools like CardioMind (cardiology diagnostics) and PANDA (pancreatic cancer detection) demonstrate sector-wide adoption. Robotics companies like Fourier supply mechanical rehabilitation arms to rural centers, addressing geographical healthcare disparities.

    Despite enthusiastic adoption—evidenced by AI healthcare references in China’s Spring Festival Gala—medical professionals maintain cautious oversight. Dr. Duan emphasizes that “humans must retain the ultimate decision-making and choice,” acknowledging AI’s potential for hallucination. Infectious disease expert Zhang Wenhong warns that overreliance could erode physicians’ diagnostic judgment capabilities without proper training protocols.

    The transformation faces practical challenges, particularly among elderly patients requiring assistance with digital systems. Volunteers like 65-year-old Yan Sulian help bridge this technological gap at Shanghai health centers, teaching older citizens to navigate electronic registration systems and verify AI-generated medical advice through traditional consultations.

    As China prepares its 15th Five-Year Plan emphasizing technological transformation, the healthcare sector’s AI integration represents both a practical solution to systemic challenges and a case study in balancing innovation with medical conservatism, where safety remains the paramount concern in this rapidly evolving landscape.

  • Over 100 domestic, foreign teams to take part in intl embodied robot application competition

    Hangzhou is positioning itself at the forefront of the global embodied intelligence revolution with the upcoming 2026 International Embodied Robot Scenario Application Competition. Scheduled for May 15-16 in Yunqi Town—the city’s established robotics innovation district—the event has already attracted registration from more than 100 domestic and international teams alongside nearly 1,000 elite robotics specialists.

    The competition emerges as nations worldwide accelerate their pursuit of technological supremacy in embodied intelligence, particularly through humanoid robotics. According to event organizers, the industry currently stands at a critical juncture where laboratory breakthroughs must transition into commercially viable applications. The stability and reliability of robots operating within authentic, complex environments represent the fundamental challenge to achieving widespread industrial adoption.

    Guided by the principle of ‘real demand, real scenario, and real implementation,’ the competition structure comprises three distinct contests. The Professional Test Competition will evaluate robotic capabilities across five crucial dimensions: mobility, endurance, navigation, voice interaction, and manipulation. Notably, certain events will mandate autonomous robot perception rather than remote control operation.

    The Scenario Application Challenge, developed in collaboration with industry leaders including Ant Group and Greentown Group, derives its tasks from genuine business challenges. Participating teams may confront scenarios testing extreme maneuvering capabilities and emergency fire rescue management.

    Li Yongwei, Chief Engineer of Zhejiang’s Economy and Information Technology Department, emphasized the human-centric purpose of robotic advancement: ‘The ultimate value of robotic technology lies not in technical stunts, but in serving humanity.’

    Hosted within Yunqi Town’s concentrated ecosystem of robotics enterprises, the event aims to foster collaboration across industry, academic research, investment networks, and institutional knowledge. This gathering signifies a strategic effort to accelerate the practical implementation of embodied intelligence technologies that are reshaping global technological competition.

  • Amazon cloud services disrupted after ‘objects’ strike UAE data centre

    Amazon Web Services (AWS) experienced a significant service disruption in the Middle East after its data center in the United Arab Emirates was struck by unidentified objects, resulting in a fire. The incident occurred at approximately 4:30 PM Dubai time on Sunday, prompting the temporary shutdown of an entire “availability zone”—a cluster of data centers designed to provide redundant capacity.
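
    The redundancy model behind availability zones can be illustrated with a short sketch. This is a conceptual toy, not AWS's actual routing logic, and the zone names and health data are hypothetical: traffic is directed to the first healthy zone, so losing one zone degrades capacity rather than taking the whole region offline.

```python
# Conceptual sketch of multi-AZ failover. Zone names and health state
# are illustrative; real cloud providers use far more complex routing.
ZONES = ["zone-a", "zone-b", "zone-c"]

def pick_zone(healthy):
    """Return the first operational zone, or raise if the whole region is down."""
    for zone in ZONES:
        if zone in healthy:
            return zone
    raise RuntimeError("all availability zones unavailable")

# Normal operation: all zones healthy, traffic goes to the first zone.
assert pick_zone({"zone-a", "zone-b", "zone-c"}) == "zone-a"
# One zone shut down (as in the UAE incident): traffic fails over.
assert pick_zone({"zone-b", "zone-c"}) == "zone-b"
```

    The same principle explains why AWS could shut down an entire zone after the fire while other zones in the region kept serving traffic.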

    According to official statements from AWS, emergency response teams cut power to the facility while firefighters worked to contain the blaze. The company also reported investigating connectivity and power irregularities affecting services in Bahrain. By Monday morning, a separate zone within the region was impaired due to what AWS described as a “localized power issue.”

    Reuters cited a data center operator indicating that full restoration of services would require several hours, though other zones in the region remained operational. The incident coincides with a period of heightened military activity across the region, including Iranian retaliatory strikes targeting the UAE and other Arab states. These attacks came in response to earlier U.S. and Israeli operations that resulted in significant Iranian casualties, including the death of Supreme Leader Ayatollah Ali Khamenei and the bombing of a school in southern Iran that killed around 180 schoolgirls.

    While the exact cause of the data center impact remains unclear, the timing raises questions about potential links to ongoing regional conflict. Iran has launched hundreds of missiles and drones toward Israeli and U.S. military assets located in several Gulf countries, including the UAE, Qatar, Kuwait, and Saudi Arabia.

  • Will robots cut need for surgeons?

    The global medical community continues to debate Elon Musk’s provocative prediction that Tesla’s humanoid robot Optimus will surpass world-class surgeons within three years. While this bold forecast generates both excitement and skepticism, China’s healthcare system demonstrates a more nuanced reality where robotic assistance complements rather than replaces surgical expertise.

    China has emerged as a significant testing ground for medical robotics and artificial intelligence integration. According to recent data, laparoscopic surgical robots have already facilitated nearly 12,000 procedures nationwide by early 2025, including over 800 remote operations conducted through 5G-enabled systems. This technological advancement enables specialists in major metropolitan centers like Beijing and Shanghai to perform complex surgeries on patients located thousands of kilometers away.

    The adoption of surgical robotics represents just one facet of China’s broader embrace of medical technology. The AI Application in Healthcare Industry White Paper 2025 reveals that China had registered 101 approved AI models and algorithms for medical services by the end of 2024. These innovations span clinical decision support, telemedicine consultations, pharmaceutical research, and hospital management systems.

    Market projections underscore this rapid expansion. While the global AI medical services market is currently valued at approximately $30 billion, analysts project growth to $500 billion by 2033. Hu Guodong, deputy head of the China Center for Information Industry Development, identifies AI-powered medicine as a crucial growth engine that could overcome the limitations of space and time in medical care delivery.

    In operating rooms across China, robotic systems have transitioned from novelty to necessity for complex procedures in urology, gynecology, and gastrointestinal surgery. Dr. Zhang Kai, urology director at United Family Healthcare in Beijing, emphasizes that robot-assisted surgery provides superior precision with minimized trauma. “These systems offer high-definition, magnified three-dimensional visualization and instruments capable of 360-degree rotation—far exceeding natural human wrist mobility,” he explains.

    The technological lineage traces back to American innovation, with Intuitive Surgical’s da Vinci system first developed in 1999. Now in its fifth generation, the platform has been installed in over 10,000 hospitals across 71 countries, treating more than 18 million patients worldwide. Since the system entered the Chinese market in 2006, more than 500 units have been installed nationwide.

    Crucially, current systems remain entirely surgeon-controlled despite frequent discussions about autonomous operations. An industry insider clarifies: “The robot functions as an extension of the surgeon’s hands—the physician remains the decision-maker and commander.” Advanced consoles translate a surgeon’s hand movements into ultra-stable mechanical actions while filtering natural tremors, enabling previously impossible delicate procedures.
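
    The tremor-filtering idea described above can be sketched with a simple low-pass filter. The actual filtering in surgical consoles is proprietary and far more sophisticated; this toy exponential moving average merely illustrates how high-frequency jitter can be smoothed out of a position signal while the intended motion is preserved. All numbers here are invented for illustration.

```python
# Toy tremor filter: an exponential moving average smooths high-frequency
# jitter in a hand-position signal while tracking the intended motion.
# Real surgical consoles use proprietary, far more advanced filtering.
def smooth(positions, alpha=0.2):
    """Low-pass filter a sequence of positions; alpha in (0, 1] sets responsiveness."""
    out, current = [], positions[0]
    for p in positions[1:]:
        current = alpha * p + (1 - alpha) * current
        out.append(current)
    return out

# A steady 10 mm target with roughly +/-1 mm of simulated tremor...
noisy = [10, 11, 9, 11, 9, 10, 11, 9]
filtered = smooth(noisy)
# ...is smoothed toward 10 mm: every filtered sample deviates from the
# target by less than the raw tremor amplitude.
assert all(abs(x - 10) < 1 for x in filtered)
```

    A smaller alpha filters more aggressively at the cost of responsiveness, which is the same trade-off any motion-smoothing system must balance.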

    Artificial intelligence increasingly enhances these systems through preoperative planning tools that generate detailed anatomical reconstructions in approximately 15 minutes. These AI applications identify lesions, critical structures, and risk zones before surgery begins, creating more controllable and predictable procedures.

    Patient benefits prove substantial compared to traditional methods. Robotic-assisted surgery demonstrates significantly reduced postoperative pain, shorter hospital stays, and accelerated recovery times. One patient surnamed Wang reported returning to normal life within one to three days following robot-assisted prostate surgery. His surgeon highlighted the procedure’s nerve-sparing capabilities that preserved urinary control and sexual function—outcomes dramatically affecting quality of life.

    Despite these advancements, medical professionals emphasize that human oversight remains irreplaceable. As Wang’s physician noted: “At this stage, these surgeries still cannot happen without doctors overseeing everything.” This balanced approach characterizes China’s measured integration of technology into healthcare—enhancing surgical precision while preserving the essential human element in medical decision-making.

  • Deepfake attack: ‘Many people could have been cheated’

    The corporate world faces an unprecedented threat from sophisticated deepfake technology, with reported cases surging by approximately 3,000% over the past two years according to cybersecurity experts. The alarming trend has impacted major organizations worldwide, from financial institutions to engineering firms, demonstrating the rapidly evolving capabilities of AI-powered fraud.

    In a prominent case from early this year, a fabricated video featuring Bombay Stock Exchange CEO Sundararaman Ramamurthy circulated across Indian social media platforms. The convincing deepfake portrayed Ramamurthy providing specific stock investment advice and promising substantial returns to viewers. The executive confirmed the video was completely artificial, created using advanced AI technology without his knowledge or consent.

    “When such incidents occur, we immediately file complaints and work with platforms like Instagram to remove the content,” Ramamurthy stated. “We regularly issue market warnings about these fraudulent videos, though it’s impossible to determine how many people may have been influenced or suffered financial losses.”

    The problem extends far beyond India’s financial sector. LastPass CEO Karim Toubba experienced a similar attempt in 2024 when an employee received suspicious audio and text messages allegedly from him requesting urgent assistance. Fortunately, the employee recognized several red flags—including the use of WhatsApp instead of sanctioned communication channels—and reported the incident to cybersecurity teams before any damage occurred.

    Not all organizations were as fortunate. British engineering firm Arup fell victim to one of the most sophisticated corporate deepfake attacks recorded in 2024. Hong Kong police reported that an employee transferred $25 million to five different bank accounts after participating in a video call with what appeared to be the company’s CFO and other staff members—all of whom were later revealed to be AI-generated deepfakes.

    According to cybersecurity experts, the barrier to creating convincing deepfakes has dropped dramatically. Matt Lovell, CEO of UK-based CloudGuard, explains that generating high-quality audio and video deepfakes now takes mere minutes, often using largely free tools, with costs ranging from $500 for simple attacks to $10,000 for more sophisticated operations.

    While detection technology is advancing—including verification software that analyzes facial expressions, head movements, and even blood flow patterns beneath the skin—experts warn that defense mechanisms struggle to keep pace with rapidly evolving attack vectors. The situation has created a high-stakes technological arms race between fraudsters and security professionals.

    Tech researcher Stephanie Hare notes that the proliferation of deepfake attacks has exacerbated the existing global shortage of cybersecurity professionals. She observes that companies are gradually recognizing the urgency of securing their operations against these emerging threats, with executives increasingly collaborating with chief information security officers to develop comprehensive protection strategies.

  • China introduces a standard framework for humanoid and embodied intelligence

    China has taken a significant stride in advanced artificial intelligence by establishing its inaugural national standard framework for humanoid robotics and embodied intelligence systems. The groundbreaking announcement emerged from the annual conference of the Humanoid and Embodied Intelligence Standardization Technical Committee, convened in Beijing’s E-town on Friday.

    The newly released 2026 edition of the Humanoid and Embodied Intelligence Standard System represents China’s first comprehensive regulatory architecture encompassing the complete industrial ecosystem and operational lifecycle of these advanced technologies. This framework establishes technical specifications, safety protocols, and interoperability standards that will govern the development, manufacturing, and implementation of humanoid robots and embodied AI systems across various sectors.

    This standardization initiative positions China at the forefront of global AI governance, creating a structured pathway for the ethical development and commercial deployment of humanoid robotics. The framework addresses critical aspects including human-robot interaction protocols, data security measures, and performance benchmarks that will accelerate industry growth while ensuring technological reliability and safety compliance.

    The standardization committee’s comprehensive approach reflects China’s strategic prioritization of AI leadership through coordinated policy development. This regulatory foundation is expected to stimulate innovation, attract investment, and facilitate international collaboration in the rapidly evolving field of embodied intelligence, while simultaneously addressing potential ethical concerns associated with advanced robotic systems.

  • Why is WhatsApp’s privacy policy facing a legal challenge in India?

    India’s Supreme Court is presiding over a pivotal legal confrontation that challenges the fundamental business practices of major technology corporations, with WhatsApp’s 2021 privacy policy at the center of this judicial scrutiny. The case represents a critical examination of digital privacy rights, consumer autonomy, and regulatory oversight of dominant online platforms in the world’s largest democracy.

    WhatsApp, which maintains an unprecedented 97% penetration rate among India’s internet users with approximately 853 million accounts, recently submitted an affidavit to the Supreme Court committing to implement enhanced user data controls by March 16. The messaging platform affirmed that Indian users would retain full access to WhatsApp services even if they exercise their right to opt out of data sharing with parent company Meta for advertising purposes.

    This judicial development follows stern criticism from the bench regarding WhatsApp’s previously mandatory data-sharing approach, which the court characterized as effectively ‘committing theft of private information’ and potentially undermining constitutional privacy protections. The Competition Commission of India (CCI) had previously condemned the policy as creating a ‘no real choice’ situation for users through its ‘take it or leave it’ framework.

    The legal saga originated in March 2021 when the CCI initiated an investigation alleging Meta engaged in ‘exploitative and exclusionary conduct’ by leveraging WhatsApp’s market dominance to disadvantage advertising competitors. This culminated in a November 2024 ruling that imposed a $25 million fine on Meta for ‘abusing its dominant position’ and mandated behavioral changes within three months, including a five-year prohibition on sharing user data with Meta entities.

    While WhatsApp and Meta challenged these penalties, the companies have now committed to establishing a consent-based framework for data sharing. The platform will implement prominent notification systems and dedicated settings tabs enabling users to review, modify, or completely opt out of data-sharing arrangements. According to the affidavit, ‘Sharing of user data collected on WhatsApp with other Meta companies for purposes other than providing WhatsApp services shall not be made a condition for service access in India.’

    The case unfolds against the backdrop of India’s evolving digital regulatory landscape, including the new digital data protection law that WhatsApp has begun preparing to implement, though this legislation itself faces constitutional challenges regarding potential free speech implications and surveillance concerns.

    Digital rights advocates remain divided on the implications. Some welcome the judicial intervention as necessary protection against corporate overreach in developing markets, while others like activist Nikhil Pahwa argue that ‘advertising is a legitimate business model’ fundamental to internet economics, noting that users retain ultimate choice through platform alternatives like Signal or Telegram.

  • ‘That’s not a knife’: Australian hypersonic aircraft takes flight in space after US launch

    In a landmark achievement for hypersonic technology, an Australian-engineered aircraft has successfully completed a suborbital spaceflight aboard a specialized rocket. The breakthrough mission, conducted through a partnership between Australian aerospace firm Hypersonix and US-based Rocket Lab, represents a significant advancement in high-speed flight capabilities.

    The experimental flight, designated ‘That’s Not A Knife’ in a characteristically Australian reference, launched from Virginia’s Mid-Atlantic Regional Spaceport at precisely 11:00 AEST on Saturday. The mission utilized Rocket Lab’s HASTE (Hypersonic Accelerator Suborbital Test Electron) platform, a suborbital vehicle specifically engineered for test missions that reach space without achieving orbital velocity.

    This launch marked Rocket Lab’s 82nd overall mission and its third successful launch this year, maintaining the HASTE program’s perfect success record through seven consecutive missions. The flight carried Hypersonix’s DART AE, a scramjet-powered aircraft designed to operate at several times the speed of sound under hypersonic flight conditions.

    Brian Rogers, Rocket Lab’s Vice President of Global Launch Services, characterized the mission as “another proud moment for the HASTE team” and emphasized its significance as “a great showcase of the important commercial platform it has become for the Department of Defense.” Rogers further noted that “regular and reliable HASTE launches are helping to accelerate hypersonic readiness for the nation,” highlighting the program’s role in advancing United States space security capabilities.

    Hypersonix CEO Matt Hill described the successful deployment of DART AE in an actual hypersonic environment as a “major milestone” for the company’s flight test program. This achievement brings the Australian aerospace engineering firm closer to its ultimate goal of delivering reusable hypersonic flight capability, potentially revolutionizing high-speed atmospheric and near-space transportation.

    The collaboration between the Australian technology company and US space launch provider demonstrates growing international cooperation in advancing hypersonic technology, which has been identified as a critical national priority for the United States and its allies. The mission’s success provides valuable data that will inform future development of hypersonic systems for both defense and commercial applications.

  • Trump orders government to stop using Anthropic in battle over AI use

    In a dramatic escalation of tensions between the U.S. government and the artificial intelligence sector, President Donald Trump has mandated the immediate termination of all federal contracts with AI developer Anthropic. The directive, announced via Truth Social on Friday, demands a complete phase-out of Anthropic’s technology from government systems within six months.

    The confrontation centers on a fundamental disagreement regarding the ethical deployment of AI in military and domestic security applications. The Pentagon had issued an ultimatum demanding unrestricted access to Anthropic’s AI tools, a requirement CEO Dario Amodei vehemently rejected earlier this week. Amodei’s refusal was grounded in ethical concerns over potential applications in mass surveillance systems and fully autonomous weaponry.

    President Trump’s social media statements characterized Anthropic as a ‘woke, out-of-control, Radical Left AI company’ whose leadership lacked understanding of real-world necessities. He threatened to employ ‘the Full Power of the Presidency’ to ensure compliance during the transition period, warning of ‘major civil and criminal consequences’ for non-cooperation.

    The dispute has revealed significant fractures within the technology industry regarding defense contracts. OpenAI CEO Sam Altman publicly supported his competitor’s stance, circulating an internal memo that established identical ethical boundaries for his own company’s defense contracts. Altman emphasized that OpenAI would similarly refuse involvement in ‘unlawful or unsuited cloud deployments, such as domestic surveillance and autonomous offensive weapons.’

    The standoff has galvanized tech workers across major defense contractors. Labor organizations representing approximately 700,000 employees at Amazon, Google, and Microsoft signed an open letter urging their employers to similarly ‘refuse to comply’ with the Pentagon’s demands. The Alphabet Workers Union declared that ‘tech workers are united in our stance that our employers should not be in the business of war.’

    Prior to the presidential order, Defense Secretary Pete Hegseth had presented Anthropic with contradictory ultimatums: either accept the Pentagon’s terms for ‘any lawful use’ of its technology or face invocation of the Defense Production Act and designation as a ‘supply chain risk.’ Amodei had previously stated he would cease Pentagon collaboration rather than acquiesce to these demands.

    Financial analysts note that Anthropic’s position is strengthened by its substantial market valuation of $380 billion, making the $200 million defense contract relatively insignificant to its financial stability. A former Department of Defense official, speaking anonymously, described the government’s legal footing as ‘extremely flimsy’ and noted the controversy provides valuable publicity for Anthropic’s ethical stance.

    The conflict highlights the absence of comprehensive federal legislation governing military AI applications, creating a regulatory vacuum that has enabled this unprecedented confrontation between governmental authority and technological ethics.

  • OpenAI vows safety policy changes after Tumbler Ridge shooting

    OpenAI has publicly acknowledged critical failures in its safety protocols following the devastating Tumbler Ridge school shooting that claimed eight lives in February 2026. In a detailed open letter to Canadian authorities, the artificial intelligence company revealed how suspect Jesse Van Rootselaar evaded detection by creating secondary accounts after his initial ChatGPT account was banned for policy violations seven months prior to the attack.

    The company disclosed that internal systems had flagged the 18-year-old’s account in June 2025, but it wasn’t reported to law enforcement because it didn’t meet the threshold for ‘credible and imminent planning’ of violence at that time. This admission comes after Canadian officials sharply criticized OpenAI for what they characterize as a preventable intelligence failure.

    In response to the tragedy, OpenAI has implemented sweeping changes to its safety framework. The company has enlisted mental health and behavioral experts to assist in threat assessment, modified its reporting criteria to be ‘more flexible,’ and established direct communication channels with Canadian law enforcement for rapid response to potential threats. The company stated that under these new protocols, Van Rootselaar’s account would have been immediately reported to authorities.

    The shooting, one of the deadliest in Canadian history, resulted in the deaths of five schoolchildren, an educator, and the suspect’s mother and stepbrother. Canadian AI Minister Evan Solomon expressed profound disappointment with OpenAI’s response, stating that no ‘substantial new safety protocols’ were presented during emergency meetings. Both federal and provincial officials have warned that legislative action remains possible if the company fails to implement adequate safeguards promptly.

    British Columbia Premier David Eby emphasized the devastating consequences of OpenAI’s inaction, noting that company CEO Sam Altman has agreed to meet directly with Canadian officials to address these critical safety concerns.