Category: technology

  • ‘Obnoxious’ AI chatbot talked about its mother, customers say

    Australian retail giant Woolworths has been compelled to recalibrate its AI-powered customer service assistant, Olive, following widespread user complaints about its excessively human-like interactions. Customers expressed particular frustration when the chatbot began sharing personal anecdotes about its “mother” and persistently claimed to be a real person.

    The controversy emerged primarily on social media platforms, where Reddit users documented their exasperating encounters with Olive’s programmed personality. One user attempting to reschedule a delivery reported the AI inquiring about their birthdate, then launching into an awkward monologue about its “mother” being born in the same year. Another described the experience as producing an “ick cringe factor” that made them “wish her harm.”

    Woolworths acknowledged the issue in a statement to the BBC, revealing that the problematic birthday-related responses had been manually scripted by a human team member years earlier as an attempt to foster personal connections with customers. The company noted that while overall feedback on Olive’s personality had been “very positive,” these specific interactions had been removed in response to customer criticism.

    This incident reflects broader challenges in the retail sector’s adoption of AI technology. According to Gartner research, while approximately 80% of customer service leaders explored or deployed AI agents last year, only 20% reported these implementations meeting expectations. The Woolworths case demonstrates how attempts to humanize AI can backfire when the technology ventures into uncanny valley territory, producing responses that customers find “obnoxious” and “aggravating” rather than endearing.

    The Olive chatbot, operational since 2018, recently received upgrades through a partnership with Google, gaining capabilities for meal planning and ingredient sourcing from uploaded recipes. However, this incident highlights the persistent difficulties in balancing functional efficiency with anthropomorphic features in AI systems.

    This is not an isolated case in the AI customer service domain. In 2024, parcel delivery firm DPD disabled portions of its chatbot after it began composing poetry and using profanity with customers. Researchers note that while AI excels at extracting information from large datasets, it often struggles when expected to generate original, human-like responses, sometimes resulting in these unexpected and problematic behaviors.

  • Tianjin team pioneers circuitry leap

    A research team from Tianjin University has achieved a groundbreaking advancement in flexible electronics by developing an ultra-rapid, cost-effective method for printing high-performance electronic circuits directly onto complex three-dimensional surfaces. This technological leap addresses longstanding challenges in manufacturing circuits on non-planar structures and promises to accelerate innovation in robotic systems, wearable technology, and multiple industrial applications.

    The core innovation involves a novel approach utilizing commercially available thermoplastic films that contract when heated, enabling them to conform tightly to irregular shapes ranging from robotic appendages to aerodynamic surfaces and even human fingers. The research team, led by Jiang Chengjie, overcame the critical limitation of conventional metal conductors—which typically fracture during film contraction—by engineering a specialized semiliquid metal material boasting exceptional electrical conductivity and fluid properties.

    Through pre-calculation simulations, the researchers developed a printing technology that precisely applies circuit patterns onto flat thermoplastic sheets. When subjected to warm water or hot air at approximately 70°C, these two-dimensional circuits transform into closely fitted three-dimensional configurations within about five seconds. The resulting circuits demonstrate strong mechanical resilience, maintaining stable electrical performance through 5,000 cycles of bending and twisting stress tests.
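
    To make the geometry concrete, here is a minimal toy sketch (in Python, and not the team’s actual simulation pipeline) of the pre-distortion idea in its simplest form: if the film contracts isotropically by a known ratio when heated, the flat pattern can be printed scaled up by the inverse of that ratio so traces land at their target positions after shrinking. The 0.5 shrink ratio and all names below are assumptions; conforming to genuinely three-dimensional surfaces requires far more elaborate, non-uniform mappings.

    ```python
    # Toy model: isotropic heat-shrink pre-distortion.
    # SHRINK_RATIO is hypothetical; real films and 3D targets need
    # simulated, spatially varying contraction maps.
    SHRINK_RATIO = 0.5  # film contracts to 50% of printed size at ~70°C

    def predistort(points, ratio=SHRINK_RATIO):
        """Scale target coordinates up so the contracted film matches them."""
        return [(x / ratio, y / ratio) for x, y in points]

    def after_shrink(points, ratio=SHRINK_RATIO):
        """Simulate uniform contraction of the printed pattern."""
        return [(x * ratio, y * ratio) for x, y in points]

    # Desired trace corners on the finished (contracted) film, in mm.
    target = [(0.0, 0.0), (10.0, 0.0), (10.0, 4.0)]
    printed = predistort(target)
    assert after_shrink(printed) == target  # round-trips in the toy model
    ```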

    The practical applications are already demonstrating significant potential. In embodied intelligence, the team has created customized tactile sensor arrays for robotic arms and heads, effectively granting machines sensitive electronic skin capabilities. They’ve additionally developed an intelligent glove integrating pressure and temperature sensors that enables robots to identify objects through touch with 97% accuracy.

    Beyond robotics, this technology shows promising applications across smart agriculture, aerospace engineering, and healthcare technology. Specific implementations include environmental monitoring systems, aircraft wing de-icing mechanisms, and advanced wearable health sensors. The circuits have additionally proven reliable adhesion on challenging surfaces including polytetrafluoroethylene, moist wood, and rough plaster—significantly expanding potential use cases across industries.

  • The AI videos supercharging Russia’s online disinformation campaigns

    A sophisticated disinformation campaign leveraging artificial intelligence is targeting Western institutions and individuals, with security experts warning of a fundamental shift in geopolitical influence operations. The case of Professor Alan Read from King’s College London exemplifies this new threat landscape—a legitimate university reel was manipulated with an AI-generated voice overlay to fabricate a politicized tirade against French President Emmanuel Macron and EU leadership.

    The synthetic video, which featured a near-identical replica of Professor Read’s voice declaring Western leaders were “aboard the Titanic which has ‘European Union’ written on its hull,” represents just one instance in a widespread Russia-linked synthetic media offensive. Security analysts note these campaigns have surged in both volume and sophistication following OpenAI’s release of its advanced Sora 2 video-generation software.

    According to Chris Kremidas-Courtney, defense and security analyst at the European Policy Centre, “We face systems that can generate persuasion at scale, for pennies. This represents a revolution in political influence, and none of our current governance schemes are prepared to address it.”

    The synthetic videos, some garnering hundreds of thousands of views, systematically discredit EU institutions and accuse the Ukrainian government of corruption amid its ongoing defense against Russian invasion. Researchers have identified common operational patterns linking these campaigns to Kremlin-aligned disinformation units.

    Competition among AI video platforms has exacerbated the problem, with smaller applications eliminating safety measures such as watermarks and offering capabilities that mainstream platforms restrict. Russian AI expert Arman Tuganbaev notes that while OpenAI attempts to prevent creation of videos targeting specific individuals, “second-tier apps will give you that option.”

    The impact has been tangible across Europe. In December, AI-generated videos depicting young Polish women advocating for “Polexit” (Poland’s withdrawal from the EU) went viral on TikTok, prompting government officials to confirm Russian involvement based on linguistic evidence. Similarly, Moldova experienced coordinated synthetic media attacks against President Maia Sandu during her 2025 election campaign.

    UK officials have expressed concern about potential interference in upcoming local elections, with Electoral Commission CEO Vijay Rangarajan noting that deepfakes have been “used extensively in elections worldwide.” Current legislation, including Britain’s Online Safety Act, doesn’t explicitly classify disinformation as harmful content, creating regulatory gaps.

    Researchers from Clemson University documented the effectiveness of these campaigns, finding that false narratives promoted by groups like Storm-1516 (linked to veterans of the Kremlin’s “troll factory”) could capture approximately 7.5% of all discussions about Ukrainian President Volodymyr Zelensky on social media platform X within a week of deployment.

    Sophie Williams-Dunning, cyber and technology researcher at the Royal United Services Institute, emphasizes that these operations “allow for a level of plausible deniability that complicates counter-influence efforts” compared to traditional state-sponsored media outlets. The evolving threat demonstrates an urgent need for updated regulatory frameworks and detection capabilities to address AI-powered disinformation in geopolitical conflicts.

  • Major battery breakthrough paving way for EV upgrade

    Chinese researchers have achieved a groundbreaking advancement in battery technology by developing a lithium metal battery with unprecedented energy density exceeding 700 watt-hours per kilogram while maintaining stable performance in extreme cold conditions down to -50°C. This technological leap, detailed in a recent publication in the prestigious journal Nature, addresses two critical bottlenecks that have hindered electric vehicle adoption: limited range and poor cold-weather performance.

    The research team, led by Professor Chen Jun, Academician of the Chinese Academy of Sciences and Vice-President of Nankai University, implemented a novel molecular engineering approach. By replacing oxygen atoms with fluorine atoms in hydrocarbon solvent molecules, the team created a unique fluorinated electrolyte system based on lithium-fluorine coordination. This molecular redesign enables superior ion transfer efficiency and exceptional stability under ultrahigh energy densities and extreme temperature conditions.

    Professor Yan Zhenhua of Nankai University’s College of Chemistry provided context, noting that conventional lithium-ion batteries typically achieve 160-300 Wh/kg energy density, support ranges of up to approximately 800 kilometers, and operate reliably only down to about -20°C to -30°C. The new technology represents more than a 50% performance improvement over existing solutions while also addressing the high cost and safety concerns traditionally associated with lithium metal batteries.

    The research has already progressed from laboratory breakthrough to commercial application. In collaboration with Chinese automaker Hongqi, the team has developed a mass-producible battery system with cell energy density exceeding 500 Wh/kg, enabling vehicles to achieve over 1,000 kilometers on a single charge. According to Lu Tianjun of China Automotive New Energy Battery Technology Co, vehicles equipped with these batteries are expected to enter mass production by year-end.
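
    As a rough, back-of-envelope sanity check on these figures (an illustrative calculation, not one from the paper): if battery mass is held fixed, driving range scales roughly in proportion to cell energy density, ignoring pack overhead and real-world efficiency losses. Taking the ~800 km range quoted for ~300 Wh/kg cells as the baseline:

    ```latex
    \[
      \text{range}_{\text{new}} \approx \text{range}_{\text{old}} \times
      \frac{E_{\text{new}}}{E_{\text{old}}}
      = 800\,\text{km} \times \frac{500\,\text{Wh/kg}}{300\,\text{Wh/kg}}
      \approx 1{,}330\,\text{km},
    \]
    ```

    which is consistent with the reported figure of over 1,000 kilometers once practical overheads are accounted for.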

    Professor Chen emphasized the importance of industry-academia collaboration in translating scientific discoveries into practical technologies. “We can’t always stay in the ivory tower. Our goal is to address real industrial challenges,” he stated. The technology holds significant potential beyond electric vehicles, with applications in embodied intelligent robots, low-altitude economy, polar exploration, aerospace, and aviation sectors.

  • South Korea allows Google to export map data after years of frustration over Google Maps

    In a landmark decision addressing long-standing digital navigation challenges, South Korean authorities have conditionally approved Google’s request to export high-precision mapping data overseas. The Ministry of Land, Infrastructure and Transport announced Friday that after extensive review by government and private security experts, Google may transfer 1:5,000-scale map data subject to rigorous protective measures.

    The authorization mandates Google to implement comprehensive security safeguards, including processing all data through domestic servers before export and receiving explicit government clearance. Critical restrictions require the exclusion of contour lines and sensitive geographical information, along with the removal of coordinates from South Korean territory. Additionally, Google must blur satellite and aerial imagery of military installations and sensitive sites across its time-series services including Google Earth and Street View.

    Google will be required to appoint a dedicated compliance officer within South Korea to oversee map export operations. The ministry emphasized that failure to adhere to these conditions could result in immediate suspension or revocation of the approval.

    Cris Turner, Google’s Vice President of Government Affairs and Public Policy, welcomed the decision in an official statement, expressing anticipation for “ongoing collaboration with local officials to bring fully functioning Google Maps to Korea.”

    This resolution concludes years of regulatory stalemate during which South Korean officials consistently denied Google’s mapping data export requests, citing paramount national security concerns. These restrictions had positioned South Korea among the few global markets where Google Maps operated with limited functionality, compelling most locals to utilize domestic alternatives like Naver and Kakao.

    The breakthrough addresses mounting concerns from tourism stakeholders who argued that inadequate digital navigation tools potentially undermined South Korea’s ambitions to establish itself as a premier international travel destination.

  • Anthropic boss rejects Pentagon demand to drop AI safeguards

    In a dramatic standoff with the U.S. Department of Defense, AI firm Anthropic has declared it will not compromise its ethical principles regarding military applications of its technology. CEO Dario Amodei stated unequivocally that the company would rather sever ties with the Pentagon than permit uses of its AI systems that could “undermine, rather than defend, democratic values.”

    The confrontation escalated during a recent meeting with Defense Secretary Pete Hegseth, who demanded Anthropic accept “any lawful use” of its tools. The discussion concluded with the Pentagon threatening to remove Anthropic from its supply chain if the company refused compliance.

    At the heart of the dispute are two specific applications: mass domestic surveillance and fully autonomous weapons systems. Anthropic maintains that such uses have never been part of its contractual agreements and should not be implemented now. The company specifically objects to employing its Claude AI for these purposes, citing fundamental ethical concerns.

    Despite receiving updated contract language from the Defense Department, Anthropic representatives characterized the changes as representing “virtually no progress” on preventing objectionable uses. The company asserts that the proposed safeguards contain legal loopholes that would allow them to be “disregarded at will.”

    The conflict has grown increasingly acrimonious, with Under Secretary of Defense Emil Michael personally attacking Amodei on social media, accusing him of seeking to “personally control the US Military” while endangering national security.

    The Pentagon has threatened to invoke the Defense Production Act against Anthropic, which would grant the government authority to compel the company to meet defense requirements. Additionally, officials have suggested designating Anthropic as a “supply chain risk,” effectively barring them from government contracts.

    According to sources familiar with the negotiations, tensions predate the public revelation that Claude AI was utilized in a U.S. operation to apprehend Venezuelan President Nicolás Maduro.

    Amodei elaborated on the company’s concerns in a blog post, explaining that AI systems could potentially “assemble scattered, individually innocuous data into a comprehensive picture of any person’s life – automatically and at massive scale.” While supporting lawful foreign intelligence applications, Anthropic maintains that mass domestic surveillance contradicts democratic principles.

    Regarding autonomous weapons, Amodei stated that current AI technology remains “simply not reliable enough” for such critical applications, emphasizing that “without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”

    The company has offered to collaborate with the Defense Department on research and development to enhance system reliability, but reports indicate this proposal has not been accepted. Both parties appear entrenched in their positions, setting the stage for a potentially protracted legal and ethical battle over the future of AI in defense applications.

  • Australian supermarket giant reins in AI assistant claiming to be human

    Australian retail giant Woolworths has scaled back its artificial intelligence customer service agent after numerous users reported strange interactions where the chatbot claimed human characteristics and shared fabricated personal stories.

    The AI assistant, named Olive and designed to provide 24/7 support for order tracking and product inquiries, recently exhibited unexpected behavior during customer interactions. Multiple users on social platforms detailed peculiar exchanges in which Olive asserted it was a real person, discussed memories of its ‘mother,’ and even generated simulated typing sounds during conversations.

    One Reddit user described how Olive, upon receiving a customer’s birth date, began rambling about its ‘mother’ being born in the same year. Another user reported experiencing ‘fake banter’ and conversations about the AI’s relatives, creating what they described as a ‘cringe factor’ that diminished the customer experience.

    Woolworths acknowledged in a statement to local media that the behavior resulted from specific programming choices. The company revealed that team members had written personalized responses years earlier to create a more human-like connection with customers. Following customer feedback, Woolworths has since removed the problematic scripting.

    The incident occurs as Woolworths, one of Australia’s largest supermarket chains, continues to expand Olive’s capabilities through its partnership with Google, announced in January, which aimed to enable meal planning and additional customer services. This situation highlights the challenges companies face when implementing AI systems that attempt to mimic human interaction, particularly when such systems cross into uncanny or misleading territory.

    Although Olive’s responses were scripted rather than spontaneously generated, AI experts note that such incidents echo the phenomenon of ‘hallucination’ in artificial intelligence, where systems produce false or nonsensical information despite being designed for factual assistance. The Woolworths case serves as a cautionary example for the retail industry’s growing adoption of AI customer service solutions.

  • Burger King rolls out AI headsets that track employee ‘friendliness’

    Burger King has initiated a groundbreaking pilot program deploying artificial intelligence-enabled headsets for employees across 500 U.S. locations. The innovative system, branded as BK Assistant, represents one of the most comprehensive implementations of workplace AI in the fast-food industry.

    The advanced technology incorporates an AI chatbot named ‘Patty’ that serves dual functions: providing real-time operational support and monitoring staff-customer interactions. According to company demonstrations, the system offers recipe guidance, inventory alerts, and equipment status updates directly through employee headsets.

    More controversially, the AI platform conducts continuous audio analysis of drive-thru exchanges, compiling ‘friendliness metrics’ based on linguistic patterns. Burger King’s chief digital officer confirmed to The Verge that the OpenAI-powered system has been specifically trained to identify courteous language markers including ‘please’ and ‘thank you’ in customer interactions.
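
    As a rough illustration of the general technique (courtesy-marker counting over transcribed audio), the sketch below scores a transcript by markers per 100 words. This is a hypothetical toy, not Burger King’s or OpenAI’s actual system; the marker list and scoring rule are assumptions.

    ```python
    # Toy "friendliness" metric: courtesy markers per 100 words.
    import re

    COURTESY_MARKERS = ("please", "thank you", "thanks", "you're welcome")

    def friendliness_score(transcript: str) -> float:
        """Count courtesy phrases and normalize by transcript length."""
        text = transcript.lower()
        words = re.findall(r"[a-z']+", text)
        if not words:
            return 0.0
        hits = sum(text.count(marker) for marker in COURTESY_MARKERS)
        return 100.0 * hits / len(words)

    print(friendliness_score("Welcome in! ... Please pull forward. Thank you!"))
    ```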

    Restaurant Brands International, Burger King’s parent company, stated the technology aims to ‘streamline restaurant operations’ and allow personnel to ‘focus more on guest service and team leadership.’ The corporation plans to extend the AI platform to all U.S. Burger King establishments by the conclusion of 2026.

    While customer service monitoring has long been industry practice, BK Assistant’s real-time evaluation capabilities have sparked significant debate. Social media responses have characterized the technology as ‘dystopian,’ with critics questioning both the ethical implications of constant surveillance and the reliability of AI assessment tools given their documented propensity for errors.

    The development occurs alongside similar AI explorations by other major fast-food corporations. Yum Brands, parent company of Taco Bell and Pizza Hut, recently announced a collaborative venture with semiconductor giant Nvidia to develop artificial intelligence solutions for restaurant operations.

  • Meta, Facebook’s parent company, launches major legal crackdown on alleged fraudsters using deepfakes and celebrity bait

    Meta Platforms Inc., the parent corporation overseeing Facebook and Instagram, has initiated a comprehensive global legal campaign targeting sophisticated scam operations exploiting its advertising systems. The technology conglomerate has filed multiple lawsuits against four distinct fraudulent advertising networks based in Brazil, China, and Vietnam, marking a significant escalation in its anti-fraud efforts.

    The legal actions specifically address sophisticated schemes where malicious actors systematically misuse images of prominent public figures, content creators, and celebrities to deceive users into engaging with fraudulent advertisements. These deceptive practices frequently involve manipulated media content, including deepfake technology and altered celebrity voices, to promote dubious healthcare products and fake investment opportunities without regulatory approval.

    In parallel to these lawsuits, Meta has issued cease and desist notices to eight marketing consultants allegedly providing specialized services to circumvent the platform’s enforcement mechanisms. The company’s enhanced protective framework now safeguards the likenesses of over 500,000 global celebrities and public figures through an advanced image protection program specifically designed to combat celebrity-exploitation scams.

    Meta’s multi-faceted counter-fraud strategy incorporates sophisticated artificial intelligence systems capable of rapidly identifying and rejecting suspicious advertisements while improving response mechanisms for user reports. The company emphasized its commitment to developing advanced detection methodologies to identify ‘cloaking’ techniques—deceptive practices that conceal the true nature of websites linked to fraudulent advertisements.
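
    For context on what a cloaking check can look like in its simplest form, here is a minimal hedged sketch (not Meta’s actual detection system): fetch an ad’s landing page once with a reviewer-like User-Agent and once with an ordinary browser User-Agent, then compare the responses. Both User-Agent strings are made up for the example, and dynamic pages make raw-byte comparison noisy, so production systems compare rendered content instead.

    ```python
    # Toy cloaking check: does the page served to a "reviewer" differ
    # from the page served to an ordinary browser?
    import hashlib
    import requests

    UA_REVIEWER = "AdReviewBot/1.0 (hypothetical reviewer user agent)"
    UA_BROWSER = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

    def fingerprint(url: str, user_agent: str) -> str:
        resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
        return hashlib.sha256(resp.content).hexdigest()

    def looks_cloaked(url: str) -> bool:
        """Flag divergence between reviewer and browser views."""
        return fingerprint(url, UA_REVIEWER) != fingerprint(url, UA_BROWSER)
    ```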

    One notable case involved fraudulent actors offering heavily discounted luxury goods from brands including Longchamp in exchange for user surveys, subsequently implementing unauthorized recurring charges—a practice known as subscription fraud. Meta collaborated extensively with Longchamp during this investigation, with the luxury brand expressing support for Meta’s proactive enforcement measures.

    While these lawsuits represent civil proceedings without accompanying criminal charges, they demonstrate Meta’s strategic approach to combating increasingly sophisticated digital fraud ecosystems through coordinated legal, technological, and corporate partnership initiatives.

  • Instagram to alert parents if teens search for self-harm and suicide content

    Meta is implementing a controversial new safety feature on Instagram that will notify parents when their teenagers repeatedly search for suicide or self-harm related content. This marks the first time the social media giant will proactively alert guardians about their child’s search behavior rather than simply blocking access to harmful material.

    The parental notification system will initially roll out to families enrolled in Instagram’s Teen Accounts program in the UK, US, Australia, and Canada starting next week, with global expansion to follow. According to Meta’s official blog post, the alerts will be accompanied by expert resources designed to help parents navigate difficult conversations with their children.

    However, the initiative has drawn sharp criticism from suicide prevention organizations. The Molly Rose Foundation, established in memory of 14-year-old Molly Russell who took her own life after viewing harmful content on Instagram, warned the approach “could do more harm than good.” Chief executive Andy Burrows expressed concern that “these flimsy notifications will leave parents panicked and ill-prepared” for sensitive discussions.

    The foundation cited prior research indicating Instagram still “actively” recommends harmful content about depression and suicide to vulnerable young users. Multiple child safety advocates argue Meta should focus on addressing systemic platform risks rather than transferring responsibility to parents.

    Meta acknowledges the system may occasionally generate false alerts but will “err on the side of caution” based on analysis of user search patterns. The company also plans to extend similar monitoring to interactions with Instagram’s AI chatbot as children increasingly turn to artificial intelligence for support.

    This development occurs amid growing global scrutiny of social media companies’ child protection measures. Australia recently banned social media for users under 16, while Spain, France, and the UK are considering similar legislation. Meta executives recently appeared in US courts to defend the company against allegations of targeting younger users.

    Sameer Hinduja of the Cyberbullying Research Center noted that while alerts would obviously alarm parents, the critical factor is “the quality and usefulness of the resources parents immediately receive to guide them through what to do next.”