Dubious AI detectors drive ‘pay-to-humanize’ scam

Fraudulent artificial intelligence detection tools are proliferating online, adding a fresh layer of deception to the digital information ecosystem, according to recent investigations. These dubious platforms systematically misidentify human-authored content as AI-generated while promoting paid ‘humanization’ services that experts characterize as outright scams.

AFP’s fact-checking division tested three prominent text analysis tools, JustDone AI, TextGuard and Refinely, which claim to measure what percentage of a text is AI-generated. Presented with verified human-written material in Dutch, Greek, Hungarian and English, the detectors consistently returned false positives. Even passages from celebrated literary works, including a 1916 Hungarian classic, were flagged as containing a high share of AI-generated content.

The platforms monetize these errors in a predictable pattern: after a tool returns an erroneous AI detection result, the user is prompted to pay up to $9.99 to ‘humanize’ the supposedly artificial text. JustDone AI labeled an authentic human-written report on US-Iran relations as containing “88% AI content”, then immediately offered a paid fix.

Technical checks suggest these tools may rely on pre-scripted responses rather than genuine algorithmic processing: both JustDone and Refinely kept returning results with no internet connection, indicating their scores are predetermined rather than derived from any actual analysis of the text.
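To illustrate the pattern AFP describes, here is a minimal, purely hypothetical sketch of how a ‘detector’ could produce a plausible-looking score with no text analysis and no network access at all, by deriving a deterministic number from a hash of the input. The function name and score range are invented for illustration; this is not the actual code of any tool named in this article.

```python
# Hypothetical sketch (NOT any named tool's code): a "detector" that returns
# a plausible-looking score without analyzing the text or touching the
# network, by mapping a hash of the input into a range biased toward
# high percentages.

import hashlib

def fake_ai_score(text: str) -> int:
    """Deterministic pseudo-score: the same input always yields the same number."""
    first_byte = hashlib.sha256(text.encode("utf-8")).digest()[0]
    # Map 0-255 into 70-99 so almost any text looks "mostly AI-generated",
    # conveniently justifying a paid 'humanization' upsell.
    return 70 + first_byte % 30

if __name__ == "__main__":
    sample = "A paragraph of ordinary human writing."
    print(f"AI content: {fake_ai_score(sample)}%")  # works entirely offline
```

Such a scheme would behave exactly as AFP observed: identical inputs always produce identical scores, and the tool keeps ‘working’ with the network disabled.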

Academic researcher Debora Weber-Wulff, who has extensively studied detection technologies, confirmed these platforms represent “scams to sell a ‘humanizing’ tool that will often return what we call ‘tortured phrases’”—essentially replacing text with unrelated jargon or nonsensical alternatives.

The spread of these unreliable tools has had dangerous real-world consequences. Pro-government influencers in Hungary recently seized on JustDone’s flawed results to falsely claim that opposition election campaign documents were entirely AI-generated, showing how such tools can be weaponized to discredit authentic content.

Educational institutions including Cornell University have denied any affiliation with the companies behind these detectors, with Cornell noting that detection technologies “are unlikely to provide a workable solution” to the academic integrity concerns raised by generative AI.

This phenomenon feeds what researchers term the “liar’s dividend”, whereby authentic content becomes increasingly easy to dismiss as AI fabrication. Waqar Rizvi of misinformation tracker NewsGuard observes that, alongside AI fakes passed off as real, an opposite but equally insidious trend has emerged: authentic visuals falsely labeled as AI-generated.

The situation also complicates the work of fact-checking organizations, which sometimes rely on legitimate, expert-built AI detection tools that typically search for digital watermarks and other technical indicators. Even these vetted tools occasionally err, however, so fact-checkers supplement them with open-source intelligence and other corroborating evidence.
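As one concrete example of what searching for a technical indicator can mean in practice: some image generators embed the IPTC DigitalSourceType value trainedAlgorithmicMedia in a file’s XMP metadata. The short sketch below is written for illustration, not taken from any fact-checking tool; it scans a file’s raw bytes for that marker. A hit suggests AI provenance, while a miss proves nothing, since metadata is trivially stripped.

```python
# Illustrative sketch: scan an image file's embedded metadata for the IPTC
# DigitalSourceType marker that some AI generators write into XMP.
# A hit suggests AI provenance; a miss proves nothing (metadata is easily
# stripped), which is why fact-checkers also lean on open-source intelligence.

import sys
from pathlib import Path

# IPTC value identifying fully AI-generated ("synthetic") media.
AI_MARKER = b"trainedAlgorithmicMedia"

def has_ai_provenance_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC marker."""
    return AI_MARKER in Path(path).read_bytes()

if __name__ == "__main__":
    for image in sys.argv[1:]:
        found = has_ai_provenance_marker(image)
        print(f"{image}: {'AI marker found' if found else 'no AI marker'}")
```

This kind of check is evidence, not proof, in either direction, which is precisely why responsible verification pairs it with additional sources.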