When Vibe Scammers Met Vibe Hackers: Pwning PhaaS with Their Own Weapons

Day 2 · Dec. 28, 2025, 23:00–23:40 · Zero · en · Security
What happens when AI-powered criminals meet AI-powered hunters? A technical arms race in which both sides are vibing their way through exploitation, and the backdoors write themselves.

In October 2025, we investigated Taiwan's fake delivery scam ecosystem targeting convenience store customers. What started as social engineering on social media became a deep dive into two distinct fraud platforms, both bearing the unmistakable fingerprints of AI-generated code. Their developers left more than just bugs: authentication flaws, file management oversights, and database implementations that screamed "I asked an LLM and deployed without reading the answer." We turned their sloppiness into weaponized OSINT. Through strategic reconnaissance, careful database analysis, and meticulous operational security, we achieved complete system access on multiple fraud infrastructures. By analyzing server artifacts and certificate patterns, we mapped 100+ active domains and extracted evidence linking thousands of victim transactions worth millions of euros.

But here's the twist: we used the same AI tools they did, just with better prompts. The takeaway isn't just about hunting scammers; it's about the collapse of the skill gap in both offense and defense. When vibe coding meets vibe hacking, the underground economy democratizes in ways we never anticipated.

We'll share our methodology for fingerprinting AI-assisted crime infrastructure, discuss the ethical boundaries of counter-operations, and demonstrate how to build sustainable threat intelligence pipelines when your adversary can redeploy in five minutes. This talk proves that in 2025, the real exploit isn't a zero-day: it's zero-understanding.

Our journey began with a simple question: why are so many people losing money to fake convenience store delivery websites? The answer led us through two distinct criminal architectures, both exhibiting characteristics of large language model–assisted development.

Case 1 ran on PHP, with exposed backup artifacts revealing implementation details and query manipulation opportunities. The installation package itself contained pre-existing access mechanisms; whether this was developer insurance or criminal-on-criminal sabotage remains unclear. We leveraged that initial access to bypass security restrictions through protocol-level manipulation and extracted gigabytes of operational data.
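To make the reconnaissance step concrete, here is a minimal sketch of the kind of probe that surfaces exposed backup artifacts. The candidate paths and the `suspicious.example` host are illustrative placeholders, not the actual campaign's filenames:

```python
# Minimal sketch: probe a host for exposed backup artifacts of the kind
# described above. The filename list is illustrative, not taken from the
# actual campaign.
import requests

CANDIDATE_PATHS = [
    "/backup.zip", "/www.zip", "/site.tar.gz",
    "/index.php.bak", "/config.php.bak", "/.env",
    "/db.sql", "/dump.sql",
]

def find_exposed_artifacts(base_url: str, timeout: float = 5.0) -> list[str]:
    """Return candidate URLs that answer 200 with a non-HTML body."""
    hits = []
    for path in CANDIDATE_PATHS:
        url = base_url.rstrip("/") + path
        try:
            r = requests.head(url, timeout=timeout, allow_redirects=False)
        except requests.RequestException:
            continue
        ctype = r.headers.get("Content-Type", "")
        if r.status_code == 200 and "text/html" not in ctype:
            hits.append(url)
    return hits

if __name__ == "__main__":
    for hit in find_exposed_artifacts("https://suspicious.example"):
        print("[!] possible exposed artifact:", hit)
```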

Case 2 featured authentication bypass vulnerabilities that granted direct administrative access. The backend structure revealed copy-pasted code patterns without proper security implementation.
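A minimal sketch of the unauthenticated-access check behind this class of bypass follows; the endpoint paths are hypothetical stand-ins for the real panel routes:

```python
# Sketch of the kind of check that exposes broken authentication like
# Case 2's. Endpoint names are hypothetical; the real panel paths differ.
import requests

ADMIN_ENDPOINTS = ["/admin/index", "/admin/api/orders", "/admin/api/users"]

def find_unauthenticated_admin(base_url: str) -> list[str]:
    """Flag admin endpoints that answer a fresh, credential-less session."""
    exposed = []
    with requests.Session() as session:  # brand-new session: no cookies, no tokens
        for endpoint in ADMIN_ENDPOINTS:
            try:
                r = session.get(base_url.rstrip("/") + endpoint,
                                timeout=5, allow_redirects=False)
            except requests.RequestException:
                continue
            # A correctly built panel answers 302-to-login or 401/403 here.
            # A 200 with real content is the bypass class described above.
            if r.status_code == 200 and "login" not in r.text.lower():
                exposed.append(endpoint)
    return exposed
```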

Throughout both systems, we observed telltale signs of AI-generated code: verbose documentation in unexpected languages, inconsistent coding patterns, textbook-like naming conventions, and theoretical security implementations. Even the UI revealed LLM fingerprints—overly polished component layouts, placeholder text patterns, and design choices that felt distinctly "tutorial-like." These weren't experienced developers—they were operators deploying what LLMs gave them without understanding the internals.
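These telltales lend themselves to cheap automation. The sketch below scores recovered source files against a marker list; the patterns and thresholds are illustrative assumptions, not the exact heuristics from our pipeline:

```python
# Rough heuristic sketch for flagging "LLM fingerprints" in recovered
# source code. The marker list encodes the telltales described above;
# it is illustrative, not exhaustive.
import re
from pathlib import Path

MARKERS = [
    (r"//\s*(Step \d+|First,|Finally,)", "tutorial-style step comments"),
    (r"(?i)your[_ ](api[_ ]key|domain|password)[_ ]here", "placeholder text"),
    (r"(?i)in a (real|production) (app|application|environment)",
     "theoretical security caveat"),
    (r"(?i)lorem ipsum", "template filler"),
]

def score_file(path: Path) -> list[str]:
    """Return the labels of all markers found in one source file."""
    text = path.read_text(errors="ignore")
    return [label for pattern, label in MARKERS if re.search(pattern, text)]

def scan_tree(root: str) -> dict[str, list[str]]:
    """Map each .php file under `root` to the fingerprints it exhibits."""
    results = {}
    for f in Path(root).rglob("*.php"):
        hits = score_file(f)
        if hits:
            results[str(f)] = hits
    return results
```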

The irony? We used AI extensively too: for data parsing, pattern recognition, attack surface mapping, and intelligence queries. The difference was intentionality—we understood what the output meant.
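As one example of that intentionality, here is a hedged sketch of using an LLM to normalize messy dumped records into structured JSON. The endpoint URL and model name are placeholders for whatever OpenAI-compatible API you run, and the prompt is illustrative:

```python
# Sketch of the "better prompts" side: asking an LLM to turn messy dumped
# records into structured JSON. API_URL and the model name are placeholders
# for an OpenAI-compatible chat-completions endpoint of your choosing.
import json
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical
PROMPT = (
    "Extract every transaction from the text below as JSON objects with "
    "keys: victim_handle, amount, currency, timestamp, payout_address. "
    "Output a JSON array only. Do not invent fields that are not present."
)

def parse_dump(raw_chunk: str, api_key: str) -> list[dict]:
    """Send one chunk of dumped text to the model and parse its JSON reply."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "some-model",  # placeholder
            "messages": [
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": raw_chunk},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    # Assumes the model honored the "JSON array only" instruction.
    return json.loads(resp.json()["choices"][0]["message"]["content"])
```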

Using open-source intelligence platforms and carefully crafted fingerprints, we mapped over a hundred active domains following similar patterns. Each one shared the same architecture, the same weaknesses, the same developer mistakes. This repeatability became our advantage. When scammers can redeploy infrastructure in days, you don't attack individual sites—you automate the entire reconnaissance-to-evidence pipeline.
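One such fingerprint pivot can be reproduced with certificate transparency logs. The sketch below queries crt.sh's JSON interface for hostnames matching a naming pattern; the example pattern is hypothetical, and crt.sh's schema and rate limits can change:

```python
# Sketch of a certificate-transparency pivot: find hosts whose certs match
# a naming pattern observed on a known scam domain. Uses crt.sh's JSON
# output; the pattern below is a hypothetical example.
import requests

def ct_pivot(pattern: str) -> set[str]:
    """Return distinct hostnames whose certificates match `pattern`."""
    r = requests.get(
        "https://crt.sh/",
        params={"q": pattern, "output": "json"},
        timeout=30,
    )
    r.raise_for_status()
    hosts = set()
    for entry in r.json():
        # name_value holds newline-separated SAN entries.
        for name in entry.get("name_value", "").splitlines():
            hosts.add(name.lstrip("*."))
    return hosts

# Example: ct_pivot("%delivery-tw%") might surface sibling domains that
# share the same certificate naming convention as the seed site.
```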

This talk demonstrates practical techniques for mass-scale fraud infrastructure fingerprinting, operational security considerations when investigating active criminal operations, and methods to recognize AI-generated code patterns that reveal threat actor sophistication. We'll discuss the ethical boundaries of counter-fraud operations and evidence preservation for law enforcement, along with automation strategies for sustainable threat intelligence when adversaries rebuild faster than you can report. The demonstration will show how to go from a single suspicious domain to a network map of 100+ sites and thousands of victim records—using tools available to any researcher.
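As a rough skeleton of that single-domain-to-network-map pipeline, the sketch below chains the simplest possible content fingerprint (the page title) with a certificate transparency pivot. The real pipeline layers several signals, and every name here is a placeholder:

```python
# Skeleton of the recon pipeline: seed domain -> shared fingerprint ->
# candidate set -> confirmed lookalikes. The fingerprint used here (the
# page <title>) is deliberately simplistic; layer more signals in practice.
import re
import requests

def page_title(url: str) -> str | None:
    """Fetch a page and extract its <title>, or None on failure."""
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        return None
    m = re.search(r"<title[^>]*>(.*?)</title>", html, re.S | re.I)
    return m.group(1).strip() if m else None

def expand(seed: str, ct_pattern: str) -> list[str]:
    """From one seed domain, return candidate siblings with the same title."""
    fingerprint = page_title(f"https://{seed}")
    r = requests.get("https://crt.sh/",
                     params={"q": ct_pattern, "output": "json"}, timeout=30)
    r.raise_for_status()
    candidates = {
        name.lstrip("*.")
        for entry in r.json()
        for name in entry.get("name_value", "").splitlines()
    }
    return [h for h in sorted(candidates)
            if h != seed and page_title(f"https://{h}") == fingerprint]

# expand("seed-scam.example", "%seed-scam%") -> list of confirmed siblings
```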

This isn't a story about elite hackers versus criminal masterminds. It's about two groups equally reliant on AI vibing their way through technical problems—one for fraud, one for justice. The skill barrier has collapsed. The question now is: who has better context, better ethics, and better coffee?
