When Vibe Scammers Met Vibe Hackers: Pwning PhaaS with Their Own Weapons
Our journey began with a simple question: why are so many people losing money to fake convenience store delivery websites? The answer led us through two distinct criminal architectures, both exhibiting characteristics of large language model–assisted development.
Case 1 ran on PHP, with exposed backup artifacts that revealed implementation details and opened query manipulation opportunities. The installation package itself shipped with pre-existing access mechanisms; whether these were developer insurance or criminal-on-criminal sabotage remains unclear. We combined that initial access with protocol-level manipulation to bypass the platform's security restrictions and extracted gigabytes of operational data.
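To make the first step concrete, here is a minimal sketch of the kind of backup-artifact check involved: probing a host for commonly exposed archive and dump filenames. The target URL and path list are illustrative placeholders, not the actual paths we found, and any real investigation should log every request and stay within legal authorization.

```python
# Minimal sketch: probe a single host for commonly exposed backup artifacts.
# TARGET and CANDIDATE_PATHS are illustrative placeholders.
import requests

TARGET = "https://example-scam-domain.test"  # hypothetical placeholder
CANDIDATE_PATHS = [
    "/backup.zip", "/www.zip", "/install.zip",
    "/db.sql", "/dump.sql", "/config.php.bak", "/.env",
]

def find_exposed_artifacts(base_url: str) -> list[str]:
    """Return candidate paths that answer 200 with a non-HTML body."""
    hits = []
    for path in CANDIDATE_PATHS:
        try:
            resp = requests.head(base_url + path, timeout=10, allow_redirects=False)
        except requests.RequestException:
            continue
        content_type = resp.headers.get("Content-Type", "")
        if resp.status_code == 200 and "text/html" not in content_type:
            hits.append(path)
    return hits

if __name__ == "__main__":
    for hit in find_exposed_artifacts(TARGET):
        print(f"possible exposed artifact: {hit}")
```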
Case 2 featured authentication bypass vulnerabilities that granted direct administrative access. The backend structure revealed copy-pasted code patterns without proper security implementation.
Throughout both systems, we observed telltale signs of AI-generated code: verbose documentation in unexpected languages, inconsistent coding patterns, textbook-like naming conventions, and theoretical security implementations. Even the UI revealed LLM fingerprints—overly polished component layouts, placeholder text patterns, and design choices that felt distinctly "tutorial-like." These weren't experienced developers—they were operators deploying what LLMs gave them without understanding the internals.
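A rough heuristic captures the kind of signal we mean. The sketch below scores source files on comment density, placeholder strings, and textbook identifier names; the patterns, names, and thresholds are illustrative assumptions, not a calibrated classifier.

```python
# Rough heuristic sketch for flagging "LLM-flavoured" source files.
# Signals: comment density, placeholder strings, textbook naming.
# Thresholds and pattern lists are illustrative, not calibrated.
import re
from pathlib import Path

PLACEHOLDER_PATTERNS = [
    r"your[_ ]api[_ ]key", r"lorem ipsum", r"TODO: implement",
    r"example\.com", r"replace with",
]
TEXTBOOK_NAMES = {"myFunction", "handleSubmit", "processData", "doSomething"}

def llm_signature_score(path: Path) -> float:
    """Return a 0..1 score; higher means more LLM-looking."""
    text = path.read_text(errors="ignore")
    lines = text.splitlines() or [""]
    comment_lines = sum(1 for ln in lines if ln.strip().startswith(("//", "#", "*")))
    comment_ratio = comment_lines / len(lines)
    placeholders = sum(bool(re.search(p, text, re.I)) for p in PLACEHOLDER_PATTERNS)
    textbook_hits = sum(name in text for name in TEXTBOOK_NAMES)
    # Cap each signal at 1.0, then average the three.
    signals = [min(comment_ratio * 2, 1.0),
               min(placeholders / 3, 1.0),
               min(textbook_hits / 2, 1.0)]
    return sum(signals) / len(signals)

if __name__ == "__main__":
    for f in Path("dumped_source").rglob("*.php"):
        score = llm_signature_score(f)
        if score > 0.5:
            print(f"{score:.2f}  {f}")
```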
The irony? We used AI extensively too: for data parsing, pattern recognition, attack surface mapping, and intelligence queries. The difference was intentionality—we understood what the output meant.
Using open-source intelligence platforms and carefully crafted fingerprints, we mapped over a hundred active domains following similar patterns. Each one shared the same architecture, the same weaknesses, the same developer mistakes. This repeatability became our advantage. When scammers can redeploy infrastructure in days, you don't attack individual sites—you automate the entire reconnaissance-to-evidence pipeline.
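The fingerprint-to-domain-map step looks roughly like the sketch below, assuming a urlscan.io-style search API; the endpoint, query syntax, and the example fingerprint (a shared page title plus favicon hash) are assumptions, and any comparable OSINT platform would work the same way.

```python
# Sketch of mapping related domains from a shared fingerprint, assuming a
# urlscan.io-style search API. Endpoint, query, and env var are assumptions.
import os
import requests

API_URL = "https://urlscan.io/api/v1/search/"
# Hypothetical fingerprint: a page title and favicon hash shared by the kit.
QUERY = 'page.title:"Delivery Center" AND hash:<favicon-hash-here>'

def map_related_domains(query: str, limit: int = 100) -> set[str]:
    """Query the search API and collect the distinct domains it returns."""
    resp = requests.get(
        API_URL,
        params={"q": query, "size": limit},
        headers={"API-Key": os.environ.get("URLSCAN_API_KEY", "")},
        timeout=30,
    )
    resp.raise_for_status()
    return {item["page"]["domain"] for item in resp.json().get("results", [])}

if __name__ == "__main__":
    for domain in sorted(map_related_domains(QUERY)):
        print(domain)
```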
This talk demonstrates practical techniques for mass-scale fraud infrastructure fingerprinting, operational security considerations when investigating active criminal operations, and methods to recognize AI-generated code patterns that reveal threat actor sophistication. We'll discuss the ethical boundaries of counter-fraud operations and evidence preservation for law enforcement, along with automation strategies for sustainable threat intelligence when adversaries rebuild faster than you can report. The demonstration will show how to go from a single suspicious domain to a network map of 100+ sites and thousands of victim records—using tools available to any researcher.
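For the evidence-preservation piece, a minimal snapshot routine is enough to convey the idea: fetch a page, hash the body, and record when and how it was captured. File layout and metadata fields here are illustrative; actual chain-of-custody requirements will dictate the details.

```python
# Minimal evidence-preservation sketch: snapshot a page, hash the body,
# and record capture metadata. Field names and layout are illustrative.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

import requests

def snapshot(url: str, out_dir: Path) -> dict:
    """Fetch url, write the raw body plus a metadata record, return metadata."""
    out_dir.mkdir(parents=True, exist_ok=True)
    resp = requests.get(url, timeout=30)
    digest = hashlib.sha256(resp.content).hexdigest()
    (out_dir / f"{digest}.body").write_bytes(resp.content)
    meta = {
        "url": url,
        "fetched_at": datetime.now(timezone.utc).isoformat(),
        "status": resp.status_code,
        "sha256": digest,
        "headers": dict(resp.headers),
    }
    (out_dir / f"{digest}.json").write_text(json.dumps(meta, indent=2))
    return meta

if __name__ == "__main__":
    print(snapshot("https://example-scam-domain.test/admin/login", Path("evidence")))
```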
This isn't a story about elite hackers versus criminal masterminds. It's about two groups equally reliant on AI vibing their way through technical problems—one for fraud, one for justice. The skill barrier has collapsed. The question now is: who has better context, better ethics, and better coffee?