Mike Perry

This speaker has not yet provided information about themselves.

Events with this speaker

Day 2
17:35
40m
A Quick Stop at the HostileShop

Nothing stops [this train](https://ai-2027.com/). It just [might not arrive on schedule](https://www.interconnects.ai/p/brakes-on-an-intelligence-explosion)... LLMs appear unlikely to achieve either true human-level novelty creation or AGI. However, they excel at task execution in [well-established task domains](https://epochai.substack.com/p/most-ai-value-will-come-from-broad), even exceeding most humans in some of them. This capability set has yielded an "Agentic Revolution": LLMs are being deployed as components of software systems for a wide variety of tasks. These **LLM Agents** work **_just well enough_** to be deployed in scenarios for which they are either [not yet safe](https://brave.com/blog/comet-prompt-injection/) or [fundamentally impossible to secure](https://labs.zenity.io/p/why-aren-t-we-making-any-progress-in-security-from-ai-bf02).

The resulting vulnerability surface is reminiscent of the hacking scene of the 1990s, but moving at lightning pace, with exploits often patched within hours of circulating widely. The hacking dopamine treadmill has become an express train. Rather than hop right on what looked like an express train to Fail City, I wanted a tool that would **hack LLM Agents automatically**, and that would tell me if and when LLM Agents finally become secure enough for use in privacy-preserving systems, without relying on [oppressive](https://runtheprompts.com/resources/chatgpt-info/chatgpt-is-reporting-your-prompts-to-police/) [levels of surveillance](https://www.anthropic.com/news/activating-asl3-protections). All of this led me to create [HostileShop](https://github.com/mikeperry-tor/HostileShop).