AI Agent, AI Spy
The talk will provide a critical technical and political economy analysis of the new privacy crisis emerging from OS- and application-level AI agents, aimed at the 39C3 "Ethics, Society & Politics" audience.
-
Defining the Threat: The OS as a Proactive Participant (5 mins) We will begin by defining "Agentic AI" in two contexts - embedded into the operating system and deployed via critical gateway applications such as web browsers. Traditionally, operating systems and browsers have been largely neutral enforcers of user agency, managing resources and providing APIs so that applications can run reliably. We will argue that this neutrality is close to being eliminated. The new paradigm turns these platforms into proactive agents that observe, record, and anticipate user actions across all applications. The prime examples for this analysis will be Microsoft's "Recall" feature, Google's Magic Cue, and OpenAI's Atlas. Politically, we will frame this not as a "feature" but as the implementation of pervasive, non-consensual surveillance and remote-control infrastructure. This "photographic memory" of, and demand for undifferentiated access to, everything from private Signal messages to financial and health data creates a catastrophic single point of failure, making a single security breach an existential threat to a user's entire digital life. Ultimately, we hope to illustrate how putting our brains in a jar (with agentic systems) is effectively a prompt injection attack against our own humanity.
-
The Existential Threat to Application-Level Privacy (10 mins) The core of the talk will focus on what this means for privacy-first applications like Signal. We will explain the "blood-brain barrier" analogy: secure apps are meticulously engineered to minimize the data they hold and protect communications, relying on the OS to be a stable, neutral foundation on which to build. This new OS trend breaks that barrier. We will demonstrate how OS-level surveillance renders application-level privacy features, including end-to-end encryption, effectively useless. If the OS can screenshot a message before it's encrypted or after it's decrypted, the promise of privacy is broken, regardless of the app's design. We will also discuss the unsustainable "clever hacks" (like Signal repurposing a DRM screen-capture flag to keep its windows out of Recall's screenshots) that developers are forced to implement, underscoring the need for a structural solution.
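For context, the "DRM feature" referenced above is, on Windows, a display-affinity flag that tells the compositor to exclude a window from screen capture (Signal Desktop reaches it through Electron's content-protection setting). A minimal sketch of the underlying Win32 call, not Signal's actual code, looks roughly like this:

```c
#include <windows.h>

/* WDA_EXCLUDEFROMCAPTURE needs a Windows 10 2004+ SDK; define it for older ones. */
#ifndef WDA_EXCLUDEFROMCAPTURE
#define WDA_EXCLUDEFROMCAPTURE 0x00000011
#endif

/* Mark a window so OS-level capture (screenshots, Recall snapshots,
 * screen recording) sees an opaque rectangle instead of its content. */
static BOOL exclude_window_from_capture(HWND hwnd)
{
    /* On systems older than Windows 10 2004 this call fails and the
     * protection silently degrades - part of why it is a fragile hack
     * rather than a structural guarantee. */
    return SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE);
}
```

Because the OS itself controls the compositor that honors this flag, a sufficiently privileged OS component can simply ignore it, which is exactly why the talk treats such hacks as stopgaps rather than solutions.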
-
An Actionable Framework for Remediation (20 mins) The final, and most important, part of the talk will move from critique to action. We will present an actionable four-point framework as a "tourniquet" to address these immediate dangers:
a. Empower Developers: Demand clear, officially supported APIs that let developers designate individual applications as "sensitive", with such applications excluded from access by agentic systems (whether OS- or application-based) unless explicitly enabled (default opt-out). A hypothetical sketch of what such an API could look like follows this list.
b. Granular User Control: Move beyond all-or-nothing permissions. Users must have explicit, fine-grained control to grant or deny AI access on an app-by-app basis.
c. Mandate Radical Transparency: OS vendors and application developers must clearly disclose what data is accessed, how it's used, and how it's protected, in human-readable terms rather than buried in legalese. Laws and regulations must play an essential role, but we cannot simply wait for them to be enforced, or it will be too late.
d. Encourage and Protect Adversarial Research: We will conclude by reinforcing the need for a pro-privacy, pro-security architecture by default, looking at the legal frameworks that govern these processes and why they need to be enforced, and finally asking attendees to continue exposing vulnerabilities in such systems. It was only due to technically grounded collective outrage that Microsoft re-architected Recall, and we will need that energy if we are to win this war.
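To make point (a) concrete, here is a purely hypothetical sketch of the kind of opt-out API the framework demands; none of these names exist in any shipping OS today, and the sketch only illustrates the desired default posture:

```c
#include <stdio.h>

/* Hypothetical posture an app can request from the OS; nothing here is a real API. */
typedef enum {
    AGENT_ACCESS_DENY_BY_DEFAULT,   /* agentic capture/automation blocked unless the user opts in */
    AGENT_ACCESS_ALLOW_BY_DEFAULT   /* today's implicit status quo */
} agent_access_policy;

/* Stub standing in for the hypothetical OS entry point point (a) demands:
 * one call covering screenshots, accessibility-tree scraping, input replay,
 * and any OS- or browser-level agent. */
static int os_declare_sensitive_app(const char *app_id, agent_access_policy policy)
{
    printf("%s requests agentic-access policy %d\n", app_id, (int)policy);
    return 0; /* a real OS would persist this and enforce it in the compositor and agent runtime */
}

int main(void)
{
    /* A messenger marks itself sensitive before it renders any content. */
    return os_declare_sensitive_app("org.signal.desktop", AGENT_ACCESS_DENY_BY_DEFAULT);
}
```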
Speakers of this event
Udbhav Tiwari
Meredith Whittaker
Udbhav Tiwari is the VP for Strategy and Global Affairs at Signal. Udbhav's experience in the technology sector spans both global and regional contexts: he was formerly the Director for Global Product Policy at Mozilla, with prior roles at Google and the Centre for Internet and Society in India. He has testified before the U.S. Senate Committee on Commerce, Science and Transportation and been quoted as an expert by CNN, The Guardian, Wired, Financial Times, BBC, and Reuters. Udbhav was previously affiliated with the Carnegie Endowment for International Peace and was named to India Today's "India Tomorrow" list in 2020.