

As president of the non-profit foundation that oversees Signal, the messaging service known for its end-to-end encryption, Meredith Whittaker is always on the lookout for emerging security risks. Right now, she is particularly concerned about agentic A.I., which she warned could reach "a very dangerous juncture" during her speech at the AI for Good Summit in Geneva on July 8. A.I. agents pose a serious concern because of their access to the application layer, according to Whittaker. While these agents promise to make life easier by letting users "put your brain in a jar," they can also gather valuable and often sensitive data.
This is a core concern for Signal, which is trusted by tens of millions of users, including those in government, military, human rights and journalism, for confidential communication and guaranteed privacy. As Whittaker put it, Signal collects “as close to no data as possible.”
However, this focus on security could be compromised by A.I. agents, even when they perform simple tasks like booking a restaurant reservation. To complete such a task, an agent needs access to your calendar to find an available time, your web browser to search for a restaurant, your credit card to make the payment, and your contacts list and messaging apps like Signal to coordinate with friends.
Whittaker emphasized that Signal isn’t the only platform at risk from the rise of agentic A.I. These systems pose a competitive threat to any technology operating at the application layer. She pointed to Spotify as an example: an agent curating a playlist to share with friends could gain access to proprietary data the app uses to power its recommendation algorithms or sell ads. “Spotify doesn’t want to give every other company access to all of your Spotify data,” she said.
To mitigate these risks, Whittaker is calling for developer-level opt-outs that would block agentic A.I. from accessing certain apps altogether. She also stressed the importance of implementing agentic systems in an open manner that allows safety researchers to examine them and promotes rigorous security engineering.
"Yes, it's going to take a long time, it's going to be painful," Whittaker noted. "But you need to formally verify some of these system components if we're going to be integrating them into things like military operations or government infrastructures."

