Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier

Schneier: AI will enable a shift from observing actions to interpreting intentions, en masse.

In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor.

In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial reasons. Spying, by contrast, can take that kind of commercially motivated monitoring to a completely new level:

"Spying and surveillance are different but related things," Schneier writes. "If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did."

Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint. Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information. This capability, he argues, will not only make spying more accessible but also more comprehensive.
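The claim that AI can cheaply sift conversation data where humans could not is easy to illustrate. The following is a deliberately crude, self-contained sketch of extractive summarization: it scores sentences by word frequency and keeps the highest-scoring ones. The transcript, function names, and scoring rule are illustrative assumptions; the generative systems Schneier describes use large language models, not frequency counts, but the economics are the same: one program can "read" millions of transcripts.

```python
from collections import Counter

def summarize(text: str, n_sentences: int = 1) -> str:
    """Toy extractive summarizer: keep the sentences with the most
    frequently used words. A stand-in for LLM-based summarization."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # Count how often each word appears across the whole transcript
    freq = Counter(w.lower() for s in sentences for w in s.split())
    def score(s: str) -> int:
        return sum(freq[w.lower()] for w in s.split())
    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Emit the selected sentences in their original order
    return ". ".join(s for s in sentences if s in top) + "."

transcript = ("Alice said the merger is on schedule. "
              "Bob asked about lunch. "
              "Alice confirmed the merger closes Friday.")
print(summarize(transcript, 1))
```

Run over a folder of transcripts in a loop, even a toy like this turns hours of human listening into seconds of machine triage, which is the labor-cost collapse the piece is describing.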

"This spying is not limited to conversations on our phones or computers," Schneier writes. "Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and 'Hey, Google' are already always listening; the conversations just aren’t being saved yet."

From action to intent

We've recently seen a movement from companies like Google and Microsoft to feed what users create through AI models for the purposes of assistance and analysis. Microsoft is also building AI copilots into Windows, which require remote cloud processing to work. That means private user data goes to a remote server where it is analyzed outside of user control. Even if run locally, sufficiently advanced AI models will likely "understand" the contents of your device, including image content.

Microsoft recently said, "Soon there will be a Copilot for everyone and for everything you do."

Despite assurances of privacy from these companies, it's not hard to imagine a future where AI agents probing our sensitive files in the name of assistance start phoning home to help customize the advertising experience. Eventually, government and law enforcement pressure in some regions could compromise user privacy on a massive scale. Journalists and human rights workers could become initial targets of this new form of automated surveillance.

"Governments around the world already use mass surveillance; they will engage in mass spying as well," writes Schneier. He adds that AI tools can be replicated at scale and are continuously improving, so deficiencies in the technology today may soon be overcome.

What's especially pernicious about AI-powered spying is that deep-learning systems can analyze the intent and context of interactions through techniques like sentiment analysis. This marks a shift from observing actions, as traditional digital surveillance does, to interpreting thoughts and conversations, with potential consequences for everything from personal privacy to corporate and governmental strategies for information gathering and social control.
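To make "sentiment analysis" concrete, here is a minimal lexicon-based sketch. The word lists and threshold are illustrative assumptions; production systems use trained deep-learning classifiers rather than hand-picked vocabularies, but the output is the same kind of signal: a machine-readable guess at how a speaker feels about what they are discussing.

```python
# Toy lexicon-based sentiment scorer -- a simple stand-in for the
# deep-learning sentiment analysis described in the article.
POSITIVE = {"great", "happy", "love", "agree", "support"}
NEGATIVE = {"angry", "hate", "afraid", "oppose", "protest"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: negative, neutral, or positive leaning."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I love this plan and fully support it"))   # 1.0
print(sentiment_score("People are angry and plan to protest"))    # -1.0
```

The point of the sketch is the shift Schneier identifies: the score is not a record of where someone went or what they bought, but an inference about what they think.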

In his editorial, Schneier raises concerns about the chilling effect that mass spying could have on society, cautioning that the knowledge of being under constant surveillance may lead individuals to alter their behavior, engage in self-censorship, and conform to perceived norms, ultimately stifling free expression and personal privacy.

So what can people do about it? Anyone seeking protection from this type of mass spying will likely need to look to government regulation to keep it in check, since commercial pressures often trump technological safety and ethics. President Biden's Blueprint for an AI Bill of Rights mentions AI-powered surveillance as a concern, and the European Union's draft AI Act may address the issue obliquely, though apparently not directly. Neither is currently in legal effect.

Schneier isn't optimistic on that front, however, closing with the line, "We could prohibit mass spying. We could pass strong data-privacy rules. But we haven’t done anything to limit mass surveillance. Why would spying be any different?" It's a thought-provoking piece, and you can read the entire thing on Slate.