We trust our email providers to be neutral carriers of our private thoughts, much like a postal service. Google has traded that trust for a surveillance model that scans, stores, analyzes, and manipulates. In an age of invisible algorithms, how do we rebuild a digital world where “free” doesn’t mean “surrendered”?
In 2012, when I began to understand how potentially dangerous Google was – that in more than one respect, we paid for the company’s apparently free services with our freedom – I grew increasingly reluctant to communicate with people through Gmail, Google’s surveillance email system. First introduced in 2004, Gmail is now the most widely used email system in the world, with approximately two billion users; it is everywhere now except in mainland China and North Korea.
Driven by disturbing findings from a program of rigorous empirical research I began in 2013 and continue to this day, I soon decided to stop communicating with Gmail users, even though the majority of emails I received each day were Gmails. Instead of replying to the content of such emails, I decided to turn each of them into an opportunity to teach people about the risks they were taking by using Gmail – and about the true nature of subliminal surveillance systems.
Here, slightly edited, is how I began replying to Gmail users. I have been responding this way for more than a decade now:
Dear emailer: I would be happy to read your email (truly!), but, for privacy reasons, I no longer communicate with people through Gmail or other email services that route through Google surveillance servers. Neither should you!
In spite of appearances, Gmail is not a communications system – at least not one like a national postal service. Unlike a postal service, Google has no legal obligation to deliver mail, and it routinely and deliberately delays, alters, diverts (usually to spam folders), or deletes emails, as it pleases. It also routinely cuts people off from their Gmail accounts – in many cases, from a decade or more of their emails.
From a business perspective, Gmail is actually just a surveillance platform that tricks people into revealing personal information. It is nothing like a national postal service.
Google analyzes, monetizes, and permanently stores all emails. It adds email content to our personal profiles and to those of the friends and family members we mention in our emails. It uses that content to construct models that predict our behavior and that give the company more and more power to influence virtually everything we think and do. It also shares that content with business partners, software developers, and government agencies in the US and other countries.
Postal services do none of these things. Neither do other traditional sources of information, such as public libraries.
If you have any concerns about your own privacy, or if you at least respect the privacy of your correspondents, you should consider switching to a different email service. Please see articles I have published on this topic, such as “Free Isn’t Freedom: How Silicon Valley Tricks Us” or “Google’s Dance.”
Even if you don’t value your own privacy, you should do the people with whom you correspond the honor of valuing theirs. I’d like to make up my own mind about what content I do or do not share with personnel and algorithms at Google. When you communicate with people using your Gmail account, you take away that power from each and every one of them.
If you wish to communicate with me, please write to me from a non-Gmail email address. If you are looking for a secure email service, I recommend https://protonmail.com. It is subject to strict Swiss privacy laws and uses end-to-end encryption. The basic service is free. Yours sincerely, /re
Over the years, about half of the people to whom I have sent this response have followed up with emails from Proton or other services that use encryption to protect people’s privacy. The other half, from whom I never heard again, have presumably sunk even deeper into the surveillance morass.
I am not the only one, it turns out, who has concerns about Gmail. In 2019, on a visit to Berlin to attend the premiere of the documentary film “The Creepy Line,” which gives an overview of the scientific research I have been conducting on Google’s ability to manipulate elections, I met the prominent German attorney Markus Runde. At that time, he was Managing Director of VG Media, which represented hundreds of media organizations in Europe on copyright matters. In chatting, we were surprised to learn we had been replying to Gmail users the same way for years; like me, Runde did not reply to the substance of their messages but simply warned them about the dangers of using Gmail.
We were two like-minded souls, each deeply concerned about online surveillance, each doing our small part, day by day, one person at a time, to shed some light on this serious problem – a problem that is unprecedented in human history and that is now impacting close to six billion people worldwide.
The Surveillance Problem
Surveillance is only part of a much larger problem, but let’s focus on it for a moment. Google, which invented the “surveillance business model,” doesn’t just read and analyze the incoming and outgoing emails of billions of people; it also collects personal information about people worldwide – everywhere outside of mainland China and North Korea – using roughly 200 different surveillance tools, most of which people have never heard of. People are dimly aware that Google tracks and analyzes their searches on its ubiquitous search engine (which controls 92 percent of search worldwide), their activity on its ubiquitous Android devices (73 percent of mobile phones and tablets worldwide), everything they type into Google Docs, every video they watch on YouTube (which Google owns), and everything they do on Google Chrome, Wallet, Maps, and so on.
But those very visible apps are just a small part of the problem. Most non-Google websites – tens of millions, in fact – use Google Analytics to track traffic, which gives Google the right and the power to track everything users do on those websites. Similarly, millions of websites embed Google Ads and AdSense into their content, giving Google the right and the power to track people when they click advertising links. This kind of tracking is invisible to users; they have no idea that Google tools are embedded into virtually every website they use.
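Readers can get a feel for this themselves. Below is a minimal sketch, in Python, of how one might scan a page’s HTML source for well-known Google tracker domains; the domain list is illustrative and far from exhaustive, and the sample page is made up for the example:

```python
# Minimal sketch: scan a page's HTML source for well-known Google tracker
# domains. The domain list is illustrative, not exhaustive.

GOOGLE_TRACKER_DOMAINS = [
    "google-analytics.com",
    "googletagmanager.com",    # loads the Analytics/gtag script
    "googlesyndication.com",   # AdSense
    "doubleclick.net",         # Google's ad-serving network
]

def find_google_trackers(html: str) -> list[str]:
    """Return the tracker domains that appear anywhere in the page source."""
    html = html.lower()
    return [d for d in GOOGLE_TRACKER_DOMAINS if d in html]

# Hypothetical page source with an embedded Analytics snippet:
sample_html = """
<html><head>
<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXX"></script>
</head><body>Hello</body></html>
"""

print(find_google_trackers(sample_html))  # ['googletagmanager.com']
```

Privacy-oriented browser extensions do essentially this, with much longer block lists; running such a check against real pages makes the scale of invisible tracking tangible.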
Even more startling, perhaps, is the recent revelation that sometime in 2026, Google’s Gemini AI software will be running Apple’s ubiquitous Siri personal assistant on all Apple devices. That brings another billion people into Google’s fold, many of whom use Apple products precisely to avoid Google’s aggressive surveillance. Powered by Gemini, Siri will become part of Google’s surveillance ecosystem.
Who gave Google such rights and powers? Well, you did, although some demonic logic is involved here. Google claims that (a) long ago, without your awareness, you probably clicked a button agreeing to their Terms of Service (even though you didn’t read it and you don’t remember clicking the button), and (b) you are bound by their Terms of Service whenever you use a Google algorithm, even if you are unaware you are using such an algorithm and even if you never clicked the agreement button. See any circularity here?
So are users to blame for the surveillance? Not at all. Instead, we should all be outraged by the fact that our nations’ leaders and courts have failed to protect us. Courts, regulators, and legislators in the EU have made some efforts – for example, by implementing the General Data Protection Regulation (GDPR) in 2018. But the GDPR hasn’t slowed Google down one iota. In fact, because small companies can’t handle the legal demands of this regulation, its main effect has been to discourage potential Google competitors.
Add to all this the sad fact that thousands of companies now use the online surveillance model. Google is the worst offender, but Facebook isn’t far behind, and neither are X, Instagram, TikTok, Amazon, and so on. All these companies collect and monetize your personal data. Apple still seems to respect user privacy, but that could change in an instant with a change in leadership.
The Manipulation Problem
My own research has focused on a problem that is inherently even more dangerous than the surveillance, and that is the power that Google, and, to a lesser extent, other tech companies, have to manipulate thinking and behavior. And speaking of surveillance, one of my recently published studies suggests that the more you know about people, the more easily you can manipulate them online. Moreover, one of the most common ways Big Tech companies manipulate people is by deliberately addicting them to their websites, so surveillance and manipulation are intertwined.
My first study on online manipulation showed that a search engine can shift the voting preferences of undecided voters by between 20 and 80 percent simply by showing them search results that favor one political candidate – by which I mean that clicking on high-ranking search results brings users to websites that make that candidate look superior.
Since then, I have discovered, named, studied, and quantified about a dozen other new forms of manipulation that the internet has made possible, among them the Search Suggestion Effect (SSE), the Answer Bot Effect (ABE), the Targeted Messaging Effect (TME), and the Opinion Matching Effect (OME).
Some of these effects are among the most powerful forms of influence ever discovered in the behavioral sciences. They are especially dangerous because they are invisible to users. Worse still, almost all of these manipulations are under the exclusive control of Google and, to a lesser extent, other Big Tech companies. Because all but one of these companies lean left politically – the exception being X (formerly Twitter) since its purchase by Elon Musk in 2022 – they all send similarly biased content to users. In one of my latest studies, I showed that similarly biased content on multiple platforms has an additive impact on users. That additivity gives these companies an unparalleled ability to indoctrinate children, fix elections, and undermine human autonomy worldwide.
Google whistleblowers and leaks of documents and other materials from the company suggest that Google is using its manipulative powers systematically. If you doubt this, watch the eight-minute video – “The Selfish Ledger” – that leaked from the company in 2018, or read the annotated transcript I made of the video. This video, made by Google’s advanced products division, is about the company’s ability to reshape humanity according to “Google’s values.”
The question, then, is not only how to opt out individually, but how democracies can audit systems that influence information flows at scale.
Making Big Tech Companies Accountable to the Public
Besides the statements of whistleblowers and the occasional leaks of materials, is there any way for us to know what content the tech companies are actually sending to everyone? Is there any way for us to look over the shoulders of a large group of real users – with their permission, of course, and protecting their privacy – and capture, aggregate, and analyze the content they are seeing on their screens? Perhaps we could do this so quickly that we could catch Google in the act of interfering in an election. Perhaps we could expose manipulations so quickly that we could force Big Tech companies to back off – a change we would detect in real time.
A monitoring system of this sort would not only make Big Tech companies – the Google of today and the Googles of tomorrow – accountable to the public; it would also show the European Commission (EC) whether Google was complying with rules or laws it had put in place. In March 2024, the EC expressed concern publicly that Apple, Google, and Meta (owner of Facebook) were not in compliance with the EU’s Digital Markets Act, but, without monitoring, it had no way of knowing for sure. There is reason to believe that in the EU, Big Tech companies have simply paid the fines and then largely ignored the new rules and laws.
One of the main reasons they can get away with such defiance is that so much of the content they show people – search results, search suggestions, newsfeeds, AI replies, and so on – is ephemeral in nature: It appears, has an impact, and then disappears, leaving no paper trail for authorities to trace. The studies I began in 2013 measured the impact of biased ephemeral content on people who were undecided – that is, people who were vulnerable to being influenced. In 2018, the Wall Street Journal published an article about a leak of emails from Google in which employees discussed how they might use “ephemeral experiences” to change people’s views about Trump’s travel ban. My head spun when I saw that. Here were Google employees talking about the very kind of powerful manipulative content I had been studying for years.
As it happens, my team and I had also begun building real monitoring systems in 2016 (the year of the Clinton v. Trump Presidential race) – recruiting registered voters and installing custom software on their computers that allowed us to capture and rapidly aggregate personalized ephemeral content related to the upcoming November election. That year, we recruited 95 voters in 24 US states and preserved more than 13’000 searches – more than 130’000 search results – on Google, Bing, and Yahoo (the latter two for comparison purposes), along with the 98’000 webpages to which the search results linked.
On Google – but not on Bing or Yahoo – we found significant pro-Clinton bias in all ten positions on the first page of search results. Our experimental research suggested that if that level of bias was being shown by Google to voters nationwide, Google could have shifted between 2.6 and 10.4 million votes to Clinton without anyone’s awareness. She lost the election because of our peculiar Electoral College system, but she won the popular vote by 2.8 million votes.
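To give a concrete sense of what this kind of aggregation looks like, here is a minimal sketch in Python with hypothetical records and bias ratings (the real dataset contained more than 130’000 rated results): each captured search result carries a rank position and a bias rating, and the mean bias at each position is computed across all captured searches.

```python
# Hedged sketch of the aggregation step in a search-bias analysis.
# Each record is one captured search result: its rank position (1-10) and a
# bias rating from -1.0 (favors candidate A) to +1.0 (favors candidate B).
# The records below are hypothetical, for illustration only.
from collections import defaultdict

records = [
    {"position": 1, "bias": 0.6},
    {"position": 1, "bias": 0.4},
    {"position": 2, "bias": 0.5},
    {"position": 2, "bias": 0.1},
    {"position": 3, "bias": -0.2},
]

def mean_bias_by_position(records):
    """Average the bias ratings of all results captured at each rank position."""
    sums, counts = defaultdict(float), defaultdict(int)
    for r in records:
        sums[r["position"]] += r["bias"]
        counts[r["position"]] += 1
    return {p: sums[p] / counts[p] for p in sorted(sums)}

print(mean_bias_by_position(records))  # {1: 0.5, 2: 0.3, 3: -0.2}
```

A consistent positive (or negative) mean across all ten first-page positions is the kind of pattern the analysis described above looks for; a neutral engine should hover near zero at every position.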
Our monitoring systems grew with every subsequent US election, and so did Google’s awareness of our oversight. By late 2023, we had built a nationwide system that was capturing ephemeral content from more than 13’000 registered voters in all 50 US states, and we had preserved more than 66 million personalized ephemeral experiences on the platforms of multiple tech companies. I testified before the US Congress about this nationwide system – the first in the world – in December 2023.
As a result of our efforts, in the months leading up to the 2024 Presidential election, we saw liberal bias drop by 20 percent on Google Search and by 50 percent on YouTube. Most important, five days before Election Day, we caught Google sending out 50 percent more go-vote reminders to Democrats than to Republicans on its home page, which is seen more than 500 million times a day in the US. When we exposed them, they stopped. Between November 1st and 5th, they sent an equal number of go-vote reminders to members of both parties.
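To illustrate how a disparity of this kind can be quantified from monitoring data, here is a hedged sketch with hypothetical counts (the function, variable names, and numbers are mine for illustration, not the actual system’s): it compares the fraction of monitored Democrats and Republicans who were shown a reminder and computes a standard two-proportion z statistic.

```python
# Sketch: quantifying a partisan disparity in "go-vote" reminders from
# monitoring data. All counts are hypothetical illustrations.
import math

def reminder_disparity(dem_seen, dem_total, rep_seen, rep_total):
    """Return per-group reminder rates and a two-proportion z statistic."""
    p_dem = dem_seen / dem_total
    p_rep = rep_seen / rep_total
    pooled = (dem_seen + rep_seen) / (dem_total + rep_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / dem_total + 1 / rep_total))
    return p_dem, p_rep, (p_dem - p_rep) / se

# Hypothetical: 900 of 1'500 monitored Democrats saw a reminder vs 600 of
# 1'500 monitored Republicans - a 50 percent higher rate for Democrats.
p_dem, p_rep, z = reminder_disparity(900, 1500, 600, 1500)
print(f"Dem rate {p_dem:.0%}, Rep rate {p_rep:.0%}, z = {z:.1f}")
```

With samples of this size, a disparity that large yields a z statistic far beyond any conventional significance threshold, which is why a monitoring panel can flag such behavior almost as soon as it begins.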
In other words, monitoring made them accountable. As Louis Brandeis, later a US Supreme Court Justice, wrote in 1913, “Sunlight is said to be the best of disinfectants.”
Our real-time public dashboard showing the bias in online content 24 hours a day can be accessed at https://AmericasDigitalShield.com. We are currently monitoring through the computers of a politically balanced group of more than 17’000 registered voters (and many of their children) nationwide. In early 2026, we will begin expanding the system in preparation for the US midterm elections in November.
Should you trust the content Big Tech companies are sending you? No. You should be wary.
Can such companies be made accountable to the public? I say, emphatically, yes.
by Robert Epstein for Schweizer Monat – https://schweizermonat.ch/the-betrayal-of-trust-why-gmail-is-not-a-public-service/#