New facts are emerging about how Facebook's WhatsApp monitors users' content, despite assurances from Mark Zuckerberg that the company has a strong reputation for building privacy-protective services.
WhatsApp’s increasingly aggressive business plan focuses on charging companies for an array of services, such as letting users make payments via WhatsApp and managing customer service chats, that offer convenience but fewer privacy protections. The result is a confusing two-tiered privacy system within the same app, in which the protections of end-to-end encryption are further eroded when WhatsApp users employ the service to communicate with businesses.
The company’s December marketing presentation captures WhatsApp’s diverging imperatives. It states that “privacy will remain important.” But it also conveys what seems to be a more urgent mission: the need to “open the aperture of the brand to encompass our future business objectives.”
According to leaked data, WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas; Dublin; and Singapore, where they examine millions of pieces of users’ content.
Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems.
These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.
WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove “the worst” abusers. But Woog told ProPublica that the company does not consider this work to be content moderation, saying: “We actually don’t typically use the term for WhatsApp.” The company declined to make executives available for interviews for this article, but responded to questions with written comments.
WhatsApp’s denial that it moderates content is noticeably different from what Facebook Inc. says about WhatsApp’s corporate siblings, Instagram and Facebook.
The company has said that some 15,000 moderators examine content on Facebook and Instagram, neither of which is encrypted. It releases quarterly transparency reports that detail how many accounts Facebook and Instagram have “actioned” for various categories of abusive content. There is no such report for WhatsApp.
Deploying an army of content reviewers is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp — the largest messaging app in the world, with two billion users — far less private than its users likely understand or expect.
Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission.
The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false.
“We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment.
Facebook Inc. has also downplayed how much data it collects from WhatsApp users, what it does with that data and how much of it it shares with law enforcement authorities. For example, WhatsApp shares metadata, unencrypted records that can reveal a great deal about a user’s activity, with law enforcement agencies such as the Department of Justice.
WhatsApp user data, ProPublica has learned, helped prosecutors build a high-profile case against a Treasury Department employee who leaked confidential documents to BuzzFeed News that exposed how dirty money flows through U.S. banks.
Like other social media and communications platforms, WhatsApp is caught between users who expect privacy and law enforcement entities that effectively demand the opposite: that WhatsApp turn over information that will help combat crime and online abuse.
WhatsApp has responded to this dilemma by asserting that it’s no dilemma at all. “I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes,” said Will Cathcart, whose title is Head of WhatsApp, in a YouTube interview with an Australian think tank in July.