YouTube moderators are being forced to sign a statement acknowledging the job can give them PTSD

Content moderators for YouTube are being ordered to sign a document acknowledging that performing the job can cause post-traumatic stress disorder (PTSD), according to interviews with employees and documents obtained by The Verge. Accenture, which operates a moderation site for YouTube in Austin, Texas, distributed the document to workers on December 20th — four days after The Verge published an investigation into PTSD among workers at the facility.

“I understand the content I will be reviewing may be disturbing,” reads the document, which is titled “Acknowledgement” and was distributed to employees using DocuSign. “It is possible that reviewing such content may impact my mental health, and it could even lead to Post Traumatic Stress Disorder (PTSD). I will take full advantage of the weCare program and seek additional mental health services if needed. I will tell my supervisor/or my HR People Adviser if I believe that the work is negatively affecting my mental health.”

The PTSD statement comes at the end of the two-page acknowledgment form, and it is surrounded by a thick black border to signify its importance. It may be the most explicit acknowledgment yet from a content moderation company that the job now being done by tens of thousands of people around the world can come with severe mental health consequences.

“The wellbeing of our people is a top priority,” an Accenture spokeswoman said in an email. “We regularly update the information we give our people to ensure that they have a clear understanding of the work they do — and of the industry-leading wellness program and comprehensive support services we provide.”

Accenture said it shares information about potentially disturbing content with all of the content moderators it employs, including those who work on its contracts with Facebook and Twitter. But it would not answer questions about whether it specifically informs Facebook and Twitter moderators that they are at risk for PTSD. The Verge has previously interviewed Facebook moderators working for Accenture competitor Cognizant in Phoenix, Arizona, and Tampa, Florida, who have been diagnosed with PTSD after viewing violent and disturbing content.

In a statement, Facebook said it did not review or approve forms like the one Accenture sent. A Twitter spokeswoman said that both full-time and contract Twitter employees receive information when they join the company acknowledging that they might have to view sensitive material as part of their jobs. It is not clear whether contract workers for Facebook or Twitter have been asked to sign the PTSD acknowledgment form. (If you're a contract worker for either company and have been asked to sign one, please email us.)

The PTSD form describes various support services available to moderators who are suffering, including a “wellness coach,” a hotline, and the human resources department. (“The wellness coach is not a medical doctor and cannot diagnose or treat mental health disorders,” the document adds.)

It also seeks to make employees responsible for monitoring changes in their mental health and orders them to disclose negative changes to their supervisor or HR representative. It instructs employees to seek outside help if necessary as well. “I understand how important it is to monitor my own mental health, particularly since my psychological symptoms are primarily only apparent to me,” the document reads. “If I believe I may need any type of healthcare services beyond those provided by [Accenture], or if I am advised by a counselor to do so, I will seek them.”

The document adds that “no job is worth sacrificing my mental or emotional health” and that “this job is not for everyone” — language that suggests employees who experience mental health struggles as a result of their work do not belong at Accenture. It does not state that Accenture will make reasonable accommodations to employees who become disabled on the job, as required by federal law. Labor attorneys told The Verge that this language could be construed to suggest that employees may be terminated for becoming disabled, which would be illegal.

“I’m acknowledging that if I disclose my mental health to you, you may be able to fire me. That isn’t allowed,” said Alreen Haeggquist, an employee rights attorney based in California.

Accenture says signing the document is voluntary. But two current employees told The Verge that they were threatened with being fired if they refused to sign. The document itself also says that following its instructions is required: “Strict adherence to all the requirements in this document is mandatory,” it reads. “Failure to meet the requirements would amount to serious misconduct and for Accenture employees may warrant disciplinary action up to and including termination.”

Employment law experts contacted by The Verge said Accenture’s requirement that employees tell their supervisor about negative changes to their mental health could be viewed as an illegal requirement to disclose a disability or medical condition to an employer.

“I would think it’s illegal to force an employee to disclose any sort of disability to you,” Haeggquist said.

Accenture said employees are not being asked to disclose disabilities or medical conditions, and it framed the document as a general disclosure that it has been providing new employees for years. But it would not say why the PTSD disclosure was distributed mere days after The Verge’s investigation.

Accenture refused to disclose when it became aware that its workers were getting PTSD from exposure to YouTube content, how many workers have been affected so far, or whether it intended to use employees’ signatures as a legal defense against the current and future class action lawsuits it faces.

Google also would not answer questions about the prevalence of PTSD among its workforce of moderators. Instead, it issued this statement:

Moderators do vital and necessary work to keep digital platforms safer for everyone. We choose the companies we partner with carefully and require them to provide comprehensive resources to support moderators’ wellbeing and mental health.

Google would not comment on its vendor’s explicit warning to YouTube moderators that the job is harmful to workers’ mental health.

The Verge’s investigation last month into Accenture’s Austin site described hundreds of low-paid immigrants toiling in what the company calls its violent extremism queue, removing videos flagged for extreme violence and terrorist content. Working for $18.50 an hour, or about $37,000 a year, employees said they struggled to afford rent and were dealing with severe mental health struggles. The moment they quit Accenture or get fired, they lose access to all mental health services. One former moderator for Google said she was still experiencing symptoms of PTSD two years after leaving.

It’s unclear how common it is for content moderators to get PTSD. Based on my own interviews with more than 100 moderators over the past year, the number appears to be significant. And many other employees develop long-lasting mental health symptoms that stop short of full-blown PTSD, including depression, anxiety, and insomnia.

Accenture’s PTSD disclosure comes as class action lawsuits are gathering steam around the world targeting tech platforms and their vendors. Facebook alone currently faces lawsuits that are seeking class action status in California and in Ireland.

Under the Occupational Safety and Health Act of 1970 (OSHA), employers are required to provide a workplace that is free of hazards that can cause serious harm or death. The act was designed to acknowledge that most employees cannot avoid unsafe conditions that are created by their employers, said Hugh Baran, a staff attorney with the nonprofit National Employment Law Project.

Baran, who reviewed the document for The Verge, said it read as “reverse psychology” — getting workers to blame themselves for any mental health struggles they have as a result of working in content moderation.

“It seems like these companies are unwilling to do the work that’s required by OSHA and reimagine these jobs to fit the idea that workers have to be safe,” Baran said. “Instead they’re trying to shift the blame onto workers and make them think that there’s something wrong in their own behavior that is making them get injured.”

Baran said forcing workers to sign PTSD acknowledgments could make them less likely to sue in the event that they become disabled. But companies like Accenture would still be liable for harm caused on the job, he said. “Under most understandings of OSHA, it doesn’t matter what you make people sign,” Baran said. “You can’t [eliminate] your burden to provide your employees with safe working conditions.”

Meanwhile, employees I spoke with expressed shock that only recently — after they had been doing the job for more than a year — did Accenture acknowledge that the work could scar them deeply and perhaps permanently.

“If I knew from the beginning how this job would impact our mental health, I would never have taken it,” one said.

Big tech CEOs are learning the art of the filibuster

You maniacs sold out our second-ever Interface live event, with Uncanny Valley author Anna Wiener, in record time. Thanks to everyone who bought a ticket, and we’ll look into finding a bigger venue for the next one. In the meantime, I’m looking forward to seeing a good number of you on February 4th!

The basic idea behind journalism is that there are things people don’t know that they should know, and that someone ought to go find the people who do know about the things and ask them. Most of the time when a journalist interviews someone, they learn something useful, and then report it all back to us so we can have a shared understanding of reality and make better decisions about how to live.

Historically, a person that lots of journalists have wanted to talk to is the big tech CEO. As companies like Amazon and Apple grew in power, getting the chance to sit down with a Jeff Bezos or a Tim Cook became wildly appealing. Here were people who knew about many, many things — things that affected almost all of us — and could tell us about them with a candor that their employees typically will not permit themselves.

And yet when you think of what you have learned from reading the thoughts of tech CEOs over the past few years — well, what have you learned? If you hang around the darker, more thought-leader-y corners of Medium, it’s possible you’ll have gleaned a few insights into customer acquisition or recruiting. But if what you’re after is a CEO’s worldview — or even just a moderately unvarnished look into their decision-making process — you typically come up empty.

I thought about all this today while reading Adam Lashinsky’s “conversation” with Google and Alphabet CEO Sundar Pichai in Fortune. It’s the first long interview Pichai has given since being elevated to the role of Alphabet CEO, and Lashinsky asks him about many of the subjects you would expect from a journalist in his position.

Lashinsky asks why Alphabet exists, and whether Pichai will crack down on the spending of its non-Google companies. He asks what companies Pichai considers to be his competition, and whether he has a plan to deal with the possibility that the US government will attempt to break up Alphabet on antitrust grounds. And what Lashinsky gets back from Google is … almost nothing at all. Here’s a characteristically empty exchange:

Who do you see as your biggest competitors?

I’ve always worried as a company at scale your biggest competition is from within, that you stop executing well, you focus on the wrong things, you get distracted. I think when you focus on competitors you start chasing and playing by the rules of what other people are good at rather than what makes you good as a company.

Do you have a scenario you plan for in which regulators break up Alphabet on antitrust grounds?

At our scale we realize there will be scrutiny. We always engage constructively, and we take feedback to the extent there are areas where sometimes we may not agree with it. But obviously we understand the role of regulators.

So there you have it: Alphabet’s biggest competitor is itself (?), and its plan for an attempted breakup of the company is understanding the role that regulators play in society.

To be clear, I’m not criticizing Fortune here. These are questions that almost anyone would have asked. And I doubt there are many reporters who would have gotten different answers.

But it’s clear that as prevailing sentiment about big tech companies has darkened, tech CEOs see increasingly little value in having meaningful public conversations. Instead, they grit their teeth through every question, treating every encounter as something in between a legal deposition and a hostage negotiation.

We saw this in 2018, when the New Yorker profiled Mark Zuckerberg. We saw it again last year, when Jack Dorsey went on a podcast tour. At some point this year Tim Cook will probably give a zero-calorie interview to someone, and if it’s a slow-enough news day I’ll write this column for a fourth time.

To some extent, CEOs’ reluctance to engage is understandable. When you are effectively a head of state, and staring down the barrel of potentially company-ending regulation, you have strong incentives to do as little thinking in public as possible. But journalists, I think, have a responsibility to point that out in real time — to call a dodge a dodge.

For a long time I thought the point of journalism was to get into a room with the CEO and ask the big questions. I was embarrassingly late to realize that the bigger story was almost always elsewhere, in the events unfolding just outside the CEO’s field of vision. Access will always have its appeal, and I still wouldn’t turn down an interview with the CEO of any company I cover. But as I wrote about Dorsey’s podcast tour:

The CEO is traditionally in the best position to enact change. But we have learned that once social networks grow to a certain scale, they begin to operate beyond their creators’ control. You can ask the CEOs what they plan to do about it. But the answers will always tell you less than you hope.

The solution, as ever, is to fall back on that oldest journalistic principle: talk to the people who know the things you want to find out. And that starts with the recognition that the CEO doesn’t have a monopoly on knowledge — and that to the extent he knows anything useful, he is probably going to do his best not to talk about it.


Yesterday, we included a link to this story about Facebook and Twitter having evidence that could keep people out of prison—but the Stored Communications Act forbids them from giving it up. Alex Stamos pushed back against the article’s framing on Twitter, saying “Updating these Reagan-signed laws is totally reasonable, but I really dislike reporting that doesn’t take into account the last decade of tech companies fighting to keep user data classified as stored communications content.”

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: Pinterest CEO Ben Silbermann is talking about his decision to pull all medical information from the platform after users started searching for vaccines. He said “we couldn’t ensure that we were giving people great information.”

Trending down: The spread of misinformation about climate change has increased during Australia’s bushfires, demonstrating the limits of Facebook’s fact-checking program. Fact checkers have focused on misleading photos and videos, but are leaving climate-related misinformation mostly untouched.


Clearview AI, the controversial facial-recognition company that has amassed a database of billions of photos, claimed it helped crack a case of alleged terrorism in a New York City subway station last August in a matter of seconds. The cops say that’s not true. Ryan Mac, Caroline Haskins and Logan McDonald from BuzzFeed report:

As it emerges from the shadows, Clearview is attempting to convince law enforcement that its facial recognition tool, which has been trained on photos scraped from Facebook, Instagram, LinkedIn, and other websites, is more accurate than any other on the market. However, emails, presentations, and flyers obtained by BuzzFeed News reveal that its claims to law enforcement agencies are impossible to verify — or flat-out wrong.

For example, the pitch email about its role in catching an alleged terrorist, which BuzzFeed News obtained via a public records request last month, explained that when the suspect’s photo was “searched in Clearview,” its software linked the image to an online profile with the man’s name in less than five seconds. Clearview AI’s website also takes credit in a flashy promotional video, using the incident, in which a man allegedly placed rice cookers made to look like bombs, as one example among thousands in which the company assisted law enforcement. But the NYPD says this account is not true.

Sixty years ago, Woody Bledsoe invented technology that could identify faces. Today, his place in the field of facial recognition has been largely forgotten. This profile dives into his story — and his relationship with the CIA, which seems to have become one of his clients. (Shaun Raviv / Wired)

Adrian Chen writes about why facial recognition has suddenly become a hot-button issue. “The most obvious answer is that the technology has been improved, streamlined, and commercialized,” he argues as part of a package of stories about facial recognition. (Adrian Chen / California Sunday Magazine)

People are retaliating against big tech by calling SWAT teams to executives’ homes. Facebook employees have been a particular favorite target, with Instagram chief Adam Mosseri being swatted in November. This is insanely dangerous, and police departments have been slow to acknowledge the threat. Swatting is a violent crime and ought to be treated as one by the courts. (Sheera Frenkel / The New York Times)

Facebook is about to face off with the IRS in a rare trial to capture billions that the tax agency thinks Facebook owes. But onerous budget cuts have hamstrung the agency’s ability to bring the case. (Paul Kiel / ProPublica)

Facebook is worried about Democrats winning the presidential election. Many of them hold the social network responsible for Trump’s 2016 victory, assail it for allowing misinformation to spread, and have vowed to regulate it or break it up. (Sara Fischer and Scott Rosenberg / Axios)

Rep. David Cicilline is the House Democrat leading the investigation into Amazon, Apple, Facebook and Google. Since he can’t issue fines or press criminal charges, he’s opted to convene high-profile hearings and give air time to smaller businesses frightened by the Big Four. (Nancy Scola and Cristiano Lima / Politico)

Presidential candidate Mike Bloomberg wants to replicate and build off what Trump’s campaign did best, without mimicking his style. And that means lots of Facebook ads. (Mike Allen and Margaret Talev / Axios)

Twitter’s attempts to stop people from targeting ads using keywords that include sexist terms or racial slurs have created a confusing user experience. While users are still able to select those terms when targeting an ad, Twitter said the terms won’t “register” as keywords once the ad is published. (Shoshana Wodinsky / Gizmodo)

The report from FTI Consulting that strongly implicates Saudi Crown Prince Mohammed bin Salman in the hacking of Jeff Bezos’ iPhone X in 2018 has not fully convinced the cybersecurity community. They say there are still open questions about the attack, starting with a most basic one: how exactly did the hack work? (Shannon Vavra / CyberScoop)

As India experiences the longest-ever internet blockade for a democracy, more countries are pressing the internet kill switch to stifle dissent. Experts warn that the tactic is being used as a form of protest suppression. (Puja Changoiwala / OneZero)


TikTok announced a new licensing deal with Merlin, a global agency that represents tens of thousands of independent music labels and hundreds of thousands of artists. The deal allows TikTok to use music from those labels on its platform and on its forthcoming music service, Resso. Ingrid Lunden from TechCrunch has the story:

The news is significant because this is the first major music licensing deal announced by TikTok as part of its wider efforts in the music industry. Notably, it’s not the first: I’ve confirmed TikTok has actually secured other major labels but has been restricted from going public on the details.

The Merlin deal is therefore a template of what TikTok is likely signing with others: it includes both its mainstay short-form videos — where music plays a key role (the app, before it was acquired by Bytedance, was even called ‘Musically’) — as well as new music streaming services.

TikTok moved its operations to a new office in Los Angeles. “While we are a global company, having a permanent office in LA speaks to our commitment to the U.S. market and deepens our bonds with the city,” writes Vanessa Pappas, TikTok’s general manager in the United States.

Twitter is rolling out a new feature that lets users add iMessage-like reactions to direct messages. The company first started testing this emoji reaction feature last year, but it’s now rolling the capability out to all users on the web, iOS, and Android. (Chance Miller / 9To5Mac)

American bathrooms have become the stage set of the moment. With good lighting, acoustics, and privacy, they’re the ideal place for the dramatic entrances, exits, skits, dances and story times of TikTok. (Taylor Lorenz / The New York Times)

Angelina Jolie is producing a BBC show to help kids spot fake news. The show explains the stories behind news and offers facts and information that helps kids over the age of 13 make up their own minds on pressing international issues. (Brian Steinberg / Variety)

Josh Constine makes the case that Facebook and Instagram should mark stories as “watched” so users stop seeing story reruns across both apps. Co-signed. (Josh Constine / TechCrunch)

TripAdvisor is cutting hundreds of jobs in an attempt to cut costs as competition from Google intensifies. The search giant has launched a variety of travel search tools, as well as reviews of hotels and restaurants, that directly compete with TripAdvisor — and appear at the top of the world’s most-used search engine. (Mark Gurman and Olivia Carville / Bloomberg)

Tinder launched some new safety features, including a photo verification system that’ll place a blue check mark on daters’ profiles. The system requires daters to take a selfie in real time that matches a pose shown by a model in a sample image. (Ashley Carman / The Verge)

And finally…

Yesterday in this space, we brought you the story of United States senators flouting their own rules against bringing electronic devices into the room for the ongoing impeachment trial of Donald Trump. On Wednesday, senators were spotted wearing Apple Watches. Today, Niels Lesniewski writes in Roll Call, they’re taking a decidedly analog approach to solving the agony of losing their smartphones:

Sen. Richard M. Burr is trying to help out his antsy Senate colleagues.

The North Carolina Republican is providing an assortment of fidget spinners and other gizmos to his GOP colleagues at this week’s Thursday lunch.

Another approach to resolving their boredom could be simply paying attention to the trial, but I realize that’s a big ask.

Talk to us

Send us tips, comments, questions, and your questions for Sundar Pichai.

PSA: Never open a WhatsApp message from the crown prince of Saudi Arabia

Bay Area! I’ll be talking with Anna Wiener about Uncanny Valley, her brilliant new memoir of a life in tech, on February 4th at Manny’s in San Francisco. It’s our second-ever Interface Live event, and it would mean the world to me if you came to say hello and talk tech and democracy with us. Get your tickets here!

Some days, when you write a column about the latest interactions between big tech platforms and the government, you try to make a meticulous and layered argument based on a series of nuanced observations about the world. Other days, you just write down a bunch of facts and say — wait, what?!

The past 24 hours have been a wait, what?! sort of day.

It has been just under a year since Amazon CEO Jeff Bezos shocked the world with a Medium post disclosing that he had been the subject of an extortion attempt, hired the best person in the world to investigate it, and promised to get to the bottom of it. The story’s elements included an extramarital affair, family betrayal, stolen nudes, and the crusading reporting of the Washington Post, which Bezos owns. Within days, a hefty amount of circumstantial evidence hinted that the government of Saudi Arabia and its crown prince, Mohammed bin Salman, were likely involved in the scheme.

Then, on Tuesday afternoon, the Guardian published a bombshell: a forensic examination conducted at Bezos’ request by FTI Consulting found that his phone had most likely been hacked in 2018 after he received a WhatsApp message from a personal phone number belonging to MBS himself. Stephanie Kirchgaessner reports:

The encrypted message from the number used by Mohammed bin Salman is believed to have included a malicious file that infiltrated the phone of the world’s richest man, according to the results of a digital forensic analysis.

This analysis found it “highly probable” that the intrusion into the phone was triggered by an infected video file sent from the account of the Saudi heir to Bezos, the owner of the Washington Post.

The report was subsequently confirmed by the Financial Times and the New York Times, and Vice published the full report from FTI. Among other things, the report suggests that MBS was attempting to intimidate Bezos, months before a Post columnist — MBS critic Jamal Khashoggi — was brutally murdered on the crown prince’s orders, according to the CIA.

The United Nations has called for further investigation related to the Khashoggi murder, any involvement in which MBS continues to deny. Here are Jared Malsin, Dustin Volz, and Justin Scheck in the Wall Street Journal:

“The circumstances and timing of the hacking and surveillance of Bezos also strengthen support for further investigation by U.S. and other relevant authorities of the allegations that the Crown Prince ordered, incited, or, at a minimum, was aware of planning for but failed to stop the mission that fatally targeted Mr. Khashoggi in Istanbul,” the officials said in a statement based on their review of the forensic analysis.


“At a time when Saudi Arabia was supposedly investigating the killing of Mr. Khashoggi, and prosecuting those it deemed responsible, it was clandestinely waging a massive online campaign against Mr. Bezos and Amazon targeting him principally as the owner of The Washington Post,” Ms. Callamard and Mr. Kaye said.

Some threads.

Is the case against MBS being behind the hack open and shut? On one hand, there’s no smoking gun. On the other, no one has proposed a credible-sounding alternate culprit. The gist is that after MBS’ WhatsApp account sent Bezos a video file, Bezos’ phone went crazy and started transmitting an enormous amount of data:

That file shows an image of the Saudi Arabian flag and Swedish flags and arrived with an encrypted downloader. Because the downloader was encrypted this delayed or further prevented “study of the code delivered along with the video.”

Investigators determined the video or downloader were suspicious only because Bezos’ phone subsequently began transmitting large amounts of data. “[W]ithin hours of the encrypted downloader being received, a massive and unauthorized exfiltration of data from Bezos’ phone began, continuing and escalating for months thereafter,” the report states.

Still, information security types aren’t satisfied with the FTI report, arguing that someone with access to the phone and the malicious file should be able to find direct evidence that it was the culprit. See Alex Stamos on this point.

What malware was used in the attack? What vulnerabilities were exploited? Could my phone be hacked in the same way? We don’t know, we don’t know, and we don’t know, respectively.

OK, but who made the malware used in the attack? Probably one of those shadowy hacker-for-hire outfits. The FTI report “suggested that the Tel Aviv-based NSO Group and Milan-based Hacking Team had the capabilities for such an attack,” Sheera Frenkel reports in a Times piece about the hack. NSO Group denied it; Hacking Team didn’t respond.

Is this the craziest series of events ever to befall the CEO of a major tech platform? Yes and it’s not even close.

What was the best tweet about all this? Oh, probably Jake Tapper’s.

Second place goes to Jeff Bezos.

The crown prince of Saudi Arabia has recently sent me a message on WhatsApp. Should I open it? Absolutely not. And probably stay out of his embassies, too.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending down: Apple dropped plans to let iPhone users fully encrypt backups of their devices in iCloud after the FBI complained that the move would harm investigations. The tech giant’s reversal, which happened about two years ago, shows how much Apple has been willing to help US law enforcement despite casting itself as a defender of customer information.


Facebook and Twitter have evidence that could save people from prison, but they’re reluctant to give it up. They argue that the Stored Communications Act forbids them from divulging the content of communications unless a specific exemption applies. Megan Cassidy from the San Francisco Chronicle reports:

Facebook and Twitter provide online portals specifically for law enforcement to request information during emergencies and investigations. Government officials armed with search warrants routinely collect private user messages to help win convictions.

Defendants and their attorneys have no such recourse. In addition to the legal firewalls, Facebook also requires defense counsels to deliver subpoenas in person to their Menlo Park headquarters or to an authorized agent.

Critics are worried that Facebook’s fact-checking partners aren’t getting the resources they need to adequately address misinformation. The six partners tasked with evaluating content in the US are all growing their staff, but so far it hasn’t been enough to quell fears. (Chris Mills Rodrigo / The Hill)

Facebook has allowed a major pro-Trump Super PAC, the Committee to Defend the President, to run ads with lies. Some of the ads claim former Vice President Joe Biden is “a criminal who used his power as Vice President to make him and his son RICH.” Who’s excited for 11 more months of this? (Popular Information)

Facebook has made serious improvements to election security ahead of the caucuses next month, the company argues in a new op-ed in the Des Moines Register. The changes include opening rapid-response centers to monitor suspicious activity on the platform and expanding its security teams.

Voters in the Seattle area will be able to vote by smartphone in an upcoming election. It’s a historic moment for American democracy. But security experts warn that while mobile voting could increase turnout, it could also make the system much more vulnerable to a cyberattack. Yikes! (Miles Parks / NPR)

Amazon and Facebook each spent roughly $17 million on lobbying efforts in 2019. The new federal disclosures tell a story of a sector tapping its deep pockets to beat back regulatory threats and boost its bottom line. (Tony Romm / The Washington Post)

Presidential candidate Mike Bloomberg said breaking up tech companies “is not an answer.” He added that he doesn’t think Sens. Elizabeth Warren (D-MA) and Bernie Sanders (I-VT) “know what they’re talking about” when it comes to breaking up big tech companies. He also did not offer “the answer.” (Makena Kelly / The Verge)

While Apple may not provide official support to law enforcement agencies to access iPhones, police departments across the US already have the ability to crack mobile devices. They often use third-party companies to unlock and access information on encrypted mobile devices (including iPhones) at a relatively low cost. (Michael Hayes / OneZero)

There’s been very little consistency in how companies are complying with California’s new privacy law. Some have incorrect information on their websites about how the law affects them and consumers. Others lack a clear process to respond to customers who request their data. (Greg Bensinger / The Washington Post)

Joshua Collins, a 26-year-old socialist trucker running for Congress in Washington State, is leveraging TikTok in a new kind of political campaign. (Makena Kelly / The Verge)

San Francisco Pride members voted to ban Google and YouTube from their parade. They say the company isn’t doing enough to stop hate speech on its platforms. (Shirin Ghaffary / Recode)

City officials in Suzhou, a city of six million people in eastern China, sparked outrage online when they published surveillance photos of residents wearing pajamas in public. The people in the photos were identified with facial recognition software, and officials called their behavior “uncivilized.” (Amy Qin / The New York Times)

Britain unveiled sweeping new online protections for children’s privacy. The rules will require platforms like YouTube and Instagram to turn on the highest possible privacy settings by default for minors, and turn off by default data-mining practices like targeted advertising and location tracking for children in the country. (Natasha Singer / The New York Times)


ByteDance is seeking a new CEO for TikTok. The massively popular video app has come under fire from American politicians who worry that it might present a national security threat. Bloomberg’s Kurt Wagner and Sarah Frier have the story:

The company has interviewed candidates in recent months for the CEO role, which would be based in the U.S., according to people familiar with the matter, who asked not to be named because the search is private. In one potential scenario, the new CEO would oversee TikTok’s non-technical functions, including advertising and operations, while current TikTok chief Alex Zhu would continue to manage the majority of product and engineering out of China, one person said. The hiring process is ongoing and the envisioned role could still change depending on who is selected, the people added.

Zhu, who co-founded a predecessor to TikTok, took over the business last year, though ByteDance also has a Chinese version of TikTok called Douyin, which is run by a different management team. The eventual corporate structure involving Zhu and the new CEO is still unclear, the people said, and ByteDance has hired executive search firm Heidrick & Struggles to help lead the process.

Researchers at Stanford have developed a new metric to track the time people spend on their devices. They say it’s more accurate than “screen time,” which treats all time spent online as more or less equal. (Will Oremus / OneZero)

Google launched three new experimental apps to help people use their phones less as part of a “digital wellbeing” initiative. One of the apps invites people to seal their devices in a phone-sized paper envelope, similar to the pouches some artists require fans to put their phones into at concerts. No thanks! (Jay Peters / The Verge)

Small businesses are posting about the difficulties of competing with Big Tech, and the messages are going viral on social media. Sometimes, that virality has kept the businesses afloat. Other times, it’s made things harder. (Input)

Some of the biggest companies in the world are funding climate misinformation by advertising on YouTube, according to a study from activist group Avaaz. More than 100 brands were found to be running ads on videos that were promoting misleading information about climate change. (Alex Hern / The Guardian)

British telecom company Vodafone just quit the Facebook-founded Libra Association, the latest company to do so after PayPal, Mastercard, Visa, Mercado Pago, eBay, and Stripe left last year. I can’t remember the last time Libra got any good news. (Nikhilesh De / Coindesk)

Bonus industry content: The former director of newsletters at The New Yorker and BuzzFeed interviewed Casey about the making of this newsletter.

And finally…

The president’s impeachment trial is underway in the Senate, and rules prohibit senators from bringing electronic devices onto the floor. And yet seven senators have been spotted wearing their Apple Watches:

Republican Sens. Mike Lee of Utah, John Thune of South Dakota, Jerry Moran of Kansas, James Lankford of Oklahoma, John Cornyn of Texas and Tim Scott of South Carolina all are wearing them on the floor. Also spotted with the smartwatch: an aide to Senate Majority Leader Mitch McConnell.

So, too, is Democratic Sen. Patty Murray of Washington. Virginia Democratic Sen. Mark Warner owns an Apple Watch, but it could not be confirmed if he had it on the floor.

It should be pretty easy to tell. Just wait to see if he stands up 10 minutes before every hour.

Talk to us

Send us tips, comments, questions, and your unopened messages from the Saudi prince.

The need for a federal privacy law has never been greater

Bay Area! I’ll be talking with Anna Wiener about Uncanny Valley, her brilliant new memoir of a life in tech, on February 4th at Manny’s in San Francisco. It’s our second-ever Interface Live event, and it would mean the world to me if you came to say hello and talk tech and democracy with us. Get your tickets here!

Last June, after a series of developments related to facial recognition and customer tracking, I warned that a Chinese-style social credit system was beginning to take shape in the United States. Among other things, a school district in western New York announced plans to deploy a facial-recognition system to track students and faculty; the Washington Post reported that airports had accelerated their use of facial-recognition tools; and the United States began requiring visa applicants to submit social media profiles along with their applications.

That column left open the question of what role American law enforcement might play in building a system that feels increasingly dystopian. But now, thanks to a superb investigation by Kashmir Hill, we know much more. Hill tells the story of Clearview AI, a small and mostly unknown company that has been scraping publicly available images — including billions from Facebook, YouTube, and Venmo profiles — and selling access to the police. She writes:

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

Hill’s report is chockablock with surprising details, and you should read it in full if you haven’t already. When it landed online Saturday, it galvanized discussions around how quickly tech companies are eroding privacy protections, with Congress remaining idle so far despite years of discussions around a national privacy law.

Some threads to pull on.

Is this legal? As Ben Thompson explains today in a paywalled post, LinkedIn sued a company that had scraped its public profiles in a fashion similar to Clearview. But it lost the lawsuit, seemingly giving a green light to other companies seeking to do the same thing. Last year, Facebook told Congress that it gathers information about logged-out users to prevent this sort of scraping. But former Facebook chief security officer Alex Stamos explained to me that actually preventing that scraping is much easier said than done.

Is this the end of privacy? No, because laws protecting individual privacy can still be effective — even at the state level. On Tuesday, the Supreme Court declined to hear an appeal from Facebook on a case involving the company’s use of facial-recognition technology. Facebook used the tech to tag photos with user names, running afoul of an Illinois law requiring companies to get users’ consent first. As a result, Facebook will likely have to face a multi-billion-dollar class action lawsuit. A strong federal privacy law could make products like Clearview’s illegal, or regulate them to offer protections from some of the more obvious ways the technology will be misused.

Is our current freak-out about facial recognition ignoring the larger point? Surveying recent municipal efforts to ban use of the technology by law enforcement, Bruce Schneier argues persuasively that we need to take a broader view of the issue. We can be (and increasingly are) tracked in all manner of ways: by heart rate, gait, fingerprints, iris patterns, license plates, health records, and (of course) activity on social networks. The forces working to end individual privacy are a hydra, Schneier argues, and need to be dealt with collectively. He writes:

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heart beats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Are privacy experts being needlessly alarmist? I try to ration my alarmism judiciously in this newsletter. But once you start looking for examples of companies using their data to build social-credit systems, you find them everywhere. Here, from earlier this month, is a tool Airbnb is developing to evaluate the risks posed by individual guests:

According to the patent, Airbnb could deploy its software to scan sites including social media for traits such as “conscientiousness and openness” against the usual credit and identity checks and what it describes as “secure third-party databases”. Traits such as “neuroticism and involvement in crimes” and “narcissism, Machiavellianism, or psychopathy” are “perceived as untrustworthy”.

Who will this tool discriminate against? And what recourse will those discriminated against have? These are two questions we should take into any discussion of technology like this.

Finally, is there a good Marxist gloss on all this? Sure. Here’s Ben Tarnoff with a provocative piece in Logic calling for a revival of Luddism to counter oppressive technology of the sort Clearview manufactures. (His piece predates Hill’s by a couple days, but the point stands.)

One can see a similar approach in the emerging movement against facial recognition, as some city governments ban public agencies from using the software. Such campaigns are guided by the belief that certain technologies are too dangerous to exist. They suggest that one solution to what Gandy called the “panoptic sort” is to smash the tools that enable such sorting to take place.

We might call this the Luddite option, and it’s an essential component of any democratic future. The historian David F. Noble once wrote about the importance of perceiving technology “in the present tense.” He praised the Luddites for this reason: the Luddites destroyed textile machinery in nineteenth-century England because they recognized the threat that it posed to their livelihood. They didn’t buy into the gospel of technological progress that instructed them to patiently await a better future; rather, they saw what certain technologies were doing to them in the present tense, and took action to stop them. They weren’t against technology in the abstract. They were against the relationships of domination that particular technologies enacted. By dismantling those technologies, they also dismantled those relationships — and forced the creation of new ones, from below.

Last June, writing about the rise of American social credit systems, I noted that they were developing with very little public conversation about them. The good news is that the public conversation has now begun. The question is whether advocates for civil liberties will be able to sustain that conversation — or to turn it into action.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: European businesses say that using Facebook apps helped them generate sales corresponding to an estimated EUR 208 billion last year, which translates to about 3.1 million jobs. The news comes from a study Facebook commissioned with Copenhagen Economics.

Trending down: A new analysis of coordinated inauthentic behavior on Facebook shows the social network is still failing to keep up with the spread of disinformation and media manipulation on the platform. Analysts are still calling for Facebook to release more information about the coordinated campaigns to increase transparency in the process.


First off today, a call for help from our friends at Vox. The California Consumer Privacy Act gives Californians certain rights over the data businesses collect about them. Have you taken advantage of this new law? Fill out this form to help Vox’s reporting on what happens when you do:

Apple, Amazon, Facebook and Google took a public lashing at a congressional hearing on Friday. Some of their smaller rivals, including Sonos and Tile, pleaded with federal lawmakers to take swift action against Big Tech. Tony Romm at The Washington Post has the story:

The pleas for regulatory relief resonated with lawmakers, led by Rep. David N. Cicilline (D-R.I.), the chairman of the House’s top antitrust committee. “It has become clear these firms have tremendous power as gatekeepers to shape and control commerce online,” Cicilline said to open the session.

The hearing at the University of Colorado at Boulder put public faces on the pain caused by some of the largest tech companies in the United States. Cicilline and other lawmakers have sought to determine if federal antitrust law is sufficient to hold Silicon Valley leaders accountable — and whether changes to federal law are necessary to address anti-competitive concerns in search, smartphones, e-commerce and social networking.

“I think it’s clear there’s abuse in the marketplace and a need for action,” said Rep. Ken Buck (R-Colo.).

Four Facebook competitors are suing the social network for allegedly anticompetitive behavior. They’ve asked a judge to order Mark Zuckerberg to give up control of the company and force him to sell off Instagram and WhatsApp. (Robert Burnson / Bloomberg)

As seven University of Puerto Rico students prepare to go on trial in February for participating in a nonviolent protest more than two years ago, documents released to their defense attorneys reveal that Facebook granted the island’s Justice Department access to a trove of private information from student news publications. (Alleen Brown and Alice Speri / The Intercept)

Democratic candidates’ spending on Facebook ads shows how campaigns are plotting their way through the primary states. Since October, Pete Buttigieg has spent about a fifth of his overall Facebook budget on ads targeting voters in Iowa. Andrew Yang has spent more than 85 percent of his Facebook budget in Iowa and New Hampshire. (Nick Corasaniti and Quoctrung Bui / The New York Times)

Facebook took down a network of pages that were coordinating posts defending Robert F. Hyde, a figure who has become embroiled in the impeachment investigation. The pages described themselves as representing groups of supporters of President Trump. (Rebecca Ballhaus / The Wall Street Journal)

A Massachusetts judge ordered Facebook to turn over data about thousands of apps that may have mishandled its users’ personal information. The move was a clear rejection of the tech giant’s earlier attempts to withhold the key details from state investigators. (Tony Romm / The Washington Post)

Nationalist propaganda has been spreading on WhatsApp ahead of an upcoming election in Delhi. The propagandists appear to be targeting university students who oppose India’s new Citizenship Amendment Act, which is widely perceived to be anti-Muslim. (Anisha Sircar / Quartz)

A viral video titled “Truth From an Iranian,” which has amassed more than 10 million views across Facebook, Twitter, and YouTube, was created by a registered lobbyist who previously worked for a militia group fighting in a bitter civil war in Libya. The video praised the US drone strike that killed Iranian Gen. Qassem Soleimani. (Ryan Broderick and Jane Lytvynenko / BuzzFeed)

Joe Biden said in an interview last week that he wants to revoke one of the core protections of the internet: Section 230 of the Communication Decency Act. He appears to have deeply misunderstood what the law actually does. (Makena Kelly / The Verge)

Attorney General William Barr has intensified a long-running fight between law enforcement and technology companies over encrypted communications. Some FBI agents worry his forceful approach could sour valuable relationships they have fostered with tech companies. (Sadie Gurman, Dustin Volz and Tripp Mickle / The Wall Street Journal)

French President Emmanuel Macron and Donald Trump agreed to a truce in an ongoing digital tax dispute that impacts big tech companies. Paris offered to suspend down payments for this year’s digital tax and Washington promised to keep negotiating toward a solution rather than acting on a tariff threat. (Reuters)

Peter Thiel’s guiding philosophy is libertarianism with an abstract commitment to personal freedom but no particular affection for democracy, says Max Read. The PayPal co-founder and Facebook board member (and Clearview AI investor!) has wed himself to state power, but not because he wants to actually participate in the political process. (Max Read / Intelligencer)

The New York Times created a game to demonstrate how easy it is to give up personal information online. The only way to win the game is to hand over personal data. Relatable!

MediaReview wants to turn the vocabulary around manipulated photos and video into something structured. The proposed definitions allow images or videos to be “Authentic,” “MissingContext,” “Cropped,” “Transformed,” “Edited,” or “ImageMacro.” Sure, why not! (Joshua Benton / NiemanLab)

If we wanted media that was good for democratic societies, we’d need to build tools expressly designed for those goals, says Ethan Zuckerman, Director of the Center for Civic Media at MIT. Those tools probably won’t make money, and won’t challenge Facebook’s dominance—and that’s okay. (Ethan Zuckerman / Medium)


Researchers are challenging the widespread belief that screens are responsible for broad societal problems like the rising rates of anxiety and sleep deprivation among teenagers. In most cases, they say, the phone is just a mirror that reveals the problems a child would have even without the phone. Nathaniel Popper at The New York Times explains the findings:

The researchers worry that the focus on keeping children away from screens is making it hard to have more productive conversations about topics like how to make phones more useful for low-income people, who tend to use them more, or how to protect the privacy of teenagers who share their lives online.

“Many of the people who are terrifying kids about screens, they have hit a vein of attention from society and they are going to ride that. But that is super bad for society,” said Andrew Przybylski, the director of research at the Oxford Internet Institute, who has published several studies on the topic.

Facebook plans to hire 1,000 people in London this year, in roles like product development and safety. The company is continuing to grow its biggest engineering center outside the US despite fears about Brexit. (Paul Sandle and Elizabeth Howcroft / Reuters)

Facebook gave Oculus Go a permanent $50 price cut. (Sam Byford / The Verge)

Adam Mosseri, the head of Instagram, is the person in charge of Project Daisy — the photo sharing app’s initiative to take away likes on the platform. This profile reveals a current tension of Mosseri’s reign at Instagram: the man who is working to mostly eliminate likes really wants to be liked. (Amy Chozick / The New York Times)

Countless purveyors of bootleg THC vape cartridges are hawking their wares in plain sight on Instagram and Facebook. These illegal operators appear to be doing so with impunity, using the ease and anonymity of Instagram to reach a massive audience of young people who vape. (Conor Ferguson, Cynthia McFadden and Rich Schapiro / NBC)

Jack Dorsey asked Elon Musk how to fix Twitter during a video call last week. Musk said Twitter should start by identifying and labeling bots. (Kurt Wagner / Bloomberg)

Instagram is removing the orange IGTV button from its home page. Only 1 percent of Instagram users have downloaded the standalone IGTV app in the 18 months since it launched. (Josh Constine / TechCrunch)

Instagram is democratizing who can succeed in the dance industry, allowing nontraditional talent to break in. It’s no longer just about having the right look or connections. (Makeda Easter / Los Angeles Times)

Instagram has also revolutionized the way tattoo artists grow their businesses. Many artists estimate that more than 70 percent of their clients now come from the photo-sharing app. (Salvador Rodriguez / CNBC)

Snap CEO Evan Spiegel says TikTok could become bigger than Instagram. App intelligence company App Annie ranked TikTok just behind Instagram in terms of monthly active users in 2019. (Hailey Waller / Bloomberg)

TikTok’s parent company, ByteDance, is preparing a major push into games, the mobile arena’s most lucrative market. It’s a realm Tencent has dominated for over a decade. (Zheping Huang / Bloomberg)

More than 70,000 photos of Tinder users are being shared by members of an online cyber-crime forum, raising concerns about the potential for abusive use of the photos. Ominously, only women appear to have been targeted. (Dell Cameron and Shoshana Wodinsky / Gizmodo)

A new report suggests Bumble, the “by women, for women” dating app that is trying to keep women safer online, has little strategy for how to achieve its lofty goals. It also struggles with a cliquey internal culture, according to some employees.

And finally…

Facebook apologized after its platform translated Xi Jinping, the name of the Chinese leader, as “Mr. Shithole” in English. The mistranslation caught the company’s attention when Daw Aung San Suu Kyi, the de facto civilian leader of Myanmar, wrote on her official Facebook page about Mr. Xi’s two-day visit to her country.

Xi is a brutal dictator who runs concentration camps that reportedly house more than 1 million people whose only crime is being Muslim. So I’d say “Mr. Shithole” suits him just fine.

Talk to us

Send us tips, comments, questions, and your Clearview results.

Twitter hashtags aren’t as useful as they used to be

Last summer, with misinformation swirling about the death of Jeffrey Epstein, I joined the chorus of voices calling for an end to Twitter’s trending topics. The feature is generated by algorithms with little editorial oversight, is easily gamed by bots and bad actors, and yet continues to drive the news cycle anyway. Get rid of it — or have humans run it — and Twitter would only be better for it.

Many of those arguments have been re-litigated over the past day as journalists dig into #NeverWarren, a hashtag that was briefly trending on Twitter in the wake of a dustup between Democratic presidential candidates Elizabeth Warren and Bernie Sanders. Warren spoke sharply to Sanders after their most recent debate. Here’s Eric Newcomer at Bloomberg:

On Wednesday morning, the hashtag #NeverWarren appeared at the top of Twitter’s trending topics. As of late Wednesday afternoon it had been mentioned more than 80,000 times, according to Ben Nimmo, director of investigations for social media monitoring company Graphika. “It looks like it started off among some long-standing Sanders supporters,” he wrote in an email, “but the most striking thing is that all the most-retweeted posts are of people criticizing the hashtag and the mentality behind it, and/or calling for unity.”

As Nimmo notes, the hashtag seemed to trend not because a critical mass of Democrats was tweeting outrage at Warren, but rather because Warren supporters were outraged that anyone had tweeted with a #NeverWarren hashtag. Still, an untold number of Americans may have seen the trend on Wednesday and assumed that some groundswell of anti-Warren sentiment had suddenly materialized. It was a classic example of people on Twitter bringing more attention to something than it deserved, in ways that work against their interests.

Emily Stewart points out that the overall effect of misleading trends like #NeverWarren is to undermine confidence in our information sphere generally and in Twitter specifically:

As has been the case with so many viral hashtags and discussions on Twitter, the incident has again shown that when it comes to what’s gaining traction on the internet, we still have a hard time telling what’s real, what’s fake, and what’s being spread by whom. How much of the activity around #NeverWarren is generated by bots? How much of it comes from the so-called Bernie Bros, the online army behind the Vermont senator? And how much of it comes from Warren supporters trying to combat the #NeverWarren hashtag, or reporters tweeting about it, who are inadvertently causing it to trend higher on Twitter?

“It certainly harkens back to what we saw in 2016, and what we know happened in 2016. … And there’s no reason for us to think that the same disinformation efforts that happened in 2016 aren’t happening right now,” said Whitney Phillips, a Syracuse University professor who studies media literacy and online ethics. “And so it creates this low level of paranoia with what you’re even looking at.”

The discussion about #NeverWarren has once again focused attention on the needless harm that Twitter trends inflict on the news cycle. But it occurs to me that we should probably save some of our scorn for the hashtag, too.

The hashtag is ubiquitous on social networks today, but it was born on Twitter. On August 23rd, 2007, Chris Messina suggested adding what had previously been known as the pound sign to a keyword, so as to make searching for other tweets on the same topic easier. Two years later, Twitter made hashtags a native feature of the product, letting you click on a hashtag to see a page with search results. Trending topics followed in 2010.

Hashtags remain useful for organizing discussion around breaking news, such as wildfires; conferences and other temporary gatherings of folks who may not follow one another; and broad-based social movements, such as #MeToo. But when it comes to big, messy subjects like politics, hashtags are beginning to look dated.

Last year I wrote about the launch of Twitter topics, which allow you to follow subjects related to sports, gaming, and entertainment. In the past, fans of a music group like BTS might have added a hashtag to every relevant post to help fans find their tweets. But with the launch of Topics, Twitter’s algorithms are now doing that work, elevating popular tweets to everyone who follows the topic. And in my own experience, they organize tweets around subjects better than hashtags ever did.

You can’t yet follow politics as a Twitter topic — company executives have expressed concern about the tweets such a topic might amplify, and are proceeding with caution. And yet it seems possible that political topics would do a better job elevating the day’s coverage than hashtags, which can compress meaning so much that — as in the case of #NeverWarren — they become all but useless.

Hashtags — unlike trending topics — still have their place on Twitter. (They’ve always felt more at home on Instagram, where they continue to help users acquire followers around their interests.) But I can’t help but feel that on Twitter, the hashtag is getting a little long in the tooth. And so long as they’re driving news cycles, we can expect them to continue spreading misleading information.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: Facebook disaster maps are helping organizations like Direct Relief respond to the Australia bushfires. The maps illustrate how populations are evacuating and whether they have access to cellular networks.

Trending sideways: Instagram removed a “false” label from an edited photo of rainbow-colored mountains, effectively reversing an earlier decision from one of its third-party fact-checkers. The label sparked fears that the company would begin removing artistic images for being “false.” (Blake Montgomery / The Daily Beast)

Trending down: Twitter apologized for allowing hate groups like neo-Nazis and people with homophobic or transphobic viewpoints to be microtargeted by advertisers. “We’re very sorry this happened and as soon as we were made aware of the issue, we rectified it,” the company said.


House Democrats released dozens of pages of new documents related to the impeachment inquiry into President Trump. The documents show just how influenced Giuliani and his associates Lev Parnas and Robert F. Hyde were by the right-wing online echo chamber. Ryan Broderick at BuzzFeed explains:

On March 27, 2019, Parnas sent by far the most obscure piece of media in the exchange, a YouTube video titled “Trumps takedown of FBI (Winning montage!).” The YouTube channel it comes from has only nine subscribers. By the time Parnas texted it, it was already a year old, having all but died in obscurity. As of this week, it’s only been watched around 4,000 times.

But the video did receive a bit of activity from Trump supporters the week it was dropped into Parnas’s WhatsApp, though. That week, it was featured on r/The_Donald in a post titled “SOON,” was promoted heavily by QAnon-affiliated Twitter accounts, and was tweeted several times by one account called “Deplorable Nurse Ratchett luvs Q.”

The majority of links Parnas sent Hyde that March were all being heavily shared within radicalized communities like Reddit and by conspiratorial pro-Trump influencers on Twitter. If Parnas wasn’t directly visiting r/The_Donald, the WhatsApp chat logs released on Tuesday night make it clear that he was drawing his misinformation from the same well. It also gives us a better understanding of how Parnas used pro-Trump internet ephemera to reinforce what Hyde was doing in Ukraine — even if the investigation was based on internet conspiracy theories.

No matter how President Trump’s impeachment trial plays out in the Senate, the process is unlikely to change very many minds. That’s partly because of partisanship, and partly because people are too inundated with information, some of which is intentionally misleading. (Sean Illing / Vox)

Mike Bloomberg will ask tech billionaires to support his presidential campaign in a private reception with some of Silicon Valley’s biggest power brokers this evening. The briefing shows Bloomberg is not shy about seeking the backing of Big Tech, unlike some of his Democratic rivals. (Theodore Schleifer / Recode)

House Speaker Nancy Pelosi took another swipe at Facebook over the social media giant’s reluctance to police disinformation, calling its executives “accomplices for misleading the American people.” (Dustin Gardiner / San Francisco Chronicle)

Facebook’s problems moderating deepfakes will only get worse in 2020, James Vincent argues in The Verge. The company can’t ban them altogether, and new apps like Doublicat make creating manipulated media easier and cheaper than ever before.

New Urban Memes for Transit-Oriented Teens, a popular (and hilarious) Facebook meme group, endorsed Bernie Sanders for president. (Andrew J. Hawkins / The Verge)

The Turkish government lifted its two-and-a-half-year ban on Wikipedia on Wednesday. The move came after the country’s top court ruled that blocking it was unconstitutional.


WhatsApp has delayed its plans to introduce advertising in the app. Instead, it’s focusing on paid tools for businesses, report Jeff Horwitz and Kirsten Grind:

WhatsApp in recent months disbanded a team that had been established to find the best ways to integrate ads into the service, according to people familiar with the matter. The team’s work was then deleted from WhatsApp’s code, the people said.

The shift marks a detour in the social-media giant’s quest to monetize WhatsApp, which it bought in a blockbuster $22 billion acquisition in 2014 that has yet to pay financial dividends despite the service being used by more than 1.5 billion people globally.

Google is now valued by the stock market at $1 trillion. It’s the fourth American company to reach the milestone, after Apple, Amazon and Microsoft. (Daisuke Wakabayashi / New York Times)

Armslist, a website that lets people easily buy and sell guns online, has taken a hands-off approach to moderating content on its platform, critics say. An investigation from The Verge in collaboration with The Trace reveals hundreds of users who may be skirting gun control laws. The Verge’s Colin Lecher and Sean Campbell have the story.

As Amazon began searching for its second headquarters in 2017, CEO Jeff Bezos sought an additional $1 billion for future real-estate projects. The money was separate from any economic incentives the company might win for its second-headquarters project. (Shayndi Raice and Dana Mattioli / The Wall Street Journal)

Fans of popular podcasts are forming Facebook groups to talk about what they’ve heard. But the conversations are rarely just about the shows. Instead, members frequently go on tangents, talking about their failed marriages and parenting. (Taylor Lorenz / The New York Times)

Social media influencers are increasingly opening up about the mental health issues brought on by viral fame. Many are choosing to take breaks from the spotlight, risking their numbers in exchange for a moment of reprieve. (Natalie Jarvey / The Hollywood Reporter)

A USC student and TikTok star with 1.6 million followers explains how influencers make money on the viral video sharing app. (Amanda Perelli / Business Insider)

Slate asked journalists, scholars, and advocates to rank the most evil companies in tech. While the usual suspects are all there, so are a bunch of scary spyware companies you may not be familiar with.

This journalist spent a week trying to get to screen time zero and ended up feeling isolated and alone instead of zen. (Steve Rousseau / OneZero)

And finally…

These days, Tim Cook and Mark Zuckerberg are regularly at odds over privacy and other issues. But in 2008, an Apple television commercial promoting the then-new iPhone had Facebook as its centerpiece. An absolutely wild find from the archives sent in by a reader. Watch it and marvel.

Talk to us

Send us tips, comments, questions, and your favorite hashtags: and

Chris Evans started a new site about politics because he thinks Wikipedia entries are too long

With our politics increasingly polarized and democracy in retreat, worried Americans are responding in all manner of ways. Some, such as former Georgia gubernatorial candidate Stacey Abrams, have mounted a fight against voter suppression. Others, such as Facebook co-founder Chris Hughes, are lobbying social networks to change their products and policies to promote transparency and accuracy in political advertising.

And then there’s the actor Chris Evans, best known for playing Captain America in 10 Marvel movies. According to an earnest new cover story that came out today in Wired, Evans is doing … this:

He would build an online platform organized into tidy sections—immigration, health care, education, the economy—each with a series of questions of the kind most Americans can’t succinctly answer themselves. What, exactly, is a tariff? What’s the difference between Medicare and Medicaid? Evans would invite politicians to answer the questions in minute-long videos. He’d conduct the interviews himself, but always from behind the camera. The site would be a place to hear both sides of an issue, to get the TL;DR on WTF was happening in American politics.

The origin story of A Starting Point, as the site will be called, is as follows. One day during a break from filming Avengers: Infinity War, Evans was watching the news. He heard an unfamiliar acronym — NAFTA, or maybe DACA. He Googled the term, and was met with headlines that took multiple, competing points of view. He clicked on the Wikipedia entry, but found that it was very long. “It’s this never-ending thing,” Evans told Arielle Pardes, “and you’re just like, who is going to read 12 pages on something?”

I don’t know — someone who cares?

In any case, Evans was crushed by the realization that to answer his question, he might have to read for several minutes. And so he decided to solve his problem in the next-most-logical way: by flying to Washington every six weeks, recording more than 1,000 videos of members of Congress and Democratic presidential candidates, and posting them on a website that he built with an actor friend and “the founder and CEO of a medical technology company called Masimo.”

And when all the videos are posted, what then?

If Evans got it right, he believed, this wouldn’t be some small-fry website. He’d be helping “create informed, responsible, and empathetic citizens.” He would “reduce partisanship and promote respectful discourse.” At the very least, he would “get more people involved” in politics.

Of course, all that assumes that people who won’t read a Wikipedia entry will watch videos instead. I would always rather read a few sentences about an unfamiliar subject than listen to a congressman filibuster about it until the camera shuts off, but maybe you’re a big fan of C-SPAN.

Still, there are a few obvious problems with Evans’ brainchild. One, it presumes that citizens can best be informed by hearing directly from politicians. Certainly politicians have a privileged viewpoint when it comes to some subjects — primarily their own opinions. But on most subjects, the median member of Congress can only repeat what they were told in briefings by staffers and lobbyists. To suggest that they have a monopoly on the truth is naive.

Two, A Starting Point assumes that you can reduce partisanship by exposing people to multiple points of view. In fact, the opposite is true. Human beings are fact-resistant, never more so than when a fact contradicts a closely held belief. Earlier studies found a so-called “backfire effect” in which seeing a fact contrary to your opinion would make you believe your erroneous opinion even more. Later studies have struggled to replicate that finding, but at the very least it seems fair to say that changing people’s views is extremely hard to do, especially with mere facts.

Finally, A Starting Point begins from the premise that voters are all basically the same, and differ primarily in how much information they have about candidates and issues. In reality, politics is tribal. As Ezra Klein explains in a book coming out later this month, Americans are increasingly polarized around their identities, with partisan affiliation representing a large and growing portion of that identity. Thus the inclination to dismiss what members of the opposing political party say out of hand, based on what they represent.

I don’t want to come down too hard on Evans here: there are worse ways to spend your time than trying to increase participation in the political process. (For example, Evans’ Avengers co-star Chris Hemsworth has a subscription-based fitness app.) But if you’re worried about democracy, you’re probably better off banding together with existing civil society groups, activists, and political scientists than you are going it alone. Defeating Thanos required that the Avengers work together with heroes even stronger than themselves. Captain America knew that. It’s a shame Evans doesn’t.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: Facebook launched a new security feature that sends users a notification when their account is used to log into a third-party app. It’s both an added layer of protection and a way for people to gain more control over their information.


The National Security Agency announced that it alerted Microsoft to a vulnerability in its Windows operating system, rather than following the agency’s typical approach of keeping quiet and exploiting the flaw to develop cyberweapons. Julian E. Barnes and David E. Sanger at the New York Times explain the significant change in protocol:

The warning allowed Microsoft to develop a patch for the problem and gave the government an early start on fixing the vulnerability. In years past, the National Security Agency has collected all manner of computer vulnerabilities to gain access to digital networks to gather intelligence and generate hacking tools to use against American adversaries.

But that policy was heavily criticized in recent years when the agency lost control of some of those tools, which fell into the hands of cybercriminals and other malicious actors, including North Korean and Russian hackers.

By taking credit for spotting a critical vulnerability and leading the call to update computer systems, the National Security Agency appeared to adopt a shift in strategy and took on an unusually public role for one of the most secretive arms of the American government. The move shows the degree to which the agency was bruised by accusations that it caused hundreds of millions of dollars in preventable damage by allowing vulnerabilities to circulate.

California’s new privacy law gives consumers the right to see and delete their data. But getting access often requires giving up more personal details. (Kashmir Hill / The New York Times)

Network security giant Cloudflare said it’s going to give its security services to US political campaigns for free. The move is part of the company’s efforts to secure upcoming elections against cyberattacks and election interference. (Zack Whittaker / TechCrunch)

The person tasked with creating and enforcing Twitter’s rules is the company’s top lawyer, Vijaya Gadde. She says CEO Jack Dorsey rarely weighs in on individual enforcement decisions. Oh, well in that case! (Kurt Wagner / Bloomberg)

Twitter suspended Grindr from its ad network after a report revealed privacy concerns with how the app shared personal data with advertisers. (Garett Sloane / Ad Age)

Trump apparently prefers to tweet alone because he doesn’t like to wear the reading glasses he needs to see his phone screen. (Matt Stieb / Intelligencer)


Twitter CEO Jack Dorsey said the company will probably never launch the edit button. In a video interview with Wired, the executive crushed the idea that the feature could go live in 2020. The Verge’s James Vincent explains:

[Dorsey] notes that the service has moved on since, but the company doesn’t consider an edit button worth it. There are good reasons for editing tweets, he says, like fixing typos and broken links, but also malicious applications, like editing content to mislead people.

“So, these are all the considerations,” says Dorsey. “But we’ll probably never do it.”

Twitter is preparing to launch pinned lists for Android. Already available on iOS, the feature allows users to create a list of topics or accounts and then pin them to the main feed. (Ben Schoon / 9To5Google)

YouTube launched a new feature called profile cards that shows a user’s public information and comment history. The feature has been touted as a way for creators to more easily identify their biggest fans by offering easy access to their past comments. It’s currently available on Android. (Sarah Perez / TechCrunch)

YouTube introduced filters to the subscriptions tab on its iOS app to help you decide what to watch next. The filters, which include “unwatched” and “continue watching,” will be coming to Android “in the future.” (Jay Peters / The Verge)

And finally…

There’s now a tool to mute VCs on Twitter. The website urges people to “silence VC thought leadership and platitudes from your feed.”

Fewer investor tweets mean less content to consume and more time to do literally anything else.

Reading The Interface, for example.

Talk to us

Send us tips, comments, questions, and Avengers outtakes: and

Instagram messages on the web could pose an encryption challenge

It’s a relatively slow week on the platforms-and-democracy beat, so let’s talk about something small but fascinating in its own way: the arrival of Instagram messages on the web.

An unfortunate thing about being a xennial who grew up using (and loving) the world wide web is that most developers no longer build for it. Over the past 15 years, mobile phones became more popular than desktop computers ever were, and the result is that web development has entered a slow but seemingly inexorable decline. At the same time, like most journalists, I spent all day working on that same web. And with each passing year, the place where I do most of my work seems a little less vital.

This all feels particularly true when it comes to communications tools. Once, every messaging kingdom was united with a common API, allowing us to gather our conversations into a single place. (Shout out to Adium.) But today, our messages are often scattered across a dozen or more corporate inboxes, and accessing them typically requires picking up your phone and navigating to a separate app.

As a result, I spend a lot of time typing on a glass screen, where I am slow and typo-prone, rather than on a physical keyboard, where I’m lightning-quick. And each time I pick up my phone to respond to a message on WhatsApp, or Snapchat, or Signal, I inevitably find a notification for some other app, and the next thing I know 20 minutes have passed.

All of which is to say, I was extremely excited today to see Instagram’s announcement that it had begun rolling out direct messages on the web. (The company gave me access to the feature, and it’s glorious.) Here’s Ashley Carman at The Verge:

Starting today, a “small percentage” of the platform’s global users will be able to access their DMs from Instagram’s website, which should be useful for businesses, influencers, and anyone else who sends lots of DMs, while also helping to round out the app’s experience across devices. Today’s rollout is only a test, the company says, and more details on a potential wide-scale rollout will come in the future.

The direct messaging experience will be essentially the same through the browser as it is on mobile. You can create new groups or start a chat with someone either from the DM screen or a profile page; you can also double-tap to like a message, share photos from the desktop, and see the total number of unread messages you have. You’ll be able to receive desktop DM notifications if you enable notifications for the entire Instagram site in your browser.

Instagram didn’t state a strategic rationale for the move, but it makes sense in a world that is already moving toward small groups and private communication. Messengers win in part by being ubiquitous, and even if deskbound users like myself are in the minority, Facebook can only grab market share from rivals if it’s everywhere those rivals can be found. (iMessage and Signal, for example, have long been usable on desktop as well as mobile devices.)

Now, thanks to this move, I can make greater use of Instagram as both a social and reporting tool, and the web itself feels just a bit more vital. All of which is good news — but, asks former Facebook security chief Alex Stamos, is it secure? After all, Facebook is in the midst of a significant shift toward private, end-to-end encrypted messaging, with plans to create a single, encrypted backend for all of its messaging apps.

Stamos went on to highlight two core challenges in making web-based communications secure. One is securely storing cryptographic information in JavaScript, the lingua franca of the web. (This problem is being actively worked on, Stamos notes.) The second is that the nature of the web would allow a company to create a custom backdoor targeting an individual user — if compelled by a government, say. For that, there are few obvious workarounds.

One alternative is to take the approach that Signal and Facebook-owned WhatsApp have, and create native or web-based apps. As security researcher Saleem Rashid told me, the web version of WhatsApp generates a public key in the browser using JavaScript, then encodes it in a QR code that a user scans with their phone. This creates an encrypted tunnel between the web and the smartphone, and so long as the JavaScript involved in generating the key is not malicious, WhatsApp should not be able to read any of the messages.
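The pairing flow Rashid describes can be illustrated with a toy key exchange. This is a hedged sketch, not WhatsApp’s actual implementation — the real service uses Curve25519 and the Signal protocol rather than classic finite-field Diffie-Hellman, and every name and parameter below is illustrative:

```python
import base64
import hashlib
import secrets

# Toy Diffie-Hellman parameters: a Mersenne prime small enough to read.
# Fine for illustration, useless for real-world security.
P = 2**127 - 1
G = 5

def make_keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

# 1. The browser generates a keypair and encodes its public key as the
#    payload a QR code would carry.
browser_priv, browser_pub = make_keypair()
qr_payload = base64.b64encode(str(browser_pub).encode())

# 2. The phone "scans" the QR code, recovers the browser's public key,
#    and sends back its own public key.
phone_priv, phone_pub = make_keypair()
scanned_pub = int(base64.b64decode(qr_payload))

# 3. Each side derives the same session key independently; the server
#    relaying traffic between them never sees the private exponents.
phone_key = hashlib.sha256(str(pow(scanned_pub, phone_priv, P)).encode()).digest()
browser_key = hashlib.sha256(str(pow(phone_pub, browser_priv, P)).encode()).digest()

assert phone_key == browser_key  # both ends of the tunnel share a key
```

The caveat from the article applies to the sketch too: if the JavaScript that generates the browser’s keypair is malicious, the whole scheme collapses — which is exactly the targeted-backdoor risk Stamos describes.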

When I asked Instagram about how it plans to square the circle between desktop messages and encryption, the company declined to comment. I’m told that it still plans to build encryption into its products, and is still working through exactly how to accomplish this.

Granted, when I think of the tasks that I hope Facebook accomplishes this year, encrypted Instagram DMs are low on the list. But with our authoritarian president browbeating Apple today for failing to unlock a suspected criminal’s phone, the stakes for all this are relatively clear. We will either have good encrypted messaging backed by US corporations, or we won’t. As Apple put it this week:

“We have always maintained there is no such thing as a backdoor just for the good guys,” the company explained. “Backdoors can also be exploited by those who threaten our national security and the data security of our customers. … We feel strongly encryption is vital to protecting our country and our users’ data.”

On one level, today’s Instagram news is a small story about a niche feature. But in the background, questions about the security of our private communications are swirling. Which should give us all reason to watch Facebook’s next moves here very closely.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending down: Facebook said it doesn’t need to change its web-tracking services to comply with California’s new consumer-privacy law. The company’s rationale is that routine data transfers about consumers don’t fit the law’s definition of “selling” data. The move puts it at odds with Google, which is taking the opposite tack.

Trending down: Grindr, OkCupid and Tinder are sharing sensitive user data, like dating choices and precise location, with advertisers in ways that may violate privacy laws, according to a new report. I don’t want to downplay that, but if you think that data is sensitive, you should see the average Grindr user’s DMs.


Two days before the UK election in December, some 74,000 political advertisements vanished from Facebook’s Ad Library, a website that serves as an archive of political and issue ads run on the platform. The company said a bug wiped 40 percent of all political Facebook ads in the UK from the public record. Rory Smith at BuzzFeed has the story:

In the wake of the failure during the UK elections, Facebook said it had launched a review of how to prevent these issues, as well as how to communicate them more clearly.

But the events of Dec. 10 are not the first time Facebook’s Ad Library has failed since its launch in May 2018. The API, which is supposed to give researchers greater access to data than the library website, went live in March 2019 and ran into trouble within weeks of the European Parliament election in May. Researchers have been documenting a myriad of issues ever since.

The platform also drew the ire of researchers when it failed to deliver the data it promised as part of a partnership with the nonprofit Social Science Research Council and Social Science One, a for-profit initiative run by researchers — a project that was funded by several large US foundations. Facebook said it remains committed to providing data to researchers, but the SSRC and funders have begun withdrawing from the project due to the company’s delays.

Russian military hackers may have been boring into the Ukrainian gas company at the center of the impeachment inquiry, where Hunter Biden served on the board. Experts say the timing and scale of the attacks suggest that the Russians could be searching for potentially embarrassing material on the Bidens, similar to what Trump was looking for. On Twitter, security experts like Facebook’s Nathaniel Gleicher have urged caution when writing about this story, arguing that the case for attribution to Russia is thin. (Nicole Perlroth and Matthew Rosenberg / The New York Times)

There’s been an explosion of online disinformation, including the use of doctored images, from politicians. They do it for a simple reason: It’s effective at spreading their messages, and so far none have paid a price for trafficking in bogus memes. (Drew Harwell / The Washington Post)

Artificial personas, in the form of AI-driven text generation and social-media chatbots, could drown out actual human discussions on the internet, experts warn. They say the issue could manifest itself in particularly frightening ways during an election. (Bruce Schneier / The Atlantic)

The Treasury Department unveiled new rules designed to increase scrutiny of foreign investors whose potential stakes in US companies could pose a national security threat. The rules are focused on businesses that handle personal data, and come after the United States has heightened scrutiny of foreign involvement in apps such as Grindr and TikTok. (Katy Stech Ferek / The Wall Street Journal)

The Harvard Law Review just floated the idea of adding 127 more states to the union. These states would add enough votes in Congress to rewrite the Constitution by passing amendments aimed at making every vote count equally. Worth a read. (Ian Millhiser / Vox)

The New York Times editorial board interviewed Bernie Sanders on how he plans to carry out his ambitious policy ideas if faced with the Republican-led Senate that stymied so many of President Barack Obama’s proposals. Notably, he says he’s not an Amazon Prime customer and tries never to use any apps.

Workers for grocery delivery platform Instacart are organizing a national boycott of the company next week to push for the reinstatement of a 10 percent default tip on all orders. One of 2020’s big stories is going to be tech-focused labor movements; this is but the latest example. (Kim Lyons / The Verge)

Microsoft CEO Satya Nadella strongly criticized a new citizenship law that the Indian government passed last month. The law, known as the Citizenship Amendment Act, fast-tracks Indian citizenship for immigrants from most major South Asian religions except Islam. India is Nadella’s birthplace, and one of Microsoft’s largest markets, making his comments all the more notable. (Pranav Dixit / BuzzFeed)


Facebook’s push into virtual reality has resulted in a slew of new patents, mostly for heads-up displays. The company won 64 percent more patents in 2019 than in 2018. Christopher Yasiejko and Sarah Frier at Bloomberg explain what this might mean:

The breadth of Facebook’s patent growth, said Larry Cady, a senior analyst with IFI, resembled that of intellectual-property heavyweights Amazon.com Inc. and Apple Inc., which were No. 9 and No. 7, respectively, with each winning more than twice as many patents as the social media titan. Facebook’s largest numbers were in categories typical of Internet-based computer companies — data processing and digital transmission, for example — but its areas of greatest growth were in more novel categories that may suggest where the company sees its future.

Facebook’s 169 patents in the Optical Elements category marked a nearly six-fold jump. Most of that growth stems from the Heads-Up Displays sub-category, which Cady said probably is related to virtual-reality headsets. Facebook owns the VR company Oculus and in November acquired the Prague-based gaming studio behind the popular Beat Saber game. One such patent, granted Nov. 5, is titled “Compact head-mounted display for artificial reality.”

Popular “e-boys” on TikTok are nabbing fashion and entertainment deals. They’re known mostly for making irony-steeped videos of themselves in their bedrooms wearing tragically hip outfits composed of thrifted clothes. Some observers predict that top e-boys will have success reminiscent of the boy bands of yore. (Rebecca Jennings / Vox)

YouTube signed three video stars — Lannan “LazarBeam” Eacott, Elliott “Muselk” Watkins and Rachell “Valkyrae” Hofstetter — to combat Amazon’s Twitch and Facebook. Exclusive deals for top video game streamers have been one of the big tech stories of the year so far. (Salvador Rodriguez / CNBC)

Uncanny Valley, Anna Wiener’s beautiful memoir about life working at San Francisco tech companies, is out today. Kaitlyn Tiffany has a great interview with Wiener in the Atlantic. Read this book and stay tuned for news about an Interface Live event with Wiener in San Francisco next month!

Mark Bergen, friend of The Interface and a journalist at Bloomberg, is writing a book about YouTube titled Like, Comment, Subscribe. Bergen is a former Recode colleague and ace YouTube reporter, and this book will be a must-read in our world. (Kia Kokalitcheva / Axios)

The Information published a Twitter org chart that identifies the company’s 66 top executives, including the nine people who report directly to CEO Jack Dorsey. (Alex Heath / The Information)

A new app called Doublicat allows users to put any face on a GIF in seconds, essentially allowing them to create deepfakes. The app launches just as prominent tech companies like Facebook and Reddit ban deepfakes almost completely. (Matthew Wille / Input)

And finally…

Wired got Jack Dorsey to do 11 minutes of Twitter tech support on video. Enjoy!

Talk to us

Send us tips, comments, questions, and web-based DMs: and

Why activists get frustrated with Facebook

On Monday morning I met with a group of activists who live under authoritarian regimes. The delegation had been brought to San Francisco by the nonprofit Human Rights Foundation as part of a fellowship focused on the relationship between activism and Silicon Valley. And the big question they had for me was: why do social networks keep taking down my posts?

The question caught me off guard. For every story in this newsletter about an activist’s post wrongly (and often temporarily) being removed, there are three more about the consequences of a post that was left up: a piece of viral misinformation, a terrorist recruitment video, a financial scam, and so on. As I wrote in 2018, we are well into the “take it down” era of content moderation.

Sometimes the activists’ posts came down because their governments demanded it. Other times the posts came down because of over-cautious content moderation. Increasingly, the activists told me, social networks were acting as if they would rather be safe from government intervention than sorry. And whenever their posts and pages came down, they said, they had very little recourse. Facebook does not have a customer support hotline, much less a judicial branch. (Yet. More on that below.)

The activists’ concerns were fresh in my mind when I read about the weekend’s removal of Instagram accounts in Iran that expressed support for the Iranian general Qassem Soleimani, who was killed by the United States last week. Like a strong antibiotic, it appears that Instagram’s enforcement action wiped out both accounts tied to the ruling regime and the posts of everyday Iranians.

Facebook’s explanation? Sanctions. Here’s Donie O’Sullivan and Artemis Moshtaghian in CNN:

As part of its compliance with US law, the Facebook spokesperson said the company removes accounts run by or on behalf of sanctioned people and organizations.

It also removes posts that commend the actions of sanctioned parties and individuals and seek to help further their actions, the spokesperson said, adding that Facebook has an appeals process if users feel their posts were removed in error.

GoFundMe also removed at least two fundraising campaigns for passengers on the Ukrainian flight brought down by Iranian missiles, only to later reinstate them, my colleague Colin Lecher reported at The Verge. Twitter, on the other hand, said it would leave posts up so long as they complied with the company’s rules.

The confusion is to be expected. Legal experts disagree on the extent to which sanctions require tech platforms to remove user posts, and the issue of Iran in particular has been giving companies fits for years. Here’s Lecher in The Verge:

While recent news has put the focus on Iran, it’s hardly the first time tech companies have mounted a zealous response to sanctions. Last year, GitHub restricted users in several countries under US sanctions.

Iran, which has faced sanctions for years, has regularly had tech companies limit use in the country in response to US policy. In 2018, Slack deactivated accounts around the world that were tied to Iran, in a move that stretched well beyond the borders of the country. Apple took several popular Iranian apps off its store in 2017 in the face of US sanctions. At the time, Apple issued a statement that’s still relevant: “This area of law is complex and constantly changing.”

At the same time, once again people around the world are waking up to the reality that their speech is governed by actors who are not accountable to them. Instagram has users but not citizens. Executives in California will decide what can be said in Tehran.

Of course, there’s vastly more free speech on Instagram than in a country like Iran, where activism is brutally repressed. But as the activists shared with me on Monday, the ramifications of social networks acting as quasi-states to reshape political speech in their countries are significant. And their struggles to appeal unjust content removals are real.

The good news is that later this year, Facebook will launch its independent Oversight Board: a Supreme Court for content moderation that will allow users to appeal in cases like the activists’ and the Iranian citizens’. One of the board’s rules will be that cases selected for review will include at least one person from the region in which the case originated. That’s not quite a democratically elected representative — but hopefully it bolsters the board’s accountability to Facebook’s user base.

There are still many questions about how the board will work in practice, and whether it can serve as a model for quasi-judicial systems at other companies. But hearing the activists’ stories today, and reading about the confusion over sanctions in Iran, it seemed to me that the board can’t launch quickly enough.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: In December, Facebook updated its standards surrounding hate speech and banned many dehumanizing comparisons.

Trending down: In 2019, Americans said that social media wastes our time, spreads lies and divides the nation. And yet 70 percent still use Twitter or Facebook at least once a day.


Senate majority leader Mitch McConnell introduced a new bill that would give news organizations an exemption from antitrust laws. It would allow them to band together to negotiate with Google and Facebook over how their articles and photos are used online, and what payments the newspapers get from the tech companies. Cecilia Kang from The New York Times has the story:

Supporters of the legislation said it was not a magic pill for profitability. It could, they say, benefit newspapers with a national reach — like The Times and The Washington Post — more than small papers. Facebook, for instance, has never featured articles from Mr. NeSmith’s newspaper chain in its “Today In” feature, an aggregation of local news from the nation’s smallest papers that can drive a lot of traffic to a news site.

“It will start with larger national publications, and then the question is how does this trickle down,” said Otis A. Brumby III, the publisher of The Marietta Daily Journal in Georgia.

But the supporters say it could stop or at least slow the financial losses at some papers, giving them time to create a new business model for the internet.

Attorney General William Barr asked Apple to unlock two iPhones used by the gunman in the Pensacola shooting last month. The company already gave investigators data on the shooter’s iCloud account, but has refused to help them open the phones, which would undermine its privacy-focused marketing. (Katie Benner / The New York Times)

A Microsoft tool used to transcribe audio from Skype and Cortana, its voice assistant, ran for years with “no security measures,” according to one former contractor. He says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked at the company. (Alex Hern / The Guardian)

Most cookie consent pop-ups seen by people in the EU are likely flouting regional privacy laws, a new study suggests. The pop-ups are ostensibly supposed to get permission to track people’s web activity. (Natasha Lomas / TechCrunch)

India’s Supreme Court said indefinite internet shutdowns violate the country’s laws concerning freedom of speech and expression. However, the order won’t immediately impact the ongoing internet shutdown in Kashmir. The government still has a week to produce a restrictive order detailing the reasons for the shutdown. (Ivan Mehta / TNW)

India ordered an investigation into Amazon and Walmart’s Flipkart over allegedly anti-competitive practices. It’s the latest setback for US e-commerce giants operating in the country. (Aditya Kalra and Aditi Shah / Reuters)


Facebook and Google are no longer the top destinations for college students looking to land prestigious jobs after graduation. While some still see Big Tech as a way to make a lot of money, others feel like it’s an ethical minefield. Emma Goldberg at The New York Times explains the trend:

The share of Americans who believe that technology companies have a positive impact on society has dropped from 71 percent in 2015 to 50 percent in 2019, according to a 2019 Pew Research Center survey.

At this year’s Golden Globes, Sacha Baron Cohen compared Mark Zuckerberg to the main character in “JoJo Rabbit”: a “naïve, misguided child who spreads Nazi propaganda and only has imaginary friends.”

That these attitudes are shared by undergraduates and graduate students — who are supposed to be imbued with high-minded idealism — is no surprise. In August, the reporter April Glaser wrote about campus techlash for Slate. She found that at Stanford, known for its competitive computer science program, some students said they had no interest in working for a major tech company, while others sought “to push for change from within.”

Facebook shares hit an all-time high, despite attacks from both sides of the aisle ahead of this year’s presidential election. The company closed at $218.30 on Thursday, exceeding its previous high of $217.50 in July 2018 and valuing the company at $622 billion. (Tim Bradshaw / The Financial Times)

Facebook’s newest Oculus headset is in high demand, and Valve’s VR-only “Half-Life” sequel is due out in March. The news signals Facebook’s VR quest is finally getting real. (Dan Gallagher / Wall Street Journal)

Facebook’s redesigned look for desktops is already here for some users, and will be broadly available before the spring. If you’re getting a first peek, you’ll see a pop-up inviting you to help test “The New Facebook” when you log in. (Ian Sherr / CNET)

Instagram added new Boomerang effects in an effort to compete with TikTok. Now, users can add SlowMo, “Echo” blurring, and “Duo” rapid rewind special effects to their Boomerangs, as well as trim their length. This all reminds me of one of my favorite tweets. (Josh Constine / TechCrunch)

AI-assisted health care systems, such as those being developed by Google, promise to combine humans and machines in order to facilitate cancer diagnosis. But they also have the potential to worsen pre-existing problems such as overtesting, overdiagnosis, and overtreatment. (Christie Aschwanden / Wired)

On TikTok, teens are using memes to cope with the possibility of World War III. The trend gained momentum after Soleimani’s death, with people posting bleak jokes about getting drafted. Fun!! (Kalhan Rosenblatt / NBC)

TikTok might launch a curated feed to provide a safer space for brands to advertise in. The decision comes as the Chinese-owned company faces new concerns about the volume of advertiser-unfriendly content on its platform.

Nine years after Twitch’s launch, the content that hardcore gamers most revile has officially become its most watched: just talking. A new report from StreamElements shows that in December, Twitch viewers watched 81 million hours of “Just Chatting.” (Cecilia D’Anastasio / Wired)

And finally…

My favorite thing on Twitter is just former costars Adam Sandler and Kathy Bates supporting one another as the Oscar nominations were announced.

Better luck next time, Sandman. (Uncut Gems is great.)

Talk to us

Send us tips, comments, questions, and sanctions.

Facebook’s revised political advertising policy doubles down on division

In October, Facebook made the controversial decision to exempt most political ads from fact-checking. The announcement met with a swift backlash, particularly among leading Democratic candidates for president. As criticism mounted, Facebook began to hint that it would further refine its policy to address lawmakers’ concerns. One change that seemed likely was to limit the ability of candidates to use the company’s sophisticated targeting tools, particularly after hundreds of employees wrote an open letter to Mark Zuckerberg asking for it.

On Thursday, Facebook unveiled the refinements to its policy that it had been promising. But restrictions on targeting were nowhere to be found. Instead, the company doubled down on its current policy, and said the only major change in 2020 would be to allow users to see “fewer” ads. (Fewer than what? It didn’t say.) Here’s Rob Leathern, the company’s director of product management for ads, in the blog post:

There has been much debate in recent months about political advertising online and the different approaches that companies have chosen to take. While Twitter has chosen to block political ads and Google has chosen to limit the targeting of political ads, we are choosing to expand transparency and give more controls to people when it comes to political ads. […]

We recognize this is an issue that has provoked much public discussion — including much criticism of Facebook’s position. We are not deaf to that and will continue to work with regulators and policy makers in our ongoing efforts to help protect elections.

The move is rooted in ideas of personal responsibility — if you want to see fewer political ads and remove yourself from campaigns, that’s on you. In practice, though, it seems unlikely that many Facebook users would take advantage of the semi-opt-out, which is due to be released sometime before April. When’s the last time you visited your ad preferences dashboard?

Among the commentators I follow, condemnation of Facebook’s move was more or less universal. Elizabeth Warren hated it (and took a dig at the Teen Vogue imbroglio while she was at it). Joe Biden hated it. Ellen Weintraub of the Federal Election Commission hated it. Barbra Streisand hated it. And the list goes on.

Republicans, who the conventional wisdom holds will benefit most from the move, were largely silent on the decision. (Ben Shapiro was a minor exception, and here’s a Washington Post columnist who likes the policy.) Still, it’s safe to assume that President Donald Trump, whose campaign made great use of targeting capabilities during the 2016 election, would have raged had Facebook taken those tools away. And given that Facebook is the subject of at least four ongoing federal investigations, it wouldn’t be surprising if the company developed this policy with appeasement in mind.

At the same time, Republicans aren’t the sole beneficiaries of Facebook’s announcement. As Leathern noted, the Democratic National Committee opposed the elimination of targeting tools. There is also some evidence that Facebook tools have prompted more candidates overall to buy ads, increasing the amount of paid political discussion generally.

I’ve come around to the idea that microtargeting ought to be banned, because it accelerates the polarization and tribalism that are transforming the country. Let politicians craft divisive messages to ever-smaller splinters of the populace and they probably will. The media will write about the most egregious examples of misinformation and hypocrisy that this practice enables, but it seems likely that much of it will go unchallenged. Meanwhile, sorting fact from fiction will become even harder for the average voter. The negatives here seem to far outweigh any benefits.

Andrew “Boz” Bosworth, a top Facebook executive who ran the ad platform during the 2016 election, called polarization “the real disaster” in an internal post made public this week by The New York Times. Bosworth wrote:

What happens when you see 26% more content from people you don’t agree with? Does it help you empathize with them as everyone has been suggesting? Nope. It makes you dislike them even more. This is also easy to prove with a thought experiment: whatever your political leaning, think of a publication from the other side that you despise. When you read an article from that outlet, perhaps shared by an uncle or nephew, does it make you rethink your values? Or does it make you retreat further into the conviction of your own correctness? If you answered the former, congratulations you are a better person than I am. Every time I read something from Breitbart I get 10% more liberal.

A world in which politicians are able to advertise only to large groups of people — as they do on broadcast television, for example — is one in which they have incentives to promote more unifying messages. But if they can slice and dice the electorate however they like, those incentives are much weaker.

Meanwhile, misleading political ads will continue to go viral, prompting a fresh news cycle whenever a candidate’s lie crosses a few hundred thousand impressions. In each case, calls for Facebook to revisit its policies will be renewed, and the beleaguered PR team will dig up old quotes from Leathern’s post and email them to reporters by way of explanation.

And make no mistake: Facebook executives already know all this, and have decided that it beats the alternative. The company is committing to 11 full months of getting kicked in the teeth. It may well be the company’s smartest move politically. But it would seem to augur very poorly for our politics.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: Microsoft released a new tool that scans online chats for people seeking to sexually exploit children. It’s part of a broader push by the tech industry to crack down on the dangers facing kids online, amid pressure from lawmakers.

Trending down: Anti-vaxxers continue to circumvent Facebook’s ban against ads that contain vaccine misinformation. “Facebook does not have a policy that bans advertising on the basis that it expresses opposition to vaccines,” a Facebook spokesperson said. OK!


⭐ House lawmakers introduced a new bill that would give parents the right to delete data that companies have collected about their children and extend the Children’s Online Privacy Protection Act to older minors. The Verge’s Makena Kelly explains the significance:

The bill would make big updates to the law that’s already brought enormous changes to YouTube and TikTok and infuriated creators. In its settlement with YouTube, the FTC fined the company over $170 million and prohibited the company from running targeted ads on videos the agency could deem child-friendly. Many critics argued that this settlement didn’t go far enough, and if the PROTECT Kids Act was approved, YouTube and other online platforms would be under a lot more pressure than they already are to ensure children’s data remains safe online.

Under current law, COPPA only prohibits platforms from collecting the data of children under the age of 13. Under the PROTECT Kids Act, that age would be increased to 16. COPPA also doesn’t include precise geolocation and biometric information as part of its definition of “personal information.” This House bill would ban platforms from collecting those sensitive pieces of information from children as well. And if a parent wanted to remove their children’s data from a website, the company would have to provide some kind of delete feature for them to use.

Here are 10 things tech platforms can do to create more election security before November. The list, which includes contributions from Facebook co-founder Chris Hughes, offers a refreshingly concrete take on the ongoing debate over big tech and election manipulation. (John Borthwick / Medium)

Iranian teenagers are defacing US websites in protest of the Trump administration killing Soleimani. Some of the hackers say they do not work for the Iranian government. (Kevin Collier / The Verge)

A pro-Iran Instagram campaign targeted the Trump family after the funeral of Iranian general Qassem Soleimani. The campaign consisted of tagging the president’s family, especially Ivanka and Melania, in images ranging from the Iranian flag to a beheaded Donald Trump. (Jane Lytvynenko and Jeremy Singer-Vine / BuzzFeed)

Android users in the EU will soon be able to choose their default search engine from a list of four options, including Google, when setting up their new phones or tablets. The changes follow a $5 billion fine from EU regulators that found Google had used its mobile operating system to hurt rivals. (Lauren Feiner / CNBC)

Many politicians have been hesitant to create profiles on TikTok, the video looping app plagued by national security concerns. The vacuum has allowed impersonators to roam free. The problem is compounded by the fact that TikTok lacks a robust verification system, which makes identifying and taking down such accounts difficult. (Maria Jose Valero and Yueqi Yang / Bloomberg)

Reddit updated its impersonation policy ahead of the 2020 election. The new policy covers fake articles misleadingly attributed to real journalists, forged election communications purporting to come from real government agencies, and scammy domains posing as those of a particular news outlet or politician.

YouTube’s algorithm isn’t the only thing responsible for making the platform a far-right propaganda machine, researcher Becca Lewis argues. The company’s celebrity culture and community dynamics play a major role in the amplification of far-right content. (Becca Lewis / Medium)

A judge in Brazil ruled that a film made by a YouTube comedy group that depicts Jesus as gay must be temporarily removed from Netflix. Two million people signed a petition calling for the movie to be axed, and the production company was attacked with Molotov cocktails last month. And you thought Richard Jewell got bad reviews. (BBC)


Mark Zuckerberg is giving up on annual personal challenges. Instead, he wrote a more thematic list of goals for the next decade, which include a new private social platform, a decentralized payments platform, and new forms of community governance. Here’s how he framed the pivot:

This decade I’m going to take a longer term focus. Rather than having year-to-year challenges, I’ve tried to think about what I hope the world and my life will look in 2030 so I can make sure I’m focusing on those things. By then, if things go well, my daughter Max will be in high school, we’ll have the technology to feel truly present with another person no matter where they are, and scientific research will have helped cure and prevent enough diseases to extend our average life expectancy by another 2.5 years.

I’m really glad to see this — as I argued here, the annual challenges had outlived their usefulness.

Meanwhile, here’s your content moderation story of the day, from David Gilbert at Vice. It centers on Facebook moderators in Europe.

One moderator who worked at CPL for 14 months in 2017 and 2018 told VICE News that he decided to leave the company when a manager sanctioned him while he was having a panic attack at his computer. He’d just found out that his elderly mother, who lived in a different country, had had a stroke and gone missing.

“On the day I had the most stress in the world, when I think I might lose my mother, my team leader, a 23-year-old without any previous experience, decided to put more pressure on me by saying that I might lose my job,” said the moderator, who did not want to be identified.

YouTube creator David Dobrik has gotten well over one million downloads on his new digital disposable camera app. YouTubers launching apps isn’t anything new, but the disposable camera idea is tied directly to David B’s brand, and it’s one that fans want to try for themselves. (Julia Alexander / The Verge)

Thanks to YouTuber MrBeast’s viral tree planting campaign, more than 21 million trees will be planted across the United States, Australia, Brazil, Canada, China, France, Haiti, Indonesia, Ireland, Madagascar, Mozambique, Nepal, and the United Kingdom. (Justine Calma / The Verge)

Amazon’s Twitch is facing mounting competition from Facebook. Facebook Gaming was the fastest-growing streaming platform (in terms of streaming hours watched) in December. (Olga Kharif / Bloomberg)

The Chinese version of TikTok, called Douyin, just hit 400 million daily active users. The news was revealed by parent company ByteDance in its annual report this week. (Manish Singh / TechCrunch)

And finally…

Text this number for an infinite feed of AI-generated feet

It’s a big day for foot fetishists, The Next Web reports:

The site relies on a generative adversarial network (GAN) to produce eerily realistic images of feet. Of course, since these are all the figment of a computer’s imagination, you’re bound to see some gruesome deformities.

Have a wonderful evening and absolutely do not send us any AI-generated feet.

Talk to us

Send us tips, comments, questions, and microtargeted advertisements.

How Facebook’s ad in Teen Vogue came back to haunt it

Today Facebook appeared at a Congressional hearing about synthetic and manipulated media, and so it was only fitting that the day was consumed by confusion over whether the company had placed a flattering article in Teen Vogue to manipulate the media.

“How Facebook Is Helping Ensure the Integrity of the 2020 Election,” a 2,000-word question-and-answer session with five women working at Facebook to protect it from election interference, appeared this morning on the website of Condé Nast’s popular portal for young people. Facebook COO Sheryl Sandberg called it a “great piece.”

Unusually for an American publication, the article appeared without a byline. More unusually, after the article appeared and raised questions among some reporters, it was slapped with a “sponsored content” label. Then Teen Vogue removed the sponsored content label, and then Teen Vogue pulled the article from its website altogether.

Facebook initially insisted that the article had been an act of journalism rather than sponsored content, and that the sponcon label had been applied by an overzealous copyeditor. Teen Vogue’s only comment on the subject was a reply to a reader who, on Twitter, had asked “What is this?”, to which someone with the magazine’s Twitter credentials responded “literally idk.”

Then that tweet got deleted.

Hours later, The Daily Beast’s Max Tani got a statement from the magazine.

Regrettably, the statement caused further confusion. (Condé Nast didn’t respond to my request for comment.)

In the grand conclusion to the day’s events, Facebook itself reversed course and revealed that … the sponcon was sponcon after all! “We had a paid partnership with Teen Vogue related to their women’s summit, which included sponsored content. Our team understood this story was purely editorial, but there was a misunderstanding.”

On one hand, of course, all of this is very silly. Sponsored or not, the Teen Vogue piece didn’t break a lot of new ground on the old platforms-and-democracy beat. The world will survive this exchange having been scrubbed from the web:

Q: Why did encouraging voting become common practice of for-profit media platforms, particularly Facebook?

A: Facebook is about shared experiences, and the chance to use your voice. So is voting.

On the other, there are a few lessons to be drawn here.

One, Teen Vogue clearly did not hold up whatever its end of the bargain with Facebook had been. People would have rolled their eyes at a properly disclosed paid advertorial, but publications have survived worse.

Two, Facebook probably erred by commissioning sponsored content about platform integrity. The thing about your integrity efforts is that you want to promote them with, you know, integrity. Slipping them into online magazines as articles with a small-font disclosure that the thing was bought and paid for undermines the very credibility you were hoping to bolster. Especially if the magazine screws up and forgets to disclose!

The whole reason you run sponsored content is control: you script the questions and edit the answers to your liking. But sometimes what looks like control is only an illusion. Today Facebook learned that the hard way.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending down: Researchers discovered serious security vulnerabilities in TikTok that would have allowed hackers to manipulate user data and reveal personal information. The company fixed the flaws less than a month after they were discovered.


Just three days after Trump ordered the assassination of Iranian commander Qassem Soleimani, the president’s reelection campaign began running hundreds of Facebook ads praising him for ordering the killing. As of Tuesday, Facebook had taken down a few dozen of the ads, some of which appeared to violate the site’s policy against using fake buttons in ads. Alex Kantrowitz at BuzzFeed has the story:

“Thanks to the swift actions of our Commander-in-Chief, Iranian General Qassem Soleimani is no longer a threat to the United States, or to the world,” read one ad. “Take the Official Trump Military Survey TODAY to let me know what you think of my leadership as Commander-in-Chief.”

The survey, meant to collect contact information for future outreach, contained questions like “Do you stand by President Trump in his decision to take out the very dangerous Iranian terrorist leader, Qassem Soleimani?” At the end, it asks respondents for their name, zip code, email, and phone number. Those who provided their phone number, according to a footnote, consented to receive texts, automated calls, and phone calls from the president’s reelection campaign and the Republican National Committee.

Trump’s threats against Iran on Twitter are the latest example of the president seemingly promoting violence, in violation of Twitter’s rules. The rules, notably, don’t apply to politicians unless they threaten individuals or incite hatred against particular nationalities, which is why Trump’s remarks have gone unpunished. (Emily Birnbaum / The Hill)

Twitter suspended an account impersonating a New York Post reporter after it sent out a series of fake stories promoting pro-Iranian regime propaganda and attacking adversaries of the Islamic Republic. (Adam Rawnsley / The Daily Beast)

A misinformation campaign claiming that the bushfires in Australia are the result of arson — not climate change — is circulating on social media. (Brian Kahn / Gizmodo)

Kuwait’s state news agency said its Twitter account was hacked and used to spread false information about US troops withdrawing from the country. It’s unclear who may be responsible for the hack. (Colin Lecher / The Verge)

How a misleading video of former vice president Joe Biden spread around the internet, from verified accounts on Twitter to 4chan, Facebook, and Reddit. (Nick Corasaniti / The New York Times)

One of the people who helped draft the California Consumer Privacy Act (CCPA) wrote an op-ed about why the law might not be as effective as people think. Without proper enforcement, she says, the regulation is “largely toothless.” (Mary Stone Ross / Fast Company)

TikTok updated its content guidelines to spell out categories of videos that aren’t allowed on its platform. The new categories include videos that glorify terrorism, show illegal drug use, feature violent, graphic or dangerous content or seek to peddle misinformation that’s designed to deceive the public in an election. (Tony Romm and Drew Harwell / The Washington Post)

Amazon-owned home security camera company Ring fired employees for watching customer videos, according to a letter the company wrote to Senators. The news highlights a risk across many different tech companies: employees may abuse access granted as part of their jobs to look at customer data or information. (Joseph Cox / Vice)


Misinformation surrounding the new Star Wars movie shows how much power online communities now have to control the cultural conversation. Ryan Broderick at BuzzFeed explains:

The misinformation and anger inside the Star Wars fandom is what happens after decades of corporatization and anonymous decentralized networking. It is a glimpse of a future in which anxieties over the motives of the megacorporations that drive our culture — down to our very mythologies — set off conflicts between warring information tribes who inhabit their own artificial narratives. What began with small but vocal insurgent online communities like 4chan or the alt-right has now come for the mainstream.

Except there is no “mainstream” culture — just as there is no central Star Wars fandom anymore. Today, popular culture is just Gamergates of varying size.

Twitter announced that it’s going to allow users to limit replies directly from the compose screen. It’s part of a new setting called “conversation participants” that the company announced at CES. (Dieter Bohn / The Verge)

Paul Zimmer, a disgraced TikTok star who left social media nearly two years ago, is trying to reinvent himself online with an entirely new identity. Zimmer went dark in 2017 after fans accused him of soliciting gifts in exchange for shout outs that never actually materialized. (Sarah Manavis / The New Statesman)

Twitch hasn’t become the advertising powerhouse that Amazon hoped it would be. The streaming company brought in about $230 million in ad revenue in 2018, and was on track to bring in about $300 million last year, far short of an internal goal of between $500 million and $600 million. (Priya Anand / The Information)

And finally…

On TikTok, LGBTQ youth role play as future President Pence’s conversion therapy campers. This Joseph Longo piece is honestly more nihilistic than funny, but if you like your comedy pitch-dark you might enjoy these incredibly charming queer youth.

These days, the coolest place to be on TikTok is a conversion-therapy camp run by Vice President Mike Pence. At Camp Pence, everything is free — from the electric full-body “massage chairs” to the signature bleach drinks. But most importantly, it’s invite-only. Because to get into Camp Pence, you have to identify as a queer youth facing discrimination for your identity. (Per Snopes, Pence infamously supported using federal funds to treat people “seeking to change their sexual behavior” during his 2000 congressional run, which many have interpreted as support for conversion therapy.)

I need a drink.

Talk to us

Send us tips, comments, questions, and your favorite Teen Vogue articles.