Facebook’s deepfakes ban has some obvious workarounds

We’re used to social networks waiting until the damage has already been done before announcing a cleanup effort. When it comes to the synthetic media known as “deepfakes,” they’ve been notably ahead of the curve. In November, Twitter announced a draft policy on deepfakes and began soliciting public input. And on Monday night, Facebook announced that it would ban certain manipulated photos and videos from the platform. Here’s the blog post from Monika Bickert, Facebook’s vice president of global policy management:

Going forward, we will remove misleading manipulated media if it meets the following criteria:

– It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:

– It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.

The move comes ahead of a planned hearing Wednesday about misinformation at which Bickert is scheduled to speak.

The change represents a significant step forward at a time when anxieties over deepfakes, and their potential role in shaping the 2020 election, are running high. The technology is improving at a steady clip: see these companies selling synthetic (but convincing) people to populate dating apps. It’s not hard to imagine an unscrupulous campaign posting synthetic videos of its opponent saying or doing something they didn’t on Facebook or Instagram. As of today, that’s officially against policy.

Notably — and contrary to what Facebook initially said — the policy will apply to advertisements as well as regular posts. Create a phony video of your opponent clubbing a baby seal and Facebook will make (yet another) exception to its policy against fact-checking political speech in advertisements, and remove anything found to be fake.

Still, some doubts lingered. Nina Jankowicz, who has a book coming out this year on Russian disinformation operations, said she is “still more worried about cheap fakes than deep fakes. Crudely edited, deliberately misleading videos and images are still effective, and they’re still allowed on most platforms.”

What’s a cheap fake? Something like this video of campaign workers doing a corny dance in support of presidential candidate Michael Bloomberg. In reality, they aren’t campaign workers at all — they’re audience members at an improv show filming a bit for a comedian, who shared it on a Twitter profile he had edited to make it appear as if he worked for Bloomberg. The ruse was exposed relatively quickly, but plenty of people still fell for it.

There are all sorts of ways to trick people like this. You can grab an old video and put a new date on it, or just tweet it as if it’s brand new. You can Photoshop. You don’t need a state-of-the-art media lab to wreak havoc. That’s one reason why, even as the technology has improved, information operations haven’t yet seemed very interested in deepfakes, as my colleague Russell Brandom wrote last year. “Uploading an algorithmically doctored video is likely to attract attention from automated filters, while conventional film editing and obvious lies won’t,” Brandom wrote. “Why take the risk?”

There’s one last workaround to Facebook’s new rule: comedy. For good reason, Facebook permits people to post satire and parody. Unfortunately, this rule is often exploited by fake-news purveyors and other sites adept at straddling the line between comedy and misinformation. Last week, in the wake of the US military strike that killed Iranian commander Qassem Soleimani, an article titled “Democrats Call For Flags To Be Flown At Half-Mast To Grieve Death Of Soleimani” was posted to a site called the Babylon Bee. From there, it was shared more than 660,000 times on Facebook.

Surely some of the people who shared the article knew that the Babylon Bee is a satirical site. But read the comments in the original Facebook post and you’ll see that just as many seem to believe the article is real. In the flattened design of the News Feed, where every shared article carries equal weight, it can be hard to tell.

All these many asterisks help explain why Democratic politicians seem mostly unimpressed with Facebook’s deepfakes ban. “Facebook wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation,” said a spokeswoman for House Speaker Nancy Pelosi, the unwitting star of a famously misleading (though not deepfaked) viral video last year.

Joe Biden’s campaign struck a similar note: “Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation, but rather how professionally that disinformation is created.”

Still, Monday’s action doesn’t preclude the company from addressing some of these nuances down the road. And the more of this sort of thing we see in 2020, the more I suspect it will. In the meantime, one of the big platforms has established at least a partial bulwark against the infocalypse — though its strength will depend entirely on how strongly Facebook defends it. Policy, as ever, is what you enforce.

Update

Before the break, I reported here that Pinterest had cut contractors’ vacation benefits, forcing them to work over the holiday if they wanted to be paid during Christmas week. After I published that piece, employees were upset, and the company reversed course. Contractors got their paid week off after all, just like Pinterest’s full-time employees. “We realized our communication of this change may have come too late in the year for people to plan accordingly for this holiday season,” a spokesman told me.

Not much more to say here, other than that journalism is the best job in the world and don’t let nobody ever tell you different.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: Facebook fundraisers have generated $37 million for fire relief in Australia, the company says. Actor Celeste Barber’s fundraiser alone raised $30 million from 1.1 million people, and is now the largest fundraiser in Facebook history.

Trending sideways: Facebook is setting up a new engineering team in Singapore to focus on its lucrative China ad business. The news comes as CEO Mark Zuckerberg has ramped up criticism of the country over human-rights issues.

Trending sideways: Shares of Google hit an all-time high yesterday, closing out at $1,397.81 per share. Apparently, investors are unfazed by the ongoing antitrust investigation into the company, as well as employee unrest.

Governing

The FBI asked Apple to help unlock two iPhones linked to a shooting at Naval Air Station Pensacola in Florida last month. Apple said it has been cooperating with the government and had already handed over all the data in its possession. Here’s what the company told Pete Williams at NBC:

“We have the greatest respect for law enforcement and have always worked cooperatively to help in their investigations,” Apple said in a statement. “When the FBI requested information from us relating to this case a month ago, we gave them all of the data in our possession and we will continue to support them with the data we have available.”

A law enforcement official said there’s an additional problem with one of the iPhones thought to belong to the gunman, Mohammed Saeed Alshamrani, who was killed by a deputy during the attack: He apparently fired a round into the phone, further complicating efforts to unlock it.

A leaked Facebook memo shows longtime executive Andrew “Boz” Bosworth told employees that the company has a moral duty not to tilt the scales against President Trump in the 2020 election. Kevin Roose, Sheera Frenkel and Mike Isaac from The New York Times have the scoop:

In a meandering 2,500-word post, titled “Thoughts for 2020,” Mr. Bosworth weighed in on issues including political polarization, Russian interference and the news media’s treatment of Facebook. He gave a frank assessment of Facebook’s shortcomings in recent years, saying that the company had been “late” to address the issues of data security, misinformation and foreign interference. And he accused the left of overreach, saying that when it came to calling people Nazis, “I think my fellow liberals are a bit too, well, liberal.”

Boz then shared the memo in its entirety from his own Facebook page. Read it.

The White House unveiled 10 principles that federal agencies should consider when devising rules and regulations for the use of artificial intelligence in the private sector, but stressed that a key concern should be limiting regulatory “overreach.” (James Vincent / The Verge)

The 2020 election is likely the most anticipated event in the history of US digital security. Russia still poses a massive threat, as do Iran and China. Experts are also warning that it’s not just the general election that is at risk — the primaries will be a target, too. (Joseph Marks / The Washington Post)

Experts warn that the United States needs to be prepared for cyber retaliation from Iran, which employs different tactics than Russia. Iran has spent years building an online influence apparatus that uses fake websites and articles meant to mimic real news and disappear quickly. (Sara Fischer / Axios)

Sonos sued Google, seeking financial damages and a ban on the sale of Google’s speakers, smartphones and laptops in the United States. Sonos accused Google of infringing on five of its patents, including technology that lets wireless speakers connect and synchronize with one another. (Jack Nicas and Daisuke Wakabayashi / The New York Times)

A researcher dove into how the death of Qassem Soleimani, a top Iranian commander, is being discussed on Telegram, one of Iran’s most popular social media platforms.

As Taiwan gears up for a major election this week, officials and researchers worry that China is experimenting with social media manipulation to sway the vote. Voters are already awash in false or highly partisan information, making such tactics easy to hide. (Raymond Zhong / The New York Times)

Violence erupted at Jawaharlal Nehru University in India last week, after members of a student group — apparently coordinating through WhatsApp — attacked fellow students and teachers. An investigation into the group revealed who the attackers were and how they coordinated the violence. (Meghnad S, Prateek Goyal and Anukriti Malik / Newslaundry)

Industry

Politicians, parties, and governments are hiring dark-arts public-relations firms to spread lies and misinformation. One firm promised to “use every tool and take every advantage available in order to change reality according to our client’s wishes.” Craig Silverman, Jane Lytvynenko and William Kung have the story:

If disinformation in 2016 was characterized by Macedonian spammers pushing pro-Trump fake news and Russian trolls running rampant on platforms, 2020 is shaping up to be the year communications pros for hire provide sophisticated online propaganda operations to anyone willing to pay.

Also — the threat isn’t limited to the US:

Most recently, in late December, Twitter announced it removed more than 5,000 accounts that it said were part of “a significant state-backed information operation” in Saudi Arabia carried out by marketing firm Smaat. The same day, Facebook announced a takedown of hundreds of accounts, pages, and groups that it found were engaged in “foreign and government interference” on behalf of the government of Georgia. It attributed the operation to Panda, an advertising agency in Georgia, and to the country’s ruling party.

AI start-ups are selling pictures of computer-generated faces that appear to be real people. They offer companies a chance to “increase diversity” in their ads without needing human beings. They’ve also signed up dating apps that need more images of women. (Drew Harwell / The Washington Post)

The new trick to going viral on Instagram is making an Instagram filter, as demonstrated by the bewilderingly popular “What Disney Character Are You?” sensation. This story breaks down how it works. (Chris Stokel-Walker / Input)

Michelle Obama launched an Instagram video series about students navigating their first year of college. The former First Lady partnered with digital media company ATTN: to launch a video series on IGTV, Instagram’s video platform. (Sara Fischer / Axios)

The CES gadget show in Las Vegas is all-in on surveillance technology, from face scanners that check in some attendees to the cameras-everywhere array of digital products. (Matt O’Brien / Associated Press)

And finally…

Woe unto the big tech executive who uses an extended metaphor about Lord of the Rings without checking his facts. Here’s Chaim Gartenberg on the Boz memo:

As part of his argument, Boz makes the comparison by citing none other than J.R.R. Tolkien’s The Lord of the Rings to explain his decision. Facebook, Boz argues, is akin to Sauron’s One Ring, and wielding its power — even with noble intent — would only lead to ruin. […]

In Tolkien’s books and the film adaptations, Galadriel is concerned about the power of the Ring corrupting her — as it does all, save the Dark Lord himself. But not once does she contemplate using its power for good. “In place of the Dark Lord you will set up a Queen. And I shall not be dark, but beautiful and terrible as the Morning and the Night! … All shall love me and despair!” Tolkien writes.

Later, nerds.

Talk to us

Send us tips, comments, questions, and Lord of the Rings analogies: casey@theverge.com and zoe@theverge.com.

16 predictions for social networks in 2020

Programming note: With this edition, The Interface is now on holiday break! We return January 6th.

And just like that, we’ve reached the final issue of the year — and also, somehow, the decade. As is tradition around here, let’s close out the year with some predictions from you about where platforms and democracy are headed in 2020 and beyond.

Thanks to everyone who contributed. Here are your thoughts, along with some of mine. This year, I’m ordering these in roughly how likely I think they are. So, the most likely things to happen at the top, and we move further into crazy town as you scroll down. Generally speaking, I feel more comfortable predicting product moves than policy shifts. But we’ll see!

Social platforms continue to struggle with disinformation and its consequences. An obvious point, maybe, but Blake Bowyer makes it in a compelling way. He argues that Facebook’s decision not to fact-check political ads leads to misinformation campaigns and their awful second-order consequences, such as Pizzagate. Facebook is going to get beat up every time a major politician lies on its platform in 2020 unless — until? — it reverses its policy. (Joe Albanese, a former Facebook employee himself, predicts the company will do just that.)

Metrics keep going invisible. Instagram reportedly ditched like counts because hiding them led people — particularly young people — to post more. If that proves true elsewhere, expect more metrics to disappear in 2020, reader M.D. predicts.

The flight from feeds to curation. Algorithms fade a bit into the background in 2020 as human editors return to the big aggregators. They’re already working on Facebook’s new news tab, on Apple News, and on editorial teams at Twitter and Snap. Even Google says it is beginning to take into account the quality of original reporting in its suggested news stories. All of this is welcome, even if feeds still command the lion’s share of attention.

The next big social network is email. Newsletters are the new websites, and expect to see communities growing up around them in interesting new ways, led by companies like Substack. Allen Ramos predicts that the rise of newsletters — and, I’d say, of subscription-based media generally — will contribute to a new divide between those who see ads and those who pay to avoid them.

A deepfake app goes mainstream in the US. Depending on how you think about that viral Snapchat aging filter, one arguably already has. But Ben Cunningham (ex-Facebook) predicts some machine-learning-based video editing app will take off in 2020, with its features eventually coming to the Instagram camera. Feels like a solid bet.

Splinternet happens. We’ve talked before in this column about how the internet is quickly dividing into zones. There’s an American internet, a European internet, and a Sino-Russian-authoritarian internet, and they all appear to be rapidly pulling apart. Jason Barrett Prado predicts that this trend accelerates in 2020, limiting the potential size of any one social network.

Discord goes mainstream. The gamer chat network is already popular among young people — and journalists who now routinely find white supremacist networks and criminal gangs using it. Reader Ian Greenleigh predicts Discord will have a big 2020 as giant everyone-in-the-same-room social networks lose favor and “the interest graph moves underground.”

Oculus will finally take off — thanks to Twitch. Cunningham also suspects that streamers will gravitate toward the blue ocean of virtual reality, where Facebook’s Oculus Quest is arguably the best of breed. Streamers will draw audiences, who will buy Quests to see what all the fun is about. As Cunningham acknowledges, this prediction might take a few extra years to come true.

The debate over Section 230 hits a stalemate. Just as Congress couldn’t reach consensus on a national privacy law in 2019, it’ll stumble over how to alter the Communications Decency Act in 2020. Andrew Hutchinson predicts Congress will legislate the removal of “misinformation,” but that seems unlikely (and, perhaps, unconstitutional) to me.

The next big policy fight is over location data. With increasing attention being paid to the expanding surveillance networks created by our smartphones, reader Dan Calacci predicts location becomes a hot topic among regulators.

TikTok gets serious competition. Matt Navarra predicts we’ll see a rash of new short-form video apps take off, including Byte and Firework. Add that to ByteDance’s list of challenges in America next year, along with skeptical regulators and a churning customer base.

Slack will become the target of coordinated investor shorts along with a big exposé on business practices, reader H.B. predicts. Certainly it seems some companies are reconsidering how they use the platform in light of recent cases where executives were embarrassed by their messages becoming public.

Libra fails to launch. The beleaguered Facebook cryptocurrency project struggles to get off the ground in 2020 as regulators continue to hate it, partners continue to leave it, and Facebook itself decides to save its powder to fight government battles elsewhere. (Calacci predicts it will launch.)

Wilder ideas. Beth Becker says: “Facebook will unleash at least a few of the following: an actual podcast platform, paid music streaming and I still think that instant articles will eventually turn into some kind of platform for magazines and even books for long-form reading.”

A reader who asked to remain anonymous predicted that a European Union country would fund a public social network.

Question marks. Do regulators seek the breakup of Facebook or Google? Will the various ongoing privacy-related investigations lead to any meaningful changes among the platforms? Will Facebook’s oversight board emerge as a true justice system for a social network? Will Libra actually launch? Will the platforms adequately defend against election challenges? What challenge that no one is thinking about will emerge and surprise us all?

No one really had any sharp guesses about those subjects, and to me the answers are basically a coin flip. For our final prediction of the year, we turn to Galen Pranger: “Until Trump is out of office, the psychological impact of his presidency will continue to drive an especially negative narrative about the social impacts of the Internet and social media. A Democratic win next year will help stabilize some of the media pressure on the industry.”

I certainly hope we find out!

Thanks to everyone who read, shared, and responded to The Interface this year. I got to meet so many of you in person this year at live events and conferences, and heard from dozens more via email. It’s a privilege to write four columns a week for some of the smartest and most thoughtful people in the industry. Zoe and I have big plans in 2020, and we look forward to you following along with us.

So thanks again, and happy holidays. We’ll see you back here on January 6th.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: Facebook will remove posts that mislead people about the US Census starting next year. The goal is to prevent malicious actors from interfering in a critical, once-in-a-decade process that determines political representation.

Trending down: Facebook failed to convince lawmakers it needs to track people’s location even when location services are turned off. The company said it uses location data to target ads and for certain security functions, but Congress is still arguing the company should give users more control.

Governing

Every minute of every day, dozens of private data companies are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic files. The New York Times received one of those files, and is publishing a series of eye-opening articles on what this level of surveillance could mean. Stuart A. Thompson and Charlie Warzel set the stage:

It doesn’t take much imagination to conjure the powers such always-on surveillance can provide an authoritarian regime like China’s. Within America’s own representative democracy, citizens would surely rise up in outrage if the government attempted to mandate that every person above the age of 12 carry a tracking device that revealed their location 24 hours a day. Yet, in the decade since Apple’s App Store was created, Americans have, app by app, consented to just such a system run by private companies. Now, as the decade ends, tens of millions of Americans, including many children, find themselves carrying spies in their pockets during the day and leaving them beside their beds at night — even though the corporations that control their data are far less accountable than the government would be.

“The seduction of these consumer products is so powerful that it blinds us to the possibility that there is another way to get the benefits of the technology without the invasion of privacy. But there is,” said William Staples, founding director of the Surveillance Studies Research Center at the University of Kansas. “All the companies collecting this location information act as what I have called Tiny Brothers, using a variety of data sponges to engage in everyday surveillance.”

Facebook will no longer feed user phone numbers provided to it for two-factor authentication purposes into its “people you may know” feature. The move is part of a wide-ranging overhaul of its privacy practices, which advocates have been calling for since last year. (Reuters)

The legal advisor to the EU’s top court said Facebook sharing data on European users with the US is legal and provides sufficient privacy protections. It’s a symbolic victory for the company in its fight against privacy activist Max Schrems, who has argued that such practices are illegal. (Ryan Browne / CNBC)

Hundreds of partisan news outlets are distributing algorithmic stories and conservative talking points, according to an investigation by The Tow Center for Digital Journalism. Of the 450 “pink slime” sites they discovered, at least 189 were set up as local news networks across ten states within the last twelve months by an organization called Metric Media. (Priyanjana Bengani / The Tow Center for Digital Journalism)

Bing appears to be returning an alarming amount of disinformation and misinformation in response to user queries — far more than Google does. While its share of the search market in the US is dwarfed by Google’s, it has steadily increased over the past ten years. (Daniel Bush and Alex Zaheer / Stanford Internet Observatory)

After a series of embarrassing leaks from their WhatsApp groups, Conservative MPs have been downloading the end-to-end encrypted messaging app Signal, which allows users to auto-delete messages. (Mark Di Stefano and Emily Ashton / BuzzFeed)

Industry

All those tech IPOs that were supposed to make people megarich this year only made them rich-ish. “Instead of yachts, tech workers are funding more mundane ventures like college savings plans,” write Nellie Bowles and Kate Conger in The New York Times. They add:

San Francisco has been left as a slightly more normal town of tech workers who got rich-ish, maybe making a few hundred thousand dollars. But that doesn’t go far in a city where the median cost of a single family home is about $1.6 million.

“Everyone that came back post-I.P.O. seemed to be the same person. I didn’t see any Louis Vuitton MacBook case covers or champagne in their Yeti thermos,” said J.T. Forbus, a tax manager at Bogdan & Frasco in San Francisco.

Private wealth managers are now meeting with a chastened clientele. Developers are having to cut home prices — unheard-of a year ago. Party planners are signing nondisclosure agreements to stage secret parties where hosts can privately enjoy their wealth. Union organizers are finding an opportunity.

Everyone had gotten too excited, and who could blame them? The money was once so close: A start-up that coordinated dog walkers raised $300 million. The valuations of the already giant ride-hailing behemoths had nearly doubled again. WeWork, a commercial real estate management start-up that owned very little of its own real estate, was valued at $47 billion.

Facebook is pursuing rights to music videos from major record labels, to boost interest in its Watch video service. Record labels have been pushing Facebook to step up and give them a credible alternative to YouTube. (Lucas Shaw / Bloomberg)

Facebook announced it will run its first commercial in the Super Bowl, buying time for a 60-second ad featuring Chris Rock and Sylvester Stallone. The ad will promote Facebook Groups. (Nat Ives / The Wall Street Journal)

Facebook is betting big on hardware, investing billions of dollars in technologies that could make it a gatekeeper when — and if — augmented reality becomes the next big thing. (Alex Heath / The Information)

Facebook is building its own operating system so it can be less dependent on Android. The company doesn’t want hardware like Oculus and Portal to be at the mercy of Google and its mobile operating system. (Josh Constine / TechCrunch)

Facebook acquired a Spanish cloud video gaming company called PlayGiga. The acquisition is part of Facebook’s efforts to expand more into gaming. (Salvador Rodriguez / CNBC)

Felix “PewDiePie” Kjellberg is ending 2019 with a couple of major decisions: he plans to take a small break from YouTube in 2020, and he’s wiped out his popular Twitter account, losing its 19.3 million followers in the process. The news has generated a lot of attention, and highlights just how hard it is for many YouTubers to take time off. (Julia Alexander / The Verge)

Delivery apps are turning gig workers into drug mules in Argentina. The companies allow them to transport anything, leaving gig workers liable if they’re caught with illegal drugs. (Amy Booth / OneZero)

One woman talks about her experience using Tinder in a very small town, where she went from attempting witty banter to trading the same rote questions and answers, in a way that’s both relatable and depressing. (CJ Hauser / The Guardian)

New York Magazine did a “decade in internet culture” list with 34 emblematic posts that highlight the weirdest and most unforgettable things that happened on the internet in the 2010s. (Brian Feldman / Intelligencer)

Also: BuzzFeed curated a list of the 50 worst things that happened on the internet this year, and it is hilarious and horrifying. (Ryan Broderick and Katie Notopoulos / BuzzFeed)

A guy logged back on to Twitter after a decade to announce he married the woman he tweeted a joke about back then. An absolutely perfect story to end a decade of tweeting, from Tanya Chen.

And finally …

Facebook is doing a Super Bowl ad this year, and I asked you to give me your absolute worst creative ideas. You really came through.

Sadly for you lot, I won my own competition.

Don’t take my word for it — Facebook’s chief marketing officer awarded me the prize.

Happy New Year!

Talk to us

Send us tips, comments, questions, and holiday e-cards: casey@theverge.com and zoe@theverge.com.

The software behind Facebook’s new Supreme Court for content moderation

As we start looking forward to 2020 — and your predictions are coming to this space tomorrow — a big subject on my mind is accountability. Tech platforms have evolved into quasi-states with only the most rudimentary of justice systems. If trolls brigade Twitter into suspending your account, or you’re removed from YouTube’s monetization program, or your tasteful nudes are banned from Instagram, you typically have almost no recourse. In most cases you will fill out a short form, send it in, and pray. In a few days, you’ll hear back from a robot about the decision in your case — unless you don’t.

That’s why I’ve been so interested this year in Facebook’s development of what it calls an “oversight board” — a Supreme Court for content moderation. When it launches next year, the board will hear appeals from people whose posts might have been removed from Facebook in error, as well as making judgments on emerging policy disputes at Facebook’s request. And the big twist is that the board will be independent of Facebook — funded by it, but accountable only to itself.

Recently I went down to Facebook’s headquarters in Menlo Park to meet some of the people working on the project. Milancy Harris, manager for governance and strategic initiatives, and Fay Johnson, who is the lead product manager on the oversight team, talked to me about how they have approached the unique assignment of devolving content moderation power from the company to an independent board.

Facebook set up an irrevocable trust to fund the board, and last week the company said it had agreed to fund it to the tune of $130 million — a figure far higher than some of the civil-society folks I’ve spoken with had expected.

It’s a figure that speaks to the seriousness with which Facebook has approached the project. It also speaks to the sheer complexity of what Facebook is trying to do. The company spent the past year consulting experts, holding mock appeals, developing the board’s bylaws, and recruiting board members. Simultaneously, Harris and Johnson have been among those working on what Facebook calls its “case management tool” — the hyper-niche software, to be used by perhaps 100 people at a time, that will route cases from Facebook to the board and its staff.

Here are some of the things I learned from Harris and Johnson.

  • For starters, you’ll only be able to appeal decisions to remove your content. Facebook itself can request “expedited review” for basically anything — including asking the board to make a call in cases where the company itself has not yet decided what to do. You could imagine, for example, Facebook referring the infamous altered footage of Nancy Pelosi to the board to request a ruling before it makes a decision. (Facebook says that eventually you’ll also be able to appeal decisions to leave up content you want taken down.)
  • Facebook can also request what it’s calling a “policy advisory opinion,” asking the board to weigh in on a general policy matter independent of an individual post. What exceptions should Facebook allow to its “real names” policy? When is it OK to show a nipple on Instagram? These are decisions I can imagine the board having been asked about in the past — and perhaps it still will be in the future.
  • The privacy and security challenges involved in setting up the board are enormous. Facebook’s biggest problems have always come from data sharing, and now it needs to create a new data-sharing apparatus that involves an independent body. It’s arduous, tedious, high-stakes work.
  • When you appeal your case to Facebook, you’ll get to send a statement to the board making your case. Facebook promises not to edit it in any way.
  • Your appeals will be added to what is presumably a very long queue of cases to be heard by the board.
  • Facebook doesn’t really know how many cases the board will be able to hear in a year. The initial board will comprise about 20 members, and grow to 40 over time. Members will serve three-year terms.
  • The board will have case-selection committees that are charged with picking cases to hear. Harris told me that Facebook will encourage the board to pick cases that represent geographical diversity and go beyond that day’s public-relations crisis.
  • Facebook is putting together mock queues of cases and taking practice runs to see how long a typical case takes to adjudicate.
  • The board will only have a set (but as yet undetermined) amount of time to decide whether to hear your case — otherwise it will be rejected automatically. The reason Facebook gave me for this is that its own policies require content to be deleted after certain time periods, making an indefinite stay in the queue impossible. (A rough sketch of this auto-expiry logic appears after the list.)
  • The board will publish its decisions, but the user will get to decide whether they want any personally identifiable information included in the decision. So if you’re an artist and your tasteful nudes were banned from Instagram, you might want to attach your name to the decision and amplify your protest. If you’re a human rights worker whose newsworthy photo of carnage got removed, you might opt not to.
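
Facebook hasn’t shared the case management tool’s actual design, so here is a minimal, purely illustrative sketch in Python of the auto-expiry behavior described above. The class, the field names, and the 90-day window are all my assumptions, not anything Facebook has confirmed:

```python
# Illustrative only — not Facebook's code. Models a review queue where a case
# that isn't selected before its deadline is rejected automatically.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW = timedelta(days=90)  # assumed; the real limit is undetermined

@dataclass
class Case:
    case_id: str
    region: str                 # supports geographically diverse case selection
    submitted_at: datetime
    include_pii: bool = False   # the user decides whether to be identifiable

    @property
    def deadline(self) -> datetime:
        return self.submitted_at + REVIEW_WINDOW

def triage(queue: list[Case], now: datetime) -> tuple[list[Case], list[Case]]:
    """Split the queue into cases still eligible for selection and cases
    auto-rejected because their review window lapsed."""
    eligible = [c for c in queue if now <= c.deadline]
    rejected = [c for c in queue if now > c.deadline]
    return eligible, rejected

now = datetime.now(timezone.utc)
queue = [Case("a1", "EU", now - timedelta(days=120)),  # lapsed: auto-rejected
         Case("b2", "US", now - timedelta(days=3))]    # still eligible
eligible, rejected = triage(queue, now)
```

Whatever the real tool looks like, the consequence Facebook described is the same: a case can die of old age before a selection committee ever sees it.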

One of the miniature debates I had with the Facebook folks was about the decision to limit user appeals, at least at the start, to cases in which posts had been removed. Aren’t posts that stay up generally more harmful? Think about cases where vulnerable people are doxxed or harassed on a social platform and the platform won’t remove it. Isn’t that where this process should start?

Johnson told me that she has a different perspective. “If you’re a person who feels that you’re being targeted because you’re speaking about, let’s say racism in the United States, and those sort of cases where people are potentially being silenced because somebody is interpreting it wrong — that is really impacting your psyche, your well being, your voice,” Johnson told me. “Those [are] situations where I’m like, oh man, I would love this board to be able to help us. Just to think through the better way to handle some of those nuances.”

I had lots more questions, but around that time the meeting ended the way all meetings end at Facebook — with us getting kicked out of our conference room by the next group that had booked it. The good news is that 2020 should bring us lots more information about the board, including its first batch of members.

What do you want to know about the board? Let me know and I’ll make sure to ask the next time I get the chance.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending sideways: Twitter and Facebook are exploring decentralization projects, supposedly to put more power in the hands of users. But the moves might also help them shift the burden of identifying bad actors and filtering out disinformation.

Trending down: Facebook is labeling ads for maternity clothes that feature real pregnant women as “sexually suggestive or provocative” and barring them from appearing on the platform.

Governing

The National Labor Relations Board decided yesterday that businesses can ban workers from using company email for union and other organizing purposes. The decision revokes a right granted in 2014 to workers who have access to employers’ email systems. Hassan A. Kanu at Bloomberg has the story:

The decision is a blow to worker advocacy groups and unions, who urged the NLRB to maintain the 2014 policy on the basis that email has become a central and natural way for co-workers to organize and communicate. The policy reversal also marks another step in the Republican-majority NLRB’s push to reinterpret the central federal law on unions in ways that advocates say have made it easier for companies to avoid a unionized workforce.

The NLRB agreed with Caesars, the U.S. Chamber of Commerce, and other business groups, which had argued that employers have property and First Amendment rights to limit the use of their own email systems. Requiring access to email networks also could cause workplace disruption and increase cybersecurity threats, businesses have said.

Employees “do not have a statutory right to use employers’ email and other information-technology (IT) resources to engage in non-work-related communications,” the Board said in a Dec. 17 announcement.

“Rather, employers have the right to control the use of their equipment, including their email and other IT systems, and they may lawfully exercise that right to restrict the uses to which those systems are put, provided that in doing so, they do not discriminate” against union-related communications.

President Trump spent an estimated $648,713.27 on anti-impeachment Facebook ads during a three-week period beginning Nov. 23, according to an analysis of Facebook data by the Democratic digital firm Bully Pulpit Interactive. (Cat Zakrzewski / The Washington Post)

New evidence shows a network of accounts involved in spreading disinformation before the 2016 presidential election also participated in circulating false claims about Marie Yovanovitch, the American ambassador to Ukraine. The claims led to her recall from the US Embassy in Kyiv earlier this year. (Isaac Stanley-Becker / The Washington Post)

In a Q&A, Rappler’s Maria Ressa discusses her views on how Silicon Valley — and Facebook in particular — warped society. “Facebook broke democracy in many countries around the world, including in mine,” she said. “We’re forever changed because of the decisions made in Silicon Valley.” (Catherine Tsalikis / Centre for International Governance Innovation)

Facebook is investigating a voter engagement app used by The Five Star Movement, a populist party in Italy, as part of a broader probe into potential historical data misuse. (Alberto Nardelli / BuzzFeed)

The House introduced a new bill that would require the federal government to study how a pair of laws targeting online sex trafficking have impacted sex workers by pushing them off the internet. These laws are the reason Craigslist got rid of personal ads. (Makena Kelly / The Verge)

Democracy requires participation from average people. It requires citizenship! And so I loved this first-person account by former Twitter product manager Sachin Agarwal about being a poll worker in San Francisco. (Sachin Agarwal / Medium)

The digital revolution was supposed to bring about community and civic engagement. Instead, it made people feel isolated, distrustful, and disengaged. Writer Joe Bernstein does a deep dive here on how that happened — and why it took people so long to wake up from the promise of utopia. I expect a lot of people will be referencing this piece in the years to come. (Joseph Bernstein / BuzzFeed)

Industry

How Facebook’s ‘like’ button hijacked our attention and broke the 2010s, becoming the social currency of the internet. Of all the many retrospectives coming out right now on a decade of tech, this one by Fast Company’s Christopher Zara offers a particularly nice look at how a tiny feature had an outsized impact:

The like button, a ridiculously simple yet undeniably inventive way to collect information about people’s interests, promised to be the bold new form of micro-attention capture that the emerging social web was demanding. For Facebook, which was still two years away from its IPO, like buttons were also a signal to investors that its then-stated mission to “make the world more open and connected” could translate into a very profitable form of surveillance capitalism.

“Scattered around the web, [Like buttons] allowed Facebook to follow users wherever they wandered online, sending messages back to the mother ship (‘She’s looking for cruises’),” Columbia University professor Tim Wu points out in his book The Attention Merchants. “That would allow, for example, the Carnival Cruise Line to hit the user with cruise ads the moment they returned to Facebook.”

But if like buttons were a godsend for advertisers, a lifeline for businesses, and a future cash cow for Facebook, there was one group for whom the benefits were less apparent—the users themselves, who now number in the billions.
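
The mechanism Wu describes is simple enough to sketch. What follows is purely illustrative Python — not Facebook’s actual code — assuming a Flask server and a hypothetical “uid” cookie set at login. The key point is that the browser attaches facebook.com’s cookies to the widget request even while the user is browsing an entirely different site:

```python
# Illustrative sketch of how any embedded third-party widget can log
# cross-site visits. Assumes Flask; the endpoint and parameter names
# are hypothetical, not Facebook's real ones.
from datetime import datetime, timezone
from flask import Flask, request

app = Flask(__name__)

@app.route("/plugins/like")
def like_button():
    # The browser sends this domain's cookies along with the request,
    # even though the user is on someone else's page.
    user_id = request.cookies.get("uid", "anonymous")
    # The embedding page passes its own URL so the button knows what to
    # "like" -- which also tells the server exactly where the user is.
    page_url = request.args.get("href") or request.referrer or "unknown"
    log_visit(user_id, page_url)  # ("She's looking for cruises")
    return "<button>Like</button>"  # render the widget itself

def log_visit(user_id: str, page_url: str) -> None:
    # Stand-in for the ad-targeting pipeline Wu describes.
    print(f"{datetime.now(timezone.utc).isoformat()} {user_id} -> {page_url}")
```

Every page that embeds the button becomes, in effect, a sensor reporting back to the mother ship — one reason browsers have since moved to restrict third-party cookies.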

Instagram is cracking down on influencers’ branded posts and the products they hawk. The company announced that although it’s always prohibited branded posts that advertise vapes, tobacco, and weapons, it’s going to start enforcing those rules more strictly. (Ashley Carman / The Verge)

Facebook, Microsoft, Twitter and others are spending millions on video game streaming to compete with Amazon’s Twitch. But so far, Twitch is still winning. The platform increased its viewership during the third quarter, despite losing Tyler Blevins, better known by his online alias, Ninja, to Microsoft’s Mixer back in August. (Imad Khan / The New York Times)

This writer made a deepfake of Mark Zuckerberg, and the process was neither time-consuming nor expensive. (Timothy B. Lee / Ars Technica)

Spotify is prototyping a new way to see what friends have been listening to, called “Tastebuds.” It’s the first truly social feature the company has launched since killing off its inbox in 2017. And it has a great name! (Josh Constine / TechCrunch)

A mobile game called Photo Roulette is gaining popularity with younger users. Players invite up to 49 friends to join, and the app then chooses a random photo from someone’s phone and displays it to the rest of the group (other players have to guess who it came from). What could go wrong?! (Julie Jargon / The Wall Street Journal)

YouTube inspired a toy company to launch a line of dolls (L.O.L. Dolls!) designed specifically to make for interesting unboxing videos. The result was a $5 billion brand. (Chavie Lieber / The New York Times)

And finally…

Can’t decide if this is evil or perfect.

Either way, I do want this feature for the weight benches at my gym.

Talk to us

Send us tips, comments, questions, and your 2020 social network predictions: casey@theverge.com and zoe@theverge.com.

Snap Spectacles 3 review: reaching new depths

It was three years ago this week that Spectacles first arrived, via colorful Snapbot vending machines that captivated Snapchat fans. But early buzz and largely positive reviews led Snap to make more of the first-generation video-recording sunglasses than it could actually sell, and the company was forced to write down nearly $40 million in costs.

A second generation arrived last spring with a refreshed, waterproof design and the ability to snap still photos for the first time. But Spectacles 2 didn’t cause half the stir that their predecessors had, the company’s vice president of hardware departed, and Snap’s device ambitions faded into the background.

Now Spectacles 3 have arrived, available exclusively through Snap’s online Spectacles store. They come with a striking new design and a much higher price — $380, up from $150 to $200 for the previous edition. (Spectacles 2 remain on sale.) Snap says the changes reflect its intended audience for the new Spectacles: fans of high fashion and artists who relish new creative tools. It’s also a way of avoiding another big writedown: measuring demand carefully with a single online storefront, then selling each unit at a price that lets the company recoup a bigger share of its investment.

And Spectacles 3 are a milestone for the company in another way, too, CEO Evan Spiegel told me in a recent interview. Thanks to a second camera that lets the device perceive depth for the first time, Snap can now integrate its software into the real world using special filters that map to the world captured in a video.

“What’s really exciting about this version is that, because V3 has depth, we’re starting to actually understand the world around you,” Spiegel said. “So those augmented reality effects are not just a 2D layer. It actually integrates computing into the world around you. And that is where, to me, the real turning point is.”

Spiegel is playing a long game. He often says that AR glasses are unlikely to be a mainstream phenomenon for another 10 years — there are simply too many hardware limitations today. The available processors are basically just repurposed from mobile phones; displays are too power hungry; batteries drain too quickly.

But he can see a day where those problems are solved, and Spectacles becomes a primary way of interacting with the world. Spiegel says the glasses will be a pillar of the company over the next decade, along with Snapchat and Lens Studio, the company’s tool for building AR effects.

“I do think this is the first time that we’ve brought all the pieces of our business together, and really shown the power of creating these AR experiences in Lens Studio and deploying them through Spectacles,” Spiegel said. “And to me, that is the bridge to computing overlaid on the world.”

Last week, I spent some time with Spectacles 3 to see how that bridge is coming along.

As with many products, first impressions count for a lot, and I expect the new Spectacles design will be polarizing. I strongly suspect that I am not the target audience for Spectacles 3, but in any case I never did feel entirely myself when I had them on. Part of it was that big steel bar running across my nose, which I felt gave me a vaguely bug-like affect. And part of it was that thin steel frame, which consistently dug into my ears and scalp. The black and mineral colors are sleek, but for the most part I missed the toy-like, but comfortable, plastic of the first two generations.

Next, I put the cameras through their paces. Image quality is sharp, at least when you view the shots on a phone: photos are stored at a resolution of 1,642 x 1,642 pixels, and videos record at 60 frames per second and are stored at a resolution of 1,216 x 1,216. There are four microphones built into Spectacles 3, and audio fidelity on the videos I recorded sounded good.

The company says you can capture 70 videos or 200-plus photos on a single charge, which should be enough to get you through most day-long outdoor activities. To recharge Spectacles 3, you store them in an attractive fold-out leather wallet. (The elegant wallet may actually be my favorite part of the entire product.) A full charge takes 75 minutes, and the case itself recharges via USB-C.

Spectacles 3 reverse the normal user interface for capturing images: you tap on either of the two camera buttons to record a 10-second video, or press and hold to shoot a 3D photo. As with previous generations, you can tap the button again to add 10 seconds to your video, up to a total of 60 seconds.

The marquee feature on Spectacles 3 is a new kind of Snapchat filter that takes advantage of the glasses’ depth perception to create a new category of 3D effects. There are 10 of these depth perception effects available at launch — disco lights that bend as they hit your body, big red hearts that pop as you move through them, and so on.

Unfortunately, though, you can’t see those effects while you’re shooting video. The actual process goes like this:

  • Shoot a video.
  • Open Snapchat.
  • Import the snap from your Spectacles into Snapchat, where it’s stored in Memories.
  • Choose the snap from Memories.
  • Tap “edit snap.”
  • Wait for the snap to be sent to the cloud for image processing, and then re-downloaded to your phone.
  • Begin swiping to apply 3D filters to your snap.

In practice, this may only take about a minute. But I found that image processing could take much longer when I was away from Wi-Fi, as I suspect many Spectacles 3 users might be when playing around with their new glasses. Delays like this can discourage the kind of artistic experimentation that Snap has put at the center of its marketing campaign for Spectacles 3.

Moreover, I found the initial set of depth-sensing filters mostly underwhelming. Some applied color effects to my videos in a way that made the video look grainy and unattractive. Others aren’t particularly differentiated from regular old filters — it turns out that confetti with depth perception looks a lot like confetti without depth perception.

I also found some annoying bugs. Sometimes, after sending the snap to the cloud and back for image processing, two of the included filters simply didn’t work. I swiped over to the filter, and it didn’t apply any effect to my snaps at all.

One last frustration with Spectacles’ integration with Snapchat: snaps taken with Spectacles still don’t transfer automatically to your Snapchat account. Instead, you connect to your phone over Bluetooth or Wi-Fi and transfer them manually. In my experience, this has made me reach for Spectacles less and less over time. (If you’re at home, on Wi-Fi, and have your Spectacles charging, Spectacles can be set up to export snaps to Snapchat automatically, but there’s no way to do it while you’re wearing or using them.)

The Spectacles 3 package also comes with the 3D Viewer, a cardboard tool for viewing the 3D photos you take with the glasses. (It’s the same basic product as Google Cardboard, which Google just discontinued for lack of interest.) Assemble the Viewer, slip your phone into it, and Snapchat enters a special viewing mode designed for photos. I liked browsing 3D photos in the Viewer — you tap a conductive cardboard button to advance through them, and the photos rotate slightly as you move your head. To me the Viewer felt more like a novelty than a core part of the Spectacles product, but I can see how artists might find better uses for 3D photos.

Taken together, the advancements in Spectacles 3 represent a meaningful improvement over what came before — without quite making a complete case for themselves as an essential creative tool. There’s a good amount of novelty in the product, but I fear that, as with the previous two generations, that novelty will fade quickly.

And that matters, since the latest generation of Spectacles is more than twice as expensive as the previous one. Snap’s best hope here is that its community of AR developers, who have proven themselves quite adept at building compelling filters and lenses, make better use of Spectacles’ new second camera than the first batch of filters do.

And Spiegel is dreaming much bigger than that. I asked whether it might someday be possible to send messages from Spectacles to Spectacles, making the product feel as immediate as Snapchat itself. He told me that it was already in testing.

“This is something that we’re actively experimenting with and playing with,” Spiegel said. “And I think it’s really fun to — in near-real time — see the world through someone else’s perspective, in 3D.”

Of course, Snap is far from alone in working on AR glasses. Apple, Facebook, and Microsoft are among the companies with versions in the works. Of those, though, Snap is the only company currently selling to consumers. (Microsoft’s HoloLens, at $3,500, isn’t really in the same conversation.)

That means its failures get more attention. I asked Spiegel what Snap got in exchange for all the pressure of building in public. He said that getting direct feedback from customers helped Snap iterate faster on its designs.

“If you compare version one of spectacles to version three, it’s like night and day in terms of the quality of the product,” Spiegel said. “And so to see that evolution in such a short period of time tells me that if we just keep at this, 10 years from now, I think we’re going to be able to deliver ultra-precise, very high-quality products. And that that’s something that we’re just gonna have to learn, and it’s expensive, and it takes time. But I think in the long run, it’ll pay off.”

Vox Media has affiliate partnerships. These do not influence editorial content, though Vox Media may earn commissions for products purchased via affiliate links. For more information, see our ethics policy.

The Trauma Floor

Content warning: This story contains discussion of serious mental health issues and racism.

The panic attacks started after Chloe watched a man die.

She spent the past three and a half weeks in training, trying to harden herself against the daily onslaught of disturbing posts: the hate speech, the violent attacks, the graphic pornography. In a few more days, she will become a full-time Facebook content moderator, or what the company she works for, a professional services vendor named Cognizant, opaquely calls a “process executive.”

For this portion of her education, Chloe will have to moderate a Facebook post in front of her fellow trainees. When it’s her turn, she walks to the front of the room, where a monitor displays a video that has been posted to the world’s largest social network. None of the trainees have seen it before, Chloe included. She presses play.

The video depicts a man being murdered. Someone is stabbing him, dozens of times, while he screams and begs for his life. Chloe’s job is to tell the room whether this post should be removed. She knows that section 13 of the Facebook community standards prohibits videos that depict the murder of one or more people. When Chloe explains this to the class, she hears her voice shaking.

Returning to her seat, Chloe feels an overpowering urge to sob. Another trainee has gone up to review the next post, but Chloe cannot concentrate. She leaves the room, and begins to cry so hard that she has trouble breathing.

No one tries to comfort her. This is the job she was hired to do. And for the 1,000 people like Chloe moderating content for Facebook at the Phoenix site, and for 15,000 content reviewers around the world, today is just another day at the office.

Over the past three months, I interviewed a dozen current and former employees of Cognizant in Phoenix. All had signed non-disclosure agreements with Cognizant in which they pledged not to discuss their work for Facebook — or even acknowledge that Facebook is Cognizant’s client. The shroud of secrecy is meant to protect employees from users who may be angry about a content moderation decision and seek to resolve it with a known Facebook contractor. The NDAs are also meant to prevent contractors from sharing Facebook users’ personal information with the outside world, at a time of intense scrutiny over data privacy issues.

But the secrecy also insulates Cognizant and Facebook from criticism about their working conditions, moderators told me. They are pressured not to discuss the emotional toll that their job takes on them, even with loved ones, leading to increased feelings of isolation and anxiety. To protect them from potential retaliation, both from their employers and from Facebook users, I agreed to use pseudonyms for everyone named in this story except Cognizant’s vice president of operations for business process services, Bob Duncan, and Facebook’s director of global partner vendor management, Mark Davidson.

Collectively, the employees described a workplace that is perpetually teetering on the brink of chaos. It is an environment where workers cope by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions. It’s a place where employees can be fired for making just a few errors a week — and where those who remain live in fear of the former colleagues who return seeking vengeance.

It’s a place where, in stark contrast to the perks lavished on Facebook employees, team leaders micromanage content moderators’ every bathroom and prayer break; where employees, desperate for a dopamine rush amid the misery, have been found having sex inside stairwells and a room reserved for lactating mothers; where people develop severe anxiety while still in training, and continue to struggle with trauma symptoms long after they leave; and where the counseling that Cognizant offers them ends the moment they quit — or are simply let go.

The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”

Chloe cries for a while in the break room, and then in the bathroom, but begins to worry that she is missing too much training. She had been frantic for a job when she applied, as a recent college graduate with no other immediate prospects. When she becomes a full-time moderator, Chloe will make $15 an hour — $4 more than the minimum wage in Arizona, where she lives, and better than she can expect from most retail jobs.

The tears eventually stop coming, and her breathing returns to normal. When she goes back to the training room, one of her peers is discussing another violent video. She sees that a drone is shooting people from the air. Chloe watches the bodies go limp as they die.

She leaves the room again.

Eventually a supervisor finds her in the bathroom and offers a weak hug. Cognizant makes a counselor available to employees, but only for part of the day, and he has yet to arrive at work. Chloe waits for him for the better part of an hour.

When the counselor sees her, he explains that she has had a panic attack. He tells her that, when she graduates, she will have more control over the Facebook videos than she had in the training room. You will be able to pause the video, he tells her, or watch it without audio. Focus on your breathing, he says. Make sure you don’t get too caught up in what you’re watching.

“He said not to worry — that I could probably still do the job,” Chloe says. Then she catches herself: “His concern was: don’t worry, you can do the job.”

On May 3, 2017, Mark Zuckerberg announced the expansion of Facebook’s “community operations” team. The new employees, who would be added to 4,500 existing moderators, would be responsible for reviewing every piece of content reported for violating the company’s community standards. By the end of 2018, in response to criticism of the prevalence of violent and exploitative content on the social network, Facebook had more than 30,000 employees working on safety and security — about half of whom were content moderators.

The moderators include some full-time employees, but Facebook relies heavily on contract labor to do the job. Ellen Silver, Facebook’s vice president of operations, said in a blog post last year that the use of contract labor allowed Facebook to “scale globally” — to have content moderators working around the clock, evaluating posts in more than 50 languages, at more than 20 sites around the world.

The use of contract labor also has a practical benefit for Facebook: it is radically cheaper. The median Facebook employee earns $240,000 annually in salary, bonuses, and stock options. A content moderator working for Cognizant in Arizona, on the other hand, will earn just $28,800 per year. The arrangement helps Facebook maintain a high profit margin. In its most recent quarter, the company earned $6.9 billion in profits, on $16.9 billion in revenue. And while Zuckerberg had warned investors that Facebook’s investment in security would reduce the company’s profitability, profits were up 61 percent over the previous year.

Since 2014, when Adrian Chen detailed the harsh working conditions for content moderators at social networks for Wired, Facebook has been sensitive to the criticism that it is traumatizing some of its lowest-paid workers. In her blog post, Silver said that Facebook assesses potential moderators’ “ability to deal with violent imagery,” screening them for their coping skills.

Bob Duncan, who oversees Cognizant’s content moderation operations in North America, says recruiters carefully explain the graphic nature of the job to applicants. “We share examples of the kinds of things you can see … so that they have an understanding,” he says. “The intention of all that is to ensure people understand it. And if they don’t feel that work is potentially suited for them based on their situation, they can make those decisions as appropriate.”

Until recently, most Facebook content moderation was done outside the United States. But as Facebook’s demand for labor has grown, it has expanded its domestic operations to include sites in California, Arizona, Texas, and Florida.

The United States is the company’s home and one of the countries in which it is most popular, says Facebook’s Davidson. American moderators are more likely to have the cultural context needed to evaluate U.S. content involving bullying and hate speech, which often turn on country-specific slang, he says.

Facebook also worked to build what Davidson calls “state-of-the-art facilities, so they replicated a Facebook office and had that Facebook look and feel to them. That was important because there’s also a perception out there in the market sometimes … that our people sit in very dark, dingy basements, lit only by a green screen. That’s really not the case.”

It is true that Cognizant’s Phoenix location is neither dark nor dingy. And to the extent that it offers employees desks with computers on them, it may faintly resemble other Facebook offices. But while employees at Facebook’s Menlo Park headquarters work in an airy, sunlit complex designed by Frank Gehry, its contractors in Arizona labor in an often cramped space where long lines for the few available bathroom stalls can take up most of employees’ limited break time. And while Facebook employees enjoy a wide degree of freedom in how they manage their days, Cognizant workers’ time is managed down to the second.

A content moderator named Miguel arrives for the day shift just before it begins, at 7 a.m. He’s one of about 300 workers who will eventually filter into the workplace, which occupies two floors in a Phoenix office park.

Security personnel keep watch over the entrance, on the lookout for disgruntled ex-employees and Facebook users who might confront moderators over removed posts. Miguel badges in to the office and heads to the lockers. There are barely enough lockers to go around, so some employees have taken to keeping items in them overnight to ensure they will have one the next day.

The lockers occupy a narrow hallway that, during breaks, becomes choked with people. To protect the privacy of the Facebook users whose posts they review, workers are required to store their phones in lockers while they work.

Writing utensils and paper are also not allowed, in case Miguel might be tempted to write down a Facebook user’s personal information. This policy extends to small paper scraps, such as gum wrappers. Other small items, like hand lotion, must be placed in clear plastic bags so they are always visible to managers.

To accommodate four daily shifts — and high employee turnover — most people will not be assigned a permanent desk on what Cognizant calls “the production floor.” Instead, Miguel finds an open workstation and logs in to a piece of software known as the Single Review Tool, or SRT. When he is ready to work, he clicks a button labeled “resume reviewing,” and dives into the queue of posts.

Last April, a year after the Guardian published a trove of leaked internal moderation documents, Facebook made public the community standards by which it attempts to govern its 2.3 billion monthly users. In the months afterward, Motherboard and Radiolab published detailed investigations into the challenges of moderating such a vast amount of speech.

Those challenges include the sheer volume of posts; the need to train a global army of low-paid workers to consistently apply a single set of rules; near-daily changes and clarifications to those rules; a lack of cultural or political context on the part of the moderators; missing context in posts that makes their meaning ambiguous; and frequent disagreements among moderators about whether the rules should apply in individual cases.

Despite the high degree of difficulty in applying such a policy, Facebook has instructed Cognizant and its other contractors to emphasize a metric called “accuracy” over all else. Accuracy, in this case, means that when Facebook audits a subset of contractors’ decisions, its full-time employees agree with the contractors. The company has set an accuracy target of 95 percent, a number that always seems just out of reach. Cognizant has never hit the target for a sustained period of time — it usually floats in the high 80s or low 90s, and was hovering around 92 percent at press time.

Miguel diligently applies the policy — even though, he tells me, it often makes no sense to him.

A post calling someone “my favorite n—–” is allowed to stay up, because under the policy it is considered “explicitly positive content.”

“Autistic people should be sterilized” seems offensive to him, but it stays up as well. Autism is not a “protected characteristic” the way race and gender are, and so it doesn’t violate the policy. (“Men should be sterilized” would be taken down.)

In January, Facebook distributes a policy update stating that moderators should take into account recent romantic upheaval when evaluating posts that express hatred toward a gender. “I hate all men” has always violated the policy. But “I just broke up with my boyfriend, and I hate all men” no longer does.

Miguel works the posts in his queue. They arrive in no particular order.

Here is a racist joke. Here is a man having sex with a farm animal. Here is a graphic video of murder recorded by a drug cartel. Some of the posts Miguel reviews are on Facebook, where he says bullying and hate speech are more common; others are on Instagram, where users can post under pseudonyms, and tend to share more violence, nudity, and sexual activity.

Each post presents Miguel with two separate but related tests. First, he must determine whether a post violates the community standards. Then, he must select the correct reason why it violates the standards. If he accurately recognizes that a post should be removed, but selects the “wrong” reason, this will count against his accuracy score.

Miguel is very good at his job. He will take the correct action on each of these posts, striving to purge Facebook of its worst content while protecting the maximum amount of legitimate (if uncomfortable) speech. He will spend less than 30 seconds on each item, and he will do this up to 400 times a day.

When Miguel has a question, he raises his hand, and a “subject matter expert” (SME) — a contractor expected to have more comprehensive knowledge of Facebook’s policies, who makes $1 more per hour than Miguel does — will walk over and assist him. This will cost Miguel time, though, and while he does not have a quota of posts to review, managers monitor his productivity, and ask him to explain himself when the number slips into the 200s.

From Miguel’s 1,500 or so weekly decisions, Facebook will randomly select 50 or 60 to audit. These posts will be reviewed by a second Cognizant employee — a quality assurance worker, known internally as a QA, who also makes $1 per hour more than Miguel. Full-time Facebook employees then audit a subset of QA decisions, and from these collective deliberations, an accuracy score is generated.

Miguel takes a dim view of the accuracy figure.

“Accuracy is only judged by agreement. If me and the auditor both allow the obvious sale of heroin, Cognizant was ‘correct,’ because we both agreed,” he says. “This number is fake.”

Facebook’s single-minded focus on accuracy developed after the company sustained years of criticism over its handling of moderation issues. With billions of new posts arriving each day, Facebook feels pressure on all sides. In some cases, the company has been criticized for not doing enough — as when United Nations investigators found that it had been complicit in spreading hate speech during the genocide of the Rohingya community in Myanmar. In others, it has been criticized for overreach — as when a moderator removed a post that excerpted the Declaration of Independence. (Thomas Jefferson was ultimately granted a posthumous exemption to Facebook’s speech guidelines, which prohibit the use of the phrase “Indian savages.”)

One reason moderators struggle to hit their accuracy target is that for any given policy enforcement decision, they have several sources of truth to consider.

The canonical source for enforcement is Facebook’s community guidelines, which consist of two sets of documents: the publicly posted community standards, and the longer internal guidelines, which offer more granular detail on complex issues. These documents are further augmented by a 15,000-word secondary document, called “Known Questions,” which offers additional commentary and guidance on thorny questions of moderation — a kind of Talmud to the community guidelines’ Torah. Known Questions used to occupy a single lengthy document that moderators had to cross-reference daily; last year it was incorporated into the internal community guidelines for easier searching.

A third major source of truth is the discussions moderators have among themselves. During breaking news events, such as a mass shooting, moderators will try to reach a consensus on whether a graphic image meets the criteria to be deleted or marked as disturbing. But sometimes they reach the wrong consensus, moderators said, and managers have to walk the floor explaining the correct decision.

The fourth source is perhaps the most problematic: Facebook’s own internal tools for distributing information. While official policy changes typically arrive every other Wednesday, incremental guidance about developing issues is distributed on a near-daily basis. Often, this guidance is posted to Workplace, the enterprise version of Facebook that the company introduced in 2016. Like Facebook itself, Workplace has an algorithmic News Feed that displays posts based on engagement. During a breaking news event, such as a mass shooting, managers will often post conflicting information about how to moderate individual pieces of content, which then appear out of chronological order on Workplace. Six current and former employees told me that they had made moderation mistakes based on seeing an outdated post at the top of their feed. At times, it feels as if Facebook’s own product is working against them. The irony is not lost on the moderators.

“It happened all the time,” says Diana, a former moderator. “It was horrible — one of the worst things I had to personally deal with, to do my job properly.” During times of national tragedy, such as the 2017 Las Vegas shooting, managers would tell moderators to remove a video — and then, in a separate post a few hours later, to leave it up. The moderators would make a decision based on whichever post Workplace served up.

“It was such a big mess,” Diana says. “We’re supposed to be up to par with our decision making, and it was messing up our numbers.”

Workplace posts about policy changes are supplemented by occasional slide decks that are shared with Cognizant workers about special topics in moderation — often tied to grim anniversaries, such as the Parkland shooting. But these presentations and other supplementary materials often contain embarrassing errors, moderators told me. Over the past year, communications from Facebook incorrectly identified certain U.S. representatives as senators; misstated the date of an election; and gave the wrong name for the high school at which the Parkland shooting took place. (It is Marjory Stoneman Douglas High School, not “Stoneham Douglas High School.”)

Even with an ever-changing rulebook, moderators are granted only the slimmest margins of error. The job resembles a high-stakes video game in which you start out with 100 points — a perfect accuracy score — and then scratch and claw to keep as many of those points as you can. Because once you fall below 95, your job is at risk. If Facebook audits 60 of a moderator’s decisions in a given week, getting more than three of them wrong is enough to fall below that threshold.

If a quality assurance worker marks Miguel’s decision wrong, he can appeal the decision. Getting the QA to agree with you is known as “getting the point back.” In the short term, an “error” is whatever a QA says it is, and so moderators have good reason to appeal every time they are marked wrong. (Recently, Cognizant made it even harder to get a point back, by requiring moderators to first get an SME to approve their appeal before it would be forwarded to the QA.)

Sometimes, questions about confusing subjects are escalated to Facebook. But every moderator I asked about this said that Cognizant managers discourage employees from raising issues to the client, apparently out of fear that too many questions would annoy Facebook.

This has resulted in Cognizant inventing policy on the fly. When the community standards did not explicitly prohibit erotic asphyxiation, three former moderators told me, a team leader declared that images depicting choking would be permitted unless the fingers depressed the skin of the person being choked.

Before workers are fired, they are offered coaching and placed into a remedial program designed to make sure they master the policy. But often this serves as a pretext for managing workers out of the job, three former moderators told me. Other times, contractors who have missed too many points will escalate their appeals to Facebook for a final decision. But the company does not always respond.

The best to-do list app right now

The best to-do list app will always be whatever works for you. One reason for the enduring popularity of pen-and-paper-based methods is that they can map perfectly to your individual needs. Bullet journals, which have surged in popularity in recent years, encourage you to pepper them with your own idiosyncrasies: widgets to track various goals, say, or lists of books to read, nestled alongside your daily chores. You impose your own point of view on a paper to-do list, for better and for worse.

Software, on the other hand, imposes its viewpoint on you. It asks you to bend your way of working to the only one it knows, in ways that can be suffocating. So, for you to trust your personal productivity with software, it has to go far beyond what pen and paper can do. Properly used, it should feel like a superpower in your pocket. You should get more things done, more easily than you would without it. Otherwise, what’s the point?

Many to-do list apps are free — or built into your phone — and there’s no harm in trying out a handful. Given the large overlap in features between these apps, you’re likely to make your decision in large part on how you feel about their designs. But I encourage you to resist the trap I have fallen into consistently now for a decade: assuming that what I need at any given moment is a new to-do app, rather than the willpower necessary to get things done.

Once you’re ready to be more productive in earnest, one app stands above the rest.

Todoist for Android on a Galaxy S8 Plus. Photo by Dan Seifert / The Verge

The best to-do list app right now: Todoist

Todoist, which is available on virtually any platform you can think of, is clean, fast, and easy to use. Its natural language processing makes entering new tasks lightning-fast. Power users will appreciate advanced features including custom labels and filters, location-based reminders, and templates for recurring projects. You can also use it to collaborate with co-workers. But even if your needs are less robust, you’ll likely still appreciate Todoist for its straightforward approach to getting things done.

Todoist was the runner-up the last time we surveyed to-do apps, in September 2014. In the time since, the app on Android and iOS has received a simple but attractive redesign. It organizes your tasks into three useful tabs: Inbox, for stuff you haven’t yet processed; Today, for things due today; and Next 7 Days, for the week ahead. Most weeks, that’s all I need to stay on top of my tasks. I’ll tap out something like “finish review for Dan Tuesday,” and Todoist will create a task labeled “finish review for Dan,” schedule it for Tuesday, and remind me before my deadline. It takes all of one second, and the reminders are way more effective for me than relying on pen and paper.

But to-do apps can be places to dream big, too. That’s why I appreciate Todoist’s simple but effective project view, for organizing anything that involves multiple tasks. When I’m planning something more complicated, I’ll pull up Todoist’s app for Mac — there’s one for Windows, too — and think through my project on the larger screen. It’s also a good place to add comments or file attachments to individual tasks, or to set custom reminders for each step.

Other features added in the past few years should have wide appeal. If you have an Echo device in your home, you can now add tasks with your voice via Alexa. Or you can add tasks from Slack. This year, Todoist also introduced a powerful integration with Google Calendar, allowing you to sync tasks to your calendar and back in real time. And if you get overwhelmed, a feature named Smart Schedule will offer to find time on your calendar for you to complete overdue tasks.

Todoist’s basic plan is free, and it gets you access to apps for every major platform, where you can add up to 80 active projects. A $29 annual fee gets you a lot more: up to 200 active projects, task comments, reminders, and project templates, to name a few. But you may find that the free tier is good enough.

Todoist won’t actually do any of your tasks for you. But in my experience, it will make it easier to get started — and follow through on the most important stuff on the list.

The competition

There are a lot of other to-do list apps to choose from, and depending on your needs, they may suit you better than Todoist does. Here are a few of the more popular ones we’ve tested: