How to FBI-proof your encrypted iPhone backups

If you’re an iPhone user who is steadfast about retaining your privacy, you’re probably not very happy about the recent news that Apple is retaining the ability to decrypt most of what’s in an iCloud backup at the request of government entities, such as the FBI.

In that case, you may want to pay attention to the adage that sometimes the best ways are the old ways. While it’s more convenient to use iCloud to back up your phone, you can back up your iPhone to your Mac or Windows computer and retain full control of your data backups.

If you’ve always backed up via iCloud, or if you haven’t done a local backup in a while, you could probably use a refresher course. Here’s how you do it.

  • Connect your phone to your computer using its charging cable
  • Fire up iTunes if you’re using Windows or a Mac running macOS 10.14 or earlier; use Finder on a Mac running macOS 10.15 Catalina

If you’re using Finder, open a Finder window (by either clicking on the Finder icon in your dock or selecting “File” > “New Finder Window” in the top Finder menu bar), and look for your iPhone in the left-hand menu under “Locations.”

Using Finder to change your backup from iCloud to local.

  • In iTunes, you should see a small iPhone icon in the upper left corner; select that. (If you don’t see it, you may need to authorize your system. Go to the top iTunes menu and select “Account” > “Authorizations” > “Authorize This Computer…” and follow the instructions.)

Look for the phone icon in iTunes.

  • After that, the process for either iTunes or Finder is much the same, although the look of the pages and the language will be slightly different. Look for the category labeled “Backups.” Select “This computer” (in iTunes) or “Back up all of the data on your iPhone to your Mac” (in Finder).
  • You’re going to want to encrypt your backup for increased security. Check “Encrypt iPhone backup” (in iTunes) or “Encrypt local backup” (in Finder) and enter a password. Don’t lose that password; otherwise, you’re going to lose access to your data.
  • Once you set the encryption, it’s probable that the backup will start automatically. Otherwise, click on “Back Up Now.”
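That warning about the password is worth underlining: the backup’s encryption key is derived locally from the password you choose, and there’s no escrow copy to fall back on. Here’s a minimal Python sketch of password-based key derivation; the salt and iteration count are illustrative, not Apple’s actual backup parameters.

```python
import hashlib
import os

# Derive an encryption key from a password with PBKDF2.
# Salt and iteration count are illustrative, not Apple's actual values.
def derive_backup_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)
key = derive_backup_key("correct horse battery staple", salt)

# Even a one-character slip yields an unrelated key, which is why a
# forgotten password means the backup is unrecoverable.
assert derive_backup_key("correct horse battery stapLe", salt) != key
print(len(key))  # 32
```

The high iteration count is the point: it makes each password guess expensive, so short of the right password there is no practical way back to the data.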

Using iTunes to change your backup from iCloud to this computer.

If you need to restore your backup, just go to the same page and click on “Restore Backup…”

Note that you can back up your iPhone manually, or you can have it back up automatically each time you connect it to your computer. Look for “Options” just below the “Backups” section, and select “Automatically sync when this iPhone is connected.”

Once you’ve set up your backup to your computer, you probably want to delete any backups you’ve made to iCloud.

To do this on your Mac:

  • Click on the Apple icon in the top corner of your system
  • Select “System Preferences” > “iCloud”
  • Select the “Manage” button in the lower-right corner of the window
  • Select a backup to delete, and select “Delete.” You’ll be asked to select “Delete” again; this will both delete all your backups from iCloud and turn off any further backups.

To do this on your iPhone:

  • Go to “Settings” and tap your name
  • Select “iCloud” > “Manage Storage” > “Backups”
  • Tap on a backup and then on “Delete Backup”
  • Tap on “Turn Off & Delete”

One more thing: dealing with iMessage without saving your data to iCloud can get a bit complicated, partly because iMessage uses end-to-end encryption (which means that it needs a key at either end) and partly because iMessage can also use Messages for iCloud, the feature that allows for syncing iMessage between multiple Mac or iOS devices with the same account. We consulted with Apple, and this is basically how it works:

  • If you have iCloud Backup turned on, then your backup includes a copy of the key that protects your messages. This is the most convenient setup. But in this article, we’re assuming that you want to turn iCloud Backup off.
  • If you have iCloud Backup turned off but Messages for iCloud turned on (which you can do on your iPhone by going to Settings, tapping on your name, and selecting “iCloud” > “iMessage”), your messages will be shared among all your devices, but your encryption key will remain local to those devices. According to Apple, that encryption key will not be saved to the company’s servers.
  • If you have both iCloud Backup and Messages for iCloud turned off, then your only backup options will be local.
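The three scenarios above boil down to a small decision table. Restated as a sketch (the return strings are my summary of the behavior Apple describes, not anything from an Apple API):

```python
# Where does the iMessage encryption key live, given the two iCloud
# toggles described above? A plain restatement of the three scenarios.
def imessage_key_location(icloud_backup_on: bool, messages_in_icloud_on: bool) -> str:
    if icloud_backup_on:
        # Most convenient: the backup includes a copy of the key.
        return "key included in iCloud backup (Apple can access the backup)"
    if messages_in_icloud_on:
        # Messages sync across devices, but the key never leaves them.
        return "messages sync via iCloud, but the key stays on your devices"
    # Neither toggle: local computer backups are the only option.
    return "everything stays local; back up to your own computer"

print(imessage_key_location(False, True))
```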

Update January 24th, 2020, 10:37AM ET: This article has been updated to clarify how to open a Finder window.

Vox Media has affiliate partnerships. These do not influence editorial content, though Vox Media may earn commissions for products purchased via affiliate links. For more information, see our ethics policy.

Can Apple live up to Apple’s privacy ads?

The thing that convulsed the internet for much of yesterday was this Reuters report that Apple decided against throwing away its keys to users’ encrypted iCloud backups after the FBI complained about encryption.

The word “after” does a lot of work in that formulation — it reads as though it’s meant to be about cause but might simply be about chronology. Reuters itself didn’t come out and say that Apple chose to retain the ability to unlock your iCloud backups because it was worried about the FBI freaking out if it locked them down, but it didn’t rule that reading out, either. One source told the outlet that “Apple didn’t want to poke the bear,” the bear being the FBI.

The news isn’t that the iCloud loophole exists — we’ve always known that. If Reuters’ reporting is correct (and I have no reason to doubt it), the news is Apple’s rare about-face on its march to protect your data.

It’s caused a stir because the larger context is that the US Attorney General is accusing Apple of refusing to help with FBI investigations, a claim Apple strenuously denies. But inside that denial is also the awkward fact that Apple has access to that data in the first place via the iCloud loophole.

Apple set itself up as the paragon of privacy over the past year. I’d argue that Apple’s own rhetoric around privacy and security meant that anything less than perfectly private and secure data would be seen as a failure. And friends: there is no such thing as perfectly private and secure data.

To be clear, Apple really is doing a lot to try to limit the collection and spread of your data — that’s one of the core issues in the big Browser War I wrote about last week. It also has been way out ahead of the rest of big tech when it comes to on-device encryption. Other big tech companies should be doing more to follow Apple’s example when it comes to device encryption and tracking. Credit where due.

Speaking of credit where due — and I’m embarrassed to say I forgot about this until John Gruber mentioned it — Google offers full backup encryption that it can’t access on its servers for newer Android phones. (If only it would offer a more secure default messaging experience!)

Anyway, this whole story was all anybody in tech was talking about yesterday (until the Bezos phone hack story hit. Like I said, there’s a lot going on!). My favorite tweet on the whole fight comes from Joe Cieplinski, who puts the whole debate into exactly the right context:

I love that now the non-tech world thinks Apple is aiding terrorists, and the tech world is simultaneously thinking Apple is selling us out to the FBI. … Gotta love the complete absence of reason in our discourse these days.

I don’t know if there is a complete absence of reason, but the truth is that data privacy and encryption is Really Actually Quite Complicated. As much as we’d like it to be a simple binary choice between secure and not, the truth is that security is a spectrum. You make a trade-off every time you choose a password you have a ghost of a chance of remembering. Apple makes a trade-off when it chooses to keep the decryption key for iCloud backups.

The last time Tim Cook spoke directly to this issue that I’m aware of, he said Apple kept the keys for users who forget their passwords. That’s a legitimate use case, and whether you believe that to be the main reason or not is between you and your general level of trust in Apple and in big tech generally.

This debate has been a long time coming, by the way. It was already one of those things that tech people sort of knew but didn’t think much about when Walt Mossberg wrote about the “iCloud loophole” in 2016 in his column on The Verge. It was a vaguely troubling thing back in 2016. Now in 2020, it’s a much bigger story because Apple itself made it the story of the iPhone all of last year.

When you put up a giant billboard at the biggest consumer electronics show in America touting that “What happens on your iPhone stays on your iPhone,” as Apple did at CES in 2019, people tend to want to see you live up to it. When you follow it up with a “Privacy matters” ad in May, people expect you to live up to it. The heat on this topic is high in large part because Apple’s own rhetoric has been so vociferous.

This might sound like I’m railing against Apple for hypocrisy. I am not — yet. As I mentioned, data security is a spectrum and it’s difficult to understand how everything works in the first place. If I’m unhappy with Apple for anything, it’s for talking about data security and privacy in such absolutist terms.

And I get the impetus! Putting up a billboard that reads “Every security and privacy decision involves trade-offs and we are making the best choices we can in that regard without locking your phone down so much you can barely use it” isn’t going to sell a lot of phones. That’s not how marketing works.

What’s next? I expect a lot of hunkering down from Apple (it hasn’t responded to our request for comment, for example). I don’t know how long it can simply stay silent, however. The FBI and the Attorney General are definitely going to keep pushing. I doubt Apple’s big tech competitors will make hay about it in the way Apple itself has, but that doesn’t mean Apple’s users won’t demand better.

Apple’s choices for iCloud backups involve trade-offs that reasonable people can argue about. I don’t know that I agree with them (in fact I don’t think I do), but it would be nice to have an open, nuanced discussion about them. The problem is that, as Cieplinski tweeted, nuance and reason are in pretty short supply when it comes to discussions about encryption.

More stories from The Verge

Exclusive look at Cruise’s first driverless car without a steering wheel or pedals

Andrew Hawkins with the inside (pardon the pun) look at GM’s entry into the self-driving car discussion. Don’t miss the video, especially.

Inside are two bench seats facing each other, a pair of screens on either end… and nothing else. The absence of all the stuff you expect to see when climbing into a vehicle is jarring. No steering wheel, no pedals, no gear shift, no cockpit to speak of, no obvious way for a human to take control should anything go wrong. There’s a new car smell, but it’s not unpleasant. It’s almost like cucumber-infused water.

Microsoft’s CEO looks to a future beyond Windows, iOS, and Android

Tom Warren has a great write-up of what a bunch of reporters learned about Microsoft’s strategy at a small summit in New York last week. This quote from CEO Satya Nadella is really something:

“Windows with its billion is good, Android with its 2 billion is good, iOS with its billion is good — but there is 46 billion more. So let’s go and look at what that 46 billion plus 4 [billion] looks like, and define a strategy for that, and then have everything have a place under the sun.”

Saudi Arabian prince reportedly hacked Jeff Bezos’ phone with malicious WhatsApp message

Google favors temporary facial recognition ban as Microsoft pushes back

James Vincent on the recent back and forth about facial recognition. Here’s an idea: what if we set tech policy through a democratic system involving our duly elected representatives instead of whatever these corporations think is best for their image? Weird, I know.

So far, the market is indeed dictating the rules, with big tech companies taking different stances on the issue. Microsoft sells facial recognition but has self-imposed limits, for example, like letting police use the technology in jails but not on the street, and not selling to immigration services. Amazon has eagerly pursued police partnerships, particularly through its Ring video doorbells, which critics say gives law enforcement access to a massive crowdsourced surveillance network.

SpaceX successfully tests escape system on new spacecraft — while destroying a rocket

Loren Grush has all the details, including this bit from Elon Musk, who knows how to give a good quote:

But Musk said the Crew Dragon could have survived if it had been right on top of the fireball. “Since the spacecraft has a very powerful base heat shield, it should not really be significantly affected by the fireball,” Musk said. “It could quite literally look like something out of Star Wars, where it flies right out of the fireball.” Musk also noted that the Crew Dragon could do an escape like that at any point during the climb to space, right up until it’s deployed into orbit.

Meet the 26-year-old socialist trucker running for Congress on TikTok

Great profile from Makena Kelly! Mostly, though, I’m going to be laughing for days over the phrase “yeet the rich.”

Sonos and Tile execs warn Congress that Amazon, Google, and Apple are killing competition

Adi Robertson’s story on one of the most engaging hearings I’ve seen in quite some time. I know it’s nothing like impeachment or having big tech CEOs testify, but if you care at all about the consumer tech ecosystem, you should pay attention. At the very least I recommend watching some of the surprisingly forthright opening statements from all of these companies — each of them could be snuffed out in an instant by Amazon or Google or Apple and those giant companies might not even notice they did it. The testimony starts at around the 45 minute mark here.

Things that are not modular and things that are

Sonos will stop providing software updates for its oldest products in May

Speaking of Sonos, this is a tough but probably necessary call. The fact that its system won’t update beyond what the oldest speaker in your network can handle is a bummer. Maybe in the future that won’t be a limitation. I cracked a joke on Twitter about how this is an example of why the lack of modular gadgetry is short-sighted. But it’s not really a joke. All the attempts to do it on phones have basically flopped — some so much so (Ara!) that it has poisoned consumers on the whole idea, which is a shame.

Riding 27 mph downhill on a Dot electric skateboard

Super fun video with Becca Farsace. This board seems genuinely cool and modular (at least something can be in 2020!). Mixing and matching parts to get the thing to meet your needs is great. …The fact that it can’t brake when the battery is full is not so great.

Skip pulls back the curtain on the high costs of electric scooter maintenance

Modularity! Turns out re-purposing a bunch of scooters originally designed for light, personal use into heavy-duty rideshare vehicles was a bad idea. Good on Skip for the transparency here, and hopefully we’ll see these things get more durable over time.

“It’s still early, and we can’t yet extrapolate the long term impact of 4,786 spare parts per 1M trips. Some parts will require replacement due to wear and tear as the fleet ages,” the company says. “But thus far, all parts failures have been caused by vandalism or as the result of premature material failures.”

Google now treats iPhones as physical security keys

The latest update to Google’s Smart Lock app on iOS means you can now use your iPhone as a physical 2FA security key for logging into Google’s first-party services in Chrome. Once it’s set up, attempting to log in to a Google service on, say, a laptop, will generate a push notification on your nearby iPhone. You’ll then need to unlock your Bluetooth-enabled iPhone and tap a button in Google’s app to authenticate before the login process on your laptop completes. The news was first reported by 9to5Google.

Two-factor authentication is one of the most important steps you can take to secure your online accounts, and provides an additional layer of security beyond a standard username and password. Physical security keys are much more secure than the six-digit codes in common use today, since those codes can be intercepted almost as easily as passwords themselves. Google already lets you use your Android phone as a physical security key, and now that the functionality is available on iOS, anyone with a smartphone owns a security key without having to buy a dedicated device.
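Part of why those six-digit codes are weak is that they’re bearer secrets: a phishing page can simply ask the victim to retype one and relay it to the real site within its validity window. A minimal stdlib sketch of the standard TOTP construction (RFC 6238 style; the secret below is the RFC’s published test value, not a real credential):

```python
import base64
import hmac
import struct
import time

# Generate a six-digit TOTP code (RFC 6238 style) from a shared secret.
# Nothing binds the resulting code to the genuine website, which is why
# a security key, which does bind to the origin, is stronger.
def totp(secret_b32, for_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # -> 287082
```

Anyone holding the same secret (or anyone shown the code) can produce a valid login; a hardware or phone-based key never exposes a reusable value like this.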

Attempting to log in to a Google service will send a push notification to your phone over Bluetooth.
Screenshot by Thomas Ricker / The Verge

The new process is similar to the existing Google Prompt functionality, but the key difference is that Smart Lock app works over Bluetooth, rather than connecting via the internet. That means your phone will have to be in relatively close proximity to your laptop for the authentication to work, which provides another layer of security. However, the app itself doesn’t ask for any biometric authentication — if your phone is already unlocked then a nearby attacker could theoretically open the app and authenticate the login attempt.

According to one cryptogopher working at Google, the new functionality makes use of the iPhone processor’s Secure Enclave, which is used to securely store the device’s private keys. The feature was first introduced with the iPhone 5S, and Google’s app says that it requires iOS 10 or later to function.

The new iPhone support appears to be limited to authenticating Google logins from the Chrome browser. When we attempted to use an iPhone to authenticate a login to the same service (we tested with Gmail) using Safari on a MacBook, we were prompted to insert our key fob (which we don’t have), which added an extra step where we had to pick an alternative 2FA option.

The Smart Lock app’s new functionality means that iPhones can now be used with Google’s Advanced Protection Program, which is Google’s strongest protection against phishing or other attacks. Along with iPhones, the program also supports Android phones and physical security keys.

Update January 15th, 8:13AM ET: Updated with details of Google’s Advanced Protection Program.

Why the NYT thinks Russia hacked Burisma — and where the evidence is still shaky

The disastrous Democratic National Committee hack in 2016 was a wake-up call for anyone worried about international chaos campaigns, and on Monday night, we got a new reason to be worried about 2020. The New York Times and cybersecurity firm Area1 broke the story of a new hack by Russian intelligence services targeting Burisma, the Ukrainian natural gas company at the center of President Trump’s ongoing impeachment. For months, Republican operatives have been hinting at some horrible corruption inside the company, and if Russian spies really did hack the company, it raises frightening possibilities.

Some in Congress are already predicting a replay of 2016, with Rep. Adam Schiff (D-CA) commenting, “It certainly looks like they are at it again with an eye towards helping this president.” It’s an alarming thought, and given Trump’s refusal to acknowledge Russian hacking the last time around, there’s no indication the White House would do anything to stop it.

But while the report painted a terrifying picture, the evidence is less definitive than it might seem. There’s strong evidence that Burisma was successfully targeted by a phishing campaign, but it’s much harder to be sure who was behind the campaign. There are real suggestions that Russia’s GRU intelligence service could be involved, but the evidence is mostly circumstantial, as is often the case with hacking campaigns. The result leaves the case against Russia frustratingly incomplete and suggests we may head into the presidential campaign with more questions than answers.

The bulk of Area1’s evidence is laid out in an eight-page report released in conjunction with the Times article. The core evidence is a pattern of attacks that have previously targeted the Hudson Institute and George Soros, typically using the same domain registrars and ISPs. Most damning, all three phishing campaigns used the same SSL provider and versions of the same URL, masquerading as a service called “My Sharepoint.” As Area1 sees it, this is the GRU playbook, and Burisma is just the latest in a long line of targets. (Area1 did not respond to repeated requests for comment.)

A chart from the Area1 report.

But not everyone sees that domain-based attribution as a slam dunk. When Kyle Ehmke examined earlier iterations of the same pattern for ThreatConnect, he came away with a more measured conclusion, assessing with only “moderate confidence” that the domains were involved with APT28, researcher shorthand for Russia’s GRU.

“We see consistencies,” Ehmke told The Verge, “but in some cases those consistencies aren’t consistent to a single actor.” This pattern of registrations and phishing attacks really does seem to be a GRU playbook, but it’s not its only playbook, and it’s not the only one running it.

In practical terms, that means that network operators should raise the alarm any time they see an attack that fits this profile, but making a definitive ruling on a single incident is much harder. The web infrastructure used in the campaign is all publicly available and used by lots of other parties, too, so none of it counts as a smoking gun. The most distinctive characteristic is the term “sharepoint,” which researchers have only seen in URLs closely linked to the GRU. But anyone can register a URL with “sharepoint” in it, so the connection is only circumstantial.
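That kind of profile matching can be mechanized, which is roughly what network defenders do with it. Here’s a hypothetical, much-simplified heuristic; the keyword list and trusted suffixes are my illustrative assumptions, not Area1’s actual ruleset:

```python
from urllib.parse import urlparse

# Hypothetical lookalike-domain heuristic. Keyword list and trusted
# suffixes are illustrative assumptions, not Area1's real rules.
LOOKALIKE_KEYWORDS = ("sharepoint", "onedrive")
TRUSTED_SUFFIXES = ("sharepoint.com", "microsoft.com")

def looks_like_lookalike(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Hosts actually on the genuine service's domains are fine.
    if any(host == s or host.endswith("." + s) for s in TRUSTED_SUFFIXES):
        return False
    # Anything else borrowing the brand name in its hostname is suspect.
    return any(k in host for k in LOOKALIKE_KEYWORDS)

print(looks_like_lookalike("https://my-sharepoint.example.com/login"))  # True
print(looks_like_lookalike("https://contoso.sharepoint.com/sites/hr"))  # False
```

Which is exactly the attribution problem in miniature: the check flags infrastructure worth investigating, but anyone at all can register a domain that trips it.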

“It’s a notable set of consistencies to look for and potentially use to identify their infrastructure,” Ehmke said. “But that’s not to say that everything that has those consistencies has been and will be APT28.”

In the absence of specific information about a given outfit’s strategies and goals, it’s hard to make that attribution any stronger. But going the opposite direction — from a weak attribution to a presumption of intent — can be dangerous.

This kind of weak attribution is frustratingly common in the cybersecurity world, and it can cause real problems as countries struggle to figure out the international diplomacy of cyberwarfare. Farzaneh Badii, former executive director of Georgia Tech’s Internet Governance Project, classifies weak attribution as “circumstantial evidence that can be technically questioned.” She sees it as a global problem and has advocated for international attribution groups that could solve the deadlock, so observers wouldn’t have to rely on private companies or government intelligence agencies. Without that, the problem of trust can be difficult to solve.

“States mostly fund cyber attacks through individual contractors and do not carry them out themselves,” Badii says, making state actors and private criminals difficult to distinguish. If you’re worried about governments ginning up a case for war or private companies grasping for headlines, that problem only gets worse. “Attribution companies are not forthcoming and transparent about all of their methods for undertaking attribution so it is not easy to assess their attribution mechanism.”

If you’re concerned about Russian meddling in the 2020 election, none of this should be reassuring. The GRU really did hack the DNC in 2016, and there’s no reason to think it won’t try similar tricks again, whether or not it was behind this particular phishing campaign. There really is reason to think the GRU was involved. The lack of a smoking gun isn’t reassuring — if anything, it means whoever did this got away relatively clean. But if you just want to know whether Russia hacked Burisma, the real answer may be that we still don’t know.

Microsoft patches Windows 10 security flaw discovered by the NSA

Microsoft is patching a serious flaw in various versions of Windows today after the National Security Agency (NSA) discovered and reported a security vulnerability in Microsoft’s handling of certificate and cryptographic messaging functions in Windows. The flaw, which hasn’t been marked critical by Microsoft, could allow attackers to spoof the digital signature tied to pieces of software, allowing unsigned and malicious code to masquerade as legitimate software.

The bug is a problem for environments that rely on digital certificates to validate the software that machines run, a potentially far-reaching security issue if left unpatched. The NSA reported the flaw to Microsoft recently, and the agency recommends that enterprises patch immediately or prioritize systems that host critical infrastructure like domain controllers, VPN servers, or DNS servers. Security reporter Brian Krebs first revealed the extent of the flaw yesterday, warning of potential issues with authentication on Windows desktops and servers.

Microsoft is now patching Windows 10, Windows Server 2016, and Windows Server 2019. The software giant says it has not seen active exploitation of the flaw in the wild, and it has marked it as “important” and not the highest “critical” level that it uses for major security flaws. That’s not a reason to delay patching, though. Malicious actors will inevitably reverse-engineer the fix to discover the flaw and use it on unpatched systems.

The NSA warns of exactly that in its own advisory, and suggests that this is a major vulnerability despite Microsoft not marking it as critical. “The vulnerability places Windows endpoints at risk to a broad range of exploitation vectors,” says an NSA statement. “NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable.”
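The class of bug can be made concrete. Public write-ups of this flaw (tracked as CVE-2020-0601) describe Windows accepting ECC certificates that carried their own explicit curve parameters, including an attacker-chosen base point; the fix amounts to insisting that explicit parameters match the named standard curve in every field. A much-simplified Python illustration using the public P-256 constants (the validation logic is my sketch, not Windows’ actual CryptoAPI code):

```python
# The real NIST P-256 domain parameters (public constants).
P256 = {
    "p":  0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF,
    "a":  0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFC,
    "b":  0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B,
    "gx": 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296,
    "gy": 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5,
    "n":  0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551,
}

def curve_matches_named(params: dict, named: dict = P256) -> bool:
    # Every field must match; a parameter set that differs only in its
    # base point (the spoofing trick) is rejected here.
    return all(params.get(k) == v for k, v in named.items())

# Same curve equation, but with an attacker-chosen generator point.
spoofed = dict(P256, gx=0x1234, gy=0x5678)
print(curve_matches_named(P256), curve_matches_named(spoofed))  # True False
```

Skipping that last comparison is what let a forged signing key masquerade as one chained to a trusted root.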

It’s unusual to see the NSA reporting these types of vulnerabilities directly to Microsoft, but it’s not the first time the government agency has done so. This is the first time the NSA has accepted attribution from Microsoft for a vulnerability report, though. Krebs claims it’s part of a new initiative to make the agency’s research available to software vendors and the public.

A previous NSA exploit targeting Windows’ file-sharing protocol, dubbed EternalBlue, leaked two years ago and caused widespread damage. It led to WannaCry ransomware and other variants locking up computers from the UK’s National Health Service to the Russian Ministry of the Interior. Microsoft was forced to issue an emergency patch for Windows XP, even though the operating system had reached end of support.

Update, January 14th 2PM ET: Article updated with statement from the NSA.

Burisma targeted by Russia-linked phishing attack, raising election-meddling fears

A Silicon Valley-based security firm called Area1 says it has found indications that state-sponsored Russian hackers have successfully hacked the Ukrainian gas company Burisma, as first reported by The New York Times. The company has taken on a central role in US politics because of its connection to Democratic presidential front-runner Joe Biden, whose son Hunter sits on the company’s board.

In July, President Trump asked Ukraine’s government to investigate Burisma to find damaging information on the Biden family, allegedly threatening to withhold military aid to the country if the prime minister did not announce an investigation. That request is at the center of the president’s ongoing impeachment proceedings, and has made Burisma a tempting target for anyone seeking to meddle in US politics.

Area1, the security firm that detected the attacks, says it found phishing emails sent to Burisma employees bearing many of the hallmarks of GRU hacking campaigns. The hackers were apparently successful in getting employee login info they used to gain entry into one of Burisma’s servers, although it is unclear how much information was obtained. If the GRU is in fact involved, it’s possible the group could have been looking for embarrassing information to be released during the 2020 presidential campaign.

In hacking Burisma, the Russian hackers could be following a similar playbook as what they reportedly did to undermine Hillary Clinton’s presidential campaign during the 2016 election. In January 2017, US intelligence officials released a report outlining how Russian intelligence services successfully hacked the Democratic National Committee and stole information that was slowly and regularly leaked to the public to help the campaign of then-candidate Trump.

Microsoft CEO says encryption backdoors are a ‘terrible idea’

As Apple squares off for another encryption fight, Microsoft CEO Satya Nadella offered mixed messages on the encryption question. In a Monday meeting with reporters in New York, Nadella reiterated the company’s opposition to encryption backdoors, but expressed tentative support for legal and technical solutions in the future.

“I do think backdoors are a terrible idea, that is not the way to go about this,” Nadella said. “We’ve always said we care about these two things: privacy and public safety. We need some legal and technical solution in our democracy to have both of those be priorities.”

Along those lines, Nadella expressed support for key escrow systems, versions of which have been proposed by researchers in the past.

Apple’s device encryption systems first became a point of controversy after a 2016 shooting in San Bernardino, which led to a heated legal push to force Apple to unlock the phone. That fight ultimately ended in a stalemate, but many have seen the recent shooting at a naval base in Pensacola as a potential place to restart the fight. Committed by a Saudi national undergoing flight training with the US Navy, the shooting has already been labeled a terrorist act by the FBI, and resulted in 21 other Saudi trainees being disenrolled from the program. Two phones linked to the assailant are still subject to Apple’s device encryption, and remain inaccessible to investigators.

But Nadella stopped short of simply saying companies could never provide data under such circumstances, or that Apple shouldn’t provide a jailbroken iOS modification under the circumstances. “We can’t take hard positions on all sides… [but if they’re] asking me for a backdoor, I’ll say no.” Nadella continued, “My hope is that in our democracy these are the things that arrive at legislative solutions.”

That’s a significantly milder tone than Microsoft took during the San Bernardino case in 2016. At the time, Microsoft expressed “wholehearted” support for Apple’s position in the case, and joined Apple in opposing some of the encryption bills pushed in the wake of the trial.

Correction 9:43PM ET: Due to a transcription error, Nadella’s two priorities were listed as privacy and national security. He said they were privacy and public safety. This has been corrected.

FBI arrests alleged member of prolific neo-Nazi swatting ring

A man loosely linked to violent neo-Nazi group Atomwaffen has been charged with participating in a swatting ring that hit hundreds of targets, potentially including journalists and a Facebook executive. John William Kirby Kelley supposedly picked targets for swatting calls in an IRC channel, then helped record the hoax calls for an audience of white supremacists. He was allegedly caught after making a bomb threat to get out of classes.

The Justice Department unsealed the case against Kelley late last week, and he was arrested and appeared in court on January 10th. He’s charged with conspiracy to transmit a threat, which carries up to five years in prison. The Washington Post writes that his attorney didn’t comment on the allegations.

According to an affidavit, the FBI started investigating Kelley in late 2018, after Old Dominion University in Virginia received an anonymous bomb and shooting threat. They linked the call to numerous other swatting incidents and a chat channel called Deadnet IRC where participants openly discussed coordinating them. The affidavit also links Kelley to Doxbin, a site that hosts the sensitive personal information of journalists, federal judges, company executives, and other potential swatting victims.

As Krebs previously reported, the group behind Doxbin and Deadnet IRC has claimed responsibility for swatting a Facebook executive last year. Krebs, who has been swatted multiple times, says he was targeted after appearing on Doxbin, as was Pulitzer-winning columnist Leonard G. Pitts Jr., who was labeled on Doxbin as “anti-white race.”

Krebs apparently also reviewed some Deadnet logs, revealing other details not directly connected with Kelley’s case. He writes that one member admitted to making a bomb threat around a university speech by former Breitbart editor Milo Yiannopoulos, hoping to “frame feminists at the school for acts of terrorism.” Another member supposedly maintains a site for followers of the neo-Nazi James Mason, who has advised Atomwaffen and posed with members of the group. Three Atomwaffen members are currently on trial for five murders.

Swatting hoaxes — where a perpetrator makes a fake threat to draw an extreme police response — can be highly difficult to trace. It’s easy to make anonymous phone calls online, and the results of a SWAT raid can be deadly; police have repeatedly killed innocent residents during them, including one swatting victim. Many swatters are never found, although the serial offender behind that death was sentenced to 20 years in prison.

In this case, Kelley seems to have been remarkably careless. He called the police later from his own university-registered phone number, allowing officers to match his voice with the anonymous caller. When confronted, he apparently admitted to being interested in swatting. Soon after, he logged on to Deadnet IRC and discussed new targets, while other members explicitly confirmed the bomb threat to his school. He also apparently kept Deadnet IRC logs and swatting videos on thumb drives, which police seized in a search of his dorm room.

Meanwhile, an FBI search of Kelley’s phone reportedly revealed violent neo-Nazi sympathies. It contained pictures of Kelley and others “dressed in tactical gear holding assault-style rifles” alongside “recruiting materials” for Atomwaffen. Another member of Deadnet IRC apparently agreed to inform on the group after being arrested separately, and he told the FBI that he and fellow swatters were white supremacists “sympathetic to the neo-Nazi movement.” Deadnet itself was filled with racist invective, and among other swatting victims, it targeted the historically black Alfred Street Baptist Church in Virginia.

Microsoft says Skype audio is now reviewed in ‘secure facilities’ after a worrying report

Microsoft says Skype calls are now transcribed in “secure facilities in a small number of countries,” following a new report in The Guardian about the company’s use of contractors in China who listened to some calls to check that its transcription software was working properly. The company confirmed to The Verge that China is not currently one of the countries where transcription takes place.

A former contractor who lived in Beijing told The Guardian that he transcribed Skype calls with little cybersecurity protection from potential state interference. The unidentified former contractor said he reviewed thousands of audio recordings from Skype and Cortana on his personal laptop from his home in Beijing over a two-year period.

Workers who were part of the review process accessed the recordings via a web app in a Chrome browser over the internet in China. There was little vetting of employees and no security measures in place to protect the audio recordings from state or criminal interference, according to The Guardian.

The contractor told The Guardian he heard “all kinds of unusual conversations” while performing the transcription. “It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.”

A Microsoft spokesperson told The Verge in an email that “If there is questionable behavior or possible violation by one of our suppliers, we investigate and take action.” The audio “snippets” that contractors get to review are ten seconds long or shorter, according to the spokesperson, “and no one reviewing these snippets would have access to longer conversations.”

“We’ve always disclosed this to customers and operate to the highest privacy standards set out in laws like Europe’s GDPR,” the spokesperson added.

The existence of the Skype transcription program was first detailed in a report from Motherboard in August. Although Skype’s terms of service indicated at the time that the company analyzed call audio, this was the first report showing how much of the analysis was done by humans. And unlike competitors who publicly declared that they would end the practice of having humans transcribe audio from virtual assistants, Microsoft continued the practice, apparently updating its privacy policy to admit it was doing so.

Microsoft says it reviewed its processes and communications with customers over the summer. “As a result, we’ve updated our privacy statement to be even more clear about this work, and since then we’ve significantly enhanced the process including by moving these reviews to secure facilities in a small number of countries,” the company said in its statement to The Verge. “We will continue to take steps to give customers greater transparency and control over how we manage their data.”

Microsoft did not elaborate on what these “steps” entailed.

Microsoft is not the only company to face blowback for how it’s handled audio recordings of customers. The practices of data annotation, where humans help AI learn by interpreting audio and other information, have come under intense scrutiny as people weigh the convenience of on-demand answers from virtual assistants against the discomfort of relinquishing chunks of their private lives, often to people they didn’t know were listening.

An April report from Bloomberg highlighted how Amazon used full-time employees and contractors to “listen” to customers’ conversations with Alexa. The report found the company wasn’t clear about how long such recordings are stored, or whether employees or even third parties have accessed or would be able to access the information for nefarious purposes. And both Apple and Google reportedly suspended their programs that used humans to review audio recordings of their Siri and Assistant virtual assistant programs.

Here’s how to prevent audio assistants from retaining audio recordings.

Teen hackers are defacing unsuspecting US websites with pro-Iran messages

Phil Openshaw, a retired Californian dentist, hadn’t checked his website in months. So he was unaware that it no longer displayed details for his annual mission trip that provides free dental services in Uganda. Instead, it displayed a photo of recently assassinated Iranian Gen. Qassem Soleimani with the message “Down with America.”

“Hoo boy. Thanks for the good news,” he said when informed that his site had been defaced. “I don’t really know how to respond to that. I’ll take a look at it.”

It’s part of an unofficial front in the simmering conflict between the US and Iran, kicked off by the assassination of Soleimani on January 3rd. The strike was followed by retaliatory Iranian missile strikes on two Iraqi bases that house US troops as well as the downing of a Ukrainian passenger plane, the implications of which are still unknown.

But while the brief military conflict has settled into an impasse, a smaller skirmish hasn’t stopped. While leaders weigh their options, pro-Iranian wannabe hackers who claim no government affiliation are defacing unpatched websites run by individual Americans and small businesses, but doing little else. It’s a tactic of online posturing and inflated threats, one at home in a conflict where tweets and perceived insults have often dictated the course of events.

The hacker who defaced Openshaw’s site goes by “Mr Behzad” and claims to be a 19-year-old operating out of a sense of patriotism. (It’s impossible to verify his identity with complete certainty, but he left his Telegram handle on the sites he defaced.)

“I do not work for the government. I work for my home country of Iran,” he told The Verge, adding a heart emoji after his country’s name. He said he learned how to deface sites through work programming and coding. “We want to know that if they harm our people or our country, we will not fail.”

One of the defaced sites

“Ebrahim Vaker,” who left his Telegram handle on the briefly defaced page of the University of Maryland, Baltimore County, said he was 23 and the leader of the “Iranian Anonymous Team,” created last year.

“Most of these attacks are a sign of protest,” Vaker told The Verge. He said the UMBC defacement was the biggest attack yet from his seven-member team, whose members are as young as 18.

Website defacements, especially against small or neglected websites, are broadly considered among the bottom tier of cyberattacks. They frequently rely on simply copying and pasting malicious scripts that are easy to find online: easy work for unskilled “script kiddies.” In the heyday of Anonymous, an unexpectedly altered website seemed to evoke a much more sinister and capable adversary, but defacement now has little effect besides drawing a small amount of attention and causing a minor annoyance to web hosts.

In this case, the defacements are ominous because of the very real possibility of a more militarized cyberattack. The conflict between the US and Iranian governments has been peppered with skirmishes in the cyber domain: the US reportedly interfered with Iranian rocket controls in June and propaganda outlets in September. Iran has used devastating “wiper” attacks on a US target at least once before and has used them regionally as recently as last year, prompting fears it could do so again. The US Department of Homeland Security warned last summer that Iran could renew wiper attacks on networks, and it said Monday that Iran’s Islamic Revolutionary Guard Corps may look to leverage its substantial offensive cyber capabilities against American targets, especially critical infrastructure.

So far, the most significant defacement linked to Iran targeted the Federal Depository Library Program, which was hacked by an entity or group calling itself “Iran Cyber Security Group Hackers” that put up a photo of Trump getting punched. (The FDLP site was also defaced by “Turkey Cyber Pirates” in 2012.)

Adam Meyers, the vice president of intelligence at the cybersecurity company CrowdStrike, which keeps tabs on Iranian hacktivists as part of the general cyber threat landscape, said hacktivists like Behzad and Vaker are “exactly who you think they are.”

“They’re people with a security awareness who operate in Iran, typically teenagers and young men in their 20s, who are engaged in security and the hacker scene,” Meyers told The Verge. “They largely engage in defacement and tend to be more focused on web-based technology like [web programming language] PHP and WordPress.”

Many of the victims of defacement didn’t want to speak to the press about what happened, but, resoundingly, they weren’t major hubs of the US military, government, business, or culture. Some of the sites seemed abandoned or were URL redirects to sites that sell clothes. Those who were willing to speak about it shrugged it off.

On Tuesday, a site for CPI Pipe and Steel, an Oklahoma company that makes heavy-duty steel feeding troughs for livestock, instead showed Behzad’s name, alongside the message “Suleimani was not a person/he was a belief/Beliefs never die.”

“We’re not anybody of interest,” laughed CPI Pipe and Steel owner Carolyn Tolle. “If they were really trying to do something they’d try to hack something more protected. I would guess this is like a startup guy, a newbie into the business.”