London police to deploy facial recognition cameras across the city

Live facial recognition cameras will be deployed across London, with the city’s Metropolitan Police announcing today that the technology has moved past the trial stage and is ready to be permanently integrated into everyday policing.

The cameras will be placed in locations popular with shoppers and tourists, like Stratford’s Westfield shopping center and the West End, reports BBC News. Each camera will scan for faces contained in “bespoke” watch lists, which the Met says will predominantly contain individuals “wanted for serious and violent offences.”

When the camera flags an individual, police officers will approach and ask them to verify their identity. If they’re on the watch list, they’ll be arrested. “This is a system which simply gives police officers a ‘prompt’, suggesting ‘that person over there may be the person you’re looking for,’” said the Metropolitan Police in a press release.

Operational use of the cameras will only last for five or six hours at a time, says BBC News, but the Met makes clear that this technology is set to become the new normal in London.


Previous trials of facial recognition technology in London have been signposted to the public.
Photo by Kirsty O’Connor/PA Images via Getty Images

“As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London,” said the Met’s assistant commissioner Nick Ephgrave in a press statement. “Every day, our police officers are briefed about suspects they should look out for; [facial recognition] improves the effectiveness of this tactic.”

The use of facial recognition by law enforcement in the UK has previously been limited to small trials and public events like concerts and football matches. Such deployments have been widely criticized, with data from one trial indicating that 81 percent of “matches” suggested by the facial recognition system were incorrect.

Despite this, the Met calls the technology “tried and tested,” and says the algorithms it uses from biometric firm NEC identify 70 percent of wanted suspects and only generate false alerts for one in every 1,000 cases.
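Those two sets of figures aren’t necessarily contradictory: when the people on a watch list make up only a tiny fraction of everyone scanned, even a one-in-1,000 false-alert rate can mean that most alerts point at the wrong person. Here is a back-of-the-envelope sketch, using entirely assumed crowd sizes alongside the Met’s quoted rates:

```python
# Back-of-the-envelope illustration only: the crowd size and the number of
# watch-listed people present are assumptions, not reported figures.
scanned = 10_000      # faces scanned during one deployment (assumed)
on_list = 5           # people present who are actually on the watch list (assumed)

false_alert_rate = 1 / 1_000   # the Met's claimed false-alert rate
true_match_rate = 0.70         # the Met's claimed detection rate

false_alerts = (scanned - on_list) * false_alert_rate   # ~10 wrong flags
true_alerts = on_list * true_match_rate                 # ~3.5 correct flags

share_wrong = false_alerts / (false_alerts + true_alerts)
print(f"Share of alerts pointing at the wrong person: {share_wrong:.0%}")  # ~74%
```

Change the assumptions and the exact percentage moves, but the underlying point holds: a low per-face false-alert rate and a high share of incorrect matches can describe the same system.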

Privacy advocates described the deployment of the technology as an attack on civil liberties. The use of facial recognition around the world has been criticized by many tech experts and privacy advocates, who note such systems are often racially biased and are misused by the police. Even some big tech companies like Google now back a moratorium on the technology.

“This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK,” Silkie Carlo, director of Big Brother Watch, told The Daily Mail. “This is a breath-taking assault on our rights and we will challenge it, including by urgently considering next steps in our ongoing legal claim against the Met and the Home Secretary. This move instantly stains the new Government’s human rights record and we urge an immediate reconsideration.”

Google’s search engine for scientists upgraded for better data scouring

Google’s search engine for datasets, the cunningly named Dataset Search, is now out of beta, with new tools to better filter searches and access to almost 25 million datasets.

Dataset Search launched in September 2018, with Google hoping to slowly unify the fragmented world of online, open-access data. Although many institutions like universities, governments, and labs publish data online, it’s often difficult to find using traditional search. But by adding open-source metadata tags to their webpages, these groups can have their data indexed by Dataset Search, which now covers a huge range of information — everything from skiing injuries to volcano eruptions to penguin populations.
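The “metadata tags” in question are schema.org Dataset descriptions embedded in a publisher’s web pages. As a rough sketch of what that looks like in practice (the dataset, URLs, and values below are invented for illustration), a publisher might generate the markup like this:

```python
import json

# Hypothetical example: a schema.org "Dataset" description for a made-up
# ski-injury dataset. The field names follow schema.org; every value here
# (title, URLs, license) is invented.
dataset_metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Alpine skiing injuries, 2012 season",
    "description": "Reported injuries by resort, week, and injury type.",
    "url": "https://example.org/datasets/ski-injuries-2012",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "keywords": ["skiing", "injuries", "sports medicine"],
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "CSV",
        "contentUrl": "https://example.org/datasets/ski-injuries-2012.csv",
    }],
}

# Publishers embed this as a JSON-LD <script> tag in the page's HTML so that
# crawlers, including Dataset Search, can find and index the dataset.
print('<script type="application/ld+json">')
print(json.dumps(dataset_metadata, indent=2))
print("</script>")
```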

Google would not share any specific usage figures for the search engine, but it said “hundreds of thousands of users” have tried Dataset Search since its launch and that the reaction from the scientific community has been broadly positive.

Natasha Noy, a research scientist at Google AI who helped create the tool, tells The Verge that “most [data] repositories have been very responsive” and that the engine’s launch meant older scientific institutions are now taking “publishing metadata more seriously.”

“For example, [the prestigious scientific journal] Nature is changing its policies to require data sharing with proper metadata,” Noy says, highlighting a change that will make the data underpinning top-flight scientific research more accessible in future.

“Finally! My thesis ‘Hitting The Slopes A Little Too Hard: Shattered Femurs and Broken Dreams In the 2012 World Ski Cup,’ will have the rigorous, data-based grounding it deserves.”
Image: The Verge

New features added to Dataset Search include the ability to filter data by type (tables, images, text, etc.), whether it’s free to use, and the geographic areas it covers. The engine is also now available to use on mobile and has expanded dataset descriptions.

Google says the corpus covered by the search engine — almost 25 million datasets — is only a “fraction of datasets on the web,” but a “significant” one all the same. The largest topics indexed are geosciences, biology, and agriculture, and the most common queries include “education,” “weather,” “cancer,” “crime,” “soccer,” and “dogs.” The US is also the leader in open government datasets, publishing more than 2 million online.

Noy would not comment on future plans for Dataset Search, but she says the team was thinking about a number of functions they hope would be useful, including “understanding how datasets are cited and reused” and “helping users explore datasets in Dataset Search when they don’t necessarily know what they are looking for.”

“And, of course, continuing to expand the corpus,” says Noy. There’s always more data out there.

Google publishes largest ever high-resolution map of brain connectivity

Scientists from Google and the Janelia Research Campus in Virginia have published the largest high-resolution map of brain connectivity in any animal, sharing a 3D model that traces 20 million synapses connecting some 25,000 neurons in the brain of a fruit fly.

The model is a milestone in the field of connectomics, which uses detailed imaging techniques to map the physical pathways of the brain. This map, known as a “connectome,” covers roughly one-third of the fruit fly’s brain. To date, only a single organism, the roundworm C. elegans, has had its brain completely mapped in this way.

Connectomics has a mixed reputation in the science world. Advocates argue that it helps link physical parts of the brain to specific behaviors, which is a key goal in neuroscience. But critics note it has yet to produce any major breakthroughs, and they say that the painstaking work of mapping neurons is a drain on resources that might be better put to use elsewhere.

“The reconstruction is no doubt a technical marvel,” Mark Humphries, a neuroscientist at the University of Nottingham, told The Verge. But, he said, it’s also primarily a resource for other scientists to now use. “It will not in itself answer pressing scientific questions; but it might throw up some interesting mysteries.”

The 3D map produced by Google and the FlyEM team at Janelia is certainly a technical achievement, the product of both automated methods and a huge amount of human labor.

The first step in creating the map was to slice sections of fruit fly brain into pieces just 20 microns thick, roughly a third the width of a human hair. Fruit flies are a common subject in connectomics as they have relatively simple brains about the size of a poppy seed but display complex behaviors like courtship dances.

These slices of brain are then imaged by bombarding them with streams of electrons from a scanning electron microscope. The resulting data comprises some 50 trillion 3D pixels, or voxels, which are processed using an algorithm that traces the pathways of each cell.

Despite Google’s algorithmic prowess, it still took substantial human labor to check the software’s work. The company says it took two years and hundreds of thousands of hours for scientists at Janelia to “proofread” the 3D map, verifying the route of each of the 20 million chemical synapses using virtual reality headsets and custom 3D editing software.

Even then, the resulting map only covers a portion of the fruit fly’s brain, known as the hemibrain. In total, a fruit fly’s brain contains 100,000 neurons, while a human brain has roughly 86 billion. That suggests how far we are from creating a full connectome of our own neural pathways.

Joshua Vogelstein, a biomedical engineer and co-founder of the Open Connectome Project, told The Verge that the work would be a boon to scientists. Vogelstein said that in the decade to come, the data provided by such projects would finally start to yield results.

“I believe people were impatient about what [connectomes] would provide,” said Vogelstein. “The amount of time between a good technology being seeded, and doing actual science using that technology is often approximately 15 years. Now it’s 15 years later and we can start doing science.”

Google and the FlyEM team have made the data they collected available for anyone to view and download. The group has also published a pre-print paper describing its methodology and says it will publish more papers on the work in the weeks to come.

Apple reportedly scrapped plans to fully secure iCloud backups after FBI intervention

Apple reportedly dropped plans to fully secure users’ iPhone and iPad backups after the FBI complained about the initiative, reports Reuters.

Apple devices have a well-deserved reputation for protecting on-device data, but backups made using iCloud are a different matter. This information is encrypted to stop attackers, but Apple holds the keys to decrypt it and shares it with police and governments when legally required.

Privacy advocates like the Electronic Frontier Foundation have long criticized this arrangement, but Apple says it’s needed for when users are locked out of their account. For iCloud backups, “our users have a key and we have one,” said CEO Tim Cook in 2019. “We do this because some users lose or forget their key and then expect help from us to get their data back.”

Back in 2018, Apple reportedly planned to close this loophole by applying the same end-to-end encryption used on devices to users’ iCloud backups — but the plan never moved forward. Reuters now says the iPhone maker reversed course after talking to the FBI about the issue.

One former Apple employee told the publication: “Legal killed it, for reasons you can imagine.”

The source said the decision was influenced by Apple’s long court battle in 2016 with the FBI over an iPhone belonging to one of the San Bernardino shooters. The FBI demanded that Apple build a backdoor into its own devices, but Apple refused, saying this would permanently undermine its security. Eventually, the FBI found its own way in.

According to the former employee Reuters spoke to, Apple didn’t want to aggravate the FBI further by locking it out of iCloud backups. “They decided they weren’t going to poke the bear anymore,” said the source.

In meetings with the agency, FBI officials told Apple that the plan would harm its investigations. The FBI and other law enforcement bodies regularly ask Apple to decrypt iCloud data, and in the first half of 2019, they requested access to thousands of accounts. Apple says it complies with 90 percent of such requests.

One former FBI official who was not involved with these talks told Reuters that Apple was won over by the agency. “It’s because Apple was convinced,” said the source. “Outside of that public spat over San Bernardino, Apple gets along with the federal government.”

As mentioned earlier, Apple may have dropped fully encrypted backups out of concern for user convenience, and Reuters says that, ultimately, it “could not determine why exactly Apple dropped the plan.”

The report is timely, as confrontations between Apple and law enforcement agencies have flared up again this month, with the FBI demanding access to another phone, this one connected to a shooting at a Pensacola naval base last December.

The White House has hit Apple hard on the issue, with Attorney General William Barr and President Donald Trump launching attacks on the company. “We are helping Apple all of the time on TRADE and so many other issues, and yet they refuse to unlock phones used by killers, drug dealers and other violent criminal elements,” Trump tweeted this month.

Apple has rejected these criticisms, particularly Barr’s accusation that the company has provided no “substantive assistance” to the FBI. Reuters’ report that the company reversed plans to fully encrypt iCloud backups lends some credence to Apple’s defense. The Verge has reached out to Apple for comment.

Google favors temporary facial recognition ban as Microsoft pushes back

The regulation of facial recognition is emerging as a key disagreement among the world’s biggest tech companies, with Alphabet and Google CEO Sundar Pichai suggesting a temporary ban, like the one recently floated by the EU, might be welcome, while Microsoft’s chief legal officer Brad Smith cautions against such intervention.

“I think it is important that governments and regulations tackle it sooner rather than later and give a framework for it,” Pichai said at a conference in Brussels on Monday, reports Reuters. “It can be immediate but maybe there’s a waiting period before we really think about how it’s being used … It’s up to governments to chart the course.”

But in an interview published last week, Smith was dismissive of the idea of a moratorium.

“Look, you can try to solve a problem with a meat cleaver or a scalpel,” Smith told NPR when questioned about a potential ban. “And, you know, if you can solve the problem in a way that enables good things to get done and bad things to stop happening … that does require a scalpel. This is young technology. It will get better. But the only way to make it better is actually to continue developing it. And the only way to continue developing it actually is to have more people using it.”

The two executives’ comments come as the EU considers a five-year ban on the use of facial recognition in public spaces. The EU’s proposal, which was leaked to the press last week and could change when announced officially, says a temporary ban would give governments and regulators time to assess the dangers of the technology.

Across the world, law enforcement and private enterprise are increasingly using facial recognition to identify people in public spaces. While proponents argue that the technology helps solve crimes, critics say its unchecked adoption undermines civil liberties and leads to increased discrimination due to algorithmic bias.

Facial recognition is a key technology used by the Chinese state in the repression of its Muslim Uighur minority, for example, and the country sells the same technology to other repressive regimes around the world. In the US, the technology is increasingly used by the police via small contractors. A recent report from The New York Times shed light on a facial recognition system that can search 3 billion photos scraped from websites like Facebook without users’ consent, and is used by more than 600 local law enforcement agencies.

Pichai’s comments this week are particularly noteworthy as Google itself refuses to sell facial recognition to customers (citing fears of misuse and mass surveillance) but has not previously argued for a ban. Writing in an editorial for The Financial Times on Monday, Pichai advocated for greater regulation of artificial intelligence.

“[T]here is no question in my mind that artificial intelligence needs to be regulated,” he wrote. “Companies such as ours cannot just build promising new technology and let market forces decide how it will be used.”


Facial recognition has become portable, fitting into products like these sunglasses worn by Chinese police officers.
Photo credit: AFP/Getty Images

So far, the market is indeed dictating the rules, with big tech companies taking different stances on the issue. Microsoft sells facial recognition but has self-imposed limits, for example, like letting police use the technology in jails but not on the street, and not selling to immigration services. Amazon has eagerly pursued police partnerships, particularly through its Ring video doorbells, which critics say give law enforcement access to a massive crowdsourced surveillance network.

In the US, at least, it seems unlikely that a nationwide ban could be introduced. Some cities in America, like San Francisco and Berkeley, have independently banned the technology, but the White House has cited such measures as examples of regulatory overreach. The government has indicated that it wants to take a hands-off approach to the regulation of AI, including facial recognition, in the name of spurring innovation.

Alphabet CEO Sundar Pichai says there is ‘no question’ that AI needs to be regulated

Google and Alphabet CEO Sundar Pichai has called for new regulations in the world of AI, highlighting the dangers posed by technology like facial recognition and deepfakes, while stressing that any legislation must balance “potential harms … with social opportunities.”

“[T]here is no question in my mind that artificial intelligence needs to be regulated. It is too important not to,” writes Pichai in an editorial for The Financial Times. “The only question is how to approach it.”

Although Pichai says new regulation is needed, he advocates a cautious approach that might not see many significant controls placed on AI. He notes that for some products like self-driving cars, “appropriate new rules” should be introduced. But in other areas, like healthcare, existing frameworks can be extended to cover AI-assisted products.

“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,” writes Pichai. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”

The Alphabet CEO, who heads perhaps the most prominent AI company in the world, also stresses that “international alignment will be critical to making global standards work,” highlighting a potential area of difficulty for tech companies when it comes to AI regulation.

Currently, US and EU plans for AI regulation seem to be diverging. While the White House is advocating for light-touch regulation that avoids “overreach” in order to encourage innovation, the EU is considering more direct intervention, such as a five-year ban on facial recognition. As with regulations on data privacy, any divergence between the US and EU will create additional costs and technical challenges for international firms like Google.

Pichai’s editorial did not call out any specific proposals for regulations, but in comments made later in the day at a conference in Brussels he suggested a temporary ban on facial recognition — as is being mooted by the EU — might be welcome. This fits with Google’s own approach to facial recognition, which it refuses to sell because of worries it will be used for mass surveillance. Rivals like Microsoft and Amazon continue to sell the technology.

As Pichai notes, “principles that remain on paper are meaningless.” Sooner or later, talk about the need for regulation is going to have to turn into action.

Update January 21, 6:30AM ET: Story has been updated to incorporate Pichai’s comments, shared after the editorial was published, on the need for regulation of facial recognition.

Facebook’s problems moderating deepfakes will only get worse in 2020

Last summer, a video of Mark Zuckerberg circulated on Instagram in which the Facebook CEO appeared to claim he had “total control of billions of people’s stolen data, all their secrets, their lives, their futures.” It turned out to be an art project rather than a deliberate attempt at misinformation, but Facebook allowed it to stay on the platform. According to the company, it didn’t violate any of its policies.

For some, this showed how big tech companies aren’t prepared to deal with the onslaught of AI-generated fake media known as deepfakes. But it isn’t necessarily Facebook’s fault. Deepfakes are incredibly hard to moderate, not because they’re difficult to spot (though they can be), but because the category is so broad that any attempt to “clamp down” on AI-edited photos and videos would end up affecting a whole swath of harmless content.

Banning deepfakes altogether would mean removing popular jokes like gender-swapped Snapchat selfies and artificially aged faces. Banning politically misleading deepfakes just leads back to the same political moderation problems tech companies have faced for years. And given there’s no simple algorithm that can automatically spot AI-edited content, whatever ban they do enact would mean creating even more work for beleaguered human moderators. For companies like Facebook, there’s just no easy option.

“If you take ‘deepfake’ to mean any video or image that’s edited by machine learning then it applies to such a huge category of thing that it’s unclear if it means anything at all,” Tim Hwang, former director of the Harvard-MIT Ethics and Governance of AI Initiative, tells The Verge. “If I had my druthers, which I’m not sure if I do, I would say that the way we should think about deepfakes is as a matter of intent. What are you trying to accomplish at the sort of media that you’re creating?”

Notably, this seems to be the direction that big platforms are actually taking. Facebook and Reddit both announced moderation policies that covered deepfakes last week, and rather than trying to stamp out the format altogether, they took a narrower focus.

Facebook said it will remove “manipulated misleading media” which has been “edited or synthesized” using AI or machine learning “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” But the company noted that this does not cover “parody or satire” or misleading edits made using traditional means, like last year’s viral video of House Speaker Nancy Pelosi supposedly slurring her words.

New apps like Doublicat (pictured) will make deepfakes easier and more fun.
Credit: Doublicat

Reddit, meanwhile, didn’t mention AI at all, but instead said it will remove media that “impersonates individuals or entities in a misleading or deceptive manner.” It’s also created an exemption for “satire and parody,” and added that it will “always take into account the context of any particular content” — a broad caveat that gives its mods a lot of leeway.

As many have pointed out, these policies are full of loopholes. Writing at OneZero, Will Oremus notes that Facebook’s only covers edited media which includes speech, for example. This means that “a deepfake video that makes it look like a politician burned the American flag, participated in a white nationalist rally, or shook hands with a terrorist” would not be prohibited — something Facebook confirmed to Oremus.

These are glaring omissions, but they highlight the difficulty separating deepfakes from the underlying problems of platform moderation. Although many reports in recent years have treated “deepfake” as synonymous with “political misinformation,” the actual definition is far more broad. And the problem will only get worse in 2020.

While earlier versions of deepfake software took some patience and technical skill to use, the next generation will make creating deepfakes as easy as posting. Apps that use AI to edit video (the standard definition of a deepfake) will become commonplace, and as they spread — used for in-jokes, brand tweets, bullying, harassment, and everything in between — the idea of the deepfake as a unique threat to truth online will fade away.

Just this week, an app named Doublicat launched on iOS and Android that uses machine learning to paste users’ faces onto popular reaction GIFs. Right now it only works with preselected GIFs, but the company’s CEO told The Verge it’ll allow users to insert faces into any content they like in the future, powered by a type of machine learning model known as a GAN (generative adversarial network).

Does all this make Doublicat a deepfake app? Yep. And will it undermine democracy? Probably not.

Just look at the quality of its output, as demonstrated by the GIF below, which shows my face pasted onto Chris Pratt’s in Parks and Recreation. Technologically, it’s impressive. The app made the GIF in a few seconds from just a single photo. But it’s never going to be mistaken for the real thing. Meanwhile, the creator of TikTok, ByteDance, has been experimenting with deepfake features (though it says they won’t be incorporated into its wildly popular app) and Snapchat recently introduced its own face-swapping tools.


The author as Chris Pratt.

Hwang argues that the dilution of the term deepfakes could actually have benefits in the long run. “I think the great irony of people saying that all of these consumer features are also deepfakes, is that it in some ways commoditizes what deepfake means,” says Hwang. If deepfakes become commonplace and unremarkable, then people will “get comfortable with the notion of what this technology can do,” he says. Then, hopefully, we can understand it better and focus on the underlying problems of misinformation and political propaganda.

It’s possible to argue that the problem of moderating deepfakes on social media has been mostly a distraction from the start. AI-edited political propaganda has failed to materialize in a meaningful way, and studies show that the vast majority of deepfakes are nonconsensual porn (making up 96 percent of online deepfake videos).

Social media platforms have happily engaged in the debate over deepfake moderation, but as Facebook and Reddit’s recent announcements show, these efforts are mostly a sideshow. The core issues have not changed: who gets to lie on the internet, and who decides if they’re lying? Once deepfakes cease to be believable as an existential threat to truth, we’ll be left with the same, unchanging questions, more pressing than ever before.

Coral is Google’s quiet initiative to enable AI without the cloud

AI allows machines to carry out all sorts of tasks that used to be the domain of humans alone. Need to run quality control on a factory production line? Set up an AI-powered camera to spot defects. How about interpreting medical data? Machine learning can identify potential tumors from scans and flag them to a doctor.

But applications like this are useful only so long as they’re fast and secure. An AI camera that takes minutes to process images isn’t much use in a factory, and no patient wants to risk the exposure of their medical data if it’s sent to the cloud for analysis.

These are the sorts of problems Google is trying to solve through a little-known initiative called Coral.

“Traditionally, data from [AI] devices was sent to large compute instances, housed in centralized data centers where machine learning models could operate at speed,” Vikram Tank, product manager at Coral, explained to The Verge over email. “Coral is a platform of hardware and software components from Google that help you build devices with local AI — providing hardware acceleration for neural networks … right on the edge device.”

Coral’s products, like the dev board (above), can be used to prototype new AI devices.
Image: Google

You might not have heard of Coral before (it only “graduated” out of beta last October), but it’s part of a fast-growing AI sector. Market analysts predict that more than 750 million edge AI chips and computers will be sold in 2020, rising to 1.5 billion by 2024. And while most of these will be installed in consumer devices like phones, a great deal are destined for enterprise customers in industries like automotive and health care.

To meet customers’ needs, Coral offers two main types of products: accelerators and dev boards meant for prototyping new ideas, and modules that are destined to power the AI brains of production devices like smart cameras and sensors. In both cases, the heart of the hardware is Google’s Edge TPU, an ASIC chip optimized to run lightweight machine learning algorithms — a (very) little brother to the water-cooled TPU used in Google’s cloud servers.
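In practice, “local AI” on Coral hardware means running a TensorFlow Lite model that has been compiled for the Edge TPU, with inference handed to the chip rather than a remote server. The sketch below shows the general shape of that workflow using the tflite_runtime library; the model filename is a placeholder, and this is an illustration rather than Coral’s official sample code.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a TensorFlow Lite model that has been compiled for the Edge TPU and
# hand execution to the accelerator via the Edge TPU runtime library.
# "model_edgetpu.tflite" is a placeholder filename.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching whatever shape and dtype the model expects
# (a real application would pass a camera frame or sensor reading here).
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)

interpreter.invoke()  # inference happens locally, on the Edge TPU

scores = interpreter.get_tensor(output_details[0]["index"])
print("Most likely class index:", int(np.argmax(scores)))
```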

While its hardware can be used by lone engineers to create fun projects (Coral offers guides on how to build an AI marshmallow-sorting machine and smart bird feeder, for example), the long-term focus, says Tank, is on enterprise customers in industries like the automotive world and health care.

As an example of the type of problem Coral is targeting, Tank gives the scenario of a self-driving car that’s using machine vision to identify objects on the street.

“A car moving at 65 mph would traverse almost 10 feet in 100 milliseconds,” he says, so any “delays in processing” — caused by a slow mobile connection, for example — “add risk to critical use cases.” It’s much safer to do that analysis on-device rather than waiting on a slow connection to find out whether that’s a stop sign or a street light up ahead.

Tank says similar benefits exist with regard to improved privacy. “Consider a medical device manufacturer that wants to do real time analysis of ultrasound images using image recognition,” he says. Sending those images to the cloud creates a potential weak link for hackers to target, but analyzing images on-device allows patients and doctors to “have confidence that data processed on the device doesn’t go out of their control.”

Google’s Edge TPU, a tiny processing chip optimized for AI that sits at the heart of most Coral products.
Image: Google

Although Coral is targeting the world of enterprise, the project actually has its roots in Google’s “AIY” range of do-it-yourself machine learning kits, says Tank. Launched in 2017 and powered by Raspberry Pi computers, AIY kits let anyone build their own smart speakers and smart cameras, and they were a big success in the STEM toys and maker markets.

Tank says the AIY team quickly noticed that while some customers just wanted to follow the instructions and build the toys, others wanted to cannibalize the hardware to prototype their own devices. Coral was created to cater to these customers.

The problem for Google is that there are dozens of companies with similar pitches to Coral. These run the gamut from startups like Seattle-based Xnor, which makes AI cameras efficient enough to run on solar power, to powerful incumbents like Intel, which unveiled one of the first USB accelerators for enterprise in 2017 and paid $2 billion last December for the chipmaker Habana Labs to improve its edge AI offerings (among other things).

Given the large number of competitors out there, the Coral team says it differentiates itself by tightly integrating its hardware with Google’s ecosystem of AI services.

This stack of products — which covers chips, cloud training, dev tools, and more — has long been a key strength of Google’s AI work. In Coral’s case, there’s a library of AI models specifically compiled for its hardware, as well as AI services on Google Cloud that integrate directly with individual Coral modules like its environment sensors.

In fact, Coral is so tightly integrated with Google’s AI ecosystem that its Edge TPU-powered hardware only works with Google’s machine learning framework, TensorFlow, a fact that rivals in the AI edge market The Verge spoke to said was potentially a limiting factor.

“Coral products process specifically for their platform [while] our products support all the major AI frameworks and models in the market,” a spokesperson for AI edge firm Kneron told The Verge. (Kneron said there was “no negativity” in its assessment and that Google’s entry into the market was welcome as it “validates and drives innovation in the space.”)

But exactly how much business Coral is doing right now is impossible to say. Google is certainly not pushing Coral with anywhere near as much intensity as its cloud AI services, and the company wouldn’t share any sales figures or targets for the group. A source familiar with the matter, though, did tell The Verge that the majority of Coral’s orders are for single units (e.g. AI accelerators and dev boards), while only a few customers are making enterprise purchases on the order of 10,000 units.

For Google, the attraction of Coral may not necessarily be revenue, but simply learning more about how its AI is being applied in the places that matter. In the world of practical machine learning right now, all roads lead, inexorably, to the edge.

Google says new AI models allow for ‘nearly instantaneous’ weather forecasts

Weather forecasting is notoriously difficult, but in recent years experts have suggested that machine learning could better help sort the sunshine from the sleet. Google is the latest firm to get involved, and in a blog post this week shared new research that it says enables “nearly instantaneous” weather forecasts.

The work is in the early stages and has yet to be integrated into any commercial systems, but early results look promising. In the non-peer-reviewed paper, Google’s researchers describe how they were able to generate accurate rainfall predictions up to six hours ahead of time at a 1km resolution from just “minutes” of calculation.

That’s a big improvement over existing techniques, which can take hours to generate forecasts, although they do so over longer time periods and generate more complex data.

Speedy predictions, say the researchers, will be “an essential tool needed for effective adaptation to climate change, particularly for extreme weather.” In a world increasingly dominated by unpredictable weather patterns, they say, short-term forecasts will be crucial for “crisis management, and the reduction of losses to life and property.”


Google’s work used radar data to predict rainfall. The top image shows cloud location, while the bottom image shows rainfall.
Credit: NOAA/NWS/NSSL

The biggest advantage Google’s approach offers over traditional forecasting techniques is speed. The company’s researchers compared their work to two existing methods: optical flow (OF) predictions, which look at the motion of phenomena like clouds, and simulation forecasting, which creates detailed physics-based simulations of weather systems.

The problem with these older methods — particularly the physics-based simulation — is that they’re incredibly computationally intensive. Simulations made by US federal agencies for weather forecasting, for example, have to process up to 100 terabytes of data from weather stations every day and take hours to run on expensive supercomputers.

“If it takes 6 hours to compute a forecast, that allows only 3-4 runs per day and resulting in forecasts based on 6+ hour old data, which limits our knowledge of what is happening right now,” wrote Google software engineer Jason Hickey in a blog post.

Google’s methods, by comparison, produce results in minutes because they don’t try to model complex weather systems, but instead make predictions about simple radar data as a proxy for rainfall.
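One way to picture this kind of radar-based nowcasting is as an image-to-image prediction problem: a stack of recent radar frames goes in, and an estimate of rain at each grid point a short time later comes out. The toy model below is a hypothetical illustration of that framing, not Google’s actual architecture; the grid size, number of input frames, and layer choices are all assumed for the example.

```python
import tensorflow as tf

# Toy nowcasting setup (illustrative sizes): the last 4 radar frames over a
# 256x256 grid are stacked as channels, and the model predicts a per-pixel
# probability that rain exceeds some threshold in the next time step.
FRAMES_IN, HEIGHT, WIDTH = 4, 256, 256

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(HEIGHT, WIDTH, FRAMES_IN)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 1, padding="same", activation="sigmoid"),
])

# Each target pixel is 1 ("rain") or 0 ("no rain"), so per-pixel binary
# cross-entropy is a natural training loss for this framing.
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```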

The company’s researchers trained their AI model on historical radar data collected between 2017 and 2019 in the contiguous US by the National Oceanic and Atmospheric Administration (NOAA). They say their forecasts were as good as or better than three existing methods making predictions from the same data, though their model was outperformed when attempting to make forecasts more than six hours ahead of time.

This seems to be the sweet spot for machine learning in weather forecasts right now: making speedy, short-term predictions, while leaving longer forecasts to more powerful models. NOAA’s weather models, for example, create forecasts up to 10 days in advance.

While we’ve not yet seen the full effects of AI on weather forecasting, plenty of other companies are also investigating this same area, including IBM and Monsanto. And, as Google’s researchers point out, such forecasting techniques are only going to become more important in our daily lives as we feel the effects of climate change.

Blurry photo suggests Samsung’s next foldable is ‘Bloom,’ with S11 flagship to become S20

Could Samsung’s next foldable phone be named the Galaxy Bloom?

That’s what Korean outlet Ajunews is reporting, showing a blurry photo of what it claims is an early marketing image for the Bloom. The outlet’s story (which we spotted via SamMobile) says that Samsung told partners at a closed-door meeting at CES that its next foldable is modeled after a makeup compact and intended to appeal to female customers. According to the same report, the upcoming S11 will actually be named the S20.

As ever with such rumors, it’s not possible to completely trust what we’ve read, but the report does line up with what we’ve previously heard about Samsung’s next foldable and the names of its upcoming flagship smartphones.

A teaser image for the Bloom matches the design of purportedly leaked handsets, and Samsung Electronics CEO DJ Koh reportedly told partners that the phone would use foldable glass instead of a plastic polymer for its display — a key spec that’s also previously been hinted at.

The purported leak from Ajunews matches earlier pictures supposedly showing Samsung’s “Fold 2.”
Image: 王奔宏 via Weibo

What’s new is the name and marketing for the Bloom. Ajunews says Samsung wants the device to appeal to young women, and says its clamshell design is easy to hold in one hand. Koh reportedly told one partner: “We designed Galaxy Bloom with the motif of compact powder from French cosmetics brand Lancôme.”

Hardware specs for the Bloom are still mostly a mystery, but Ajunews gives us two new snippets: it’ll be able to record 8K video, and a 5G version will be released in South Korea.

Other news from the meeting concerns Samsung’s flagship Galaxy S line. Matching previous reports we’ve seen, the company’s next big smartphone will not be called the S11, as many expected, but the S20, perhaps as a nod to the new decade. It’ll reportedly launch with three variants: a regular device, a lower-spec option, and an “Ultra” version.

Bear in mind these are all still rumors, but we expect to hear much more about the S20 and the Galaxy Bloom at Samsung’s big Unpacked event next month on February 11th.