Alphabet CEO Sundar Pichai says there is ‘no question’ that AI needs to be regulated

Google and Alphabet CEO Sundar Pichai has called for new regulations in the world of AI, highlighting the dangers posed by technology like facial recognition and deepfakes, while stressing that any legislation must balance “potential harms … with social opportunities.”

“[T]here is no question in my mind that artificial intelligence needs to be regulated. It is too important not to,” writes Pichai in an editorial for The Financial Times. “The only question is how to approach it.”

Although Pichai says new regulation is needed, he advocates a cautious approach that might not see many significant controls placed on AI. He notes that for some products like self-driving cars, “appropriate new rules” should be introduced. But in other areas, like healthcare, existing frameworks can be extended to cover AI-assisted products.

“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,” writes Pichai. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”

The Alphabet CEO, who heads perhaps the most prominent AI company in the world, also stresses that “international alignment will be critical to making global standards work,” highlighting a potential area of difficulty for tech companies when it comes to AI regulation.

Currently, US and EU plans for AI regulation seem to be diverging. While the White House is advocating for light-touch regulation that avoids “overreach” in order to encourage innovation, the EU is considering more direct intervention, such as a five-year ban on facial recognition. As with regulations on data privacy, any divergence between the US and EU will create additional costs and technical challenges for international firms like Google.

Pichai’s editorial did not call out any specific proposals for regulations, but in comments made later in the day at a conference in Brussels he suggested a temporary ban on facial recognition — as is being mooted by the EU — might be welcome. This fits with Google’s own approach to facial recognition, which it refuses to sell because of worries it will be used for mass surveillance. Rivals like Microsoft and Amazon continue to sell the technology.

As Pichai notes, “principles that remain on paper are meaningless.” Sooner or later, talk about the need for regulation is going to have to turn into action.

Update January 21, 6:30AM ET: Story has been updated to incorporate Pichai’s comments, shared after the editorial was published, on the need for regulation of facial recognition.

Apple’s latest AI acquisition leaves some Wyze cameras without people detection

Earlier today, Apple confirmed it purchased Seattle-based AI company Xnor.ai (via MacRumors). Acquisitions at Apple’s scale happen frequently, though rarely do they impact everyday people on the day of their announcement. This one is different.

Cameras from fellow Seattle-based company Wyze, including the Wyze Cam V2 and Wyze Cam Pan, have utilized Xnor.ai’s on-device people detection since last summer. But now that Apple owns the company, it’s no longer available. Some people on Wyze’s forum are noting that the beta firmware removing the people detection has already started to roll out.

Oddly enough, word of this lapse in service isn’t anything new. Wyze issued a statement in November 2019 saying that Xnor.ai had terminated their contract (though its reason for doing so wasn’t as clear then as it is today), and that a firmware update slated for mid-January 2020 would remove the feature from those cameras.

There’s a bright side to this loss, though, even if Apple snapping up Xnor.ai makes Wyze’s affordable cameras less appealing in the interim. Wyze says that it’s working on its own in-house version of people detection for launch at some point this year. And whether it operates on-device via “edge AI” computing like Xnor.ai’s does, or by authenticating through the cloud, it will be free for users when it launches.

That’s good and all, but the year just started, and it’s a little worrying Wyze hasn’t followed up with a specific time frame for its replacement of the feature. Two days ago, Wyze’s social media community manager stated on its forums that the company was “making great progress,” but didn’t say when the replacement would be available.

What Apple plans to do with Xnor.ai is anyone’s guess. Ahead of its partnership with Wyze, the AI startup had developed a small, wireless AI camera that ran exclusively on solar power. Regardless of whether Apple is more interested in its edge computing algorithm, which was seen working on Wyze cameras for a short time, or its clever hardware ideas around AI-powered cameras, it’s getting all of it with the purchase.

Facebook’s problems moderating deepfakes will only get worse in 2020

Last summer, a video of Mark Zuckerberg circulated on Instagram in which the Facebook CEO appeared to claim he had “total control of billions of people’s stolen data, all their secrets, their lives, their futures.” It turned out to be an art project rather than a deliberate attempt at misinformation, but Facebook allowed it to stay on the platform. According to the company, it didn’t violate any of its policies.

For some, this showed how big tech companies aren’t prepared to deal with the onslaught of AI-generated fake media known as deepfakes. But it isn’t necessarily Facebook’s fault. Deepfakes are incredibly hard to moderate, not because they’re difficult to spot (though they can be), but because the category is so broad that any attempt to “clamp down” on AI-edited photos and videos would end up affecting a whole swath of harmless content.

Banning deepfakes altogether would mean removing popular jokes like gender-swapped Snapchat selfies and artificially aged faces. Banning politically misleading deepfakes just leads back to the same political moderation problems tech companies have faced for years. And given there’s no simple algorithm that can automatically spot AI-edited content, whatever ban they do enact would mean creating even more work for beleaguered human moderators. For companies like Facebook, there’s just no easy option.

“If you take ‘deepfake’ to mean any video or image that’s edited by machine learning then it applies to such a huge category of thing that it’s unclear if it means anything at all,” Tim Hwang, former director of the Harvard-MIT Ethics and Governance of AI Initiative, tells The Verge. “If I had my druthers, which I’m not sure if I do, I would say that the way we should think about deepfakes is as a matter of intent. What are you trying to accomplish at the sort of media that you’re creating?”

Notably, this seems to be the direction that big platforms are actually taking. Facebook and Reddit both announced moderation policies that covered deepfakes last week, and rather than trying to stamp out the format altogether, they took a narrower focus.

Facebook said it will remove “manipulated misleading media” which has been “edited or synthesized” using AI or machine learning “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” But the company noted that this does not cover “parody or satire” or misleading edits made using traditional means, like last year’s viral video of House Speaker Nancy Pelosi supposedly slurring her words.

New apps like Doublicat (pictured) will make deepfakes easier and more fun.
Credit: Doublicat

Reddit, meanwhile, didn’t mention AI at all, but instead said it will remove media that “impersonates individuals or entities in a misleading or deceptive manner.” It’s also created an exemption for “satire and parody,” and added that it will “always take into account the context of any particular content” — a broad caveat that gives its mods a lot of leeway.

As many have pointed out, these policies are full of loopholes. Writing at OneZero, Will Oremus notes that Facebook’s policy only covers edited media which includes speech, for example. This means that “a deepfake video that makes it look like a politician burned the American flag, participated in a white nationalist rally, or shook hands with a terrorist” would not be prohibited — something Facebook confirmed to Oremus.

These are glaring omissions, but they highlight the difficulty of separating deepfakes from the underlying problems of platform moderation. Although many reports in recent years have treated “deepfake” as synonymous with “political misinformation,” the actual definition is far broader. And the problem will only get worse in 2020.

While earlier versions of deepfake software took some patience and technical skill to use, the next generation will make creating deepfakes as easy as posting. Apps that use AI to edit video (the standard definition of a deepfake) will become commonplace, and as they spread — used for in-jokes, brand tweets, bullying, harassment, and everything in between — the idea of the deepfake as a unique threat to truth online will fade away.

Just this week, an app named Doublicat launched on iOS and Android that uses machine learning to paste users’ faces onto popular reaction GIFs. Right now it only works with preselected GIFs, but the company’s CEO told The Verge it’ll allow users to insert faces into any content they like in the future, powered by a type of machine learning model known as a GAN (generative adversarial network).

Does all this make Doublicat a deepfake app? Yep. And will it undermine democracy? Probably not.

Just look at the quality of its output, as demonstrated by the GIF below, which shows my face pasted onto Chris Pratt’s in Parks and Recreation. Technologically, it’s impressive. The app made the GIF in a few seconds from just a single photo. But it’s never going to be mistaken for the real thing. Meanwhile, the creator of TikTok, ByteDance, has been experimenting with deepfake features (though it says they won’t be incorporated into its wildly popular app) and Snapchat recently introduced its own face-swapping tools.

The author as Chris Pratt.

Hwang argues that the dilution of the term deepfakes could actually have benefits in the long run. “I think the great irony of people saying that all of these consumer features are also deepfakes, is that it in some ways commoditizes what deepfake means,” says Hwang. If deepfakes become commonplace and unremarkable, then people will “get comfortable with the notion of what this technology can do,” he says. Then, hopefully, we can understand it better and focus on the underlying problems of misinformation and political propaganda.

It’s possible to argue that the problem of moderating deepfakes on social media has been mostly a distraction from the start. AI-edited political propaganda has failed to materialize in a meaningful way, and studies show that the vast majority of deepfakes are nonconsensual porn (making up 96 percent of online deepfake videos).

Social media platforms have happily engaged in the debate over deepfake moderation, but as Facebook and Reddit’s recent announcements show, these efforts are mostly a sideshow. The core issues have not changed: who gets to lie on the internet, and who decides if they’re lying? Once deepfakes cease to be believable as an existential threat to truth, we’ll be left with the same, unchanging questions, more pressing than ever before.

Coral is Google’s quiet initiative to enable AI without the cloud

AI allows machines to carry out all sorts of tasks that used to be the domain of humans alone. Need to run quality control on a factory production line? Set up an AI-powered camera to spot defects. How about interpreting medical data? Machine learning can identify potential tumors from scans and flag them to a doctor.

But applications like this are useful only so long as they’re fast and secure. An AI camera that takes minutes to process images isn’t much use in a factory, and no patient wants to risk the exposure of their medical data if it’s sent to the cloud for analysis.

These are the sorts of problems Google is trying to solve through a little-known initiative called Coral.

“Traditionally, data from [AI] devices was sent to large compute instances, housed in centralized data centers where machine learning models could operate at speed,” Vikram Tank, product manager at Coral, explained to The Verge over email. “Coral is a platform of hardware and software components from Google that help you build devices with local AI — providing hardware acceleration for neural networks … right on the edge device.”

Coral’s products, like the dev board (above), can be used to prototype new AI devices.
Image: Google

You might not have heard of Coral before (it only “graduated” out of beta last October), but it’s part of a fast-growing AI sector. Market analysts predict that more than 750 million edge AI chips and computers will be sold in 2020, rising to 1.5 billion by 2024. And while most of these will be installed in consumer devices like phones, a great deal are destined for enterprise customers in industries like automotive and health care.

To meet customers’ needs, Coral offers two main types of products: accelerators and dev boards meant for prototyping new ideas, and modules that are destined to power the AI brains of production devices like smart cameras and sensors. In both cases, the heart of the hardware is Google’s Edge TPU, an ASIC chip optimized to run lightweight machine learning algorithms — a (very) little brother to the water-cooled TPU used in Google’s cloud servers.

While its hardware can be used by lone engineers to create fun projects (Coral offers guides on how to build an AI marshmallow-sorting machine and smart bird feeder, for example), the long-term focus, says Tank, is on enterprise customers in industries like the automotive world and health care.

As an example of the type of problem Coral is targeting, Tank gives the scenario of a self-driving car that’s using machine vision to identify objects on the street.

“A car moving at 65 mph would traverse almost 10 feet in 100 milliseconds,” he says, so any “delays in processing” — caused by a slow mobile connection, for example — “add risk to critical use cases.” It’s much safer to do that analysis on-device rather than waiting on a slow connection to find out whether that’s a stop sign or a street light up ahead.
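Tank’s back-of-the-envelope figure checks out. A quick sketch of the arithmetic (the helper function is illustrative, not part of any Coral API):

```python
# Verify the claim: a car at 65 mph covers "almost 10 feet" in 100 ms.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def distance_feet(speed_mph: float, elapsed_s: float) -> float:
    """Distance traveled in feet at a given speed over an elapsed time."""
    feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR
    return feet_per_second * elapsed_s

print(round(distance_feet(65, 0.100), 2))  # about 9.53 feet
```

At that speed, even a modest round trip to a cloud server eats meaningfully into the car’s reaction distance, which is the core of Coral’s on-device pitch.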

Tank says similar benefits exist with regard to improved privacy. “Consider a medical device manufacturer that wants to do real time analysis of ultrasound images using image recognition,” he says. Sending those images to the cloud creates a potential weak link for hackers to target, but analyzing images on-device allows patients and doctors to “have confidence that data processed on the device doesn’t go out of their control.”

Google’s Edge TPU, a tiny processing chip optimized for AI that sits at the heart of most Coral products.
Image: Google

Although Coral is targeting the world of enterprise, the project actually has its roots in Google’s “AIY” range of do-it-yourself machine learning kits, says Tank. Launched in 2017 and powered by Raspberry Pi computers, AIY kits let anyone build their own smart speakers and smart cameras, and they were a big success in the STEM toys and maker markets.

Tank says the AIY team quickly noticed that while some customers just wanted to follow the instructions and build the toys, others wanted to cannibalize the hardware to prototype their own devices. Coral was created to cater to these customers.

The problem for Google is that there are dozens of companies with similar pitches to Coral. These run the gamut from startups like Seattle-based Xnor, which makes AI cameras efficient enough to run on solar power, to powerful incumbents like Intel, which unveiled one of the first USB accelerators for enterprise in 2017 and paid $2 billion last December for the chipmaker Habana Labs to improve its edge AI offerings (among other things).

Given the large number of competitors out there, the Coral team says it differentiates itself by tightly integrating its hardware with Google’s ecosystem of AI services.

This stack of products — which covers chips, cloud training, dev tools, and more — has long been a key strength of Google’s AI work. In Coral’s case, there’s a library of AI models specifically compiled for its hardware, as well as AI services on Google Cloud that integrate directly with individual Coral modules like its environment sensors.

In fact, Coral is so tightly integrated with Google’s AI ecosystem that its Edge TPU-powered hardware only works with Google’s machine learning framework, TensorFlow, a fact that rivals in the AI edge market The Verge spoke to said was potentially a limiting factor.

“Coral products process specifically for their platform [while] our products support all the major AI frameworks and models in the market,” a spokesperson for AI edge firm Kneron told The Verge. (Kneron said there was “no negativity” in its assessment and that Google’s entry into the market was welcome as it “validates and drives innovation in the space.”)

But exactly how much business Coral is doing right now is impossible to say. Google is certainly not pushing Coral with anywhere near as much intensity as its cloud AI services, and the company wouldn’t share any sales figures or targets for the group. A source familiar with the matter, though, did tell The Verge that the majority of Coral’s orders are for single units (e.g. AI accelerators and dev boards), while only a few customers are making enterprise purchases on the order of 10,000 units.

For Google, the attraction of Coral may not necessarily be revenue, but simply learning more about how its AI is being applied in the places that matter. In the world of practical machine learning right now, all roads lead, inexorably, to the edge.

Google says new AI models allow for ‘nearly instantaneous’ weather forecasts

Weather forecasting is notoriously difficult, but in recent years experts have suggested that machine learning could better help sort the sunshine from the sleet. Google is the latest firm to get involved, and in a blog post this week shared new research that it says enables “nearly instantaneous” weather forecasts.

The work is in the early stages and has yet to be integrated into any commercial systems, but early results look promising. In the non-peer-reviewed paper, Google’s researchers describe how they were able to generate accurate rainfall predictions up to six hours ahead of time at a 1km resolution from just “minutes” of calculation.

That’s a big improvement over existing techniques, which can take hours to generate forecasts, although they do so over longer time periods and generate more complex data.

Speedy predictions, say the researchers, will be “an essential tool needed for effective adaptation to climate change, particularly for extreme weather.” In a world increasingly dominated by unpredictable weather patterns, they say, short-term forecasts will be crucial for “crisis management, and the reduction of losses to life and property.”

Google’s work used radar data to predict rainfall. The top image shows cloud location, while the bottom image shows rainfall.

The biggest advantage Google’s approach offers over traditional forecasting techniques is speed. The company’s researchers compared their work to two existing methods: optical flow (OF) predictions, which look at the motion of phenomena like clouds, and simulation forecasting, which creates detailed physics-based simulations of weather systems.

The problem with these older methods — particularly the physics-based simulation — is that they’re incredibly computationally intensive. Simulations made by US federal agencies for weather forecasting, for example, have to process up to 100 terabytes of data from weather stations every day and take hours to run on expensive supercomputers.

“If it takes 6 hours to compute a forecast, that allows only 3-4 runs per day and resulting in forecasts based on 6+ hour old data, which limits our knowledge of what is happening right now,” wrote Google software engineer Jason Hickey in a blog post.

Google’s methods, by comparison, produce results in minutes because they don’t try to model complex weather systems, but instead make predictions about simple radar data as a proxy for rainfall.

The company’s researchers trained their AI model on historical radar data collected between 2017 and 2019 in the contiguous US by the National Oceanic and Atmospheric Administration (NOAA). They say their forecasts were as good as or better than three existing methods making predictions from the same data, though their model was outperformed when attempting to make forecasts more than six hours ahead of time.

This seems to be the sweet spot for machine learning in weather forecasts right now: making speedy, short-term predictions, while leaving longer forecasts to more powerful models. NOAA’s weather models, for example, create forecasts up to 10 days in advance.

While we’ve not yet seen the full effects of AI on weather forecasting, plenty of other companies are also investigating this same area, including IBM and Monsanto. And, as Google’s researchers point out, such forecasting techniques are only going to become more important in our daily lives as we feel the effects of climate change.

Warner Bros. signs AI startup that claims to predict film success

Storied film company Warner Bros. has signed a deal with Cinelytic, an LA startup that uses machine learning to predict film success. A story from The Hollywood Reporter claims that Warner Bros. will use Cinelytic’s algorithms “to guide decision-making at the greenlight stage,” but a source at the studio told The Verge that the software would only be used to help with marketing and distribution decisions made by Warner Bros. Pictures International.

In an interview with THR, Cinelytic’s CEO Tobias Queisser stressed that AI was only an assistive tool. “Artificial intelligence sounds scary. But right now, an AI cannot make any creative decisions,” Queisser told the publication. “What it is good at is crunching numbers and breaking down huge data sets and showing patterns that would not be visible to humans. But for creative decision-making, you still need experience and gut instinct.”

Regardless of what Cinelytic’s technology is being used for, the deal is a step forward for Hollywood’s slow embrace of machine learning. As The Verge reported last year, Cinelytic is just one of a new crop of startups leveraging AI to forecast film performance, but the film world has historically been skeptical about their ability.

Andrea Scarso, a film investor and Cinelytic customer, told The Verge that the startup’s software hadn’t ever changed his mind, but “opens up a conversation about different approaches.” Said Scarso: “You can see how, sometimes, just one or two different elements around the same project could have a massive impact on the commercial performance.”

Cinelytic’s software lets customers play fantasy football with films. Users can model a pitch, inputting genre, budget, actors, and so on, and then see what happens when they tweak individual elements. Does replacing Tom Cruise with Keanu Reeves get better engagement with under-25s? Does it increase box office revenue in Europe? And so on.

Many AI experts are skeptical about the ability of algorithms to make predictions in a field as messy as filmmaking. Because machine learning applications are trained on historical data, they tend to be conservative, focusing on patterns that led to past successes rather than predicting what will excite future audiences. Scientific studies also suggest algorithms only produce limited predictive gains, often repeating obvious insights (like “Scarlett Johansson is a bankable film star”) that can be discovered without AI.

But for those backing machine learning in filmmaking, the benefit is simply that such tools produce uncomplicated analysis faster than humans can. This can be especially useful at film festivals, notes THR, when studios can be forced into bidding wars for distribution rights, and have only a few hours to decide how much a film might be worth.

“We make tough decisions every day that affect what — and how — we produce and deliver films to theaters around the world, and the more precise our data is, the better we will be able to engage our audiences,” Warner Bros.’ senior vice president of distribution, Tonis Kiis, told THR.

Update January 8, 11:00AM ET: Story has been updated with additional information from a source at Warner Bros.

Neon CEO explains the tech behind his overhyped ‘artificial humans’

The most buzzed-about company at CES 2020 doesn’t make a gadget you can see or touch. It doesn’t even have a product yet. But for reasons I’m still not entirely sure I grasp, the lead-up to this week’s show in Las Vegas was dominated by discussion of a project called Neon, which has emerged from a previously unknown Samsung subsidiary known as STAR Labs.

What Neon has been promising is so ambitious that it’s easy to swing your expectations around full circle and assume the mundane. The project’s Twitter bio simply reads “Artificial Human,” which could mean anything from an AI chatbot to a full-on android. Promotional videos posted in the run-up to CES, however, suggested that Neon would very much be closer to the former.

Yesterday, we were finally able to see the technology for ourselves. And Neons are, indeed, just digital avatars, albeit impressively realistic ones. We weren’t able to interact with them ourselves, and the demonstration we did see was extremely rough. But the concept and the technology are ambitious enough that we’re still pretty intrigued. (To get a clear idea of the tech’s limitations, check out this interaction between a CNET journalist and a Neon avatar.)

After a low-key event on the CES show floor, we caught up with Neon CEO Pranav Mistry to chat about the project.

Even at a youthful-looking 38, Mistry is a tech industry veteran who’s worked on products like Xbox hardware at Microsoft and the original Galaxy Gear at Samsung. “It was completely my baby, from design to technology,” he recalls of the early smartwatch. As VP of research at Samsung he later moved on to projects like Gear VR, but with Neon he’s now spearheading an initiative without direct oversight from the parent company.

“Right now you can say that [STAR Labs is] owned by Samsung,” Mistry tells me. “But that won’t necessarily always be the case. There’s no technology relation or product relation between what STAR Labs does and Samsung. There’s no Samsung logos anywhere, there’s nothing to do with Bixby or any other product that’s part of Samsung. Even what we’re planning to show at CES — no-one at Samsung other than me knows about it or can tell me not to do it.”

Mistry speaks at a thousand miles an hour, and one day I would very much like to sit down with him for a longer chat conducted at a less breakneck pace. At various points he invoked Einstein, Sagan, and da Vinci in an attempt to convey the lofty goals he was aiming to achieve with Neon. It was never less than entertaining. My focus, however, was on figuring out how Neon works and what it actually is.

Neon CEO Pranav Mistry on stage at CES 2020.

The Neon project is — or as the company would say, “Neons are” — realistic human avatars that are computationally generated and can interact with you in real time. At this point, each Neon is created from footage of an actual person that is fed into a machine-learning model, although Mistry says Neon could ultimately just generate their appearances from scratch.

I asked how much video would be required to capture the likeness of a person, and Mistry said “nothing much.” The main limitation right now is the requirement for a large amount of local processing power to render each avatar live — the demo I saw at CES was running on an ultra-beefy PC with two 128-core CPUs. Mistry notes that commercial applications would likely run in the cloud, however, and doesn’t see latency as a major hurdle because there wouldn’t need to be a huge amount of data streamed at once.

The CES demo featured a Neon employee interacting with a virtual avatar of a woman with close-cropped hair and dressed in all-black. I’d seen video of this woman, among other people, playing around the Neon booth ahead of Mistry’s presentation — at least, I thought it was video. Mistry, however, swears that it was entirely computer-generated footage, albeit pre-rendered rather than captured in real time.

Well, okay. That’s not necessarily impressive — we’ve all seen what deepfakes can do with much less effort. What’s different about Neon is the promised real-time aspect, and the focus on intangible human-like behavior. Multiple times, the avatar I mentioned before was told to smile on command by the employee conducting the demonstration. But, according to Mistry, she’d no more produce the same identical smile each time than you would. Each expression, action, or phrase is calculated on the fly, based on the AI model that’s been built up for each Neon.

This is all by design, and Mistry even says Neon is willing to focus on humanity at the expense of functionality. For example, these avatars aren’t intended to be assistants at their owners’ beck and call — they’ll sometimes “get tired” and need time to themselves. According to Mistry, this cuts to the core of why Neon is using language like “artificial human” in the first place.

“I feel that if you call something a digital avatar or AI assistant or something like that, it means you’re calling them a machine already,” Mistry says. “You are not thinking in the terms of a friend. It can only happen when we start feeling the same kind of respect. I’ve been working on this for a long time. In order to design this thing I need to think in those terms. If they are human, what are the limits they will have? Can they work 24 hours and answer all your questions? A Neon can get tired. Programmatically, computationally, that will make you feel ‘Okay, let me only engage in certain discussions. This is my friend.’”

The obvious question, then, is what’s the use case for an artificial human AI with artificial flaws? On stage, Mistry mentioned possible implementations from personal assistants to foreign language tutors. Then again, he literally said “there is no business model” a few minutes later, so I had to follow up on that point.

“There are a lot of people in the world that people remember,” Mistry says. “I was an architect and a designer before, and there are a few people that are remembered like that like Einstein, or Picasso, or some musicians in India, and we know their names not because they were rich but because of what they contributed to the world. And that is what I want to end up being, because I have everything else. Do I have enough money to live with? Yeah, more than enough. What I want to give back to the world is something that’s remembered after I go. Because you don’t know how rich Michelangelo was — no-one cares!”

“But you’re going to be selling this technology to people, right?” I say, somewhat bewildered.

“Of course. What I’m pointing out is that we believe Neons will bring more human aspects and maybe we will license that technology, or not technology as a license but Neons [themselves] as a license. Just to make a point, of course we are not saying we’re a philanthropic company. But the goal is not to build around data and money and so on. Because I want to get a good night’s sleep after 20 years.”

Photo by Sam Byford / The Verge

The concept of ultra-realistic, entirely artificial humans with minds of their own raises obvious questions of nefarious use cases, particularly in a time of heightened fears about political misinformation, and very real examples of AI being used to create non-consensual pornography. I asked Mistry whether he’d considered the potential for negative side effects. “Of course,” he said, comparing Neon to how nuclear technology generates electricity while also being used for weapons of mass destruction. “Every technology has pros and cons — it’s up to us as humans how we look at that.”

Will Neon limit who it sells the tech to, then? Mistry says the company will “more than limit” the tech by encoding restrictions “in hardware.” But he wasn’t clear about what restrictions would be encoded, or how.

Neon still has a long way to go. Even allowing for the unfavorable network environment of a CES show floor, the demonstration’s responses were delayed and linguistically stilted. As someone with an interest in AI and natural language processing, I could see that there’s something to the hype here. But I could also see that the average layperson would remain underwhelmed. It’s also worth reiterating that Neon isn’t allowing private demos at CES beyond its staged presentations, reinforcing the idea that the technology is far from ready.

Still, even if the “artificial human” pitch is a little over-egged, Neon is actually more ambitious than I’d assumed. And, despite the pre-CES hype, Mistry is entirely open about the fact that there’s basically no product to show. The message right now, in fact, is to come back in a year and see where Neon is then. If real progress has been made by CES 2021, then, maybe we’ll get excited.

White House encourages hands-off approach to AI regulation

While experts worry about AI technologies like intrusive surveillance and autonomous weaponry, the US government is advocating a hands-off approach to the regulation of AI.

The White House today unveiled 10 principles that federal agencies should consider when devising laws and rules for the use of artificial intelligence in the private sector, but stressed that a key concern was limiting regulatory “overreach.”

The public will have 90 days to submit feedback on the plans, reports Wired, after which federal agencies will have 180 days to work out how to implement the principles.

Any regulation devised by agencies should aim to encourage qualities like “fairness, non-discrimination, openness, transparency, safety, and security,” says the Office of Science and Technology Policy (OSTP). But the introduction of any new rules should also be preceded by “risk assessment and cost-benefit analyses,” and must incorporate “scientific evidence and feedback from the American public.”

The OSTP urged other nations to follow its lead, an uncertain prospect at a time when major international bodies like the Organisation for Economic Co-operation and Development (OECD), G7, and EU are considering more regulation for AI.

“Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach,” said the OSTP. “The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

Michael Kratsios, chief technology officer of the United States, will formally announce the principles at CES on Wednesday. In a call with reporters yesterday, an administration official said that the US was already over-regulating AI technologies at “state and local levels.”


Experts are worried about the spread of AI surveillance using technology like facial recognition.
Photo by Rolf Vennenbernd / picture alliance via Getty Images

“[W]hen particular states and localities make decisions like banning facial recognition across the board, you end up in tricky situations where civil servants may be breaking the law when they try to unlock their government-issued phone,” said an administration official, according to a report from VentureBeat.

The announcement of these 10 principles follows an agenda set by the White House in February 2019 when it launched the “American AI Initiative.” But at a time when experts warn about the damaging effects of unaccountable AI systems in government domains like housing and health care, and when the rise of facial recognition systems seems to be eroding civil liberties, many may question the wisdom of the White House’s approach.

And while the US follows a laissez-faire attitude domestically, it is also introducing new restrictions on the international stage. This Monday a new export ban went into effect, forbidding US companies from selling software abroad that uses AI to analyze satellite imagery without a license. The ban is widely seen as America’s attempt to counter the rise of tech rivals like China, which is quickly catching up to the US in its development of AI.

Samsung’s ‘artificial humans’ are just digital avatars

Samsung subsidiary STAR Labs has officially unveiled its mysterious “artificial human” project, Neon. As far as we can tell, though, there’s no mystery here at all. Neon is just digital avatars — computer-animated human likenesses about as deserving of the “artificial human” moniker as Siri or the Tupac hologram.

In fairness to STAR Labs, the company does seem to be trying something new with its avatars. But exactly what it’s doing, we can’t tell: its first official press release today fails to explain the company’s underlying tech, relying instead on jargon and hype.

“Neon is like a new kind of life,” says STAR Labs CEO Pranav Mistry in the release. “There are millions of species on our planet and we hope to add one more.” (Because nothing says “grounded and reasonable” like a tech executive comparing his work to the creation of life.)

Even more annoyingly, it seems that the teaser images and leaked videos of the Neon avatars we’ve seen so far are fake. As the company explains (emphasis ours): “Scenarios shown at our CES Booth and in our promotional content are fictionalized and simulated for illustrative purposes only.” So really we have no idea what Neon’s avatars actually look like.

Sorting through the chaff in STAR Labs’ press release today, here’s what we know for sure.

Each Neon avatar is “computationally generated” and will hold conversations with users while displaying “emotions and intelligence,” says the company. Their likenesses are modeled after real humans, but have newly generated “expressions, dialogs, and emotion.” Each avatar (individually known as a “NEON”) can be customized for different tasks, and is able to respond to queries “with latency of less than a few milliseconds.” They’re not intended to be just visual skins for AI assistants, but put to more varied uses instead:

“In the near future, one will be able to license or subscribe to a NEON as a service representative, a financial advisor, a healthcare provider, or a concierge. Over time, NEONs will work as TV anchors, spokespeople, or movie actors; or they can simply be companions and friends.”

So far, so good. It’s no secret that CGI humans have become more lifelike in recent years, and are already being used in some of the scenarios outlined above. If STAR Labs can make these avatars more realistic, then they might be adopted more widely. Fine.

But if you’ve ever interacted with, say, a virtual greeter at an airport or museum, you’ll know how paper-thin the “humanity” of these avatars is. At best, they’re Siri or Alexa with a CGI face, and it’s not clear if STAR Labs has created anything more convincing.

STAR Labs’ “Core R3” technology. Strong buzzwords, weak explanations.
Image: STAR Labs

Where the company veers into strange territory is in its description of the avatars’ underlying technology. It says it’s using proprietary software called “Core R3” to create the avatars, and that its approach is “fundamentally different from deepfake or other facial reanimation techniques.” But it doesn’t say how the software does work, relying instead on wishy-washy assurances that Core R3 “creates new realities.” We’d much rather know whether the company is using, say, high-resolution video captures pinned onto 3D models, or AI to generate facial movements — whatever the case may be.

We’ve reached out to STAR Labs with our questions, but it seems we’ll have to wait to see the technology in person to get a better understanding. The firm is offering private demos of its avatars at CES this week, and The Verge is scheduled to check out the technology.

We look forward to giving you our hands-on impressions later this week, but until then, don’t worry about any “AI android uprising” — these aren’t the artificial humans you’re looking for.

Facebook bans deepfake videos ahead of the 2020 election

With a presidential election campaign underway in the United States, Facebook announced Monday that it has banned manipulated videos and photos, often called deepfakes, from its platforms.

The policy change was announced through a blog post late Monday night, confirming an earlier report from The Washington Post. In the post, Facebook said that it would begin removing content that has been edited “in ways that aren’t apparent to an average person and would likely mislead someone” and are created by artificial intelligence or machine learning algorithms.

But the policy does not cover content that is parody or satire, or video that has been edited only to remove words or change the order in which they appear, the company said.

This policy change comes ahead of a House Energy and Commerce hearing on manipulated media that is scheduled for Wednesday. The author of Monday’s post — Monika Bickert, Facebook’s vice president of global policy management — is set to represent Facebook in front of lawmakers at this week’s hearing.

The deepfakes ban comes after an altered video of House Speaker Nancy Pelosi (D-CA) went viral on social media platforms last summer. This video was widely viewed on Facebook, and when reached for comment by The Verge at the time, the company said that it did not violate any of the company’s policies. And Monday’s ban against deepfakes doesn’t appear to cover videos like the viral Pelosi clip, either. That video wasn’t created by AI, but was likely edited using readily available software to slur her speech.

Other platforms were also caught in the crossfire following the Pelosi video, including Twitter. In November, Twitter began crafting its own deepfakes policy and requested feedback from users concerning the platform’s future rules. The company has yet to issue any new guidance on manipulated media.