Warner Bros. signs AI startup that claims to predict film success

Storied film company Warner Bros. has signed a deal with Cinelytic, an LA startup that uses machine learning to predict film success. A story from The Hollywood Reporter claims that Warner Bros. will use Cinelytic’s algorithms “to guide decision-making at the greenlight stage,” but a source at the studio told The Verge that the software would only be used to help with marketing and distribution decisions made by Warner Bros. Pictures International.

In an interview with THR, Cinelytic’s CEO Tobias Queisser stressed that AI was only an assistive tool. “Artificial intelligence sounds scary. But right now, an AI cannot make any creative decisions,” Queisser told the publication. “What it is good at is crunching numbers and breaking down huge data sets and showing patterns that would not be visible to humans. But for creative decision-making, you still need experience and gut instinct.”

Regardless of what Cinelytic’s technology is being used for, the deal is a step forward for Hollywood’s slow embrace of machine learning. As The Verge reported last year, Cinelytic is just one of a new crop of startups leveraging AI to forecast film performance, but the film world has historically been skeptical about their ability.

Andrea Scarso, a film investor and Cinelytic customer, told The Verge that the startup’s software hadn’t ever changed his mind, but “opens up a conversation about different approaches.” Said Scarso: “You can see how, sometimes, just one or two different elements around the same project could have a massive impact on the commercial performance.”

Cinelytic’s software lets customers play fantasy football with films. Users can model a pitch by inputting genre, budget, actors, and so on, and then see what happens when they tweak individual elements. Does replacing Tom Cruise with Keanu Reeves get better engagement with under-25s? Does it increase box office revenue in Europe? And so on.
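To make that what-if workflow concrete, here is a deliberately toy sketch of the idea in Python. Every feature, weight, and number below is invented for illustration; Cinelytic’s actual models and data are proprietary and certainly far more sophisticated.

```python
# Toy illustration of the "what-if" workflow: score a pitch, swap one
# element, and compare. All weights here are made up for demonstration.

ACTOR_SCORE = {"Tom Cruise": 0.9, "Keanu Reeves": 0.8}  # hypothetical star-power scores
GENRE_SCORE = {"action": 0.7, "drama": 0.4}             # hypothetical genre multipliers

def predict_revenue(pitch):
    """Crude linear score mapping pitch features to projected revenue ($M)."""
    base = pitch["budget_musd"] * 1.5                   # assume returns scale with budget
    star = ACTOR_SCORE.get(pitch["lead"], 0.5)          # unknown actors get a neutral score
    genre = GENRE_SCORE.get(pitch["genre"], 0.5)
    return round(base * star * genre, 1)

pitch = {"genre": "action", "budget_musd": 100, "lead": "Tom Cruise"}
baseline = predict_revenue(pitch)

# Tweak a single element and compare, as a Cinelytic user might.
pitch["lead"] = "Keanu Reeves"
alternative = predict_revenue(pitch)
print(baseline, alternative)  # prints: 94.5 84.0
```

The point is the workflow rather than the model: hold every input fixed, change one element, and compare the projected numbers.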

Many AI experts are skeptical about the ability of algorithms to make predictions in a field as messy as filmmaking. Because machine learning applications are trained on historical data, they tend to be conservative, focusing on patterns that led to past successes rather than predicting what will excite future audiences. Scientific studies also suggest algorithms only produce limited predictive gains, often repeating obvious insights (like “Scarlett Johansson is a bankable film star”) that can be discovered without AI.

But for those backing machine learning in filmmaking, the benefit is simply that such tools produce uncomplicated analysis faster than humans can. This can be especially useful at film festivals, notes THR, when studios can be forced into bidding wars for distribution rights, and have only a few hours to decide how much a film might be worth.

“We make tough decisions every day that affect what — and how — we produce and deliver films to theaters around the world, and the more precise our data is, the better we will be able to engage our audiences,” Warner Bros.’ senior vice president of distribution, Tonis Kiis, told THR.

Update January 8, 11:00AM ET: Story has been updated with additional information from a source at Warner Bros.

White House encourages hands-off approach to AI regulation

While experts worry about AI technologies like intrusive surveillance and autonomous weaponry, the US government is advocating a hands-off approach to AI’s regulation.

The White House today unveiled 10 principles that federal agencies should consider when devising laws and rules for the use of artificial intelligence in the private sector, but stressed that a key concern was limiting regulatory “overreach.”

The public will have 90 days to submit feedback on the plans, reports Wired, after which federal agencies will have 180 days to work out how to implement the principles.

Any regulation devised by agencies should aim to encourage qualities like “fairness, non-discrimination, openness, transparency, safety, and security,” says the Office of Science and Technology Policy (OSTP). But the introduction of any new rules should also be preceded by “risk assessment and cost-benefit analyses,” and must incorporate “scientific evidence and feedback from the American public.”

The OSTP urged other nations to follow its lead, an uncertain prospect at a time when major international bodies like the Organisation for Economic Co-operation and Development (OECD), G7, and EU are considering more regulation for AI.

“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach,” said the OSTP. “The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

Michael Kratsios, chief technology officer of the United States, will formally announce the principles at CES on Wednesday. In a call with reporters yesterday, an administration official said that the US was already over-regulating AI technologies at “state and local levels.”

Experts are worried about the spread of AI surveillance using technology like facial recognition.
Photo by Rolf Vennenbernd / picture alliance via Getty Images

“[W]hen particular states and localities make decisions like banning facial recognition across the board, you end up in tricky situations where civil servants may be breaking the law when they try to unlock their government-issued phone,” said an administration official, according to a report from VentureBeat.

The announcement of these 10 principles follows an agenda set by the White House in February 2019 when it launched the “American AI Initiative.” But at a time when experts warn about the damaging effects of unaccountable AI systems in government domains like housing and health care, and when the rise of facial recognition systems seems to be eroding civil liberties, many may question the wisdom of the White House’s approach.

And while the US follows a laissez-faire attitude domestically, it is also introducing new restrictions on the international stage. This Monday a new export ban came into force, forbidding US companies from selling software abroad that uses AI to analyze satellite imagery without a license. The ban is widely seen as America’s attempt to counter the rise of tech rivals like China, which is quickly catching up to the US in its development of AI.

Samsung’s ‘artificial humans’ are just digital avatars

Samsung subsidiary STAR Labs has officially unveiled its mysterious “artificial human” project, Neon. As far as we can tell, though, there’s no mystery here at all. Neon is just digital avatars — computer-animated human likenesses about as deserving of the “artificial human” moniker as Siri or the Tupac hologram.

In fairness to STAR Labs, the company does seem to be trying something new with its avatars. But exactly what it’s doing we can’t tell, as its first official press release today fails to explain the company’s underlying tech and instead relies solely on jargon and hype.

“Neon is like a new kind of life,” says STAR Labs CEO Pranav Mistry in the release. “There are millions of species on our planet and we hope to add one more.” (Because nothing says “grounded and reasonable” like a tech executive comparing his work to the creation of life.)

Even more annoyingly, it seems that the teaser images and leaked videos of the Neon avatars we’ve seen so far are fake. As the company explains (emphasis ours): “Scenarios shown at our CES Booth and in our promotional content are fictionalized and simulated for illustrative purposes only.” So really we have no idea what Neon’s avatars actually look like.

Sorting through the chaff in STAR Labs’ press release today, here’s what we know for sure.

Each Neon avatar is “computationally generated” and will hold conversations with users while displaying “emotions and intelligence,” says the company. Their likenesses are modeled after real humans, but have newly generated “expressions, dialogs, and emotion.” Each avatar (known individually as “NEONs”) can be customized for different tasks, and is able to respond to queries “with latency of less than a few milliseconds.” They’re not intended to be just visual skins for AI assistants, but put to more varied uses instead:

“In the near future, one will be able to license or subscribe to a NEON as a service representative, a financial advisor, a healthcare provider, or a concierge. Over time, NEONs will work as TV anchors, spokespeople, or movie actors; or they can simply be companions and friends.”

So far, so good. It’s no secret that CGI humans have become more lifelike in recent years, and are already being used in some of the scenarios outlined above. If STAR Labs can make these avatars more realistic, then they might be adopted more widely. Fine.

But if you’ve ever interacted with, say, a virtual greeter at an airport or museum, you’ll know how paper-thin the “humanity” of these avatars is. At best, they’re Siri or Alexa with a CGI face, and it’s not clear if STAR Labs has created anything more convincing.

STAR Labs’ “Core R3” technology. Strong buzzwords, weak explanations.
Image: STAR Labs

In its PR, the company veers into strange territory when describing the avatars’ underlying technology. It says it’s using proprietary software called “Core R3” to create the avatars, and that its approach is “fundamentally different from deepfake or other facial reanimation techniques.” But it doesn’t say how the software does work, and instead relies on wishy-washy assurances that Core R3 “creates new realities.” We’d much rather know if the company is using, say, high-resolution video captures pinned onto 3D models or AI to generate facial movements — whatever the case may be.

We’ve reached out to STAR Labs with our questions, but it seems we’ll have to wait to see the technology in person to get a better understanding. The firm is offering private demos of its avatars at CES this week, and The Verge is scheduled to check out the technology.

We look forward to giving you our hands-on impressions later this week, but until then, don’t worry about any “AI android uprising” — these aren’t the artificial humans you’re looking for.

Samsung has made an invisible AI-powered keyboard for your phone

We all work from our phones more than ever, but typing out emails and long messages is still slow compared to using a keyboard. Samsung has a solution: use AI and your phone’s camera to track your hands as they type on an invisible keyboard right in front of you.

It’s called SelfieType, and although it’s only a demo at the moment, it’s a very intriguing one. It’s been developed under the aegis of Samsung’s C-Lab program, a sort of in-house incubator for weird tech ideas, some of which eventually become commercial products.

We don’t know what plans Samsung has for SelfieType, but if it works as smoothly as this promo suggests, it could be a powerful tool. Samsung is demoing SelfieType at CES in Las Vegas this week, so we’ll try to get a hands-on (if that term makes sense in this context).

The software uses machine learning to track the movement of your hands through your phone’s camera.

The proof will definitely be in the typing, though. “Invisible keyboards” already exist using laser projection, but they’re a novelty rather than a serious tool. They tend to be slow and inaccurate, and if you’re carrying around a little brick of a laser projector you may as well go the whole hog and swap that for a decent Bluetooth keyboard.

Because SelfieType works using your phone’s camera, it’ll at least eliminate the problem of carrying around accessories. But using machine vision to track the individual movement of your fingers sounds tricky, and you’ll presumably have to keep your hands in one place for everything to work correctly — a tough ask without the physical feedback of a keyboard.
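Setting aside the hard computer-vision problem of actually tracking fingertips, the final step such a system would need (mapping a detected tap position to a key on a virtual layout) is simple to sketch. This is purely hypothetical: Samsung hasn’t described how SelfieType works, and the layout and key dimensions below are assumptions.

```python
# Hypothetical last stage of an invisible-keyboard pipeline: classify a
# fingertip "tap", given in normalized camera coordinates, as a key on
# a virtual QWERTY grid. Real hand tracking is omitted entirely.

ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_W, KEY_H = 0.1, 0.2  # assumed key width/height in normalized space

def key_at(x, y):
    """Return the key under a normalized (x, y) fingertip tap, or None."""
    row = int(y // KEY_H)
    col = int(x // KEY_W)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None  # tap landed outside the virtual keyboard

print(key_at(0.05, 0.1))  # prints: q
```

A real system would also need to decide *when* a tap happened (distinguishing a keypress from hovering fingers), which is likely the harder problem.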

Still, it’s a solution we’ve not seen before, and one we’re curious to try. Over the years, engineers have attempted to replace the tried-and-tested keyboard with many weird designs. Perhaps, with the help of AI, we can finally crack this problem.

US announces AI software export restrictions

The US will impose new restrictions on the export of certain AI programs overseas, including to rival China.

The ban, which comes into force on Monday, is the first to be applied under a 2018 law known as the Export Control Reform Act or ECRA. This requires the government to examine how it can restrict the export of “emerging” technologies “essential to the national security of the United States” — including AI. News of the ban was first reported by Reuters.

When ECRA was announced in 2018, some in the tech industry feared it would harm the field of artificial intelligence, which benefits greatly from the exchange of research and commercial programs across borders. Although the US is generally considered to be the world leader in AI, China is a strong second place and gaining fast.

But the new export ban is extremely narrow. It applies only to software that uses neural networks (a key component in machine learning) to discover “points of interest” in geospatial imagery: things like houses or vehicles. The ruling, posted by the Bureau of Industry and Security, notes that the restriction only applies to software with a graphical user interface — a feature that makes programs easier for non-technical users to operate.

Reuters reports that companies will have to apply for licenses to export such software, except when it is being sold to Canada.

The US has previously imposed other trade restrictions affecting the AI world, including a ban on American firms doing business with Chinese companies that produce software and hardware that powers AI surveillance.

AI-powered software, like this program from Descartes Labs, can automatically tag objects and areas of interest.
Credit: Descartes Labs

Using machine learning to process geospatial imagery is an extremely common practice. Satellites that photograph the Earth from space produce huge amounts of data, which machine learning can quickly sort to flag interesting images for human overseers.

Such programs are useful to many customers. Environmentalists can use the technology to monitor the spread of wildfires, for example, while financial analysts can use it to track the movements of cargo ships out of a port, creating a proxy metric for trading volume.
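The cargo-ship example can be sketched in a few lines. The detections below are made up; in a real pipeline they would come from a neural-network object detector run over satellite tiles, as described above.

```python
# Illustrative proxy-metric sketch: count confident "ship" detections
# per satellite pass to build a shipping-volume time series.
# The detector output here is fabricated for demonstration.

DETECTIONS = {
    "2020-01-06": [("ship", 0.92), ("ship", 0.55), ("crane", 0.88)],
    "2020-01-07": [("ship", 0.81), ("ship", 0.77), ("ship", 0.40)],
}

def ship_counts(daily, threshold=0.6):
    """Count 'ship' detections at or above the confidence threshold, per day."""
    return {
        day: sum(1 for cls, conf in dets if cls == "ship" and conf >= threshold)
        for day, dets in daily.items()
    }

print(ship_counts(DETECTIONS))  # prints: {'2020-01-06': 1, '2020-01-07': 2}
```

An analyst would then track that count over weeks or months as a rough stand-in for trading volume.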

But such software is of growing importance to military intelligence, too. The US, for example, is developing an AI analysis tool named Sentinel, which is supposed to highlight “anomalies” in satellite imagery. It might flag troop and missile movements, for example, or suggest areas that human analysts should examine in detail.

Regardless of the importance of this software, it’s unlikely an export ban will have much of an effect on China or other rivals’ development of these tools. Although certain programs may be restricted, it’s often the case that the underlying research is freely available online, allowing engineers to recreate any software for themselves.

Reuters notes that although the restriction will only affect US exports, American authorities could try to encourage other countries to follow suit, as they have with restrictions on Huawei’s 5G technology. Future export bans could also affect more types of AI software.

Samsung’s ‘artificial human’ project definitely looks like a digital avatar

On Friday we wrote about Samsung’s mysterious “artificial human” project Neon, speculating that the company was building realistic human avatars that could be used for entertainment and business purposes, acting as guides, receptionists, and more.

Now, a tweet from the project’s lead and some leaked videos pretty much confirm this — although they don’t give us nearly enough information to judge how impressive Neon is.

The lead of Neon, computer-human interaction researcher Pranav Mistry, tweeted the image below, apparently showing one of the project’s avatars. Mistry says the company’s “Core R3” technology can now “autonomously create new expressions, new movements, new dialog (even in Hindi), completely different from the original captured data.”

Unlisted videos found in the source code of Neon’s homepage revealed even more of these same human figures. The videos were originally posted on Reddit, but have now been taken down. You can see them in the YouTube video below, though, and they do look extremely lifelike. In fact, they look just like videos — not computer-generated graphics.

And that’s the key question we have about Neon at this point: to what degree are these avatars computer-generated? Or are they based on high-fidelity video capture that’s animated after the fact? And, even more importantly, how good are these avatars at talking and emoting like humans? A big claim associated with Neon is that these avatars can be mistaken for real humans — but that would be a huge leap forward over current technology.

In a recent interview, Mistry made clear he thinks “digital humans” will be a major technology in the 2020s. “Movies are full of examples where AI is brought into our world,” Mistry told LiveMint. “In Blade Runner 2049, Officer K develops a relationship with his AI hologram companion, Joi. While films may disrupt our sense of reality, ‘virtual humans’ or ‘digital humans’ will be reality. A digital human could extend its role to become a part of our everyday lives: a virtual news anchor, virtual receptionist, or even an AI-generated film star.”

But we’ll have to wait and see if Neon’s avatars can live up to these expectations. So far, the company is mainly offering us hype. (Just look at the red “ALIVE” text in the top right corner of the images Mistry tweeted — it’s a bit hammy.) Whatever the case, Neon will be showcased at CES in less than 48 hours, and we’ll be there to report on what we see and hear.

Samsung’s next flagship Galaxy phone will probably be announced February 11

If you’re eagerly awaiting Samsung’s next flagship phone, the Galaxy S11, then go ahead and mark February 11 in your diary. According to a newly leaked promo, this looks to be the date for Samsung’s next “Unpacked” press event, which last year was the occasion for the company to unveil the Galaxy S10.

The promo was uploaded as an unlisted video on Samsung’s official Vimeo account, where it was first spotted by Twitter user @water8192 and downloaded by Max Weinbach of XDA Developers.

The 15-second clip doesn’t give much away. There are just two mysterious oblong shapes pressing through a sheet of material in place of the letter A’s in the word “Galaxy.” One shape is more rectangular and the other more square, perhaps suggesting that Samsung will unveil both a Galaxy S11 and a rumored Galaxy Fold 2, but that’s just a guess for now.

More definite is the date: February 11, 2020. The event will also be live-streamed at www.samsung.com, and — if the date is confirmed — you can expect The Verge to be there as well, providing photos, videos, and hands-on impressions of any new devices.

We haven’t heard a lot about what to expect from Samsung’s next flagship. Leaked renders suggest it’ll have a rectangular camera module on the back (which matches the shape that appears on the left in the promo); and that it’ll incorporate a 120Hz refresh rate display, a 108-megapixel sensor for its main camera, and a Qualcomm Snapdragon 865. There’s also a chance it’ll be called the Samsung Galaxy S20, rather than the S11.

We know even less about a possible Galaxy Fold 2, the successor to the company’s first, and troublesome, foldable smartphone. There have been some leaked images of what could be the Fold 2 with a clamshell design, and signs point to a lower, more mass-market price (under $1,000) and a new, more resilient flexible glass display, but that’s it for now.

If the date for this Unpacked event is confirmed, we should know a whole lot more come February 11.

What the hell is Samsung’s ‘artificial human’ project?

For the past few weeks, a Samsung subsidiary named STAR Labs has been teasing what it calls “Neon” — an “artificial human” that will be unveiled at CES 2020 next week.

But what exactly is Neon, and what is an artificial human? So far, we have very few official details, but most signs point toward the release of some sort of digital avatar technology: a realistic CGI human that users can interact with. It could be used for entertainment purposes or by businesses to create digital receptionists, customer service, and so on.

Whatever it is, though, it’s being hyped to death before it’s even been announced.

Neon has a social media presence a mile wide and just a few GIFs deep. There are Twitter, Facebook, and Instagram accounts for Neon, all sharing the same vague and extremely futuristic-looking images. Posts pose questions like “Have you ever met an ‘artificial’?” and tease technology called “Core R3,” which stands for “reality, realtime, responsive.” They also make clear that, whatever Neon is, it has nothing to do with Samsung’s AI assistant Bixby.

The project is led by Pranav Mistry, a human-computer interaction researcher and former senior vice president at Samsung Electronics. According to his LinkedIn profile, Mistry is now CEO of STAR Labs (which stands for Samsung Technology & Advanced Research) and new company Neon. On his Twitter page, he’s been stoking hype for the project, retweeting appreciative comments from people apparently given early previews. One describes Neon as “Artifical Intelligence that will make you wonder which one of you is real.”

Unofficial clues also point to digital avatar tech. US trademarks for “NEON Artificial Human,” “NEON.Life,” and “Core R3” have been registered by Samsung Research America (and spotted by LetsGoDigital). They describe Samsung NEON as offering:

Entertainment services, namely, production of special effects including model-making services, computer-generated imagery and computer-generated graphics for the production of motion pictures, videos and movie trailers; augmented reality video production; creating computer generated characters; design and development of computer-modeled versions of human beings using computer animation for use in movies, television, internet and other applications; design and development of software for virtual characters; creating for others custom computer-generated imagery, animations, simulations and models used for entertainment.

A job listing for STAR Labs looking for a senior media streaming engineer focuses on similar themes. It says the company is undertaking “independent initiatives to create new end-to-end businesses and expand growth areas for Samsung” and that employees are “building new immersive and intelligent services that are making science fiction a reality.”

In a recent interview, Mistry describes “digital humans” as a key technology for the 2020s. “Movies are full of examples where AI is brought into our world,” Mistry told LiveMint. “In Blade Runner 2049, Officer K develops a relationship with his AI hologram companion, Joi. While films may disrupt our sense of reality, ‘virtual humans’ or ‘digital humans’ will be reality. A digital human could extend its role to become a part of our everyday lives: a virtual news anchor, virtual receptionist, or even an AI-generated film star.”

None of this is that groundbreaking, though.

Thanks to advances in AI and CGI, digital human avatars have certainly become increasingly lifelike, but they’re functionally very limited. In 2018, China’s state-run press agency launched what it called an “AI news anchor” that can read the headlines. Films like Star Wars have resurrected dead actors using CGI, and AI-generated Instagram influencers are also a thing. But all of these examples are of preprogrammed experiences.

While some efforts have been made to integrate avatars with chatbot technology, the end results are not too impressive. Conversation is slow, limited, and stilted, and none of these bots could be mistaken for humans. Last month, for example, New Age author Deepak Chopra was turned into a digital avatar. But “digital Deepak” looked more like a faction leader from the Civilization video game series than an artificial human.

It’s possible that Samsung has made some leaps forward in this regard and that Neon’s technology will be truly game-changing. But let’s wait and see what the company has to offer, and ignore the ample hype. The company will be announcing more early next week.

Two new Pokémon games launch on Facebook Gaming

A pair of new Pokémon games have launched exclusively on Facebook Gaming: Pokémon Tower Battle and Pokémon Medallion Battle.

Pokémon Tower Battle is available worldwide, and pits two players against one another. You take it in turns to drop pokémon out of the sky like Tetris blocks, trying to build a stable tower. If you knock the tower over or your pokémon tumbles off the platform, you lose.

We tried the game ourselves and it seems pretty mindless, though the press release promises some new features as you play: “As players discover, catch and level-up rare pokémon, they can compete in real-time against friends or across a global leaderboard. It might seem like a simple physics-based puzzler at first, but the strategic choices in where and how players stack pokémon will determine the true Tower Battle masters.”

Pokémon Medallion Battle, by comparison, sounds like it has a little more depth, but it’s only available to play in the Philippines right now.

It’s a digital card battle game that lets you collect pokémon in the form of medallions. You can level them up, win gym badges, and try to fill out your Pokédex, with new pokémon being released every month, according to Variety. Judging by the screenshots below, the game uses the usual element-based combat system, and even offers some social features.

Both titles were built using Facebook’s Instant Games platform and come as the company makes more of an effort to attract gamers. Earlier this year, it launched a dedicated gaming tab, and Facebook now says that more than 700 million of its users play games, watch gaming videos, or take part in gaming groups each month. The company also recently acquired Spanish cloud gaming company PlayGiga.

In a press release, Pokémon Company CEO Tsunekazu Ishihara welcomed the launch of the new titles: “Launching these games through Facebook will allow people all over the world to experience Pokémon in digital form, and we are especially thrilled to collaborate with Facebook Gaming in enabling new audiences to enjoy Pokémon games online.”

Popular chat app ToTok is reportedly secret United Arab Emirates spying tool

A report from The New York Times has revealed that messaging app ToTok, popular in the United Arab Emirates, is in fact a government spy tool, created for the benefit of UAE intelligence officials and used to track citizens’ conversations and movements.

ToTok launched earlier this year and has been downloaded by millions in the UAE, a nation where Western messaging apps like WhatsApp and Skype are partially blocked. It promised “fast, free, and secure” messages and calls, and attracted users across the Middle East and beyond, even becoming one of the most downloaded social apps in the US last week.

But, citing classified briefings from US intelligence officials and its own analysis, the NYT reports that ToTok is really a way for the UAE government to spy directly on its people. Citizens who used the app were sharing messages, pictures and videos, and even their location (supposedly being tracked to provide weather updates) with Emirati intelligence.

ToTok offered users “fast and secure messaging.”
Image via The New York Times

The Times notes that this is something of a new development in the history of digital spying by authoritarian regimes. Although many governments routinely hack citizens’ phones, not many set up an ostensibly legitimate app and simply ask for access to their data.

“There is a beauty in this approach,” security researcher Patrick Wardle, who conducted an independent forensic analysis of ToTok, told the Times. “You don’t need to hack people to spy on them if you can get people to willingly download this app to their phone. By uploading contacts, video chats, location, what more intelligence do you need?”

The Times reports that the company that runs ToTok, Breej Holding, is most likely a front for Abu Dhabi-based cybersecurity firm DarkMatter. The app is also connected to UAE data-mining firm Pax AI, which shares offices with the Emirates’ signals intelligence agency.

Breej Holding, DarkMatter, and the UAE government have yet to comment on the Times report, but both Google and Apple have removed ToTok from the Play Store and App Store. The FBI declined to comment on the app specifically, but a spokesperson for the bureau told the Times: “[W]hile the FBI does not comment on specific apps, we always want to make sure to make users aware of the potential risks and vulnerabilities that these mechanisms can pose.”