Clayton Christensen, who coined the term ‘disruptive innovation,’ dies at 67

Clayton Christensen, the business scholar who coined the term “disruptive innovation,” died at a Boston hospital this week, the Deseret News reports. He was 67. You may not immediately recognize his name, but the tech industry — and every resulting industry — is built on the framework of technology disruption and innovation that Christensen devised.

The crux of Christensen’s theory is that big, successful companies that neglect potential customers at the lower end of their markets (mainframe computers, in his famous example) are ripe for disruption from smaller, more efficient, more nimble competitors that can do almost as good a job more cheaply (like personal computers). One need look no further than the biggest names in Silicon Valley to find evidence of successful disrupters, from Napster to Amazon to Uber to Airbnb and so on.

And scores of notable tech leaders have for years cited Christensen’s 1997 book The Innovator’s Dilemma as a major influence. It’s the only business book on the late Steve Jobs’ must-read list; Netflix CEO Reed Hastings read it with his executive team when he was developing the idea for his company; and the late Andy Grove, CEO of Intel, said the book and Christensen’s theory were responsible for that company’s turnaround. After summoning Christensen to his office to explain why he thought Intel was going to get killed, Grove was able to grok what to do, Christensen recalled:

They made the Celeron Processor. They blew Cyrix and AMD out of the water, and the Celeron became the highest-volume product in the company. The book came out in 1997, and the next year Grove gave the keynote at the annual conference for the Academy of Management. He holds up my book and basically says, “I don’t mean to be rude, but there’s nothing any of you have published that’s of use to me except this.”

As Jill Lepore wrote (in a piece critical of Christensen’s theory) for The New Yorker in 2014, “Ever since ‘The Innovator’s Dilemma,’ everyone is either disrupting or being disrupted. There are disruption consultants, disruption conferences, and disruption seminars.” While his initial theory suggested that it was extremely difficult to recover from such disruption, it was possible, and there are examples, Christensen later wrote:

Develop a disruption of your own before it’s too late to reap the rewards of participation in new, high-growth markets—as Procter & Gamble did with Swiffer, Dow Corning with Xiameter, and Apple with the iPod, iTunes, the iPad, and (most spectacularly) the iPhone.

He later refined his thinking on disruption, introducing the concept of “jobs to be done,” which stressed focusing on customers’ needs, and acknowledged that disruption was a great way to start a company, but not a good way to grow one. “It’s not a manual for how to grow or how to predict what customers want. [Jobs to be done] is the second side of the same coin: How can I be sure that competitors won’t kill me and how can I be sure customers will want to buy the product? So it’s actually a very important complement to disruption.”

Anyone who’s listened to a tech exec on a podcast in the past five years has heard someone mention “jobs to be done” — it has gone beyond management framework to conventional wisdom.

Christensen was born in the Salt Lake City area, received a bachelor’s degree from Brigham Young University, a master’s from Oxford University, and an MBA and a doctorate from Harvard, where he later became a professor. He started the Clayton Christensen Institute for Disruptive Innovation, and founded venture capital firm Rose Park Advisors.

He wrote numerous books and hundreds of articles, and while he’s best known for his writing on disruption, not all of his books were specific to business. A devout Mormon, Christensen intertwined his faith with his thinking on how companies and people should conduct themselves. His 2012 TEDx talk, “How Will You Measure Your Life?” was based on an address he gave to Harvard Business School’s 2010 graduating class while he was battling cancer. Christensen took the principles of his business theories and used them as a basis for how to achieve personal happiness.

“When I have my interview with God at the end of my life, he’s not going to ask me to show how high I went in anybody’s org chart or how much money I left behind in the bank when I died,” Christensen said. “It’s actually really important you succeed at what you’re succeeding at, but that isn’t going to be the measure of life.”

The patent that would have cost Nintendo $10 million is worthless, judge rules

A Dallas company that successfully sued Nintendo for $10 million for patent infringement may not have invented anything in the first place, a federal court has ruled. Although the novelty of Wii Tennis and other Wii Remote games has long since faded, Nintendo faced a slew of patent lawsuits challenging its motion-sensing technology after it introduced the Wii in 2006, and in 2017 a jury actually ruled in favor of one of those plaintiffs, iLife Technologies.

But federal Judge Barbara M. G. Lynn ruled Friday that iLife’s patent is not valid. If you’re going to patent something, you have to explain what you’ve actually invented, what’s new that competitors could rip off. Judge Lynn seems to suggest in her ruling that what iLife’s patent application describes isn’t unique or new, but more of an abstract idea.

In 2015, iLife’s then-CEO described technology that could monitor babies for sudden infant death syndrome or prevent falls among the elderly. But Judge Lynn’s ruling says that the key patent’s primary claim basically boils down to “we use an accelerometer and processor to transmit motion sensing data somehow,” with no indication that iLife has invented any new way to do that better than before.

“Overall, claim 1 encompasses a sensor that senses data, a processor that processes data, and a communications device that communicates data, and no further inventive concept is recited to transform the abstract idea into a patent-eligible invention,” Lynn writes in her ruling. It’s not clear whether iLife Technologies even exists anymore; as Ars Technica notes, its website has been taken down.

“Nintendo has a long history of developing new and unique products, and we are pleased that, after many years of litigation, the court agreed with Nintendo,” Nintendo of America spokesman Ajay Singh said in a statement. “We will continue to vigorously defend our products against companies seeking to profit off of technology they did not invent.”

Correction: The judge in the case is named Barbara M. G. Lynn; an earlier version of this post misspelled her name as Glynn.

It’s not just you: Google added annoying icons to search on desktop

Google added tiny favicons to its search results this week for some reason, creating more clutter in what used to be a clean interface, and seemingly without actually improving the results or the user experience. The company says it’s part of a plan to make clearer where information is coming from, but how?

To give you an idea of how minimal the change is, here’s what it looked like when Google made the same tweak last year to the browsing experience on phones:

In my Chrome desktop browser, it feels like an aggravating, unnecessary change that doesn’t actually help the user determine how good, bad, or reputable an actual search result might be. Yes, ads are still clearly marked with the word “ad,” which is a good thing. But do I need to see Best Buy’s logo or AT&T’s blue circle when I search for “Samsung Fold” to know they’re trying to sell me something?

Search results for “Galaxy Fold” are not clearer for having the favicons

The company tweeted that the change to desktop results was rolling out this week, “helping searchers better understand where information is coming from, more easily scan results & decide what to explore.” But though the logos have been visible in Google’s mobile search results since last year, Google’s statement doesn’t address how useful (or not) the favicons have been for mobile users.

When Google first launched, its sparse, almost blank search page and minimalist results were an extremely welcome change, compared to the detritus on other search home pages at the time (which persists on sites like Yahoo). Adding favicons makes Google’s search results look a little cartoonish, and if we think Facebook users who can’t tell a reputable news source from their racist uncle’s favorite blog are going to be assisted by tiny pictures on Google, well, we’re likely to be disappointed.

Google does often make changes to search that actually do improve user experience or results, though. In the past few months, Google changed its search algorithm so it doesn’t see a search query as a “bag of words,” improved its results to prioritize reputable news sources, and even added augmented reality results to searches.

If you’re intrigued by the new logos in your search results, Google provided instructions on how site owners can change or add the favicon that appears next to their results. Lifehacker also provided instructions on how to apply filters that hide the favicons and revert to how the search results used to look. You can decide which how-to is the more useful.

Google parent Alphabet is now a $1 trillion company

Google’s parent company Alphabet ($GOOG) is now the fourth US company to hit a market cap of $1 trillion. It hit the number just before markets closed on Thursday, ending the day’s trading at $1,451.70 per share, up 0.87 percent.

Google CEO Sundar Pichai took over as CEO of Alphabet as well in December, after Google co-founders Larry Page and Sergey Brin relinquished control of the company. It’s been a bumpy couple of years at the company, marked by allegations of sexual misconduct by executives and a 20,000-person Google Walkout employee protest.

Alphabet is slated to report fourth-quarter earnings on February 3rd, and Wall Street analysts are expecting it to report revenue of $46.9 billion, a year-over-year uptick of almost 20 percent.

Apple was the first US company to hit a $1 trillion cap in 2018, followed later that year by Amazon (which has since dropped below that figure), and Microsoft hit the $1 trillion mark in April 2019. The first company ever to hit a $1 trillion market cap (briefly) was PetroChina in 2007. And late last year, Saudi Aramco became the first $2 trillion company shortly after its debut on the Riyadh stock exchange in December.

Of course, a trillion-dollar valuation doesn’t tell the complete story of a company’s overall economic health, and it isn’t used in any meaningful way by investors; it’s mostly a cool-looking vanity metric. Still, the trillion-dollar companies were among the most profitable in the world last year, according to Fortune, with Saudi Aramco at the top of the list, Apple second, and Alphabet seventh.

The next company expected to hit the $1 trillion market cap is Facebook, which, as of the closing bell Thursday, was at about $620 billion.

Twitter allowed ad targeting based on ‘neo-Nazi’ keyword

In the latest “keyword targeting gone awry” experiment, the BBC was able to use terms like “neo-Nazi” and “white supremacist” in a Twitter ad campaign, despite the social media platform’s policy that advertisers “may not select keywords that target sensitive categories.”

According to Twitter’s policies, those sensitive categories include genetic or biometric data, health, commission of a crime, sex life, religious affiliation or beliefs, and racial or ethnic origin, among others.

The BBC ran an ad and says it was able to target users who were interested in the words “white supremacist,” “transphobic,” and “anti-gay,” among others. It wasn’t clear whether the news organization was reaching users who were interested in those terms (such as for research) or people who identified as such, only noting that “Twitter allows ads to be directed at users who have posted about or searched for specific topics.”

The ad, which cost £3.84 (about $5), was live for only a couple of hours, the BBC reports, during which time 37 people saw it and two people clicked on it. A second version of the ad was targeted at users aged 13 to 24 using “anorexia,” “anorexic,” “bulimia,” and “bulimic” as keywords. It was seen by 255 users and got 14 clicks before the BBC took it down. But according to Twitter’s tool, it had the potential to reach 20,000 people.

In an emailed statement to The Verge, Twitter seems to suggest the words tested by the BBC may not have been on its sensitive words list:

Twitter has specific policies related to keyword targeting, which exist to protect the public conversation. Preventative measures include banning certain sensitive or discriminatory terms, which we update on a continuous basis. In this instance, some of these terms were permitted for targeting purposes. This was an error. We’re very sorry this happened and as soon as we were made aware of the issue, we rectified it.

The company says it continues to enforce its ad policies, “including restricting the promotion of content in a wide range of areas, including inappropriate content targeting minors.”

Ad targeting on social media platforms has come under increased scrutiny, raising questions about the potential for discrimination. ProPublica found that it was possible to run ads on Facebook that essentially discriminated against groups protected by federal law. In 2018, The Guardian found Facebook ads could be used to target users based on sensitive topics, a practice that would violate privacy laws that have since been implemented in Europe.

Microsoft tries to improve child abuse detection by opening its Xbox chat tool to other companies

Microsoft has released a new tool for identifying child predators who groom children for abuse in online chats. Project Artemis, based on a technique Microsoft has been using on the Xbox, will now be made available to other online companies with chat functions. It comes at a time when multiple platforms are dealing with child predators targeting kids for sexual abuse by striking up conversations in chat windows.

Artemis works by recognizing specific words and speech patterns and flagging suspicious messages for review by a human moderator. The moderator then determines whether to escalate the situation by contacting police or other law enforcement officials. If a moderator finds a request for child sexual exploitation or images of child abuse, the National Center for Missing and Exploited Children will be notified for further action.
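Microsoft hasn’t published how Artemis actually scores conversations, so any concrete example is necessarily speculative. As a rough illustration of the pipeline described above (pattern matching, risk accumulation, escalation to a human reviewer), here is a minimal sketch in Python; the patterns, weights, and threshold are invented placeholders, not Microsoft’s real word lists or logic.

```python
import re
from dataclasses import dataclass

# Hypothetical grooming-risk patterns and weights, for illustration only;
# the real Artemis lists and scoring are not public.
RISK_PATTERNS = {
    r"\bhow old are you\b": 1,
    r"\b(don'?t|do not) tell (your )?(mom|dad|parents)\b": 3,
    r"\bsend (me )?(a )?(photo|pic|picture)\b": 2,
    r"\bwhat school do you go to\b": 2,
}

@dataclass
class Flag:
    message: str
    score: int

def score_message(text: str) -> int:
    """Sum the weights of every risk pattern found in one chat message."""
    lower = text.lower()
    return sum(w for pat, w in RISK_PATTERNS.items() if re.search(pat, lower))

def review_conversation(messages, threshold=4):
    """Accumulate risk across a conversation; once the running score crosses
    the threshold, return the flagged messages for a human moderator."""
    total, flagged = 0, []
    for msg in messages:
        s = score_message(msg)
        if s:
            total += s
            flagged.append(Flag(msg, s))
        if total >= threshold:
            return flagged  # hand off to a human review queue
    return None  # below threshold: no escalation

if __name__ == "__main__":
    chat = [
        "hey, how old are you?",
        "cool. don't tell your parents we talk, ok?",
        "send me a pic",
    ]
    print(review_conversation(chat))
```

A real system would rely on far richer signals than keyword weights, but the shape is the same: score messages, accumulate risk, and hand anything suspicious to a person rather than acting automatically.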

“Sometimes we yell at the platforms — and there is abuse on every platform that has online chat — but we should applaud them for putting mechanisms in place,” says Julie Cordua, CEO of nonprofit tech organization Thorn, which works to prevent online sexual abuse of children. “If someone says, ‘oh we don’t have abuse’ I’ll say to them, ‘well, are you looking?’”

In December, The New York Times found that online chat platforms were fertile “hunting grounds” for child predators who groom their victims by first befriending them and then insinuating themselves into a child’s life, both online and off. Most major platforms are dealing with some measure of abuse by child predators, including Microsoft’s Xbox Live. In 2017, as the Times noted, a man was sentenced to 15 years in prison for threatening children with rape and murder over Xbox Live chat.

Detection of online child sexual abuse and policies for handling it can vary greatly from company to company, with many of the companies involved wary of potential privacy violations, the Times reported. In 2018, Facebook announced a system to catch predators that looks at whether someone quickly contacts many children and how often they’re blocked. But Facebook also has access to much more data about its users than other platforms might.
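Facebook hasn’t detailed how that system weighs those signals, but the idea (behavioral metadata rather than message content) is easy to sketch. The field names and thresholds below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Hypothetical per-account counters; Facebook's real signals and
    # thresholds are not public.
    minors_contacted_last_day: int
    total_contacts_last_day: int
    times_blocked_last_month: int

def looks_suspicious(a: AccountActivity) -> bool:
    """Flag accounts that rapidly message many minors and get blocked often."""
    if a.total_contacts_last_day == 0:
        return False
    minor_share = a.minors_contacted_last_day / a.total_contacts_last_day
    return (
        a.minors_contacted_last_day >= 10    # high contact velocity toward minors
        and minor_share > 0.8                # contacts are overwhelmingly minors
        and a.times_blocked_last_month >= 3  # repeatedly blocked by recipients
    )

print(looks_suspicious(AccountActivity(12, 13, 5)))  # True
```

Even this toy version shows the trade-off: behavioral signals like these require a platform to know a lot about its users, which is data Facebook has and many smaller chat services don’t.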

Microsoft’s tool is important, according to Thorn, because it’s available to any company using chat and helps to set an industry standard for what detection and monitoring of predators should look like, helping with the development of future prevention tools. Chats are difficult to monitor for potential child abuse because there can be so much nuance in a conversation, Cordua says.

Child predators can lurk in online chat rooms to find victims much like they would offline, but with much more immediate access, says Elizabeth Jeglic, a professor of psychology at John Jay College of Criminal Justice in New York who has written extensively about protecting children from online sexual abuse, in particular, the often subtle practice of grooming. “Within 30 minutes they may be talking sexually with a child,” she says. “In person it’s harder to get access to a child, but online a predator is able to go in, test the waters and if it doesn’t work, go ahead and move on to the next victim.”

It doesn’t stop with one platform, Cordua adds. “They’ll try to isolate the child and will follow them across multiple platforms, so they can have multiple exploitation points,” she says. A predator may ask a child for a photo, then ratchet up the demands to videos, increasing the level of sexual content. “The child is racked with guilt and fear, and this is why the predator goes across platforms: he can say ‘oh I know all your friends on Facebook, if you don’t send me a video I’ll send that first photo to everyone at your junior high.’”

Artemis has been in development for more than 14 months, Microsoft says, beginning in November 2018 at a Microsoft “360 Cross-Industry Hackathon,” which was co-sponsored by two children’s protection groups, the WePROTECT Global Alliance and the Child Dignity Alliance. A team including Roblox, Kik, Thorn, and The Meet Group worked with Microsoft on the project. It was led by Hany Farid, who developed the PhotoDNA tool for detecting and reporting images of child sexual exploitation online.

Some of the details about how the Artemis tool will work in practice are unclear, however, and are likely to vary depending on which platform is using it. It’s not stated whether Artemis would work with chat programs that use end-to-end encryption, or what steps will be taken to prevent potential PTSD among moderators.

Thorn will be administering the program and handling licensing and support to get participating companies onboarded, Microsoft says.

Cordua says while Artemis has some initial limitations — it currently only works in English — the tool is a huge step in the right direction. Since each company that uses the tool can customize it for its own audience (chats on gaming platforms will obviously vary from those on social apps), there will be ample opportunity to adapt and refine how the tool works. And, she says, it’s about time platforms move away from the failed practices of self-policing and toward pro-active prevention of child grooming and abuse.

In its blog post, Microsoft adds that the Artemis tool is “by no means a panacea” but is a first step toward detecting the online grooming of children by sexual predators, a problem it terms “weighty.”

“The first step is we need to get better at identifying where this is happening,” Cordua says. “But all companies that host any chat or video should be doing this or they are complicit in allowing the abuse of children on their platforms.”

Yahoo parent Verizon promises it won’t track you with OneSearch, its new privacy-focused search engine

Verizon and its subsidiaries, including Yahoo, have become known for massive data breaches, privacy blunders, and oddly named web entities, but now the internet service provider has launched a whole new search engine without Yahoo branding, one that it says will definitely not share your search results with advertisers or tailor results based on your search history.

On its ad-supported OneSearch platform, users can “search the internet with increased confidence, knowing your personal and search data isn’t being tracked, stored, or shared with advertisers,” according to a statement from Michael Albers, head of consumer product at Verizon Media.

Ads on OneSearch will be generated based on keywords, not cookies, and there will be a self-destruct option that purges search results after a certain period. Search results will be generated by Microsoft’s Bing search engine.

As consumers tire of having their every move tracked online, a growing number of browsers and search engines claim to preserve users’ privacy, including Brave and DuckDuckGo, along with ad- and tracker-blocking extensions like Ghostery.

If Verizon’s track record with search and privacy weren’t so spotty, this might be a welcome addition to the growing field of privacy-focused search tools. When it combined AOL and Yahoo into Oath in 2017, Verizon was clear about its plans to use its network to target ads. And in 2016, the company paid a $1.3 million fine to the Federal Communications Commission for its use of “super cookies” that tracked users on its network via their cellphones without asking for permission or providing an opt-out option. And don’t forget Yahoo’s famous hack, in which all 3 billion of its customers’ accounts were breached in 2013.

Why Verizon is introducing a new search engine brand when it already owns Yahoo is not clear, but as VentureBeat notes, Yahoo owned the “oneSearch” name long before it became part of Verizon.

According to the OneSearch privacy policy, search results will only be personalized based on location, which it will collect from IP addresses. OneSearch says that it will separate IP addresses from users and their search results.

OnePlus 8 reportedly coming to Verizon with support for 5G

OnePlus’ next phone, the OnePlus 8, will reportedly launch on Verizon in the US with 5G connectivity, Android Police reports. It would be the first OnePlus phone available on Verizon; T-Mobile is currently the only US carrier that sells OnePlus phones.

The Chinese phone maker showed off its Concept One phone at CES 2020, which included a disappearing camera. And according to Chinese media reports, OnePlus plans to demonstrate new “screen technology” at an event in China on January 13th. What this new tech might be remains a mystery, but a graphic the company shared shows glass panels stacked on top of one another. It’s not clear whether the OnePlus 8 will make use of any of this new tech, but we do know it may swap its signature pop-up selfie cam for a hole-punch display.

The OnePlus 7 Pro launched last May at a price of $699, notably less than other phones in its class. A few months later, however, T-Mobile stopped selling the 7 Pro. The less expensive OnePlus 7T debuted on T-Mobile in October.

OnePlus has suffered two data breaches in the past two years, most recently in November when some customer data was apparently exposed. A January 2018 breach saw some 40,000 OnePlus customers’ credit card information stolen.

We reached out to Verizon to try to confirm availability for the upcoming OnePlus 8 and we’ll update when we hear back.

Huawei says it’s selling 100,000 foldable phones a month

Huawei says its foldable Mate X phone has been selling at a clip of 100,000 units per month since it launched in China in November, according to Android Central. The Mate X, which is only for sale in China, sells for 16,999 yuan, or about $2,400.

That’s just shy of the sales estimates for rival Samsung’s Galaxy Fold, although it’s not totally clear exactly how shy. Samsung executive Young Sohn incorrectly claimed the Galaxy Fold had sold 1 million units, but later updated that figure at CES 2020 to “400,000 to 500,000” since its September launch.

Huawei and rival Samsung were the two front-runners in the race to bring a foldable phone to market first in 2019. At CES 2019, foldable prototypes were plentiful. But Samsung beat its top competitors to market when the Galaxy Fold was released, then rereleased, after a bumpy launch. Microsoft’s foldable Duo and Motorola’s foldable Razr are due out later this year.

Unveiled at Mobile World Congress 2019, Huawei’s Mate X was originally scheduled to launch in July of that year. The company pushed the launch to November in order to refine and improve its foldable screen, after Samsung delayed the launch of its Galaxy Fold following production problems.

Huawei has not officially announced a launch date for the Mate X outside of China, but previously said it would debut in Europe in the first quarter of this year. Half of the Mate X screen flips around back, so that when it is folded closed, it has a screen on both sides. The second iteration of the Mate X is expected to be unveiled at MWC 2020 next month.

At this year’s CES, the foldable trend expanded beyond phones to laptops. The Verge’s editors named the Lenovo ThinkPad X1 Fold its best in show for CES 2020, with expectations that there will be more foldable laptops in the coming months.

Microsoft says Skype audio is now reviewed in ‘secure facilities’ after a worrying report

Microsoft says Skype calls are now transcribed in “secure facilities in a small number of countries,” following a new report in The Guardian about the company’s use of contractors in China to listen to some calls to make sure the company’s transcription software is working properly. The company confirmed to The Verge that China is not currently one of the countries where transcription takes place.

A former contractor who lived in Beijing told The Guardian that he transcribed Skype calls with little cybersecurity protection from potential state interference. The unidentified former contractor told The Guardian that he reviewed thousands of audio recordings from Skype and Cortana on his personal laptop from his home in Beijing over a two-year period.

Workers who were part of the review process accessed the recordings via a web app in a Chrome browser over the internet in China. There was little vetting of employees and no security measures in place to protect the audio recordings from state or criminal interference, according to The Guardian.

The contractor told The Guardian he heard “all kinds of unusual conversations” while performing the transcription. “It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.”

A Microsoft spokesperson told The Verge in an email that “If there is questionable behavior or possible violation by one of our suppliers, we investigate and take action.” The audio “snippets” that contractors get to review are ten seconds long or shorter, according to the spokesperson, “and no one reviewing these snippets would have access to longer conversations.”

“We’ve always disclosed this to customers and operate to the highest privacy standards set out in laws like Europe’s GDPR,” the spokesperson added.

The existence of the Skype transcription program was first detailed in a report from Motherboard in August. Although Skype’s terms of service indicated at the time that the company analyzed call audio, this was the first report showing how much of the analysis was done by humans. And unlike competitors that publicly declared they would end the practice of having humans transcribe audio from virtual assistants, Microsoft continued the practice, apparently updating its privacy policy to acknowledge it was doing so.

Microsoft says it reviewed its processes and communications with customers over the summer. “As a result, we’ve updated our privacy statement to be even more clear about this work, and since then we’ve significantly enhanced the process including by moving these reviews to secure facilities in a small number of countries,” the company said in its statement to The Verge. “We will continue to take steps to give customers greater transparency and control over how we manage their data.”

Microsoft did not elaborate on what these “steps” entailed.

Microsoft is not the only company to face blowback for how it’s handled audio recordings of customers. The practice of data annotation, in which humans help AI learn by interpreting audio and other information, has come under intense scrutiny as people weigh the convenience of on-demand answers from virtual assistants against the discomfort of relinquishing chunks of their private lives to people they often didn’t know were listening.

An April report from Bloomberg highlighted how Amazon used full-time employees and contractors to “listen” to customers’ conversations with Alexa. The report found the company wasn’t clear about how long such recordings were stored, or whether employees or even third parties had accessed or would be able to access the information for nefarious purposes. Both Apple and Google reportedly suspended programs that used humans to review audio recordings from their Siri and Assistant virtual assistants.

Here’s how to prevent audio assistants from retaining audio recordings.