
AI For Everyone: Startups Democratize Deep Learning So Google And Facebook Don't Own It All


When I arrived at a Stanford University auditorium Tuesday night for what I thought would be a pretty nerdy panel on deep learning, a fast-growing branch of artificial intelligence, I figured I must be in the wrong place--maybe a different event for all the new Stanford students and their parents visiting the campus. Nope. Despite the highly technical nature of deep learning, some 600 people had shown up for the sold-out AI event, presented by VLAB, a Stanford-based chapter of the MIT Enterprise Forum.

The turnout was a stark sign of the rising popularity of deep learning, an approach to AI that tries to mimic the activity of the brain in so-called neural networks. In just the last couple of years, deep learning software from giants like Google, Facebook, and China's Baidu, as well as from a raft of startups, has led to big advances in image and speech recognition, medical diagnostics, stock trading, and more. “There’s quite a bit of excitement in this area,” panel moderator Steve Jurvetson, a partner with the venture firm DFJ, said with uncustomary understatement.

In the past year or two, big companies have been locked in a land grab for talent, paying big bucks for startups and even hiring away deep learning experts from each other. But this event, focused mostly on startups, including several that demonstrated their products before the panel, also revealed there's still a lot of entrepreneurial activity. In particular, several companies aim to democratize deep learning by offering it as a service or coming up with cheaper hardware to make it more accessible to businesses.

Jurvetson explained why deep learning has pushed the boundaries of AI so much further recently. For one, there’s a lot more data around because of the Internet; there's metadata such as tags and translations; and there are even services such as Amazon’s Mechanical Turk, which allows for cheap labeling or tagging. There are also algorithmic advances, especially for using unlabeled data. And computing has advanced enough to allow much larger neural networks with more synapses--in the case of Google Brain, for instance, 1 billion synapses (though that’s still a very long way from the 100 trillion synapses in the adult human brain).

Adam Berenzweig, cofounder and CTO of image recognition firm Clarifai, who previously spent 10 years as an engineer at Google, made the case that deep learning is "adding a new primary sense to computing" in the form of useful computer vision. "Deep learning is forming that bridge between the physical world and the world of computing," he said.

And it's allowing that to happen in real time. "Now we’re getting into a world where we can take measurements of the physical world, like pixels in a picture, and turn them into symbols that we can sort," he said. Clarifai has been working on taking an image and producing a meaningful description very quickly, in as little as 80 milliseconds, and on showing very similar images.
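The idea Berenzweig describes, turning pixels into sortable symbols, can be illustrated with a toy sketch: assume a deep network has already reduced each image to an embedding vector (the vectors and file names below are invented for illustration), and then rank a catalog by cosine similarity to a query image's embedding.

```python
import math

# Toy sketch of image similarity search. Assumes each image has already
# been reduced by a deep network to an embedding vector -- the "symbols"
# Berenzweig describes. All vectors and image names here are made up.

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def most_similar(query, catalog):
    """Rank catalog images from most to least similar to the query embedding."""
    return sorted(catalog,
                  key=lambda name: cosine_similarity(query, catalog[name]),
                  reverse=True)

catalog = {
    "beach_sunset.jpg": [0.9, 0.1, 0.0],
    "mountain_lake.jpg": [0.5, 0.5, 0.1],
    "city_street.jpg": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # embedding of the user's photo

print(most_similar(query, catalog))
# -> ['beach_sunset.jpg', 'mountain_lake.jpg', 'city_street.jpg']
```

In a real system the embeddings would come from a trained network and the catalog would hold millions of vectors behind an approximate nearest-neighbor index, but the sorting step is the same idea.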

One interesting application relevant to advertising and marketing, he noted: Once you can recognize key objects in images, you can target ads not just on keywords but on objects in an image.
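A minimal sketch of that object-based targeting idea, assuming an image recognizer (such as a service like Clarifai's) has already returned labels with confidence scores; the category mapping and function names here are hypothetical:

```python
# Hypothetical object-to-ad-category mapping; in practice this would be
# a much larger table maintained by an ad platform.
AD_CATEGORIES = {
    "dog": ["pet food", "veterinary services"],
    "bicycle": ["cycling gear", "fitness apps"],
    "coffee": ["cafes", "espresso machines"],
}

def target_ads(labels, min_confidence=0.8):
    """Return ad categories for objects the recognizer is confident about.

    `labels` is a list of (object_name, confidence) pairs, as an image
    recognition service might return for one photo.
    """
    ads = []
    for label, confidence in labels:
        if confidence >= min_confidence:
            ads.extend(AD_CATEGORIES.get(label, []))
    return ads

# The "dog" label clears the confidence threshold; "coffee" does not.
print(target_ads([("dog", 0.95), ("coffee", 0.60)]))
# -> ['pet food', 'veterinary services']
```

The point is the contrast with keyword targeting: the signal comes from what is actually pictured, not from the words around the image.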

DFJ's Steve Jurvetson led a panel of AI experts at a Stanford event Sept. 16.

Even more sweeping, said Naveen Rao, cofounder and CEO of deep-learning hardware and software startup Nervana Systems and former researcher in neuromorphic computing at Qualcomm, deep learning is "that missing link between computing and what the brain does." Instead of doing specific computations very fast, as conventional computers do, "we can start building new hardware to take computer processing in a whole new direction," assessing probabilities, like the brain does. "Now there’s actually a business case for this kind of computing," he said.

And not just for big businesses. Elliot Turner, founder and CEO of AlchemyAPI, a deep-learning platform in the cloud, said his company’s mission is to "democratize deep learning." The company is working in 10 industries from advertising to business intelligence, helping companies apply it to their businesses. "I look forward to the day that people actually stop talking about deep learning, because that will be when it has really succeeded," he added.

Despite the obvious advantages of large companies such as Google, which have untold amounts of both the data and the computing power that deep learning requires to be useful, startups can still have a big impact, a couple of the panelists said. "There’s data in a lot of places. There’s a lot of nooks and crannies that Google doesn’t have access to," Berenzweig said hopefully. "Also, you can trade expertise for data. There’s also a question of how much data is enough."

Turner agreed. "It’s not just a matter of stockpiling data," he said. "Better algorithms can help an application perform better." He noted that even Facebook, despite its wealth of personal data, found this in its work on image recognition.

Those algorithms may have broad applicability, too. Even if they're initially developed for specific applications such as speech recognition, it looks like they can be used on a wide variety of applications. "These algorithms are extremely fungible," said Rao. And he said companies such as Google aren't keeping them as secret as expected, often publishing them in academic journals and at conferences--though Berenzweig noted that "it takes more than what they publish to do what they do well."

For all that, it's not yet clear how closely deep learning systems will actually emulate the brain, even if they prove intelligent. But Ilya Sutskever, research scientist at Google Brain and a protege of Geoffrey Hinton, the University of Toronto deep learning guru since the 1980s who’s now working part-time at Google, said it almost doesn't matter. "You can still do useful predictions" using them. And while the learning principles for dealing with all the unlabeled data out there remain primitive, he said he and many others are working on this and will likely make even more progress.

Rao said he's unworried that we'll end up creating some kind of alien intelligence that could run amok if only because advances will be driven by market needs. Besides, he said, "I think a lot of the similarities we’re seeing in computation and brain functions is coincidental. It’s driven that way because we constrain it that way."

OK, so how are these companies planning to make money on this stuff? Jurvetson wondered. Of course, we've already seen improvements in speech and image recognition that make smartphones and apps more useful, leading more people to buy them. "Speech recognition is useful enough that I use it," said Sutskever. "I’d be happy if I didn’t press a button ever again. And language translation could have a very large impact."

Beyond that, Berenzweig said, "we’re looking for the low-hanging fruit," common use cases such as visual search for shopping, organizing your personal photos, and various business niches such as security.
