Three pioneers in artificial intelligence win Turing Award

Geoffrey Hinton, a computer scientist at the University of Toronto, May 25, 2017. Hinton, who is now at Google, and two other researchers have won the Turing Award, perhaps the most prestigious award in computing, for their work on neural networks. (NYT photo)

SAN FRANCISCO — In 2004, Geoffrey Hinton doubled down on his pursuit of a technological idea called a neural network.

It was a way for machines to see the world around them, recognize sounds and even understand natural language. But scientists had spent more than 50 years working on the concept of neural networks, and machines couldn’t really do any of that.

Backed by the Canadian government, Hinton, a computer science professor at the University of Toronto, organised a new research community with several academics who also tackled the concept. They included Yann LeCun, a professor at New York University, and Yoshua Bengio at the University of Montreal.

On Wednesday, the Association for Computing Machinery, the world’s largest society of computing professionals, announced that Hinton, LeCun and Bengio had won this year’s Turing Award for their work on neural networks. The Turing Award, which was introduced in 1966, is often called the Nobel Prize of computing, and it includes a $1 million prize, which the three scientists will share.

Over the past decade, the big idea nurtured by these researchers has reinvented the way technology is built, accelerating the development of face-recognition services, talking digital assistants, warehouse robots and self-driving cars. Hinton is now at Google, and LeCun works for Facebook. Bengio has inked deals with IBM and Microsoft.

“What we have seen is nothing short of a paradigm shift in the science,” said Oren Etzioni, the chief executive officer of the Allen Institute for Artificial Intelligence in Seattle and a prominent voice in the AI community. “History turned their way, and I am in awe.”

Loosely modelled on the web of neurons in the human brain, a neural network is a complex mathematical system that can learn discrete tasks by analysing vast amounts of data. By analysing thousands of old phone calls, for example, it can learn to recognize spoken words.

This approach has allowed artificial intelligence technologies to progress at a rate that was not possible in the past. Rather than coding behaviour into systems by hand — one logical rule at a time — computer scientists can build technology that learns behaviour largely on its own.
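The contrast between hand-coded rules and learned behaviour can be made concrete with a toy sketch. The following is not any of the laureates' actual systems — just a minimal illustration of the core idea: a single artificial neuron that learns the logical OR function purely from examples, adjusting its weights by gradient descent instead of being programmed with an explicit rule.

```python
import math

def sigmoid(z):
    """Squash a number into the range (0, 1), a common neuron activation."""
    return 1.0 / (1.0 + math.exp(-z))

# Training data: input pairs and the desired output (logical OR).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, adjusted automatically during training
b = 0.0          # bias term
lr = 0.5         # learning rate

# Repeated passes over the examples; no OR rule is ever written by hand.
for _ in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = y - target          # gradient of the logistic loss w.r.t. the pre-activation
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# The trained neuron now reproduces OR on all four inputs.
predictions = {x: round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data}
print(predictions)
```

Real neural networks stack millions of such units and train on vast datasets, but the principle is the same: behaviour emerges from data rather than from hand-written logic.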

The London-born Hinton, 71, first embraced the idea as a graduate student in the early 1970s, a time when most AI researchers turned against it. Even his own Ph.D. adviser questioned the choice.

“We met once a week,” Hinton said in an interview. “Sometimes it ended in a shouting match, sometimes not.”

Neural networks had a brief revival in the late 1980s and early 1990s. After a year of postdoctoral research with Hinton in Canada, Paris-born LeCun moved to AT&T’s Bell Labs in New Jersey, where he designed a neural network that could read handwritten letters and numbers. An AT&T subsidiary sold the system to banks, and at one point it read about 10 percent of all checks written in the United States.

Although a neural network could read handwriting and help with some other tasks, it could not make much headway with big AI tasks, like recognizing faces and objects in photos, identifying spoken words, and understanding the natural way people talk.

“They worked well only when you had lots of training data, and there were few areas that had lots of training data,” LeCun, 58, said.

But some researchers persisted, including Paris-born Bengio, 55, who worked alongside LeCun at Bell Labs before taking a professorship at the University of Montreal.

In 2004, with less than $400,000 in funding from the Canadian Institute for Advanced Research, Hinton created a research program dedicated to what he called “neural computation and adaptive perception.” He invited Bengio and LeCun to join him.

By the end of the decade, the idea had caught up with its potential. In 2010, Hinton and his students helped Microsoft, IBM, and Google push the boundaries of speech recognition. Then they did much the same with image recognition.

“He is a genius and knows how to create one impact after another,” said Li Deng, a former speech researcher at Microsoft who brought Hinton’s ideas into the company.

Hinton’s image recognition breakthrough was based on an algorithm developed by LeCun. In late 2013, Facebook hired the NYU professor to build a research lab around the idea. Bengio resisted offers to join one of the big tech giants, but the research he oversaw in Montreal helped drive the progress of systems that aim to understand natural language and technology that can generate fake photos that are indistinguishable from the real thing.

Although these systems have undeniably accelerated the progress of artificial intelligence, they are still a very long way from true intelligence. But Hinton, LeCun and Bengio believe that new ideas will come.

“We need fundamental additions to this toolbox we have created to reach machines that operate at the level of true human understanding,” Bengio said.