Amid widespread efforts to help computers teach themselves, a group of Indiana University researchers is looking for guidance from some of the best learners on the planet: infants and toddlers.
Twelve professors from two schools at Indiana University have joined forces for a research initiative called “Learning: Brains, Machines and Children,” which recently won a four-year, $3 million grant from the university. The multidisciplinary group—which includes neurologists, developmental psychologists and computer scientists—plans to study how children learn to classify faces, objects, symbols and more, then apply those tactics to machine learning.
Today, software powered by artificially intelligent algorithms can teach itself to become better at a variety of tasks, from recognizing faces to recommending news articles.
But some cognitive tasks that computer programs struggle with, such as classifying objects seen only once, young children can handle with ease.
The IU researchers have set out to explore how computers can emulate such skills, which could ultimately streamline how machines learn about and interact with the natural world.
The researchers and outside observers acknowledged that the effort is ambitious, but they also said a breakthrough could have significant implications for self-driving cars, robotics, augmented reality and more.
“That’s the interesting question for me—can we build better fundamental building blocks for machine-learning algorithms that will more tightly emulate what the brain does?” said Dean Abbott, chief data scientist at Indianapolis-based SmarterHQ, which sells predictive marketing software to retailers.
“Which has phenomenal implications, because, if we can do that, then these algorithms will get more flexible and will be able to adapt more quickly and more reliably in ways that are very difficult to do now,” said Abbott, who is not involved in the research project.
The $3 million grant was bestowed by IU’s Emerging Areas of Research program, which aims to enhance the quality and impact of the research on the school’s Bloomington campus.
“Learning” is its first initiative. Over the next five years, the school said, it anticipates funding up to six such initiatives, each with up to a $3 million investment and up to three faculty hires.
The child-and-machine-learning effort is led by Linda Smith, distinguished professor in the Department of Psychological and Brain Sciences in the College of Arts and Sciences. She’s joined by colleagues in the Department of Informatics and the Department of Computer Science in the School of Informatics, Computing and Engineering.
Smith said some of the most advanced machines cannot accomplish cognitive tasks that a 2-year-old can, but the researchers’ work might help change that.
“If we can bring in what we know about human learning, if we can bring in what we know about neural mechanisms, and if we can bring in the advanced theoretical stuff that underlies machine learning,” she said, “maybe, if we put them together, we can break these kinds of barriers.”
Lessons in learning
The effort comprises three projects, Smith said, each aiming to see whether machines can adopt specific learning tactics children exhibit. All three deal with processing visual information only, not sounds or other inputs.
The first explores the idea of so-called "one-shot" learning. To classify an object such as a tractor, computers typically need to be fed thousands or millions of images of tractors.
But researchers said children as young as 2 years old need to see only one tractor to learn what it is. After being told its name, they can identify one no matter its color, size or other physical characteristics.
“It seems to be that [children] learn 300 or so official categories,” Smith said, out of tens of thousands of object categories, “and that learning seems to prepare them for rapid, one-shot learning.”
Observers said one-shot learning could greatly reduce the amount of information computers need to be fed about something before understanding what it is. David Crandall, one of the computer science professors on the team, said the skill could be used to help a self-driving car classify objects in its surroundings that could influence its course.
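One-shot classification is often approximated in practice by matching a new image's features against a single stored example per category. The snippet below is a minimal, illustrative sketch of that idea; the two-number "embeddings" are hand-made stand-ins for features a pretrained vision model would produce, and the category names are hypothetical.

```python
from math import sqrt

# One-shot classification sketch: given a good feature representation,
# a single labeled example per category can serve as a "prototype."
# The 2-D vectors below are toy stand-ins for real learned features.
prototypes = {
    "tractor": (1.0, 0.0),  # one labeled example per category
    "dog": (0.0, 1.0),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def classify(embedding):
    """Assign the label of the most similar stored prototype."""
    return max(prototypes, key=lambda label: cosine(embedding, prototypes[label]))

# A red tractor and a green tractor look different but land in the
# same region of feature space, so one stored example suffices.
print(classify((0.9, 0.2)))  # prints "tractor"
```

The hard part, which this sketch assumes away, is learning a feature space in which one example per category really is enough; that is where the children's learning tactics come in.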
The second project relates to “re-use,” which deals with machines obtaining foundational knowledge in one task and using it to do a different task better. For instance, Smith said, something about playing with blocks makes children better at learning math, and something about understanding letters helps in learning how to read.
SmarterHQ’s Abbott said it would be fascinating if machines could demonstrate foundational learning.
“For machine learning, that could drive—what kinds of things do we want them to learn and what kind of building blocks should they obtain,” he said, “so that they can accomplish learning more efficiently and more comprehensively when we give those algorithms different problems to solve?”
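The "re-use" idea resembles what machine-learning practitioners call transfer learning: a representation acquired on one task is frozen and re-used as the foundation for another. A toy sketch, with an invented shared feature function standing in for whatever the first task produced:

```python
# Transfer-learning ("re-use") sketch: features acquired on one task
# are frozen and re-used by a second task. The tasks and the feature
# function here are invented purely for illustration.

def shared_features(n):
    """Representation 'learned' on a first task, then frozen."""
    return (n % 2, n % 3)

def is_even(n):
    # Task A needs only the first feature.
    return shared_features(n)[0] == 0

def divisible_by_six(n):
    # Task B re-uses the same representation instead of
    # learning everything from scratch.
    parity, mod3 = shared_features(n)
    return parity == 0 and mod3 == 0

print(divisible_by_six(12), divisible_by_six(8))  # prints "True False"
```

The second task gets a head start because the first task already carved the input into useful pieces, much as blocks play might prepare a child for math.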
The last project deals with "self-generated training," which aims to see whether machines can learn to explore information without prompting. Traditionally, computers are fed data sets and instructed to solve problems. If they can, like children, explore information at will and tap into it later, that would have a broad range of useful applications. For example, it would help a robot on Mars gather information about its environment with little human guidance.
Tyler Foxworthy, chief data scientist at marketing-software firm DemandJump, said that if machines can practice such curiosity, they could, for instance, learn which features in a data set are important for making predictions without human guidance.
“It might have a general idea of what the goal might be and be able to poke around and figure out various angles at how it could get there,” said Foxworthy, who is not involved in the study. “For me, it would cut out a lot of the complexity in trying to structure data.”
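One established form of self-generated training is active learning, in which the system chooses its own next query rather than consuming a fixed data set. A minimal sketch under made-up assumptions: an invented "oracle" stands in for the environment, and the learner repeatedly probes the point it is most uncertain about to locate a hidden threshold.

```python
# Active-learning sketch: the learner picks its own queries instead of
# being handed a labeled data set. The task and oracle are invented
# for illustration; the learner must locate an unknown threshold.

TRUE_THRESHOLD = 0.62  # hidden from the learner

def oracle(x):
    """Stands in for the environment the learner can probe."""
    return x >= TRUE_THRESHOLD

low, high = 0.0, 1.0
for _ in range(20):
    query = (low + high) / 2  # probe the point of maximum uncertainty
    if oracle(query):
        high = query
    else:
        low = query

estimate = (low + high) / 2
print(round(estimate, 3))  # prints 0.62
```

Twenty self-chosen queries pin the threshold down to within about one part in a million; a learner passively fed random samples would need vastly more data for the same precision, which is the efficiency argument for curiosity.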
Transferring human behaviors into machines presents a few challenges. Foxworthy said one lies in collecting and interpreting the findings of neuroscience research.
Foxworthy did doctoral work in neuroscience at IUPUI, and he said that, while brain imaging technology continues to improve, it still has shortcomings. The “measurement apparatus for fine-grained neurological activity is still emerging,” he said, so the data-collection aspect of this kind of study is “probably going to be extremely challenging in and of itself.”
He added: “They’ve got to get the measurement aspect right. But if they can do that, hopefully it means that there will be some meat on the bones in the data and they’ll be able to separate the signals from the noise.”
Abbott said technologists have long sought to model computers on the human brain, the idea at the heart of neural networks, algorithms whose interconnected units are loosely patterned on biological neurons.
But he added that building off biology is not always the best route for solving a challenge.
“With airplanes, the early designs of flying were all around birds and flapping wings,” Abbott said. “But in the end, that was not the solution for efficient air travel.”
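The brain analogy Abbott refers to bottoms out in the artificial "neuron," a weighted sum of inputs passed through a nonlinearity. A minimal illustrative version, with invented inputs:

```python
from math import exp

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs squashed by a
    sigmoid, loosely inspired by how a biological neuron fires."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + exp(-z))

# With zero weights and bias, the neuron sits at its 0.5 midpoint;
# strong positive input drives its output toward 1.
print(neuron([1.0, 1.0], [0.0, 0.0], 0.0))  # prints 0.5
```

Stacking many such units in layers yields the networks used for face recognition and similar tasks, though, as Abbott's airplane example suggests, the resemblance to biology is loose.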
Karin James, an associate professor in the Department of Psychological and Brain Sciences who specializes in neurology, said computer science will not be the only discipline that benefits from the research.
She said computer programs could be used to quantify how much information children process, which is difficult to measure today.
She also said computers could be used as models for experimenting with what aids and restricts learning, as well as other tests that might be unethical to practice on children.
Smith said the group received its funding in May and is still in the early stages, so researchers have yet to report any notable findings. She said that as IU pursues its research objectives, it wants to become a center of excellence on the topic.
“We’re beginning to hire new faculty to build a community at IU so that Indiana can become a leader in an area you’re going to be hearing more about.”•