
Researchers say they’ve developed an algorithm that can teach a new concept to a computer using just one example, rather than the thousands of examples that are traditionally required for machine learning.
The algorithm takes advantage of a probabilistic approach the researchers call “Bayesian Program Learning,” or BPL. Essentially, the computer generates its own candidate examples, then determines which of them best explain the single example it was shown.
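To give a rough sense of that recipe, here is a toy sketch, not the authors’ BPL code. The paper models handwritten characters as compositions of pen strokes; this illustration swaps in made-up Gaussian “concepts” and hypothetical names (one_shot_fit, candidate_means) purely to show the generate-and-score loop the article describes: propose candidates, weight each by prior times likelihood, keep the best fit.

```python
# Illustrative sketch only -- not the BPL implementation from the Science paper.
# Given one observed example, propose candidate concepts, score each one by
# prior x likelihood, and keep the concept that best explains the example.
import math
import random

def likelihood(x, mean, sigma=1.0):
    """Density of x under a candidate concept modeled as N(mean, sigma^2)."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def one_shot_fit(observed, candidate_means, prior=None):
    """Score every candidate concept against the single observed example."""
    if prior is None:
        prior = [1.0 / len(candidate_means)] * len(candidate_means)  # uniform prior
    scores = [p * likelihood(observed, m) for p, m in zip(prior, candidate_means)]
    total = sum(scores)
    posterior = [s / total for s in scores]
    best = max(range(len(candidate_means)), key=lambda i: posterior[i])
    return candidate_means[best], posterior

if __name__ == "__main__":
    observed_example = 4.2                                    # the single training example
    candidates = [random.uniform(0, 10) for _ in range(50)]   # self-generated candidates
    best_mean, _ = one_shot_fit(observed_example, candidates)
    print(f"Best-fitting concept has mean ~= {best_mean:.2f}")
```

In the actual system, the candidates are small stroke-generating programs rather than numbers, and the winning program can then be run to produce or recognize new instances of the concept.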
The researchers behind BPL say they’re trying to reproduce the way humans catch on to a new task after seeing it done once – whether it’s a child recognizing a horse, or a mechanic replacing a head gasket.
“The gap between machine learning and human learning capacities remains vast,” said MIT’s Joshua Tenenbaum, one of the authors of a research paper published today in the journal Science. “We want to close that gap, and that’s the long-term goal.”