From 4,675 fully labeled bear faces in DSLR photos taken at research and bear-viewing sites at Brooks River, Alaska, and Knight Inlet, British Columbia, the researchers randomly split the images into training and test sets. After training on 3,740 bear faces, Dr. Clapham said, the deep learning system went to work "unsupervised" to see how well it could tell known bears apart in the remaining 935 photographs.
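The split described above can be illustrated with a minimal sketch. The function name and the placeholder labels are hypothetical, not taken from the study's code; only the counts (4,675 total, 3,740 training, 935 test) come from the article.

```python
import random

def split_dataset(items, n_train, seed=0):
    """Randomly split labeled items into a training set and a test set."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = items[:]         # copy so the caller's list order is untouched
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

# 4,675 labeled faces: 3,740 for training, the remaining 935 for testing.
faces = [f"face_{i}" for i in range(4675)]
train, test = split_dataset(faces, 3740)
print(len(train), len(test))  # 3740 935
```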
First, the deep learning algorithm finds a bear's face using distinctive landmarks such as the eyes, the tip of the nose, the ears and the top of the forehead. The system then rotates the face into alignment before extracting, encoding and classifying its facial features.
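A toy sketch of those stages, assuming hypothetical landmark coordinates and 2-D "embeddings" (a real encoder outputs high-dimensional vectors from a learned network; none of these names or values come from the study):

```python
import math

# Hypothetical pixel coordinates for two landmarks on one detected face.
# In the real system these come from a trained landmark detector.
landmarks = {"left_eye": (100.0, 120.0), "right_eye": (160.0, 110.0)}

def roll_angle(lm):
    """Degrees of tilt to correct so the eyes sit level before encoding."""
    (x1, y1), (x2, y2) = lm["left_eye"], lm["right_eye"]
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

def classify(embedding, known):
    """Nearest-neighbour match of a face embedding against known bears."""
    return min(known, key=lambda name: math.dist(embedding, known[name]))

# Toy reference embeddings for two known bears.
known_bears = {"Lucky": (0.1, 0.9), "Toffee": (0.8, 0.2)}
print(round(roll_angle(landmarks), 1))      # -9.5 (face tilted ~9.5 degrees)
print(classify((0.15, 0.85), known_bears))  # Lucky
```

The nearest-neighbour step stands in for the classifier: a new face is labeled with the known bear whose stored embedding it is closest to.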
The system identified bears with 84 percent accuracy and correctly distinguished between known bears such as Lucky, Toffee, Flora and Steve.
But how does it actually tell these bears apart? Before the era of deep learning, "we tried to imagine how people perceive faces and how we distinguish between individuals," said Alexander Loos, a research engineer at the Fraunhofer Institute for Digital Media Technology in Germany, who was not involved in the study but has worked with Dr. Clapham in the past. Programmers would manually enter face descriptors into a computer.
With deep learning, however, programmers feed the images into a neural network that figures out for itself how best to identify individuals. "The network itself extracts the features," said Dr. Loos, which is a huge plus.
But he also cautioned, "It's essentially a black box. You don't know what it's doing," and if the dataset it examines is unintentionally biased, errors can creep in.