
Walk This Way: A Better Way to Identify Gait Differences

By Osaka University

November 13, 2017



Researchers at Osaka University in Japan are developing new input/output architectures for convolutional neural network (CNN)-based cross-view gait recognition. One such architecture is a Siamese network for verification, whose input is a pair of gait features to be matched and whose output is either "genuine," indicating the same subject, or "imposter," indicating different subjects.
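The pair-in, label-out idea can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' code: the embedding weights `W`, the `embed` and `verify` helpers, and the threshold are all hypothetical stand-ins for a trained Siamese CNN.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8))  # hypothetical shared embedding weights

def embed(gait_feature):
    """Map a flattened gait feature to an embedding.

    Both members of the input pair pass through the SAME weights W;
    that weight sharing is what makes the network 'Siamese'.
    """
    return np.tanh(gait_feature @ W)

def verify(probe, gallery, threshold=1.0):
    """Return 'genuine' (same subject) or 'imposter' (different subjects)."""
    distance = np.linalg.norm(embed(probe) - embed(gallery))
    return "genuine" if distance < threshold else "imposter"

x = rng.standard_normal(64)
print(verify(x, x))  # identical inputs give distance 0, so "genuine"
```

In a real system the embedding would be a trained CNN and the threshold would be tuned on validation pairs; the structure of the decision is the same.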

"Current CNN-based approaches are missing the distinction between verification and identification, as well as the trade-off with spatial displacement, that is, when the subject moves from one location to another," says Osaka's Noriko Takemura.

The researchers say the Siamese network architectures are robust to spatial displacement because the difference between a matching pair is computed only at the last layer, after both inputs have passed through the convolution and max-pooling layers. "We conducted experiments for cross-view gait recognition and confirmed that the proposed architectures outperformed the state-of-the-art benchmarks in accordance with their suitable situations of verification/identification tasks and view differences," says Osaka's Yasushi Makihara.
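The "difference at the last layer" design can be illustrated with a toy pipeline. Everything here is an assumption for illustration: a single hand-rolled convolution filter, ReLU, and 2×2 max pooling stand in for the paper's CNN layers. The point is structural: the two inputs are compared only after pooling, which discards small spatial offsets, rather than pixel-by-pixel at the input.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive valid-mode 2-D convolution (illustrative only)."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; tolerant of small spatial shifts."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def features(img, kernel):
    """Conv -> ReLU -> max pool, a stand-in for the shared CNN branch."""
    return max_pool(np.maximum(conv2d_valid(img, kernel), 0.0))

rng = np.random.default_rng(1)
kernel = rng.standard_normal((3, 3))
img = rng.standard_normal((10, 10))       # toy stand-in for a gait feature
shifted = np.roll(img, 1, axis=1)         # same pattern, displaced one pixel

# The pair is matched only at the LAST layer: the difference is taken
# between pooled feature maps, not between raw inputs, so a small
# displacement perturbs the comparison less than a pixelwise match would.
diff = np.abs(features(img, kernel) - features(shifted, kernel))
print(diff.shape)  # (4, 4): 10x10 -> 8x8 after 3x3 conv -> 4x4 after pooling
```

Taking the difference early (at the input) would penalize the one-pixel shift directly; deferring it until after pooling is what gives the verification architecture its displacement tolerance.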

From "Walk This Way: A Better Way to Identify Gait Differences"


Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA
