Primate Vocalization Classification

Deep BiLSTM classifies primate vocalizations for acoustic wildlife monitoring.

Acoustic monitoring offers a powerful, non-invasive tool for wildlife conservation, enabling the study and tracking of animal populations through their vocalizations.

This research focuses on improving the automated classification of primate vocalizations, a challenging task due to call variability and environmental noise.

Figure: Overall workflow of the deep recurrent neural network for primate vocalization classification.

We propose a novel deep recurrent neural network architecture specifically designed for this purpose. The core of the model uses bidirectional Long Short-Term Memory (BiLSTM) networks, which are well suited to capturing temporal dependencies within the audio signals (represented, for example, as spectrograms or MFCCs).
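
A minimal PyTorch sketch of such a BiLSTM classifier is shown below. The layer sizes, class count, and the choice to classify from the final time step are illustrative assumptions, not the exact configuration used in the study.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM over a sequence of spectral frames (e.g., MFCC vectors)."""

    def __init__(self, n_features=40, hidden_size=128, num_layers=2, n_classes=6):
        super().__init__()
        # The bidirectional LSTM reads the frame sequence forwards and backwards.
        self.bilstm = nn.LSTM(
            input_size=n_features,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
            bidirectional=True,
        )
        # Concatenated forward/backward states feed a linear classification head.
        self.classifier = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time_frames, n_features)
        out, _ = self.bilstm(x)
        # Use the representation at the final time step to classify the whole call.
        return self.classifier(out[:, -1, :])


# Example: a batch of 8 calls, each 200 frames of 40 MFCC coefficients.
logits = BiLSTMClassifier()(torch.randn(8, 200, 40))
print(logits.shape)  # torch.Size([8, 6])
```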

To further enhance classification performance, particularly on the imbalanced datasets common in bioacoustics, the architecture incorporates two additional techniques (sketched in the code after the list):

  • Normalized Softmax: Improves calibration and can improve robustness.
  • Focal Loss: Addresses class imbalance by focusing training on hard-to-classify examples.
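
The sketch below illustrates both ideas under stated assumptions: a hypothetical normalized-softmax head (L2-normalized embeddings and class weights, so logits are scaled cosine similarities) and a standard focal loss. The scale, gamma, alpha, and class count are illustrative values, not those from the study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedSoftmaxHead(nn.Module):
    """Classification head that L2-normalizes embeddings and class weights,
    so logits are scaled cosine similarities (normalized softmax)."""

    def __init__(self, embed_dim=256, n_classes=6, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, embed_dim))
        self.scale = scale

    def forward(self, embeddings):
        logits = F.linear(F.normalize(embeddings, dim=-1),
                          F.normalize(self.weight, dim=-1))
        return self.scale * logits


def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss: down-weights easy examples so training focuses on hard ones."""
    log_probs = F.log_softmax(logits, dim=-1)
    ce = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # per-sample cross-entropy
    pt = (-ce).exp()                                            # probability of the true class
    return (alpha * (1.0 - pt) ** gamma * ce).mean()


# Example usage with random embeddings and labels.
head = NormalizedSoftmaxHead()
loss = focal_loss(head(torch.randn(8, 256)), torch.randint(0, 6, (8,)))
```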

Hyperparameter tuning, a critical step for optimizing deep learning models, was systematically performed using Bayesian optimization.
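
As an illustration only, the snippet below runs a Bayesian-style sequential search with Optuna's default TPE sampler over a hypothetical search space; the optimizer, hyperparameters, and ranges actually used in the study may differ, and train_and_validate is a placeholder for the real training loop.

```python
import optuna

def train_and_validate(hidden_size, num_layers, lr):
    """Placeholder for the real training loop; should return validation accuracy."""
    # In practice this would train the BiLSTM with the sampled hyperparameters
    # and evaluate it on a held-out validation split. Dummy score used here.
    return 1.0 / (1.0 + abs(hidden_size - 256) / 256 + num_layers * abs(lr - 1e-3))

def objective(trial):
    # Hypothetical search space; the hyperparameters actually tuned may differ.
    hidden_size = trial.suggest_int("hidden_size", 64, 512)
    num_layers = trial.suggest_int("num_layers", 1, 4)
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    return train_and_validate(hidden_size, num_layers, lr)

# Optuna's default TPE sampler performs a Bayesian-style sequential search.
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```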

Figure: Classification accuracy of the deep recurrent model on primate calls.

The model's effectiveness was evaluated on a challenging real-world dataset of diverse primate calls recorded at an African wildlife sanctuary. The results show that the proposed deep recurrent architecture classifies primate vocalizations accurately, underscoring the potential of combining advanced deep learning techniques with automated acoustic monitoring for practical wildlife conservation. (Muller et al., 2021)