Multi-modal Sensor Data Fusion for In-situ Classification of Animal Behavior Using Accelerometry and GNSS Data

06/24/2022
by Reza Arablouei, et al.

We examine the use of data from multiple sensing modes, i.e., accelerometry and global navigation satellite system (GNSS), for classifying animal behavior. We extract three new features from the GNSS data, namely, the distance from the water point, the median speed, and the median estimated horizontal position error. We consider two approaches for combining the information available from the accelerometry and GNSS data. The first approach is based on concatenating the features extracted from both sensor data streams and feeding the concatenated feature vector into a multi-layer perceptron (MLP) classifier. The second approach is based on fusing the posterior probabilities predicted by two MLP classifiers, each taking the features extracted from the data of one sensor as input. We evaluate the performance of the developed multi-modal animal behavior classification algorithms using two real-world datasets collected via smart cattle collar and ear tags. The leave-one-animal-out cross-validation results show that both approaches improve the classification performance appreciably compared with using the data from only one sensing mode, particularly for the infrequent but important behaviors of walking and drinking. The algorithms developed via both approaches require relatively small computational and memory resources and are hence suitable for implementation on the embedded systems of our collar and ear tags. However, the multi-modal animal behavior classification algorithm based on posterior probability fusion is preferable to the one based on feature concatenation, as it delivers better classification accuracy, has lower computational and memory complexity, is more robust to sensor data failure, and offers better modularity.
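To make the two fusion approaches concrete, here is a minimal Python sketch of both, assuming synthetic placeholder features and scikit-learn's MLPClassifier. The feature dimensions, the three-class behavior set, and the equal-weight averaging rule in fuse_posteriors are illustrative assumptions, not the paper's actual configuration.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Synthetic placeholder data standing in for per-window features; the
    # feature dimensions, class set, and values are illustrative assumptions.
    rng = np.random.default_rng(0)
    n_windows = 200
    accel_feats = rng.normal(size=(n_windows, 12))  # accelerometry features
    gnss_feats = rng.normal(size=(n_windows, 3))    # distance from water point,
                                                    # median speed, median HPE
    labels = rng.integers(0, 3, size=n_windows)     # e.g., grazing/walking/drinking

    # Approach 1: concatenate the per-modality features, train a single MLP.
    concat_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                               random_state=0)
    concat_clf.fit(np.hstack([accel_feats, gnss_feats]), labels)

    # Approach 2: train one MLP per sensing mode, then fuse the posteriors.
    accel_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0).fit(accel_feats, labels)
    gnss_clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                             random_state=0).fit(gnss_feats, labels)

    def fuse_posteriors(xa, xg, w=0.5):
        """Weighted average of per-modality posteriors (one simple fusion
        rule; w=0.5 is an assumption, not the paper's tuned value)."""
        p = w * accel_clf.predict_proba(xa) + (1.0 - w) * gnss_clf.predict_proba(xg)
        return p.argmax(axis=1)

    print(fuse_posteriors(accel_feats, gnss_feats)[:10])

Note that in the second approach each classifier can still produce a (less informed) prediction when the other sensor's data are unavailable, which is one source of the robustness and modularity advantages noted above.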


Related research

10/03/2018 · Image and Encoded Text Fusion for Multi-Modal Classification
Multi-modal approaches employ data from multiple input streams such as t...

02/06/2018 · Efficient Large-Scale Multi-Modal Classification
While the incipient internet was largely text-based, the modern digital ...

05/16/2019 · Utilizing Deep Learning Towards Multi-modal Bio-sensing and Vision-based Affective Computing
In recent years, the use of bio-sensing signals such as electroencephalo...

05/30/2020 · Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction
Diabetic Retinopathy (DR) is one of the major causes of visual impairmen...

03/03/2020 · Deep Multi-Modal Sets
Many vision-related tasks benefit from reasoning over multiple modalitie...

11/15/2013 · Deterministic Bayesian Information Fusion and the Analysis of its Performance
This paper develops a mathematical and computational framework for analy...

04/17/2021 · Spherical Multi-Modal Place Recognition for Heterogeneous Sensor Systems
In this paper, we propose a robust end-to-end multi-modal pipeline for p...
