Fusing Detected Humans in Multiple Perception Sensors Network
Abstract
A fusion method is proposed to keep the correct number of humans among all humans detected by a Robot Operating System (ROS)-based perception sensor network (PSN) composed of multiple Kinects with partially overlapping fields of view (FOV). To this end, the fusion rules are based on the parallel and orthogonal configurations of the Kinects in the PSN system. For the parallel configuration, the system decides whether a detected human lies in the FOV of a single Kinect or in the overlapping FOV of multiple Kinects by evaluating the angles formed between the human's location and each Kinect's origin on the top view (x-z plane) of the 3D coordinate frame. Based on these angles, the PSN system keeps a person detected in only one FOV, or, for a person detected in the overlapping FOV of several Kinects, keeps the detection with the largest region of interest (ROI). For Kinects in the orthogonal configuration, 3D Euclidean distances between detected humans are used to group detections assumed to correspond to the same person seen by different Kinects; the system then keeps the detection with the largest ROI in each group. Experimental results demonstrate the superior performance of the proposed method in various scenarios.
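The orthogonal-configuration rule described in the abstract (group detections by 3D Euclidean distance, then keep the detection with the largest ROI in each group) could be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Detection` type, the greedy grouping strategy, and the 0.5 m distance threshold are all assumptions introduced here.

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    kinect_id: int          # which Kinect produced this detection
    position: tuple         # (x, y, z) in a common world frame, metres
    roi_area: float         # area of the detection's ROI, e.g. in pixels

def fuse_detections(detections, dist_thresh=0.5):
    """Greedy fusion sketch: detections from *different* Kinects whose 3D
    positions are closer than dist_thresh are assumed to be the same
    person; only the detection with the largest ROI is kept.
    (dist_thresh = 0.5 m is an illustrative value, not from the paper.)"""
    kept = []
    # Visit detections from largest ROI to smallest, so the first member
    # of each distance-based group is the one we want to keep.
    for det in sorted(detections, key=lambda d: -d.roi_area):
        duplicate = any(
            det.kinect_id != k.kinect_id
            and math.dist(det.position, k.position) < dist_thresh
            for k in kept
        )
        if not duplicate:
            kept.append(det)
    return kept
```

For example, two detections of the same person 0.1 m apart from different Kinects collapse to the single detection with the larger ROI, while a person seen by only one Kinect is passed through unchanged.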
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
DOI: http://dx.doi.org/10.25073/jaec.201712.61
Copyright (c) 2017 Journal of Advanced Engineering and Computation