An Improved Pedestrian Multi-Target Tracking System (2019-01-1055)
Research on driverless technology for intelligent vehicles has become a very active topic in recent years, yet road safety remains a key factor limiting its development. As one approach to the road safety problem, multi-target tracking has drawn increasing attention, because it directly affects the performance of real-time detection and environment recognition. This paper proposes a novel multi-target tracking system based on a deep neural network to improve the environment recognition ability of unmanned vehicles, and thereby the road safety of autonomous driving. The proposed system consists of two main parts: a pedestrian detector and a target tracker. The system adopts SSD as the target detector and modifies it by transforming the spatial-domain image convolution into a product in the complex frequency domain, replacing the max pooling and mean pooling operations of the traditional deep model so that more useful image information is retained. The tracker relies on the similarity of appearance features between consecutive frames: it trains an appearance model and computes the cosine distance between the appearance features of targets to measure their similarity, thereby completing the data association of targets across frames. We evaluate the proposed system on the MOT16 dataset and in a large number of real-scene experiments. The detector improves tracking accuracy by up to 8.9% and reduces the number of identity switches by 15%, and the tracker runs at up to 130 fps. The experimental results demonstrate that the proposed multi-target tracking system offers better accuracy and real-time performance than state-of-the-art methods, and can therefore improve the safety of autonomous driving systems.
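The frequency-domain modification described above rests on the convolution theorem: a convolution in the spatial domain equals an element-wise product of spectra in the frequency domain. The abstract does not give the exact formulation used in the modified SSD, so the following is only a minimal illustrative sketch of that identity for a 2-D image; the function name, array sizes, and zero-padding scheme are assumptions, not the paper's implementation.

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2-D convolution computed as an element-wise product in
    the frequency domain: IFFT(FFT(image) * FFT(kernel)).
    Illustrative only; the paper's modified SSD layer is not specified."""
    h, w = image.shape
    # Zero-pad the kernel to the image size so the two spectra align.
    padded = np.zeros((h, w))
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    # Product in the complex frequency domain, then back to space-time domain.
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))
```

For example, convolving with a unit impulse kernel returns the image unchanged, and an impulse shifted by one row produces a circular shift of the image by one row.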
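The tracker's data association compares appearance features of targets in consecutive frames by cosine distance. The sketch below shows that metric together with a simple greedy matching step; the function names, the greedy strategy, and the 0.2 gating threshold are assumptions for illustration, not the paper's trained appearance model or association algorithm.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two appearance feature vectors: 1 - cosine similarity."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def associate(track_feats, det_feats, max_dist=0.2):
    """Greedy frame-to-frame association (illustrative): each track is matched
    to the unused detection with the smallest cosine distance under a gate."""
    matches, used = [], set()
    for t, tf in enumerate(track_feats):
        dists = [(cosine_distance(tf, df), d)
                 for d, df in enumerate(det_feats) if d not in used]
        if not dists:
            continue
        best_dist, best_d = min(dists)
        if best_dist <= max_dist:
            matches.append((t, best_d))
            used.add(best_d)
    return matches
```

Identical feature vectors give distance 0, orthogonal ones give distance 1, so a small threshold keeps only targets whose appearance changed little between the front and back frames.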
Gong Yuan, Jianning Chi PhD, Wu Chengdong PhD, Yu Xiaosheng PhD, Zhang Yifei, Gao Na