Search Results

Viewing 1 to 7 of 7
Technical Paper

Vision Based Traffic Measuring System

Traffic information is very useful in the planning and design of road transport, in ensuring efficient administration of road traffic, and to transportation agencies as well as road users. Traffic can be measured in terms of speed, density, and flow. In this paper, we propose two different methods to measure traffic in terms of density and flow. The setup for the proposed traffic monitoring system includes a camera placed at a height above the ground, looking down on the road such that its field of view is perpendicular to the direction of motion of the traffic. Images of the road are continuously captured by the camera and processed to determine the traffic. The first method uses Gaussian Mixture Modeling (GMM) to detect vehicles; density is calculated in terms of the area occupied by the vehicles on the road. A second method, for measuring traffic flow, is based on counting edge points on a horizontal line drawn in the image.
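Both measures reduce to simple pixel statistics once a foreground mask (e.g. from a GMM background subtractor) and an edge map are available. A minimal sketch, assuming binary NumPy masks; the mask names, shapes, and scan-line choice are illustrative, not from the paper:

```python
import numpy as np

def traffic_density(fg_mask: np.ndarray, road_mask: np.ndarray) -> float:
    """Density = fraction of the road region covered by detected-vehicle pixels."""
    road_pixels = np.count_nonzero(road_mask)
    vehicle_pixels = np.count_nonzero(fg_mask & road_mask)
    return vehicle_pixels / road_pixels if road_pixels else 0.0

def edge_points_on_line(edge_map: np.ndarray, row: int) -> int:
    """Flow proxy: number of edge pixels on one horizontal scan line."""
    return int(np.count_nonzero(edge_map[row]))
```

In practice the foreground mask would come from a per-frame background-subtraction step, with the edge count accumulated over time to estimate flow.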
Technical Paper

Vision Based Face Expression Recognition

Facial expression, a significant mode of nonverbal communication, effectively conveys humans' mental states, emotions, and intentions. Understanding emotions through these expressions is an easy task for human beings. However, when it comes to the Human Computer Interface (HCI), a developing research field that enables humans to interact with computers through touch, voice, and gestures, communication through expression is still a challenge. In addition, there are a variety of fields, such as automotive, biometrics, surveillance, and teleconferencing, in which expression recognition systems can be applied. In recent years, several different approaches have been proposed for facial expression recognition, but most of them work only under definite environmental conditions. The proposed framework aims to recognize expressions, by analyzing the extracted facial features, based on the Active Shape Model (ASM).
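As an illustration of how fitted shape points can feed an expression decision, the toy sketch below computes a mouth-openness ratio from four hypothetical mouth landmarks. The landmark ordering, the threshold, and the two-class rule are all assumptions for illustration, not the paper's ASM pipeline:

```python
import numpy as np

def mouth_aspect_ratio(corners: np.ndarray) -> float:
    """corners: 4x2 array ordered [left, right, top, bottom].
    A real ASM fit yields a full shape vector; this ordering is hypothetical."""
    left, right, top, bottom = corners
    width = np.linalg.norm(right - left)
    height = np.linalg.norm(bottom - top)
    return float(height / width)

def classify_mouth(ratio: float, open_thresh: float = 0.4) -> str:
    """Toy rule: a large height/width ratio suggests an open mouth
    (e.g. surprise); the threshold is arbitrary."""
    return "open" if ratio > open_thresh else "closed"
```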
Technical Paper

Local Scene Depth Estimation Using Rotating Monocular Camera

Dense depth estimation is a critical application in the fields of robotics and machine vision, where depth perception is essential. Unlike traditional approaches that use expensive sensors such as LiDAR (Light Detection and Ranging) devices or a stereo camera setup, the proposed approach to depth estimation uses a single camera mounted on a rotating platform. This setup is an effective replacement for multiple cameras, which provide the around-view information required for some operations in the domain of autonomous vehicles and robots. Dense depth estimation of the local scene is performed using the proposed setup. This is a novel yet challenging task, because the baseline distance between camera positions inversely affects the size of the common regions between images. The proposed work involves dense two-view reconstruction and depth map merging to obtain a reliable, large, dense depth map.
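The baseline/overlap trade-off mentioned above follows from standard two-view triangulation: depth is recovered as Z = f * B / d, so a small baseline preserves image overlap but yields small, noise-sensitive disparities. A minimal sketch of the relation (parameter names are illustrative):

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Two-view triangulation: Z = f * B / d.
    Shrinking the baseline B shrinks the disparity d for the same depth,
    which is why wide overlap and accurate depth pull in opposite directions."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```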
Technical Paper

A Review on Day-Time Pedestrian Detection

In view of the continuous efforts by the automotive fraternity to achieve traffic safety, detecting pedestrians from image/video has become an extensively researched topic in recent times. The task of detecting pedestrians in an urban traffic scene is complicated by considerations involving pedestrian figure size, articulation, fast dynamics, background clutter, etc. A number of methods using different sensor technologies have been proposed in the past for the problem of pedestrian detection. To limit the scope, this paper reviews the techniques involved in day-time detection of pedestrians, with emphasis on methods that use a monocular visible-spectrum sensor. The paper achieves its objective by discussing the basic framework involved in detecting a pedestrian, while elaborating the requisites and the existing methodologies for implementing each stage of that framework.
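One stage common to nearly all detection frameworks of this kind is merging overlapping candidate windows before reporting detections. A minimal non-maximum suppression sketch, assuming axis-aligned [x1, y1, x2, y2] boxes with confidence scores (the overlap threshold is illustrative):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box in each overlapping cluster."""
    order = np.argsort(scores)[::-1]          # highest confidence first
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```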
Journal Article

A Novel Method for Day Time Pedestrian Detection

This paper presents a vision-based pedestrian detection system. The presented algorithm is a novel method that accurately segments pedestrian regions in real time. The fact that pedestrians are always vertically aligned is taken into consideration; as a result, the edge image is scanned from bottom to top and left to right. Color and edge data are combined in order to form the segments. The segmentation is highly dependent on the edge map: even a single-pixel discontinuity would lead to incorrect segments. To address this, a novel edge-linking method is performed prior to segmentation. The segmentation yields background as well as foreground segments; the background clutter is removed based on certain predefined conditions governed by the camera features. A novel edge-based head detection method is proposed to increase the probability of pedestrian detection. The combination of head and leg patterns determines the presence of pedestrians.
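A crude way to repair single-pixel discontinuities in a binary edge map, in the spirit of the edge linking described above (this is a generic gap-filling heuristic, not the paper's method), is to fill any non-edge pixel whose two opposite neighbors are both edges:

```python
import numpy as np

def link_single_pixel_gaps(edges: np.ndarray) -> np.ndarray:
    """Fill a non-edge pixel if the two pixels on opposite sides of it
    (horizontally, vertically, or diagonally) are both edge pixels."""
    out = edges.copy()
    h, w = edges.shape
    for y in range(h):
        for x in range(w):
            if edges[y, x]:
                continue
            for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
                y0, x0, y1, x1 = y - dy, x - dx, y + dy, x + dx
                if 0 <= y0 < h and 0 <= y1 < h and 0 <= x0 < w and 0 <= x1 < w:
                    if edges[y0, x0] and edges[y1, x1]:
                        out[y, x] = True   # bridge the one-pixel gap
                        break
    return out
```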
Technical Paper

A Context Aware Automatic Image Enhancement Method Using Color Transfer

Advanced Driver Assistance Systems (ADAS) have become an inevitable part of most modern cars. Their use is mandated by regulations in some cases; in others, vehicle owners have simply become more safety conscious. Vision/camera based ADAS are widely in use today. However, the performance of these systems depends on the quality of the image/video captured by the camera, and low illumination is one of the most important factors that degrade image quality. In order to improve system performance under low illumination, the input images/frames must first be enhanced. In this paper, we propose an image enhancement algorithm that automatically enhances images to a near-ideal condition. This is accomplished by mapping features taken from images acquired under ideal illumination conditions onto the target low-illumination images/frames.
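A common baseline for this kind of color transfer is Reinhard-style statistics matching, which shifts each channel of the low-illumination image toward the mean and spread of a well-lit reference. This is a generic sketch of that idea, not necessarily the paper's mapping, and it ignores the color-space choice a full method would make:

```python
import numpy as np

def color_transfer(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Per-channel mean/std matching: remap src so each channel has the
    reference image's mean and standard deviation."""
    out = np.empty_like(src, dtype=np.float64)
    for c in range(src.shape[2]):
        s = src[..., c].astype(np.float64)
        r = ref[..., c].astype(np.float64)
        s_std = s.std() or 1.0              # guard against flat channels
        out[..., c] = (s - s.mean()) / s_std * r.std() + r.mean()
    return out
```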
Technical Paper

A Compressed Sensing and Sparsity Based Approach for Estimating an Equivalent NIR Image from a RGB Image

Camera sensors made of silicon photodiodes, as used in ordinary digital cameras, are sensitive to visible as well as Near-Infrared (NIR) wavelengths. However, since human vision is sensitive only in the visible region, a hot mirror/infrared blocking filter is used in cameras, and certain complementary attributes of NIR data are therefore lost in this process of image acquisition. RGB and NIR images are captured in two entirely different spectral bands; thus they retain different information. Since NIR and RGB images comprise complementary information, we believe this can be exploited for extracting better features, for localization of objects of interest, and in multi-modal fusion. In this paper, an attempt is made to estimate the NIR image from a given optical image. Using a normal optical camera and the compressed sensing framework, NIR data estimation is formulated as an image recovery problem.
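Sparse recovery problems of the form min ||y - Ax||^2 + lambda*||x||_1 are commonly solved with iterative shrinkage-thresholding (ISTA). A minimal ISTA sketch, where the dictionary A, the regularization weight, and the iteration count are illustrative stand-ins (the paper's actual formulation may differ):

```python
import numpy as np

def ista(A: np.ndarray, y: np.ndarray, lam: float = 0.1,
         n_iter: int = 200) -> np.ndarray:
    """Iterative shrinkage-thresholding for min ||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```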