Contextual action recognition in multi-sensor nighttime video sequences
- Authors: Ul-Haq, Anwaar , Gondal, Iqbal , Murshed, Manzur
- Date: 2011
- Type: Text , Conference paper
- Relation: Proceedings of the 2011 Digital Image Computing: Techniques and Applications (DICTA 2011), Noosa, 6th-8th Dec. 2011, p. 256-261
- Full Text: false
- Reviewed:
- Description: Contextual information is important for interpreting human actions, especially when actions exhibit an interactive relationship with their context. Contextual clues become even more crucial when videos are captured in unfavorable conditions such as extreme low-light nighttime scenarios. These conditions encourage the use of multi-sensor imagery and context enhancement. In this paper, we explore the importance of contextual knowledge for recognizing human actions in multi-sensor nighttime videos. Information fusion is utilized to encapsulate visual information about actions and their context. Space-time action information is captured using the 3D Fourier transform of the fused action silhouette volume. In parallel, SIFT context images are extracted and fused using principal component analysis based feature fusion for each action class. Contextual dissimilarity is penalized by minimizing the context SIFT flow energy. The action dataset comprises multi-sensor night-vision video data from the infrared and visible spectrum. Experimental results show that fused contextual action information boosts action recognition performance compared to the baseline action recognition approach.
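The space-time descriptor mentioned in this abstract can be sketched in outline: taking the magnitude of the 3D Fourier transform of a silhouette volume yields a representation that is invariant to (circular) shifts of the action in space and time. Everything below is a hypothetical stand-in for illustration only; the paper's actual IR/visible fusion pipeline and dataset are not reproduced.

```python
import numpy as np

# Hypothetical stand-in for a fused action silhouette volume (t, y, x):
# a binary blob drifting across frames. The paper's multi-sensor fusion
# step that would produce such a volume is not reproduced here.
def toy_silhouette_volume(height=32, width=32, frames=16):
    vol = np.zeros((frames, height, width), dtype=np.float32)
    for t in range(frames):
        cx = 4 + t  # blob drifts horizontally over time
        vol[t, 12:20, cx:cx + 6] = 1.0
    return vol

def spacetime_descriptor(volume):
    """Magnitude of the 3D Fourier transform, flattened and normalized.

    Keeping only the magnitude discards phase, so the descriptor is
    unchanged by circular shifts of the silhouette in space and time.
    """
    spectrum = np.abs(np.fft.fftn(volume))
    desc = spectrum.ravel()
    return desc / (np.linalg.norm(desc) + 1e-12)

vol = toy_silhouette_volume()
desc = spacetime_descriptor(vol)
# Same action translated in x: descriptor is (numerically) identical.
desc_shifted = spacetime_descriptor(np.roll(vol, 3, axis=2))
```

The shift-invariance shown here is the property that makes a magnitude spectrum attractive as a holistic action descriptor; the comparison between action classes would then reduce to distances between such vectors.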
On dynamic scene geometry for view-invariant action matching
- Authors: Ul-Haq, Anwaar , Gondal, Iqbal , Murshed, Manzur
- Date: 2011
- Type: Text , Conference paper
- Relation: 24th IEEE Conference on Computer Vision and Pattern Recognition (CVPR) p. 3305-3312
- Full Text: false
- Reviewed:
- Description: Variation in viewpoint poses significant challenges to action recognition. One popular way of encoding a view-invariant action representation is to exploit the epipolar geometry between different views of the same action. The majority of representative work relies on detecting landmark points and tracking them, assuming that motion trajectories for all landmark points on the human body are available throughout the course of an action. Unfortunately, due to occlusion and noise, detection and tracking of these landmarks is not always robust. To sidestep this, some work assumes that such trajectories are manually marked, which is a clear drawback and forfeits the automation that computer vision is meant to provide. In this paper, we address this problem by proposing a view-invariant action matching score based on the epipolar geometry between actor silhouettes, without tracking or explicit point correspondences. In addition, we explore the multi-body epipolar constraint, which makes it possible to work on original action volumes without any pre-processing. We show that the multi-body fundamental matrix captures the geometry of dynamic action scenes and helps devise an action matching score across different views without any prior segmentation of actors. Extensive experimentation on challenging view-invariant action datasets shows that our approach not only removes long-standing assumptions but also achieves significant improvements in recognition accuracy and retrieval.
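The epipolar constraint underlying this matching score can be illustrated with a minimal sketch. Note the caveat: the paper's method avoids explicit point correspondences and uses a multi-body fundamental matrix; the single-body, hand-picked fundamental matrix and the point sets below are simplified assumptions chosen only to show the residual x2ᵀ F x1 that such a score penalizes.

```python
import numpy as np

# Canonical rectified stereo: epipolar lines are horizontal, so a point and
# its match across views must lie on the same image row. This hard-coded F
# stands in for the (multi-body) fundamental matrix estimated from data.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

def epipolar_residuals(F, pts1, pts2):
    """Algebraic residuals |x2^T F x1| for homogeneous point sets (N, 3)."""
    return np.abs(np.einsum('ni,ij,nj->n', pts2, F, pts1))

def matching_score(F, pts1, pts2):
    """Lower is better: two views of the same action should nearly
    satisfy the epipolar constraint; mismatched actions should not."""
    return float(np.median(epipolar_residuals(F, pts1, pts2)))

# Hypothetical silhouette boundary samples from two rectified views.
pts1 = np.array([[10.0, 5.0, 1.0], [12.0, 8.0, 1.0], [15.0, 9.0, 1.0]])
same_action = pts1 + np.array([4.0, 0.0, 0.0])   # pure horizontal disparity
mismatch    = pts1 + np.array([0.0, 6.0, 0.0])   # rows disagree
```

With the rectified F above, the residual reduces to the row difference v1 - v2, so `same_action` scores (near) zero while `mismatch` does not; a full implementation would estimate F from the scenes themselves.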
Scarf: Semi-automatic colorization and reliable image fusion
- Authors: Ul-Haq, Anwaar , Gondal, Iqbal , Murshed, Manzur
- Date: 2010
- Type: Text , Conference paper
- Relation: 2010 Digital Image Computing: Techniques and Applications p. 435-440
- Full Text: false
- Reviewed:
- Description: Nighttime imagery poses significant challenges for enhancement due to the loss of color information and the inability of a single sensor to capture complete visual information at night. To cope with this challenge, multiple sensors are used to capture reliable nighttime imagery, which places additional demands on reliable visual information fusion. In this paper, we present Scarf, a system that combines reliable image fusion based on advanced feature extraction techniques with a novel semi-automatic, optimization-based colorization conformant to the human visual system. Subjective and objective quality evaluation demonstrates the effectiveness of the proposed system.