View-invariant action recognition is one of the most challenging problems in computer vision. Various representations have been devised for matching actions across different viewpoints to achieve view invariance. In this paper, we explore the invariance of the temporal order of action instances during action execution and use it to devise a new view-invariant action recognition approach. To enforce temporal order during matching, we utilize spatiotemporal features, feature fusion, and a temporal order consistency constraint. We start by extracting spatiotemporal cuboid features from video sequences and applying feature fusion to encapsulate within-class similarity for the same viewpoints. For each action class, we construct a feature fusion table to facilitate feature matching across different views. An action matching score is then calculated from a global temporal order constraint and the number of matching features. Finally, the query action is assigned the label of the class with the maximum matching score. Experiments are performed on the multi-view INRIA Xmas Motion Acquisition Sequences (IXMAS) and West Virginia University action datasets, with encouraging results that are comparable to existing view-invariant action recognition techniques.
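One common way to realize a global temporal order constraint is to keep only the largest subset of matched feature pairs whose temporal order agrees in both videos, which reduces to a longest-increasing-subsequence computation. The sketch below illustrates this idea under stated assumptions; the abstract does not give the authors' exact scoring rule, and the function name and input format are hypothetical.

```python
import bisect

def matching_score(matches):
    """Score a query action against a reference action.

    matches: list of (ref_frame, query_frame) index pairs produced by
    spatiotemporal cuboid feature matching (format assumed here).
    The score is the size of the largest subset of matches whose
    temporal order is consistent in both videos, i.e. the longest
    strictly increasing subsequence of query frames after sorting
    the pairs by reference frame.
    """
    query_frames = [qf for _, qf in sorted(matches)]
    tails = []  # tails[k] = smallest possible end of an increasing run of length k+1
    for t in query_frames:
        i = bisect.bisect_left(tails, t)
        if i == len(tails):
            tails.append(t)
        else:
            tails[i] = t
    return len(tails)
```

Classification would then assign the query the label of the class whose reference sequence yields the maximum score.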
Gaussian mixture models (GMMs) are used to represent the dynamic background in a surveillance video so that moving objects can be detected automatically. All existing GMM-based techniques inherently depend on the proportion of time for which a pixel observes the background in a given operating environment. In this paper, we first show that this proportion not only varies widely across different scenarios but also precludes the use of a very fast learning rate. We then propose a dynamic background generation technique that, in conjunction with basic background subtraction, detects moving objects with improved stability and superior detection quality across a wide range of operating environments in two sets of benchmark surveillance sequences.
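For context, the per-pixel GMM background model that such techniques build on (in the style of Stauffer and Grimson) can be sketched for a single grayscale pixel as below. This is an illustration of the standard update rule, not the authors' proposed technique; the class name and the parameter defaults (`k`, `alpha`, `var0`, `w0`, `T`) are illustrative assumptions. The background proportion `T` is exactly the kind of quantity the paper argues varies widely across scenarios.

```python
import math

class PixelGMM:
    """Per-pixel mixture of up to k Gaussians over grayscale intensity."""

    def __init__(self, k=3, alpha=0.01, var0=36.0, w0=0.05, T=0.7):
        self.k, self.alpha = k, alpha          # components, learning rate
        self.var0, self.w0, self.T = var0, w0, T  # init variance/weight, bg proportion
        self.w, self.mu, self.var = [], [], []    # weights, means, variances

    def update(self, x):
        """Update the model with intensity x; return True if x is foreground."""
        matched = None
        for i in range(len(self.w)):
            if (x - self.mu[i]) ** 2 < 6.25 * self.var[i]:  # within 2.5 sigma
                matched = i
                break
        if matched is None:
            # No match: add a new component, or replace the weakest one.
            if len(self.w) < self.k:
                self.w.append(self.w0); self.mu.append(float(x)); self.var.append(self.var0)
            else:
                j = min(range(self.k), key=lambda i: self.w[i])
                self.w[j], self.mu[j], self.var[j] = self.w0, float(x), self.var0
        else:
            # Adapt the matched component toward the new observation.
            rho = self.alpha
            self.mu[matched] += rho * (x - self.mu[matched])
            self.var[matched] += rho * ((x - self.mu[matched]) ** 2 - self.var[matched])
        # Decay all weights, reinforce the matched component, renormalize.
        for i in range(len(self.w)):
            self.w[i] = (1 - self.alpha) * self.w[i] + (self.alpha if i == matched else 0.0)
        total = sum(self.w)
        self.w = [wi / total for wi in self.w]
        # Background = most reliable components (high weight, low variance)
        # whose cumulative weight first exceeds the proportion T.
        order = sorted(range(len(self.w)), key=lambda i: -self.w[i] / math.sqrt(self.var[i]))
        bg, cum = set(), 0.0
        for i in order:
            bg.add(i); cum += self.w[i]
            if cum > self.T:
                break
        return matched not in bg  # foreground if unmatched or matched a non-bg component
```

A full detector would run one such model per pixel; the paper's contribution lies in removing the dependence on a fixed proportion like `T`, which this baseline sketch makes explicit.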