In this paper, we propose a novel hybrid evolutionary method that combines a genetic algorithm with matrix-based solution methods such as QR factorization. Training is organized around a layer-based hierarchical structure for both the architecture and the weights of the Artificial Neural Network classifier. The classifier architecture is found using a binary-search-type procedure, and the resulting hierarchical algorithm (EALS-BT) is itself a hybrid, because it combines the genetic-algorithm-based method with the matrix-based solution method for finding the weights. A heuristic segmentation algorithm is first used to over-segment each word. The segmentation points are then passed through a rule-based module that discards incorrect segmentation points and inserts any missing ones. Following segmentation, the contour between each pair of correct segmentation points is extracted and passed through a feature extraction module that computes angular features, after which the EALS-BT algorithm finds the architecture and the weights of the classifier network. The recognized characters are grouped into words and passed to a variable-length lexicon that retrieves the words with the highest confidence values.
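To make the hybrid concrete, the following is a minimal sketch of the core idea, assuming a single-hidden-layer network: a genetic algorithm searches over the hidden-layer weights while the output-layer weights are solved in closed form via QR factorization (linear least squares). The network sizes, fitness function, and GA operators here are illustrative assumptions, not the authors' EALS-BT settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_output_weights(H, Y):
    # Matrix-based step: output weights as the least-squares solution
    # of H @ W = Y, computed via QR factorization.
    Q, R = np.linalg.qr(H)
    return np.linalg.solve(R, Q.T @ Y)

def fitness(W_hidden, X, Y):
    # Evaluate a candidate: hidden activations, closed-form output
    # layer, then negative mean squared error as the fitness.
    H = np.tanh(X @ W_hidden)
    W_out = solve_output_weights(H, Y)
    return -np.mean((H @ W_out - Y) ** 2)

def crossover(a, b):
    # Uniform crossover on two hidden-weight matrices.
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

def evolve(X, Y, n_hidden=8, pop=20, gens=50, sigma=0.1):
    # Genetic-algorithm step: evolve a population of hidden-weight
    # matrices with truncation selection, crossover, and mutation.
    population = [rng.normal(size=(X.shape[1], n_hidden)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda W: fitness(W, X, Y), reverse=True)
        parents = population[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            i, j = rng.choice(len(parents), size=2, replace=False)
            child = crossover(parents[i], parents[j])
            children.append(child + rng.normal(scale=sigma, size=child.shape))
        population = parents + children
    return max(population, key=lambda W: fitness(W, X, Y))

# Toy usage on a synthetic binary problem.
X = rng.normal(size=(200, 4))
Y = (X[:, :1] + X[:, 1:2] > 0).astype(float)
W_hidden = evolve(X, Y)
print("final fitness:", fitness(W_hidden, X, Y))
```

Solving the output layer exactly at every fitness evaluation keeps the evolutionary search confined to the nonlinear part of the problem, which is the main appeal of pairing a genetic algorithm with a matrix-based solver.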
View-invariant action recognition is one of the most challenging problems in computer vision, and various representations have been devised for matching actions across different viewpoints. In this paper, we explore the invariance of the temporal order of action instances during action execution and use it to devise a new view-invariant action recognition approach. To enforce temporal order during matching, we combine spatiotemporal features, feature fusion, and a temporal order consistency constraint. We begin by extracting spatiotemporal cuboid features from video sequences and applying feature fusion to encapsulate within-class similarity for the same viewpoints. For each action class, we construct a feature fusion table to facilitate feature matching across different views. An action matching score is then calculated from the global temporal order constraint and the number of matching features, and the query action is assigned the label of the class with the maximum matching score. Experiments on the multi-view INRIA Xmas Motion Acquisition Sequences (IXMAS) and West Virginia University (WVU) action datasets yield encouraging results that are comparable to existing view-invariant action recognition techniques.
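As an illustration of scoring under a global temporal order constraint, the sketch below matches query features to class-model features by descriptor similarity and then counts only the largest temporally consistent subset of matches (a longest-increasing-subsequence step). The descriptors, the similarity threshold, and the scoring are assumptions for illustration; the paper's cuboid features and fusion tables are not reproduced here.

```python
import numpy as np
from bisect import bisect_left

def matching_score(query, model, thresh=0.5):
    # query/model: lists of (time, descriptor) pairs, each sorted by time.
    # 1) Candidate matches: nearest model descriptor within a threshold.
    pairs = []
    for q_time, q_desc in query:
        dists = [np.linalg.norm(q_desc - m_desc) for _, m_desc in model]
        j = int(np.argmin(dists))
        if dists[j] < thresh:
            pairs.append((q_time, model[j][0]))
    # 2) Global temporal order constraint: keep the longest subset of
    #    matches whose model times increase with query time
    #    (longest increasing subsequence via patience sorting).
    pairs.sort()
    tails = []
    for _, m_time in pairs:
        i = bisect_left(tails, m_time)
        if i == len(tails):
            tails.append(m_time)
        else:
            tails[i] = m_time
    return len(tails)  # number of order-consistent matches

def classify(query, class_models):
    # Assign the query the label of the class model with the highest score.
    return max(class_models,
               key=lambda label: matching_score(query, class_models[label]))

# Toy usage with random 16-D descriptors standing in for cuboid features.
rng = np.random.default_rng(0)
make = lambda n: [(t, rng.normal(size=16)) for t in range(n)]
models = {"wave": make(10), "kick": make(10)}
query = [(t, d + 0.05 * rng.normal(size=16)) for t, d in models["wave"]]
print(classify(query, models))  # -> "wave"
```

Because the score counts only matches that preserve the order of execution, spurious descriptor matches that arrive out of sequence are discarded, which is what makes temporal order a usable cue across viewpoints.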