Threshold-free pattern-based low bit rate video coding

- Paul, Manoranjan, Murshed, Manzur

**Authors:**Paul, Manoranjan , Murshed, Manzur**Date:**2008**Type:**Text , Conference paper**Relation:**2008 15th IEEE International Conference on Image Processing p. 1584-1587**Full Text:**false**Reviewed:****Description:**Pattern-based video coding (PVC) has already established its superiority over the recent video coding standard H.264 at low bit rates because of an extra pattern-mode that segments out the arbitrary shape of the moving region within the macroblock (MB). To determine the pattern-mode, however, the PVC uses three thresholds to reduce the number of MBs coded using the pattern-mode. By setting these content-sensitive thresholds to predefined values, the technique risks ignoring some MBs that would otherwise be selected by the rate-distortion optimization function for this mode. Consequently, the ultimate achievable performance is sacrificed to save motion estimation time. In this paper, a novel PVC scheme is proposed that removes all thresholds in determining this mode, so that more efficient performance is achieved without knowledge of the content of the video sequences. To keep computational complexity in check, pattern motion is approximated from the motion vector of the MB. In addition, an efficient pattern similarity metric and new Lagrangian multipliers are also developed. The experimental results confirm that this new scheme improves the image quality by at least 0.5 dB and 1.0 dB compared to the existing PVC and H.264, respectively.

- Liu, Fei, Ting, Kaiming, Zhou, Zhi-Hua

**Authors:**Liu, Fei , Ting, Kaiming , Zhou, Zhi-Hua**Date:**2008**Type:**Text , Conference paper**Relation:**Proceedings of the Eighth IEEE International Conference on Data Mining p. 413-422**Full Text:**false**Reviewed:****Description:**Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies. This paper proposes a fundamentally different model-based method that explicitly isolates anomalies instead of profiling normal points. To the best of our knowledge, the concept of isolation has not been explored in the current literature. The use of isolation enables the proposed method, iForest, to exploit sub-sampling to an extent that is not feasible in existing methods, creating an algorithm with linear time complexity, a low constant, and a low memory requirement. Our empirical evaluation shows that iForest compares favourably with ORCA (a near-linear time complexity distance-based method), LOF, and random forests in terms of AUC and processing time, especially on large data sets. iForest also works well in high-dimensional problems with a large number of irrelevant attributes, and in situations where the training set does not contain any anomalies.
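The isolation principle described above lends itself to a compact illustration. The sketch below (our own code, not the authors' implementation; all names and the test data are hypothetical) builds a toy one-dimensional isolation forest and shows that an outlier is separated by random splits at a much shallower average depth than a point in the dense region:

```python
import random

def build_tree(data, depth=0, max_depth=8):
    # Leaf when a point is isolated or the depth limit is reached.
    if len(data) <= 1 or depth >= max_depth:
        return ('leaf', len(data))
    lo, hi = min(data), max(data)
    if lo == hi:
        return ('leaf', len(data))
    split = random.uniform(lo, hi)  # random split within the value range
    left = [x for x in data if x < split]
    right = [x for x in data if x >= split]
    return ('node', split,
            build_tree(left, depth + 1, max_depth),
            build_tree(right, depth + 1, max_depth))

def path_length(tree, x, depth=0):
    # Depth at which x lands in a leaf: short paths suggest anomalies.
    if tree[0] == 'leaf':
        return depth
    _, split, left, right = tree
    return path_length(left if x < split else right, x, depth + 1)

def avg_path(forest, x):
    return sum(path_length(t, x) for t in forest) / len(forest)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + [10.0]  # one outlier
# Sub-sampling, as in iForest: each tree sees only 64 of the 201 points.
forest = [build_tree(random.sample(data, 64)) for _ in range(100)]
```

The paper normalizes these path lengths into an anomaly score; the sketch keeps only the raw depths, which already rank the outlier as the most easily isolated point.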

Connection topologies for combining genetic and least square methods for neural learning

**Authors:**Ghosh, Ranadhir**Date:**2004**Type:**Text , Journal article**Relation:**Journal of Intelligent Systems Vol. 13, no. 3 (2004), p. 199-232**Full Text:**false**Reviewed:****Description:**In the last few years, there have been many works in the area of hybrid neural learning algorithms combining global and local methods for training artificial neural networks. In this paper, we discuss various connection strategies that can be applied to a special kind of hybrid neural learning algorithm, one that combines a genetic algorithm-based method with various least square-based methods such as QR factorization. The relative advantages and disadvantages of the different connection types are studied to find a suitable connection topology for combining the two learning methods. The methodology also finds the optimum number of hidden neurons using a hierarchical combination structure for weights and architecture. We have tested our proposed approach on XOR, 10-bit odd parity, and other real-world benchmark data sets, such as the handwriting character dataset from CEDAR and the Breast Cancer and Heart Disease datasets from the UCI machine learning repository.**Description:**C1

Decoupled modeling of gene regulatory networks using Michaelis-Menten kinetics

- Youseph, Ahammed, Chetty, Madhu, Karmakar, Gour

**Authors:**Youseph, Ahammed , Chetty, Madhu , Karmakar, Gour**Date:**2015**Type:**Text , Conference proceedings**Full Text:**false**Description:**A set of genes and their regulatory interactions are represented in a gene regulatory network (GRN). Since GRNs play a major role in maintaining cellular activities, inferring these networks is significant for understanding biological processes. Among the models available for GRN reconstruction, our recently developed nonlinear model [1] using Michaelis-Menten kinetics is considered to be more biologically relevant. However, the model remains coupled in its current form, making the process computationally expensive, especially for large GRNs. In this paper, we enhance the existing model to obtain a decoupled form which not only speeds up the computation but also makes the model more realistic by representing the strength of each regulatory arc with a distinct Michaelis-Menten constant. The parameter estimation is carried out using a differential evolution algorithm. The model is validated by inferring two synthetic networks. Results show that while the accuracy of reconstruction is similar to the coupled model, it is achieved at a faster speed. © Springer International Publishing Switzerland 2015.
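The decoupling described above means each target gene's dynamics can be integrated on its own, given the expression of its regulator. A minimal sketch of one such decoupled Michaelis-Menten production/degradation equation (our own function names and hypothetical parameter values, not the paper's model code):

```python
def mm_term(x, vmax, km):
    # Michaelis-Menten kinetics: the regulatory effect of regulator
    # concentration x saturates at vmax; km is the half-saturation constant.
    return vmax * x / (km + x)

def simulate_gene(x_reg, vmax, km, deg, x0=0.0, dt=0.01, steps=1000):
    # Euler integration of dx/dt = production - degradation for one target
    # gene driven by a single regulator held at concentration x_reg.
    # Decoupled form: the target is integrated independently of other genes.
    x = x0
    for _ in range(steps):
        x += dt * (mm_term(x_reg, vmax, km) - deg * x)
    return x

steady = simulate_gene(x_reg=2.0, vmax=1.0, km=0.5, deg=0.2)
```

At steady state the production term vmax*x_reg/(km + x_reg) = 0.8 balances the linear degradation 0.2*x, giving a balance point of 4.0 that the trajectory approaches from below.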

On the degrees of a strongly vertex-magic graph

- Balbuena, Camino, Barker, Ewan, Das, K. C., Lin, Yuqing, Miller, Mirka, Ryan, Joe, Slamin,, Sugeng, Kiki Ariyanti, Tkac, M.

**Authors:**Balbuena, Camino , Barker, Ewan , Das, K. C. , Lin, Yuqing , Miller, Mirka , Ryan, Joe , Slamin, , Sugeng, Kiki Ariyanti , Tkac, M.**Date:**2006**Type:**Text , Journal article**Relation:**Discrete Mathematics Vol. 306, no. 6 (2006), p. 539-551**Full Text:**false**Reviewed:****Description:**Let G=(V,E) be a finite graph, where |V|=n≥2 and |E|=e≥1. A vertex-magic total labeling is a bijection λ from V∪E to the set of consecutive integers {1,2,...,n+e} with the property that for every v∈V, λ(v)+∑_{w∈N(v)} λ(vw)=h for some constant h. Such a labeling is strong if λ(V)={1,2,...,n}. In this paper, we prove first that the minimum degree of a strongly vertex-magic graph is at least two. Next, we show that if 2e≥√(10n²−6n+1), then the minimum degree of a strongly vertex-magic graph is at least three. Further, we obtain upper and lower bounds on any vertex degree in terms of n and e. As a consequence we show that a strongly vertex-magic graph is maximally edge-connected and hamiltonian if the number of edges is large enough. Finally, we prove that semi-regular bipartite graphs are not strongly vertex-magic graphs, and we provide strongly vertex-magic total labelings of certain families of circulant graphs. © 2006 Elsevier B.V. All rights reserved.**Description:**C1**Description:**2003001603
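The definition above is easy to verify mechanically. The sketch below (our own code; identifiers are hypothetical) checks a total labeling for the vertex-magic property and confirms a strong labeling of the 3-cycle C3, where the vertex labels occupy {1,2,3} and the magic constant works out to h = 12:

```python
def is_vertex_magic(vertices, edges, label):
    # Check that label is a bijection from V ∪ E onto {1, ..., n+e} and that
    # λ(v) + Σ_{w∈N(v)} λ(vw) is the same constant h at every vertex.
    # Returns h, or None if the labeling fails either condition.
    n, e = len(vertices), len(edges)
    values = [label[v] for v in vertices] + [label[ed] for ed in edges]
    if sorted(values) != list(range(1, n + e + 1)):
        return None  # not a bijection onto {1, ..., n+e}
    sums = {label[v] + sum(label[ed] for ed in edges if v in ed)
            for v in vertices}
    return sums.pop() if len(sums) == 1 else None

# C3 with vertex labels 1..3 (so the labeling is strong) and edge labels 4..6.
V = [1, 2, 3]
E = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 1})]
lab = {1: 1, 2: 2, 3: 3,
       frozenset({1, 2}): 6, frozenset({2, 3}): 4, frozenset({3, 1}): 5}
h = is_vertex_magic(V, E, lab)
```

Each vertex sum is λ(v) plus the labels of its two incident edges: 1+6+5 = 2+6+4 = 3+4+5 = 12, and λ(V) = {1,2,3}, so the labeling is strongly vertex-magic.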

A novel motion classification based intermode selection strategy for HEVC performance improvement

- Podder, Pallab, Paul, Manoranjan, Murshed, Manzur

**Authors:**Podder, Pallab , Paul, Manoranjan , Murshed, Manzur**Date:**2015**Type:**Text , Journal article**Relation:**Neurocomputing Vol. 173, no. Part 3 (2015), p. 1211-1220**Relation:**http://purl.org/au-research/grants/arc/DP130103670**Full Text:**false**Reviewed:****Description:**The High Efficiency Video Coding (HEVC) standard adopts several new approaches to achieve higher coding efficiency (approximately 50% bit-rate reduction) than its predecessor H.264/AVC at the same perceptual image quality. Encoding time has also increased hugely due to the algorithmic complexity of HEVC compared to H.264/AVC, so it is a demanding task to reduce the encoding time while preserving similar quality of the video sequences. In this paper, we propose a novel, efficient intermode selection technique, incorporated into the HEVC framework, that predicts motion estimation and motion compensation modes between current and reference blocks and performs faster inter mode selection based on three dissimilar motion types in divergent video sequences. Instead of exhaustively exploring all the modes, we select only a subset of candidate modes, and the final mode is determined from the selected subset based on the lowest Lagrangian cost function. The experimental results reveal that the average encoding time can be reduced by 40% with similar rate-distortion performance compared to the exhaustive mode selection strategy in HEVC. © 2015 Elsevier B.V.
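The mode decision step described above reduces to minimizing a Lagrangian cost J = D + λ·R over a candidate subset. A minimal sketch (the mode names, distortion/rate figures, and λ value are all hypothetical, and this is the generic selection rule rather than the paper's classifier):

```python
def select_mode(candidates, lmbda):
    # candidates: list of (mode_name, distortion, rate) triples.
    # Returns the mode with the lowest Lagrangian cost J = D + λ·R.
    return min(candidates, key=lambda m: m[1] + lmbda * m[2])[0]

# Hypothetical candidate modes: (name, distortion D, rate R in bits).
modes = [("SKIP", 40.0, 2.0),
         ("INTER_16x16", 12.0, 30.0),
         ("INTRA", 8.0, 60.0)]
best = select_mode(modes, lmbda=0.5)
```

Restricting `candidates` to a motion-classified subset, as the paper proposes, shrinks the argmin without changing the selection rule itself.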

PCA based population generation for genetic network optimization

- Youseph, Ahammed, Chetty, Madhu, Karmakar, Gour

**Authors:**Youseph, Ahammed , Chetty, Madhu , Karmakar, Gour**Date:**2018**Type:**Text , Journal article**Relation:**Cognitive Neurodynamics Vol. 12, no. 4 (2018), p. 417-429**Full Text:**false**Reviewed:****Description:**A gene regulatory network (GRN) represents a set of genes and their regulatory interactions. The inference of the regulatory interactions between genes is usually carried out using an appropriate mathematical model and the available gene expression profile. Among the various models proposed for GRN inference, our recently proposed Michaelis–Menten based ODE model provides a good trade-off between computational complexity and biological relevance. This model, like other known GRN models, also uses an evolutionary algorithm for parameter estimation. Considering the issues associated with such population-based stochastic optimization approaches (e.g. diversity, premature convergence due to local optima, accuracy), it becomes important to seed the initial population with good individuals that are close to the optimal solution. In this paper, we exploit the inherent strength of principal component analysis (PCA) in a novel manner to initialize the population for GRN optimization. The benefit of the proposed method is validated by reconstructing in silico and in vivo networks of various sizes. For the same level of accuracy, the approach with PCA-based initialization shows improved convergence speed.

Large dataset complexity reduction for classification: An optimization perspective

**Authors:**Yatsko, Andrew**Date:**2012**Type:**Text , Thesis , PhD**Full Text:****Description:**Doctor of Philosophy**Description:**Computational complexity in data mining is attributed to algorithms but lies hugely with the data. Different algorithms may exist to solve the same problem, but the simplest is not always the best. At the same time, data of astronomical proportions is rather common, boosted by automation, and the fuller the data, the better the resolution of the concept it projects. Paradoxically, it is the computing power that is lacking. Perhaps a fast algorithm can be run on the data, but not the optimal one. Even then any modeling is much constrained, involving serial application of many algorithms. The only other way to relieve the computational load is to make the data lighter. Any representative subset has to preserve the data's essence, suiting, ideally, any algorithm. The reduction should minimize the error of approximation while trading precision for performance. Data mining is a wide field; we concentrate on classification. In the literature review we present a variety of methods, emphasizing the effort of the past decade. The two major objects of reduction are instances and attributes. The data can also be recast into a more economical format. We address sampling, noise reduction, class domain binarization, feature ranking, feature subset selection, feature extraction, and also discretization of continuous features. Achievements are tremendous, but so are possibilities. We improve an existing technique of data cleansing and suggest a way of data condensing as its extension. We also touch on noise reduction. Instance similarity, excepting the class mix, prompts a technique of feature selection. Additionally, we consider multivariate discretization, enabling a compact data representation without a change of size. We compare the proposed methods with alternative techniques that we newly introduce, implement, or use as available.


- Yuan, Y. B., Fang, Shucherng, Gao, David

**Authors:**Yuan, Y. B. , Fang, Shucherng , Gao, David**Date:**2012**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol. 52, no. 2 (2012), p. 195-209**Full Text:**false**Reviewed:****Description:**This paper studies the canonical duality theory for solving a class of quadrinomial minimization problems subject to one general quadratic constraint. It is shown that the nonconvex primal problem in Rⁿ can be converted into a concave maximization dual problem over a convex set in R², such that the problem can be solved more efficiently. The existence and uniqueness theorems of global minimizers are provided using the triality theory. Examples are given to illustrate the results obtained. © 2011 Springer Science+Business Media, LLC.

Vendor selection using fuzzy C means algorithm and analytic hierarchy process

- Nine, M.S.Q.Z., Khan, M.A.K., Hoque, M.H., Ali, Mortuza, Shil, N.C., Sorwar, Golam

**Authors:**Nine, M.S.Q.Z. , Khan, M.A.K. , Hoque, M.H. , Ali, Mortuza , Shil, N.C. , Sorwar, Golam**Date:**2009**Type:**Text , Conference paper**Relation:**Fuzzy Systems, 2009. FUZZ-IEEE 2009. IEEE International Conference**Full Text:**false**Reviewed:****Description:**Vendor selection is a strategic issue in supply chain management for any organization seeking to identify the right supplier. Such selection is in most cases based on the analysis of some specific criteria. Most research so far concentrates on multi-criteria decision-making analysis. Though many approaches have been proposed, the analytic hierarchy process (AHP) is the best known, as it can deal with a very complex criteria structure. In AHP, the selected criteria are ranked and organized in a hierarchical order from generic to specific to formulate the problem. Though this order of ranking is acceptably logical, it incurs a huge computational cost when a large number of alternatives are considered against the selection criteria. Moreover, the AHP may generate a wrong selection due to computational error. To address these limitations, a novel model, namely vendor selection using the fuzzy c-means algorithm and analytic hierarchy process (VFA), is presented in this paper by integrating the fuzzy c-means clustering (FCM) algorithm with the analytic hierarchy process (AHP). The outcome of the proposed VFA algorithm is compared with the basic AHP algorithm; VFA outperforms the basic AHP and reduces the computational complexity of AHP by a factor of 7.
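The AHP step in the abstract derives criterion weights from a pairwise comparison matrix as its principal eigenvector. A minimal power-iteration sketch (our own code; the example matrix is built from hypothetical weights, so it is perfectly consistent and the iteration recovers them exactly):

```python
def ahp_weights(A, iters=100):
    # Power iteration for the principal eigenvector of a pairwise
    # comparison matrix A (A[i][j] = importance of criterion i over j,
    # with A[j][i] = 1/A[i][j]); weights are normalized to sum to 1.
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# A consistent comparison matrix built from hypothetical true weights.
true_w = [0.5, 0.3, 0.2]
A = [[wi / wj for wj in true_w] for wi in true_w]
w = ahp_weights(A)
```

In practice the comparison matrix comes from expert judgments and is only approximately consistent; the same iteration then yields the AHP priority vector, and a consistency ratio is usually checked alongside it.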

- Gao, David, Watson, Layne, Easterling, David, Thacker, William, Billups, Stephen

**Authors:**Gao, David , Watson, Layne , Easterling, David , Thacker, William , Billups, Stephen**Date:**2013**Type:**Text , Journal article**Relation:**Optimization Methods and Software Vol. 28, no. 2 (2013), p. 313-326**Full Text:**false**Reviewed:****Description:**This paper presents a massively parallel global deterministic direct search method (VTDIRECT) for solving nonconvex quadratic minimization problems with either box or integer constraints. Using the canonical dual transformation, these well-known NP-hard problems can be reformulated as perfect dual stationary problems (with zero duality gap). Under certain conditions, these dual problems are equivalent to smooth concave maximization over a convex feasible space. Based on a perturbation method proposed by Gao, the integer programming problem is shown to be equivalent to a continuous unconstrained Lipschitzian global optimization problem. The parallel algorithm VTDIRECT is then applied to solve these dual problems to obtain global minimizers. Parallel performance results for several nonconvex quadratic integer programming problems are reported. © 2013 Copyright Taylor and Francis Group, LLC.**Description:**2003010580
