Application of artificial intelligence to improve quality of service in computer networks
- Authors: Ahmad, Iftekhar , Kamruzzaman, Joarder , Habibi, Daryoush
- Date: 2012
- Type: Text , Journal article
- Relation: Neural Computing & Applications Vol. 21, no. 1 (2012), p. 81-90
- Full Text: false
- Reviewed:
- Description: Resource sharing between book-ahead (BA) and instantaneous request (IR) reservation often results in high preemption rates for ongoing IR calls in computer networks. High IR call preemption rates cause interruptions to service continuity, which is considered detrimental in a QoS-enabled network. A number of call admission control models have been proposed in the literature to reduce preemption rates for ongoing IR calls. Many of these models use a tuning parameter to achieve a certain level of preemption rate. This paper presents an artificial neural network (ANN) model to dynamically control the preemption rate of ongoing calls in a QoS-enabled network. The model maps network traffic parameters and the operating preemption rate desired by the network operator for the network under consideration into an appropriate tuning parameter value. Once trained, this model can be used to automatically estimate the tuning parameter value necessary to achieve the desired operating preemption rate. Simulation results show that the preemption rate attained by the model closely matches the target rate.
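The mapping described above is essentially a regression from traffic statistics and a target preemption rate to a tuning-parameter value. The sketch below illustrates that idea with a small feed-forward network on synthetic data; the feature set, network size and data are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch only: a small feed-forward regressor that maps traffic
# statistics plus a target preemption rate to a tuning-parameter value.
# Feature names and data are hypothetical; the paper's exact inputs and
# network architecture are not reproduced here.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: BA load, IR load, mean holding time, target preemption rate
X = rng.uniform(0.0, 1.0, size=(n, 4))
# Hypothetical relationship between inputs and the tuning parameter (demo only)
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 3] + rng.normal(0, 0.01, n)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# Once trained, the regressor returns a tuning-parameter estimate for new
# traffic conditions and a desired operating preemption rate.
print(model.predict(X[:3]))
```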
Pattern recognition in bioinformatics
- Authors: Ahmad, Shandar , Chetty, Madhu , Schmidt, Bertil
- Date: 2010
- Type: Text , Journal article
- Relation: Pattern Recognition Letters Vol. 31, no. 14 (2010), p. 2071-2072
- Full Text: false
- Reviewed:
- Description: Editorial - With the advent of high-speed computers, in-silico studies of biological patterns have in recent years been significantly impacted by pattern recognition techniques. In this special issue, ‘Pattern Recognition in Bioinformatics’, we present various sophisticated algorithms for a wide range of pattern recognition problems from the world of complex biological systems, whether these are specific sequence signatures – motifs that stand out in discovering their partners – or substructures in an interaction network that determine an organism’s response to external stimuli. The 12 high-quality articles included in this special issue are substantially extended versions of selected papers presented at the Third International Conference on Pattern Recognition in Bioinformatics (PRIB 2008) held in Melbourne, Australia. All papers selected for the special issue again underwent a thorough review by at least three reviewers who are experts in the field. This fresh review process was followed to ensure that the papers met the high standards of scientific and technical merit of the Pattern Recognition Letters journal. The issue is broadly divided into three sections of four papers each: (1) Interaction Networks and Feature-based Predictions, (2) Microarray and Transcription Data Analysis, and (3) Sequence Analysis and Motif Discovery.
Detection and separation of generic-shaped objects by fuzzy clustering
- Authors: Ali, Mohammad , Karmakar, Gour , Dooley, Laurence
- Date: 2010
- Type: Text , Journal article
- Relation: International Journal of Intelligent Computing and Cybernetics Vol. 3, no. 3 (2010), p. 365-390
- Full Text: false
- Reviewed:
- Description: Image segmentation involves the separation of mutually exclusive regions/objects of interest (Gonzalez and Woods, 2002), and is integral to the image processing, coding and interpretation domains, with applications spanning image analysis, robot vision, automatic car assembly, security surveillance systems, object recognition and medical imaging (Gonzalez and Woods, 2002; Hoppner et al., 1999; Pham and Prince, 1999; Gath and Geva, 1989; Pal and Pal, 1993). As there are potentially a very large number of perceptual objects in an image, with subtle variations between them, generalised object-based segmentation is an especially challenging task.
Exploiting spatial smoothness to recover undecoded coefficients for transform domain distributed video coding
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference paper
- Relation: IEEE International Conference on Image Processing; Melbourne, Australia; 15th-18th September 2013, p. 1782-1786
- Relation: http://purl.org/au-research/grants/arc/DP1095487
- Full Text: false
- Reviewed:
- Description: In a transform domain distributed video coding scheme, the correlation between the current encoding unit, e.g. block and slice, and the corresponding side-information is modeled using a virtual channel. This correlation model is then used for rate allocation, quantization, and Wyner-Ziv coding. Since the encoder can only have an estimate of the correlation instead of the exact knowledge of the side-information, the decoder will fail to recover the quantized transformed coefficients with a nonzero probability. In this paper, we propose to integrate a scheme at the decoder to recover the undecoded coefficients using the spatial smoothness property of individual video frames. Simulation results demonstrated that, at different decoding failure probabilities, a transformed coefficient recovery scheme can significantly improve the quality of videos in terms of both PSNR and SSIM.
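The recovery step described above relies on a smoothness prior. The following sketch shows only the generic idea (iteratively smoothing unknown samples while keeping decoded ones fixed); it is not the paper's transform-domain recovery scheme, and the block data and iteration count are illustrative.

```python
# Generic illustration of using spatial smoothness to fill in values that
# failed to decode. Unknown samples are estimated so that the result varies
# smoothly, while correctly decoded samples are kept fixed.
import numpy as np

def fill_by_smoothness(block, known_mask, iters=200):
    """Iteratively replace unknown entries with the average of their 4-neighbours."""
    est = np.where(known_mask, block, block[known_mask].mean())
    for _ in range(iters):
        padded = np.pad(est, 1, mode="edge")
        neigh_avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        est = np.where(known_mask, block, neigh_avg)  # keep decoded values fixed
    return est

# Toy example: a smooth 8x8 block with a few "undecoded" samples
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = (x + y).astype(float)
mask = np.ones((8, 8), dtype=bool)
mask[3, 4] = mask[5, 2] = False                       # positions that failed to decode
print(fill_by_smoothness(block * mask, mask)[3, 4])   # close to the true value 7.0
```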
A parametric approach to list decoding of Reed-Solomon codes using interpolation
- Authors: Ali, Mortuza , Kuijper, Margreta
- Date: 2011
- Type: Text , Journal article
- Relation: IEEE Transactions on Information Theory Vol. 57, no. 10 (2011), p. 6718-6728
- Full Text: false
- Reviewed:
- Description: In this paper, we present a minimal list decoding algorithm for Reed-Solomon (RS) codes. Minimal list decoding for a code refers to list decoding with radius equal to the minimum of the distances between the received word and any codeword in the code. We consider the problem of determining this minimum distance as well as determining all the codewords at that distance. Our approach involves a parametrization of interpolating polynomials of a minimal Gröbner basis. We present two efficient ways to compute this basis. We also show that so-called re-encoding can be used to further reduce the complexity. We then demonstrate how our parametric approach can be solved by a computationally feasible rational curve fitting solution from a recent paper by Wu. In addition, we present an algorithm to compute the minimum multiplicity as well as the optimal values of the parameters associated with this multiplicity, which results in overall savings in both memory and computation.
Predictive coding of integers with real-valued predictions
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference paper
- Relation: DCC 2013 Data Compression Conference; Snowbird, USA; 20th-22nd March 2013; p. 431-440
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: In this paper, we have extended the Rice-Golomb code so that it can operate at fractional precision to efficiently exploit real-valued predictions. Coding at infinitesimal precision allows the residuals to be modeled with the Laplace distribution. Unlike the Rice-Golomb code, which maps equally probable opposite-signed residuals to different integers, the proposed coding scheme is symmetric in the sense that, at infinitesimal precision, it assigns code words of equal length to equally probable residual intervals. The symmetry of both the Laplace distribution and the coding scheme facilitates the analysis of the proposed coding scheme to determine the average code-length and the optimal value of the associated coding parameter.
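For context, the sketch below implements the classical integer-precision Rice-Golomb code that the paper generalises, including the conventional signed-to-unsigned folding whose asymmetry the paper addresses. The fractional-precision, symmetric extension itself is not reproduced; parameter choices are illustrative.

```python
# Classical Rice-Golomb baseline: a non-negative integer n is coded with
# parameter k >= 1 as a unary quotient followed by k remainder bits.
def rice_encode(n: int, k: int) -> str:
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "b").zfill(k)

def fold_signed(e: int) -> int:
    # Conventional mapping 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    # Note the asymmetry the paper points out: equally probable +e and -e
    # can receive code words of different lengths (e.g. +1 vs -1 below).
    return 2 * e if e >= 0 else -2 * e - 1

for e in (-3, -1, 0, 1, 3):
    print(e, rice_encode(fold_signed(e), k=1))
```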
Motion compensation for block-based lossless video coding using lattice-based binning
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2010
- Type: Text , Conference paper
- Full Text: false
- Reviewed:
- Description: A block-based lossless video coding scheme using the notion of binning has been proposed in [1]. To further improve the compression and reduce the complexity, in this paper we investigate the impact of two sub-optimal motion search algorithms on the performance of this lattice-based scheme. While one of the algorithms tries to avoid motion vectors, the other aims to reduce complexity. Our experimental results have demonstrated that the loss due to sub-optimal motion search outweighs the gain when motion vectors are avoided. However, experimental results have shown that there is negligible performance loss when the low-complexity sub-optimal three-step search is used.
A count data model for heart rate variability forecasting and premature ventricular contraction detection
- Authors: Allami, Ragheed , Stranieri, Andrew , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2017
- Type: Text , Journal article
- Relation: Signal Image and Video Processing Vol. 11, no. 8 (2017), p. 1427-1435
- Full Text:
- Reviewed:
- Description: Heart rate variability (HRV) measures including the standard deviation of inter-beat variations (SDNN) require at least 5 min of ECG recordings to accurately measure HRV. In this paper, we predict, using count data derived from a 3-min ECG recording, the 5-min SDNN and also detect premature ventricular contraction (PVC) beats with a high degree of accuracy. The approach uses count data combined with a Poisson-generated function that requires minimal computational resources and is well suited to remote patient monitoring with wearable sensors that have limited power, storage and processing capacity. The ease of use and accuracy of the algorithm provide an opportunity for accurate assessment of HRV and reduce the time taken to review patients in real time. The PVC beat detection is implemented using the same count data model together with knowledge-based rules derived from clinical knowledge.
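To make the HRV quantity concrete, the snippet below computes SDNN from a series of inter-beat (NN) intervals using its standard definition; the paper's count-data/Poisson predictor of 5-minute SDNN from a 3-minute recording is not shown, and the synthetic RR series is purely illustrative.

```python
# Standard SDNN computation from inter-beat (NN) intervals, shown only to make
# the HRV measure concrete; this is not the paper's predictive model.
import numpy as np

def sdnn(nn_intervals_ms):
    """Standard deviation of NN intervals (ms) over the recording."""
    return float(np.std(np.asarray(nn_intervals_ms, dtype=float), ddof=1))

# Toy example: synthetic RR intervals around 800 ms
rng = np.random.default_rng(1)
rr = 800 + rng.normal(0, 40, size=220)
print(f"SDNN = {sdnn(rr):.1f} ms")
```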
Unitary anomaly detection for ubiquitous safety in machine health monitoring
- Authors: Amar, Muhammad , Gondal, Iqbal , Wilson, Campbell
- Date: 2012
- Type: Text , Conference paper
- Relation: 19th International Conference on Neural Information Processing (ICONIP) p. 361-368
- Full Text: false
- Reviewed:
- Description: Safety has always been of vital concern in both industrial and home applications. Ensuring safety often requires certain quantifications regarding the inclusive behavior of the system under observation in order to determine deviations from normal behavior. In machine health monitoring, the vibration signal is of great importance for such measurements because it includes abundant information from several machine parts and surroundings that can influence machine behavior. This paper proposes a unitary anomaly detection technique (UAD) that, upon observation of abnormal behavior in the vibration signal, can trigger an alarm with an adjustable threshold in order to meet different safety requirements. The normalized amplitudes of the spectral contents of the quasi-stationary time-domain vibration signal are divided into frequency bins, and the summed amplitudes over each bin are used as features. From a training set consisting of normal vibration signals, Gaussian distribution models are obtained for each feature, which are then used for anomaly detection.
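A minimal sketch of the bin-and-Gaussian pipeline described above, with an illustrative scoring rule (worst-bin z-score) and threshold; the signal model, bin count and decision rule are assumptions rather than the paper's exact UAD formulation.

```python
# Illustrative pipeline: spectrum -> frequency bins -> summed amplitude per bin
# -> per-feature Gaussian fitted on normal data -> alarm when a new signal
# deviates too far. Constants are illustrative, not the paper's values.
import numpy as np

def bin_features(signal, n_bins=16):
    amp = np.abs(np.fft.rfft(signal))
    amp = amp / (amp.sum() + 1e-12)                 # normalised amplitudes
    bins = np.array_split(amp, n_bins)
    return np.array([b.sum() for b in bins])        # summed amplitude per bin

def fit_normal_model(signals):
    feats = np.array([bin_features(s) for s in signals])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def anomaly_score(signal, mu, sigma):
    z = (bin_features(signal) - mu) / sigma
    return float(np.max(np.abs(z)))                 # worst-bin deviation

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048, endpoint=False)
normal = [np.sin(2*np.pi*50*t) + 0.1*rng.standard_normal(t.size) for _ in range(50)]
faulty = np.sin(2*np.pi*50*t) + 0.8*np.sin(2*np.pi*160*t)

mu, sigma = fit_normal_model(normal)
threshold = 6.0                                     # adjustable to safety needs
print(anomaly_score(faulty, mu, sigma) > threshold) # expected: True
```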
A novel color image fusion QoS measure for multi-sensor night vision applications
- Authors: Ul-Haq, Anwaar , Gondal, Iqbal , Murshed, Manzur
- Date: 2010
- Type: Text , Conference proceedings
- Full Text: false
- Description: Color image fusion of visible and infra-red imagery can play an important role in multi-sensor night vision systems that are an integral part of modern warfare. Image fusion minimizes the amount of required bandwidth by transmitting the fused image rather than multiple sensor images. Color image fusion can be achieved by combining inputs from original colored sensors or by employing pseudo colorization and color transfer to grayscale images. Various quality measures have been proposed for multi-sensor grayscale image fusion techniques; but no appropriate quality measure has been devised for the quality evaluation of multi-sensor color image fusion. In this paper, we propose a novel color image fusion quality measure, Color Fusion Objective Index (CFOI) based on colorfulness, gradient similarity and mutual information techniques. Experimental results show the effectiveness of CFOI to evaluate the color and salient feature extraction introduced by color fusion techniques into the final fused imagery as well as its consistency with subjective evaluation.
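Colorfulness is one of the three ingredients of the proposed CFOI. The snippet below computes one widely used colorfulness statistic (Hasler and Suesstrunk's opponent-colour measure) as an illustration; whether CFOI uses this exact formulation is an assumption, and the gradient-similarity and mutual-information terms are not shown.

```python
# One common colorfulness statistic based on opponent-colour channels; shown
# only to illustrate the kind of quantity CFOI aggregates.
import numpy as np

def colorfulness(rgb):
    """rgb: H x W x 3 float array with values in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g
    yb = 0.5 * (r + g) - b
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return float(sigma + 0.3 * mu)

rng = np.random.default_rng(0)
print(colorfulness(rng.uniform(0, 255, size=(64, 64, 3))))
```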
Enhanced polyphonic music genre classification using high level features
- Authors: Arabi, Arash , Lu, Guojun
- Date: 2009
- Type: Text , Conference paper
- Relation: Proceedings of the 2009 IEEE International Conference on Signal and Image Processing Applications p. 1-6
- Full Text: false
- Reviewed:
- Description: The task of classifying the genre of polyphonic music signals is traditionally done using only low level features of the signal. In this paper high level features have been applied to improve the task of music genre classification. The use of statistical chord features and chord progression information in conjunction with low level features are proposed in this paper. The chord progression information is manifested in genre probability descriptors calculated using a pattern matching algorithm. Our proposed method provides an improvement of 12.4% in the classification results over a commonly compared technique.
MassBayes: a new generative classifier with multi-dimensional likelihood estimation
- Authors: Aryal, Sunil , Ting, Kaiming
- Date: 2013
- Type: Text , Conference paper
- Relation: Advances in Knowledge Discovery and Data Mining: 17th Pacific-Asia Conference p. 136-148
- Full Text: false
- Reviewed:
- Description: Existing generative classifiers (e.g., BayesNet and AnDE) make independence assumptions and estimate one-dimensional likelihood. This paper presents a new generative classifier called MassBayes that estimates multi-dimensional likelihood without making any explicit assumptions. It aggregates the multi-dimensional likelihoods estimated from random subsets of the training data using varying size random feature subsets. Our empirical evaluations show that MassBayes yields better classification accuracy than the existing generative classifiers in large data sets. As it works with fixed-size subsets of training data, it has constant training time complexity and constant space complexity, and it can easily scale up to very large data sets.
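A generic sketch of the ensemble idea described above: each member estimates a multi-dimensional class-conditional likelihood on a random data subsample over a random feature subset, and the members are aggregated at prediction time. A multivariate Gaussian stands in for the paper's partition-based estimator purely for brevity; all names and constants are illustrative.

```python
# Generic ensemble of multi-dimensional likelihood estimators: each member uses
# a random subsample and a random feature subset; here each member fits a
# multivariate Gaussian per class (NOT the paper's estimator).
import numpy as np
from scipy.stats import multivariate_normal

def fit_ensemble(X, y, n_models=20, sample_frac=0.3, rng=None):
    rng = rng or np.random.default_rng(0)
    models = []
    for _ in range(n_models):
        feats = rng.choice(X.shape[1], size=rng.integers(1, X.shape[1] + 1), replace=False)
        idx = rng.choice(len(X), size=int(sample_frac * len(X)), replace=False)
        per_class = {}
        for c in np.unique(y):
            Xc = X[idx][y[idx] == c][:, feats]
            cov = np.cov(Xc.T) + 1e-6 * np.eye(len(feats))
            per_class[c] = multivariate_normal(Xc.mean(axis=0), cov)
        models.append((feats, per_class))
    return models

def predict(models, x):
    classes = list(models[0][1])
    # aggregate log-likelihoods across ensemble members, then pick the best class
    scores = {c: np.mean([m[c].logpdf(x[feats]) for feats, m in models]) for c in classes}
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(3, 1, (200, 3))])
y = np.array([0] * 200 + [1] * 200)
print(predict(fit_ensemble(X, y), np.array([2.8, 3.1, 2.9])))   # expected: 1
```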
A generic ensemble approach to estimate multidimensional likelihood in Bayesian classifier learning
- Authors: Aryal, Sunil , Ting, Kaiming
- Date: 2016
- Type: Text , Journal article
- Relation: Computational Intelligence Vol. 32, no. 3 (2016), p. 458-479
- Full Text: false
- Reviewed:
- Description: In Bayesian classifier learning, estimating the joint probability distribution p(x, y) or the likelihood p(x | y) directly from training data is considered to be difficult, especially in large multidimensional data sets. To circumvent this difficulty, existing Bayesian classifiers such as Naive Bayes, BayesNet, and AnDE have focused on estimating simplified surrogates of p(x, y) from different forms of one-dimensional likelihoods. Contrary to the perceived difficulty in multidimensional likelihood estimation, we present a simple generic ensemble approach to estimate multidimensional likelihood directly from data. The idea is to aggregate p(x | y) estimated from random subsamples of the data. This article presents two ways to estimate multidimensional likelihoods using the proposed generic approach and introduces two new Bayesian classifiers, called ENNBayes and MassBayes, that estimate p(x | y) using a nearest-neighbor density estimation and a probability estimation through feature space partitioning, respectively. Unlike the existing Bayesian classifiers, ENNBayes and MassBayes have constant training time and space complexities, and they scale better than existing Bayesian classifiers in very large data sets. Our empirical evaluation shows that ENNBayes and MassBayes yield better predictive accuracy than the existing Bayesian classifiers in benchmark data sets.
Data-dependent dissimilarity measure : An effective alternative to geometric distance measures
- Authors: Aryal, Sunil , Ting, Kaiming , Washio, Takashi , Haffari, Gholamreza
- Date: 2017
- Type: Text , Journal article
- Relation: Knowledge and Information Systems Vol. 53, no. 2 (2017), p. 479-506
- Full Text: false
- Reviewed:
- Description: Nearest neighbor search is a core process in many data mining algorithms. Finding reliable closest matches of a test instance is still a challenging task as the effectiveness of many general-purpose distance measures such as the ℓp-norm decreases as the number of dimensions increases. Their performances vary significantly in different data distributions. This is mainly because they compute the distance between two instances solely based on their geometric positions in the feature space, and data distribution has no influence on the distance measure. This paper presents a simple data-dependent general-purpose dissimilarity measure called ‘mp-dissimilarity’. Rather than relying on geometric distance, it measures the dissimilarity between two instances as a probability mass in a region that encloses the two instances in every dimension. It deems two instances in a sparse region to be more similar than two instances of equal inter-point geometric distance in a dense region. Our empirical results in k-NN classification and content-based multimedia information retrieval tasks show that the proposed mp-dissimilarity measure produces better task-specific performance than existing widely used general-purpose distance measures such as the ℓp-norm and cosine distance across a wide range of moderate- to high-dimensional data sets with continuous only, discrete only, and mixed attributes.
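The sketch below illustrates the data-dependent idea with one plausible reading of mp-dissimilarity: a power mean of per-dimension data-mass fractions between the two instances. Treat the exact formula and constants as assumptions rather than the paper's specification.

```python
# Data-dependent dissimilarity: driven by how much data falls between x and y
# in each dimension, rather than by their geometric gap.
import numpy as np

def mp_dissimilarity(x, y, data, p=2):
    lo, hi = np.minimum(x, y), np.maximum(x, y)
    # fraction of points whose value lies in [lo_i, hi_i], per dimension i
    mass = ((data >= lo) & (data <= hi)).mean(axis=0)
    return float(np.mean(mass ** p) ** (1.0 / p))

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.2, (500, 2)), rng.normal(5, 2.0, (500, 2))])
a, b = np.array([0.0, 0.0]), np.array([0.4, 0.4])     # pair in a dense region
c, d = np.array([5.0, 5.0]), np.array([5.4, 5.4])     # same gap, sparse region
print(mp_dissimilarity(a, b, data) > mp_dissimilarity(c, d, data))  # expected: True
```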
Efficient and effective transformed image identification
- Authors: Awrangjeb, Mohammad , Lu, Guojun
- Date: 2008
- Type: Text , Conference proceedings
- Full Text: false
- Description: The SIFT (scale invariant feature transform) has demonstrated its superior performance in identifying transformed images over many other approaches. However, both of its detection and matching stages are expensive, because a large number of keypoints are detected in the scale-space and each keypoint is described using a 128-dimensional vector. We present two possible solutions for feature-point reduction. The first is to down-scale the image before SIFT keypoint detection, and the second is to use corners (instead of SIFT keypoints), which are visually significant, more robust, and much smaller in number than the SIFT keypoints. Either the curvature descriptor or the highly distinctive SIFT descriptors at corner locations can be used to represent corners. We then describe a new feature-point matching technique, which can be used for matching both the down-scaled SIFT keypoints and corners. Experimental results show that the two feature-point reduction solutions combined with the SIFT descriptors and the proposed feature-point matching technique not only improve the computational efficiency and decrease the storage requirement, but also improve the transformed image identification accuracy (robustness).
A comparative study on contour-based corner detectors
- Authors: Awrangjeb, Mohammad , Lu, Guojun , Fraser, Clive
- Date: 2010
- Type: Text , Conference paper
- Relation: Digital Image Computing: Techniques and Applications (DICTA), 2010 International Conference
- Full Text: false
- Reviewed:
- Description: Contour-based corner detectors directly or indirectly estimate a significance measure (e.g. curvature) on the points of a planar curve and select the curvature extrema points as corners. While an extensive number of contour-based corner detectors have been proposed over the last four decades, there is no comparative study of recently proposed promising detectors. This paper is an attempt to fill this gap. We present the general framework of the contour-based corner detection technique and discuss two major issues - curve smoothing and curvature estimation - which have major impacts on the corner detection performance. A number of promising detectors are compared using an automatic evaluation system on a common large dataset. It is observed that while the detectors using indirect curvature estimation techniques are more robust, the detectors using direct curvature estimation techniques are faster.
Performance comparisons of contour-based corner detectors
- Authors: Awrangjeb, Mohammad , Lu, Guojun , Fraser, Clive
- Date: 2012
- Type: Text , Journal article
- Relation: IEEE Transactions on Image Processing Vol. 21, no. 9 (2012), p. 4167-4179
- Full Text: false
- Reviewed:
- Description: Corner detectors have many applications in computer vision and image identification and retrieval. Contour-based corner detectors directly or indirectly estimate a significance measure (e.g., curvature) on the points of a planar curve, and select the curvature extrema points as corners. While an extensive number of contour-based corner detectors have been proposed over the last four decades, there is no comparative study of recently proposed detectors. This paper is an attempt to fill this gap. The general framework of contour-based corner detection is presented, and two major issues – curve smoothing and curvature estimation, which have major impacts on the corner detection performance – are discussed. A number of promising detectors are compared using both automatic and manual evaluation systems on two large datasets. It is observed that while the detectors using indirect curvature estimation techniques are more robust, the detectors using direct curvature estimation techniques are faster.
A fast corner detector based on the chord-to-point distance accumulation technique
- Authors: Awrangjeb, Mohammad , Lu, Guojun , Fraser, Clive , Ravanbakhsh, Mehdi
- Date: 2009
- Type: Text , Conference paper
- Relation: Digital Image Computing: Techniques and Applications, 2009. DICTA '09.
- Full Text: false
- Reviewed:
- Description: The previously proposed contour-based multi-scale corner detector based on the chord-to-point distance accumulation (CPDA) technique has proved its superior robustness over many other single- and multi-scale detectors. However, the original CPDA detector is computationally expensive since it calculates the CPDA discrete curvature on each point of the curve. The proposed improvement obtains a set of probable candidate points before the CPDA curvature estimation. The CPDA curvature is estimated on these chosen candidate points only. Consequently, the improved CPDA detector becomes faster, while retaining a similar robustness to the original CPDA detector.
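A hedged sketch of the chord-to-point distance accumulation (CPDA) measure that the detector builds on: for a chord of fixed length slid along the curve, the perpendicular distances from a point to the chord positions spanning it are accumulated. The paper's normalisation across chord lengths and its candidate-point pre-selection are not shown; the toy contour is illustrative.

```python
# CPDA-style accumulation: for each contour point p_i, sum the perpendicular
# distances from p_i to every chord p_j -> p_{j+L} that spans it.
import numpy as np

def point_to_chord_distance(p, a, b):
    ab, ap = b - a, p - a
    cross = ab[0] * ap[1] - ab[1] * ap[0]        # 2-D cross product (z component)
    return abs(cross) / (np.linalg.norm(ab) + 1e-12)

def cpda(curve, L):
    """curve: N x 2 array of contour points; returns accumulated distance per point."""
    n = len(curve)
    h = np.zeros(n)
    for i in range(L, n - L):
        for j in range(i - L + 1, i):            # chords that span p_i
            h[i] += point_to_chord_distance(curve[i], curve[j], curve[j + L])
    return h

# Toy contour: two straight segments meeting at a right-angle corner
seg1 = np.stack([np.arange(50), np.zeros(50)], axis=1)
seg2 = np.stack([np.full(50, 49.0), np.arange(1, 51)], axis=1)
curve = np.vstack([seg1, seg2]).astype(float)
print(int(np.argmax(cpda(curve, L=10))))         # expected: near index 50 (the corner)
```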
An improved curvature scale-space corner detector and a robust corner matching approach for transformed image identification
- Authors: Awrangjeb, Mohammad , Lu, Guojun
- Date: 2008
- Type: Text , Journal article
- Relation: Image Processing, IEEE Transactions Vol. 17, no. 12 (2008), p. 2425-2441
- Full Text: false
- Reviewed:
- Description: There are many applications, such as image copyright protection, where transformed images of a given test image need to be identified. The solution to this identification problem consists of two main stages. In stage one, certain representative features, such as corners, are detected in all images. In stage two, the representative features of the test image and the stored images are compared to identify the transformed images for the test image. Curvature scale-space (CSS) corner detectors look for curvature maxima or inflection points on planar curves. However, the arc-length used to parameterize the planar curves by the existing CSS detectors is not invariant to geometric transformations such as scaling. As a solution to stage one, this paper presents an improved CSS corner detector using the affine-length parameterization, which is relatively invariant to affine transformations. We then present an improved corner matching technique as a solution to stage two. Finally, we apply the proposed corner detection and matching techniques to identify the transformed images for a given image and report promising results.
Building roof plane extraction from LIDAR data
- Authors: Awrangjeb, Mohammad , Lu, Guojun
- Date: 2013
- Type: Text , Conference paper
- Relation: 2013 International Conference on Digital Image Computing: Techniques and Applications (DICTA)
- Full Text:
- Reviewed:
- Description: This paper presents a new segmentation technique to use LIDAR point cloud data for automatic extraction of building roof planes. The raw LIDAR points are first classified into two major groups: ground and non-ground points. The ground points are used to generate a 'building mask' in which the black areas represent the ground where there are no laser returns below a certain height. The non-ground points are segmented to extract the planar roof segments. First, the building mask is divided into small grid cells. The cells containing the black pixels are clustered such that each cluster represents an individual building or tree. Second, the non-ground points within a cluster are segmented based on their coplanarity and neighbourhood relations. Third, the planar segments are refined using a rule-based procedure that assigns the common points among the planar segments to the appropriate segments. Finally, another rule-based procedure is applied to remove tree planes which are generally small in size and randomly oriented. Experimental results on three Australian sites have shown that the proposed method offers high building detection and roof plane extraction rates.
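The coplanarity test at the heart of the segmentation step can be illustrated with a least-squares plane fit; the sketch below fits a plane via SVD and accepts points whose orthogonal distance is below a tolerance. The clustering, neighbourhood rules, tree-removal heuristics and the 0.15 m tolerance are illustrative assumptions, not the paper's parameters.

```python
# Coplanarity check for LIDAR-style points: fit a plane by least squares and
# test the orthogonal distance of a candidate point against a tolerance.
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the best-fit plane through Nx3 points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]                      # normal = direction of least variance

def is_coplanar(point, centroid, normal, tol=0.15):
    return abs(np.dot(point - centroid, normal)) < tol

# Toy roof plane z = 0.3x + 10 with a little noise, plus one off-plane point
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(100, 2))
pts = np.column_stack([xy, 0.3 * xy[:, 0] + 10 + rng.normal(0, 0.02, 100)])
c, nrm = fit_plane(pts)
print(is_coplanar(np.array([5.0, 5.0, 11.5]), c, nrm))   # on the plane -> True
print(is_coplanar(np.array([5.0, 5.0, 13.0]), c, nrm))   # off the plane -> False
```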