Estimation of concentration-dependent diffusion coefficient in drying process from the space-averaged concentration versus time with experimental data
- Authors: Loulou, T. , Adhikari, Benu , Lecomte, D.
- Date: 2006
- Type: Text , Journal article
- Relation: Chemical Engineering Science Vol. 61, no. 22 (2006), p. 7185-7198
- Full Text: false
- Reviewed:
- Description: The estimation of a concentration-dependent diffusion coefficient in a drying process is known as an inverse coefficient problem. The solution is sought wherein the space-averaged concentration is known as a function of time (mass loss monitoring). The problem is stated as the minimization of a functional, and gradient-based algorithms are used to solve it. Many numerical and experimental examples that demonstrate the effectiveness of the proposed approach are presented. Thin slab drying was carried out in an isothermal drying chamber built in our laboratory. The diffusion coefficients of fructose obtained with the present method are compared with existing literature results. (c) 2006 Elsevier Ltd. All rights reserved.
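The inverse formulation described in the abstract can be illustrated with a much-simplified sketch: a constant diffusion coefficient D is recovered from a synthetic space-averaged concentration history by minimising a least-squares functional with gradient descent. The single-exponential model c(t) = exp(-D*t) and all numbers below are illustrative assumptions, not the paper's concentration-dependent formulation.

```python
import math

D_true = 0.8
times = [0.1 * i for i in range(1, 31)]
observed = [math.exp(-D_true * t) for t in times]   # synthetic mass-loss data

D = 0.1            # initial guess for the diffusion coefficient
lr = 0.02          # gradient-descent step size
for _ in range(500):
    # gradient of the functional J(D) = sum_t (exp(-D*t) - observed_t)^2
    grad = sum(2.0 * (math.exp(-D * t) - c) * (-t * math.exp(-D * t))
               for t, c in zip(times, observed))
    D -= lr * grad
print(round(D, 3))
```

With clean synthetic data the minimiser recovers the coefficient used to generate it; real mass-loss data would require the paper's concentration-dependent model and regularisation.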
A variable initialization approach to the EM algorithm for better estimation of the parameters of hidden Markov Model based acoustic modeling of speech signals
- Authors: Huda, Shamsul , Ghosh, Ranadhir , Yearwood, John
- Date: 2006
- Type: Text , Conference paper
- Relation: Paper presented at Artificial Intelligence, Advances in Data Mining, Applications in Medicine, Web Mining, Marketing, Image and Signal Mining Conference 2006, Leipzig, Germany : 14th July, 2006 p. 416-430
- Full Text: false
- Reviewed:
- Description: The traditional method for estimating the parameters of Hidden Markov Model (HMM) based acoustic models of speech uses the Expectation-Maximization (EM) algorithm. The EM algorithm is sensitive to the initial values of the HMM parameters and is likely to terminate at a local maximum of the likelihood function, resulting in a non-optimal estimate of the HMM and lower recognition accuracy. In this paper, to obtain a better estimate of the HMM and higher recognition accuracy, several candidate HMMs are created by applying EM to multiple initial models. The candidate HMM with the highest likelihood value is chosen as the best HMM. Initial models are created by varying the maximum frame number in the segmentation step of the HMM initialization process, with a binary search applied while creating them. The proposed method has been tested on the TIMIT database. Experimental results show that our approach obtains improved values of the likelihood function and improved recognition accuracy.
- Description: E1
- Description: 2003001542
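The multiple-initialization strategy in the abstract above can be sketched in miniature: a two-component Gaussian mixture stands in for the HMM, EM is run from several initial models, and the candidate with the highest log-likelihood is kept. The data, initial models, and function names are illustrative, not from the paper.

```python
import math
import random

def fit_em(data, mu_init, n_iter=50):
    """EM for a two-component Gaussian mixture with unit variances."""
    mu = list(mu_init)
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in data:
            p = [w[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate mixture weights and means
        for k in range(2):
            rk = sum(r[k] for r in resp)
            w[k] = rk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / rk
    ll = sum(math.log(sum(w[k] * math.exp(-0.5 * (x - mu[k]) ** 2)
                          / math.sqrt(2.0 * math.pi) for k in range(2)))
             for x in data)
    return ll, mu

random.seed(0)
data = [random.gauss(-2, 1) for _ in range(100)] + \
       [random.gauss(3, 1) for _ in range(100)]

# Several initial models; keep the candidate with the highest likelihood,
# as in the paper's selection step.
candidates = [fit_em(data, init)
              for init in ([-1.0, 1.0], [0.0, 0.1], [-5.0, 5.0])]
best_ll, best_mu = max(candidates, key=lambda c: c[0])
print(round(best_ll, 1), [round(m, 1) for m in sorted(best_mu)])
```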
On graphs of maximum size with given girth and order
- Authors: Miller, Mirka , Lin, Yuqing , Brankovic, Ljiljana , Tang, Jianmin
- Date: 2006
- Type: Text , Conference paper
- Relation: Paper presented at AWOCA 2006, 17th Australasian Workshop on Combinatorial Algorithms, Uluru, Australia : 13th July, 2006
- Full Text: false
- Reviewed:
- Description: E1
- Description: 2003001918
A new global optimization algorithm based on a dynamical systems approach
- Authors: Mammadov, Musa
- Date: 2004
- Type: Text , Conference paper
- Relation: Paper presented at ICOTA6: 6th International Conference on Optimization - Techniques and Applications, Ballarat, Victoria : 9th December, 2004
- Full Text: false
- Reviewed:
- Description: The purpose of this paper is to develop and study new techniques for global optimization based on a dynamical systems approach. This approach uses the notion of a relationship between variables, which describes how changes in the variables influence one another. A numerical algorithm for global optimization is introduced.
- Description: E1
- Description: 2003000892
Solving a system of nonlinear integral equations by an RBF network
- Authors: Golbabai, A. , Mammadov, Musa , Seifollahi, Sattar
- Date: 2009
- Type: Text , Journal article
- Relation: Computers & Mathematics with Applications Vol. 57, no. 10 (2009), p. 1651-1658
- Full Text: false
- Reviewed:
- Description: In this paper, a novel learning strategy for radial basis function networks (RBFN) is proposed. While the parameters of the hidden layer, including the RBF centers and widths, are adjusted, the weights of the output layer are adapted by local optimization methods. A new local optimization algorithm based on a combination of the gradient and Newton methods is introduced. The efficiency of several local optimization methods for updating the weights of the RBFN is studied in solving systems of nonlinear integral equations. (C) 2009 Elsevier Ltd. All rights reserved.
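The RBFN structure the abstract refers to can be sketched in a heavily simplified form: Gaussian hidden units with fixed, assumed centers and widths, and output-layer weights fitted by a local method (plain gradient descent here). This illustrates the network only, not the paper's integral-equation solver or its gradient-Newton hybrid.

```python
import math

centers = [0.0, 0.25, 0.5, 0.75, 1.0]   # assumed hidden-layer centers
width = 0.2                              # assumed common RBF width

def phi(x):
    """Hidden-layer activations: one Gaussian per center."""
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

# Target to approximate: f(x) = sin(pi * x) on [0, 1]
xs = [i / 20 for i in range(21)]
ys = [math.sin(math.pi * x) for x in xs]

w = [0.0] * len(centers)
lr = 0.05
for _ in range(2000):
    for x, y in zip(xs, ys):
        h = phi(x)
        err = sum(wi * hi for wi, hi in zip(w, h)) - y
        w = [wi - lr * err * hi for wi, hi in zip(w, h)]  # gradient step

max_err = max(abs(sum(wi * hi for wi, hi in zip(w, phi(x))) - y)
              for x, y in zip(xs, ys))
print(round(max_err, 3))
```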
Optimal placement of access point in WLAN based on a new algorithm
- Authors: Kouhbor, Shahnaz , Ugon, Julien , Kruger, Alexander , Rubinov, Alex
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at ICMB 2005, International Conference on Mobile Business, Sydney, Australia, 11-13 July 2005, Sydney : 11th - 13th July, 2005
- Full Text:
- Reviewed:
- Description: When designing wireless communication systems, it is very important to know the optimum number and locations of the access points (APs). The impact of incorrect placement of APs is significant: if they are placed too far apart, they will generate coverage gaps, but if they are too close to each other, this will lead to excessive co-channel interference. In this paper we describe a mathematical model developed to find the optimal number and location of APs. To solve the problem, we use the Discrete Gradient optimization algorithm developed at the University of Ballarat. Results indicate that our model is able to solve optimal coverage problems for different numbers of users.
- Description: 2003001377
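The coverage trade-off described above can be made concrete with a toy version of the problem: choose the fewest access points from candidate grid sites so that every user is in range. Brute-force search stands in for the Discrete Gradient algorithm, and all positions and the coverage radius are made-up values.

```python
from itertools import combinations, product

users = [(1, 1), (1, 8), (8, 1), (8, 8), (5, 5)]
sites = list(product(range(0, 10, 3), repeat=2))   # candidate AP sites
RANGE2 = 18                                        # squared coverage radius

def covered(aps):
    """True if every user is within range of at least one AP."""
    return all(any((ux - ax) ** 2 + (uy - ay) ** 2 <= RANGE2
                   for ax, ay in aps) for ux, uy in users)

# Smallest number of APs that covers every user.
best = None
for k in range(1, len(sites) + 1):
    feasible = [aps for aps in combinations(sites, k) if covered(aps)]
    if feasible:
        best = feasible[0]
        break
print(len(best), best)
```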
A hybrid evolutionary algorithm for multi category feature selection in breast cancer recognition
- Authors: Ghosh, Ranadhir , Ghosh, Moumita , Yearwood, John
- Date: 2004
- Type: Text , Conference paper
- Relation: Paper presented at the Second International Conference on Software Computing and Intelligent Systems, Yokahama, Japan : 21st - 22nd September, 2004
- Full Text: false
- Reviewed:
- Description: E1
- Description: 2003000869
An induction algorithm with selection significance based on a fuzzy derivative
- Authors: Mammadov, Musa , Yearwood, John
- Date: 2002
- Type: Text , Conference paper
- Relation: Paper presented at Hybrid Information Systems (Advances in Soft Computing), Adelaide : 11th December, 2001
- Full Text: false
- Reviewed:
- Description: E1
- Description: 2003000076
Modified global k-means algorithm for clustering in gene expression data sets
- Authors: Bagirov, Adil , Mardaneh, Karim
- Date: 2006
- Type: Text , Conference paper
- Relation: Paper presented at Intelligent Systems for Bioinformatics 2006, proceedings of the AI 2006 Workshop on Intelligent Systems of Bioinformatics, Hobart, Tasmania : 4th December, 2006
- Full Text:
- Reviewed:
- Description: Clustering in gene expression data sets is a challenging problem. Different algorithms for clustering genes have been proposed; however, due to the large number of genes, only a few algorithms can be applied to the clustering of samples. The k-means algorithm and its variations are among them. These algorithms, however, in general converge only to local minima, and these local minima differ significantly from global solutions as the number of clusters increases. Over the last several years, different approaches have been proposed to improve the global search properties of the k-means algorithm and its performance on large data sets. One of them is the global k-means algorithm. In this paper we develop a new version of the global k-means algorithm: the modified global k-means algorithm, which is effective for solving clustering problems in gene expression data sets. We present preliminary computational results on gene expression data sets which demonstrate that the modified global k-means algorithm improves, sometimes significantly, on the results obtained by the k-means and global k-means algorithms.
- Description: E1
- Description: 2003001713
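The global k-means scheme that the modified algorithm builds on can be sketched as follows: the number of clusters grows one at a time, each data point is tried as the candidate new centre, and the refined solution with the lowest squared error is kept. One-dimensional toy values stand in for gene-expression profiles, and this is the plain global k-means idea, not the paper's modified variant.

```python
def kmeans(data, centers, n_iter=20):
    """Standard k-means refinement; returns centers and squared error."""
    for _ in range(n_iter):
        groups = [[] for _ in centers]
        for x in data:
            i = min(range(len(centers)), key=lambda k: (x - centers[k]) ** 2)
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    sse = sum(min((x - c) ** 2 for c in centers) for x in data)
    return centers, sse

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 8.8, 9.0]
centers = [sum(data) / len(data)]            # k = 1: the global mean
for k in range(2, 4):                        # grow to k = 3
    # try every data point as the new centre; keep the best refinement
    trials = [kmeans(data, centers + [x]) for x in data]
    centers, sse = min(trials, key=lambda t: t[1])
print(sorted(round(c, 1) for c in centers), round(sse, 2))
```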
Comments on : Optimization and data mining in medicine
- Authors: Bagirov, Adil
- Date: 2009
- Type: Text , Journal article
- Relation: Top Vol. 17, no. 2 (2009), p. 1-3
- Full Text: false
- Reviewed:
New algorithm to find a shape of a finite set of points
- Authors: Sukhorukova, Nadezda , Ugon, Julien
- Date: 2003
- Type: Text , Conference paper
- Relation: Paper presented at the Symposium on Industrial Optimisation and the 9th Australian Optimisation Day, Perth : 30th September, 2002
- Full Text:
- Reviewed:
- Description: Very often in data classification problems we have to determine the shape of a finite set of points within a dataset. One of the most common approaches is to represent such a set as a collection of several groups of points. The goal of this project is to develop algorithms that find a shape for each group. Numerical experiments using the Discrete Gradient method have been carried out, and the results are presented.
- Description: E1
- Description: 2003000351
Evaluation of slug flow-induced flexural loading in pipelines using a surrogate model
- Authors: Sultan, Ibrahim , Reda, Ahmed , Forbes, Gareth
- Date: 2013
- Type: Text , Journal article
- Relation: Journal of Offshore Mechanics and Arctic Engineering Vol. 135, no. 3 (2013), p. 8
- Full Text:
- Reviewed:
- Description: Slug flow induces vibration in pipelines, which may, in some cases, result in fatigue failure. This can result from dynamic stresses, induced by the deflection and bending moment in the pipe span, growing to levels above the endurance limits of the pipeline material. As such, it is of paramount importance to understand and quantify the size of the pipeline response to slug flow under given speed and damping conditions. This paper utilizes the results of an optimization procedure to devise a surrogate closed-form model, which can be employed to calculate the maximum values of the pipeline loadings at given values of speed and damping parameters. The surrogate model is intended to replace the computationally costly numerical procedure needed for the analysis. The maximum values of the lateral deflection and bending moment, along with their locations, have been calculated using the optimization method of simultaneous perturbation stochastic approximation (SPSA). The accuracy of the proposed surrogate model is validated numerically, and the model is subsequently used in a numerical example to demonstrate its applicability in industrial situations. An accompanying spreadsheet with this worked example is also given.
- Description: C1
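A minimal sketch of SPSA, the optimiser named in the abstract: all parameters are perturbed simultaneously with random +/-1 signs, so each gradient estimate costs only two objective evaluations regardless of dimension. The quadratic objective and gain constants below are illustrative assumptions, not the pipeline-loading model.

```python
import random

def loss(theta):
    """Toy objective standing in for the expensive pipeline analysis."""
    x, y = theta
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

random.seed(1)
theta = [0.0, 0.0]
for k in range(1, 501):
    a = 0.1 / k ** 0.602          # step-size gain (standard SPSA exponent)
    c = 0.1 / k ** 0.101          # perturbation size
    delta = [random.choice([-1.0, 1.0]) for _ in theta]
    plus = [t + c * d for t, d in zip(theta, delta)]
    minus = [t - c * d for t, d in zip(theta, delta)]
    g = (loss(plus) - loss(minus)) / (2.0 * c)   # one scalar difference
    theta = [t - a * g / d for t, d in zip(theta, delta)]
print([round(t, 2) for t in theta])
```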
Improving gene regulatory network inference using network topology information
- Authors: Nair, Ajay , Chetty, Madhu , Wangikar, Pramod
- Date: 2015
- Type: Text , Journal article
- Relation: Molecular BioSystems Vol. 11, no. 9 (2015), p. 2449-2463
- Full Text: false
- Reviewed:
- Description: Inferring the gene regulatory network (GRN) structure from data is an important problem in computational biology. However, it is a computationally complex problem and approximate methods such as heuristic search techniques, restriction of the maximum-number-of-parents (maxP) for a gene, or an optimal search under special conditions are required. The limitations of a heuristic search are well known but literature on the detailed analysis of the widely used maxP technique is lacking. The optimal search methods require large computational time. We report the theoretical analysis and experimental results of the strengths and limitations of the maxP technique. Further, using an optimal search method, we combine the strengths of the maxP technique and the known GRN topology to propose two novel algorithms. These algorithms are implemented in a Bayesian network framework and tested on biological, realistic, and in silico networks of different sizes and topologies. They overcome the limitations of the maxP technique and show superior computational speed when compared to the current optimal search algorithms.
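The cost of the unrestricted search that motivates the maxP technique is easy to quantify: each gene may take any subset of the remaining genes as parents, so the number of candidate parent sets per gene grows as 2^(n-1), while a maxP cap keeps the count polynomial. A quick count (the network size is chosen arbitrarily):

```python
from math import comb

n = 20                      # genes in a small network
for maxP in (2, 3, 5):
    # parent sets of size at most maxP drawn from the other n-1 genes
    sets = sum(comb(n - 1, k) for k in range(maxP + 1))
    print("maxP =", maxP, "->", sets, "candidate parent sets per gene")
print("unrestricted:", 2 ** (n - 1))
```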
Least square support vector and multi-linear regression for statistically downscaling general circulation model outputs to catchment streamflows
- Authors: Sachindra, D. A. , Huang, Fuchun , Barton, Andrew , Perera, Bimalka
- Date: 2013
- Type: Text , Journal article
- Relation: International Journal of Climatology Vol. 33, no. 5 (2013), p. 1087-1106
- Full Text: false
- Description: This study employed least square support vector machine regression (LS-SVM-R) and multi-linear regression (MLR) for statistically downscaling monthly general circulation model (GCM) outputs directly to monthly catchment streamflows. The scope of the study was limited to calibration and validation of the downscaling models. The methodology was demonstrated by its application to a streamflow site in the Grampian water supply system in northwestern Victoria, Australia. Probable predictors for the study were selected from the National Center for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data set based on past literature and hydrology. Probable variables that displayed the best significant correlations with the streamflows, consistently over the entire period of the study (1950-2010) and under three 20-year time slices (1950-1969, 1970-1989 and 1990-2010), were selected as potential predictors. To better capture seasonal variations of streamflows, downscaling models were developed for each calendar month. The standardized potential predictors were introduced to the LS-SVM-R and MLR models, starting from the best correlated three and then the others one by one, based on their correlations with the streamflows, until the model performance in validation was maximized. This stepwise model development enabled the identification of the optimum number of potential variables for each month. The model calibration was performed over the period 1950-1989 and validation was done for 1990-2010. LS-SVM-R model parameter optimization was achieved using the simplex algorithm and leave-one-out cross-validation. The MLR models were optimized by minimizing the sum of squared errors. In both modelling techniques, validation was performed as an independent simulation. In calibration, LS-SVM-R and MLR models displayed equally good performances with a trend of under-predicting high flows.
During validation, LS-SVM-R outperformed MLR, though both techniques over-predicted most of the streamflows. It was concluded that LS-SVM-R is a better technique for statistically downscaling GCM outputs to streamflows than MLR, but still MLR is a potential technique for the same task. Copyright © 2012 Royal Meteorological Society.
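The stepwise development described above can be sketched with synthetic data: candidate predictors are ranked by correlation with streamflow over a calibration period, then added one at a time to a multi-linear regression until validation error stops improving. The data, split sizes, and true predictors below are assumptions for illustration; the LS-SVM-R side is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
X = rng.normal(size=(n, 5))                 # 5 candidate predictors
flow = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.3 * rng.normal(size=n)

train, valid = slice(0, 80), slice(80, None)
# rank candidates by absolute correlation with streamflow, best first
order = np.argsort([-abs(np.corrcoef(X[train, j], flow[train])[0, 1])
                    for j in range(5)])

def valid_rmse(cols):
    """Fit MLR on the calibration split, score on the validation split."""
    A = np.c_[np.ones(80), X[train][:, cols]]
    coef, *_ = np.linalg.lstsq(A, flow[train], rcond=None)
    Av = np.c_[np.ones(n - 80), X[valid][:, cols]]
    return float(np.sqrt(np.mean((Av @ coef - flow[valid]) ** 2)))

best_cols, best_err = [], float("inf")
for j in order:                             # add predictors one by one
    err = valid_rmse(best_cols + [int(j)])
    if err >= best_err:
        break                               # validation stopped improving
    best_cols, best_err = best_cols + [int(j)], err
print(sorted(best_cols), round(best_err, 2))
```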
Eastern North Pacific tropical cyclone activity in historical and future CMIP5 experiments : Assessment with a model-independent tracking scheme
- Authors: Bell, Samuel , Chand, Savin , Tory, Kevin , Turville, Christopher , Ye, Harvey
- Date: 2019
- Type: Text , Journal article
- Relation: Climate Dynamics Vol. 53, no. 7-8 (2019), p. 4841-4855
- Full Text: false
- Reviewed:
- Description: The sensitivity of tropical cyclone (TC) projection results to different models and the detection and tracking scheme used is well established in the literature. Here, future climate projections of TC activity in the Eastern North Pacific basin (ENP, defined from 0 degrees to 40 degrees N and 180 degrees to approximately 75 degrees W) are assessed with a model- and basin-independent detection and tracking scheme that was trained in reanalysis data. The scheme is applied to models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) experiments forced under the historical and Representative Concentration Pathway 8.5 (RCP8.5) conditions. TC tracks from the observed records and models are analysed simultaneously with a curve-clustering algorithm, allowing observed and model tracks to be projected onto the same set of clusters. The ENP is divided into three clusters, one in the Central North Pacific (CNP) and two off the Mexican coast, as in prior studies. After accounting for model biases and auto-correlation, projection results under RCP8.5 indicated TC genesis to be significantly suppressed east of 125 degrees W, and significantly enhanced west of 145 degrees W by the end of the twenty-first century. Regional TC track exposure was found to significantly increase around Hawaii (approximately 86%), as shown in earlier studies, owing to increased TC genesis, particularly to the south-east of the island nation. TC exposure to Southern Mexico was shown to decrease (approximately 4%), owing to a south-westward displacement of TCs and overall suppression of genesis near the Mexican coastline. The large-scale environmental conditions most consistent with these projected changes were vertical wind shear and relative humidity.
PFARS : Enhancing throughput and lifetime of heterogeneous WSNs through power-aware fusion, aggregation, and routing scheme
- Authors: Khan, Rahim , Zakarya, Muhammad , Tan, Zhiyuan , Usman, Muhammad , Jan, Mian , Khan, Mukhtaj
- Date: 2019
- Type: Text , Journal article
- Relation: International Journal of Communication Systems Vol. 32, no. 18 (Dec 2019), p. 21
- Full Text:
- Reviewed:
- Description: Heterogeneous wireless sensor networks (WSNs) consist of resource-starving nodes that face a challenging task of handling various issues such as data redundancy, data fusion, congestion control, and energy efficiency. In these networks, data fusion algorithms process the raw data generated by a sensor node in an energy-efficient manner to reduce redundancy, improve accuracy, and enhance the network lifetime. In literature, these issues are addressed individually, and most of the proposed solutions are either application-specific or too complex that make their implementation unrealistic, specifically, in a resource-constrained environment. In this paper, we propose a novel node-level data fusion algorithm for heterogeneous WSNs to detect noisy data and replace them with highly refined data. To minimize the amount of transmitted data, a hybrid data aggregation algorithm is proposed that performs in-network processing while preserving the reliability of gathered data. This combination of data fusion and data aggregation algorithms effectively handles the aforementioned issues by ensuring an efficient utilization of the available resources. Apart from fusion and aggregation, a biased traffic distribution algorithm is introduced that considerably increases the overall lifetime of heterogeneous WSNs. The proposed algorithm performs the tedious task of traffic distribution according to the network's statistics, i.e., the residual energy of neighboring nodes and their importance from a network's connectivity perspective. All our proposed algorithms were tested on a real-time dataset obtained through our deployed heterogeneous WSN in an orange orchard and also on publicly available benchmark datasets. Experimental results verify that our proposed algorithms outperform the existing approaches in terms of various performance metrics such as throughput, lifetime, data accuracy, computational time, and delay.
Mobile malware detection : an analysis of deep learning model
- Authors: Khoda, Mahbub , Kamruzzaman, Joarder , Gondal, Iqbal , Imam, Tasadduq , Rahman, Ashfaqur , IEEE
- Date: 2019
- Type: Text , Book chapter
- Relation: 2019 IEEE International Conference on Industrial Technology p. 1161-1166
- Full Text: false
- Reviewed:
- Description: Due to their widespread use, with numerous applications deployed every day, smartphones have become an inevitable target for malware developers. This huge number of applications renders manual inspection of code infeasible; as such, researchers have proposed several malware detection techniques based on automatic machine learning tools. Deep learning has gained a lot of attention from malware researchers due to its ability to capture complex relationships among inputs and outputs. However, deep learning models depend largely on several hyper-parameters (i.e., learning rate, batch size, dropout rate). Hence, it is of utmost importance to analyze the effect of these parameters on classifier performance. In this paper, we systematically studied the effect of these parameters along with the effect of network architecture. We showed that building arbitrarily deep networks does not always improve classifier performance. We also determined the combination of hyper-parameters that yields the best result. This study will be useful in building better deep neural network based models for malware classification.
Effects of a proper feature selection on prediction and optimization of drilling rate using intelligent techniques
- Authors: Liao, Xiufeng , Khandelwal, Manoj , Yang, Haiqing , Koopialipoor, Mohammadreza , Murlidhar, Bhatawdekar
- Date: 2020
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 36, no. 2 (Apr 2020), p. 499-510
- Full Text:
- Reviewed:
- Description: One of the important factors during drilling operations is the rate of penetration (ROP), which is controlled by different variables. Factors affecting different drilling operations are therefore of paramount importance. In the current research, an attempt was made to better recognize drilling parameters and optimize them using an optimization algorithm. For this purpose, 618 data sets, including RPM, flushing media, and compressive strength parameters, were measured and collected. After an initial investigation, the compressive strength of the samples, an important rock parameter, was used as the criterion for classification. Then, using intelligent systems, three different levels of rock strength and all data were modeled. The results showed that systems classified on the basis of compressive strength performed better for ROP assessment due to the proximity of features. Therefore, these three levels were used for classification. A new artificial bee colony algorithm was used to solve this problem. Optimizations were applied to the selected models under different optimization conditions, and optimal states were determined. As determining drilling machine parameters is important, these parameters were determined based on optimal conditions. The obtained results showed that this intelligent system can substantially improve drilling conditions and increase the ROP value for the three rock strength levels. This modeling system can be used in different drilling operations.
Random walks : a review of algorithms and applications
- Authors: Xia, Feng , Liu, Jiaying , Nie, Hansong , Fu, Yonghao , Wan, Liangtian , Kong, Xiangjie
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Emerging Topics in Computational Intelligence Vol. 4, no. 2 (2020), p. 95-107
- Full Text:
- Reviewed:
- Description: A random walk is known as a random process which describes a path including a succession of random steps in the mathematical space. It has increasingly been popular in various disciplines such as mathematics and computer science. Furthermore, in quantum mechanics, quantum walks can be regarded as quantum analogues of classical random walks. Classical random walks and quantum walks can be used to calculate the proximity between nodes and extract the topology in the network. Various random walk related models can be applied in different fields, which is of great significance to downstream tasks such as link prediction, recommendation, computer vision, semi-supervised learning, and network embedding. In this article, we aim to provide a comprehensive review of classical random walks and quantum walks. We first review the knowledge of classical random walks and quantum walks, including basic concepts and some typical algorithms. We also compare the algorithms based on quantum walks and classical random walks from the perspective of time complexity. Then we introduce their applications in the field of computer science. Finally we discuss the open issues from the perspectives of efficiency, main-memory volume, and computing time of existing algorithms. This study aims to contribute to this growing area of research by exploring random walks and quantum walks together. © 2017 IEEE.
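A classical random walk of the kind reviewed above, on a small undirected graph: at each step the walker moves to a uniformly random neighbour, and visit frequencies approach the stationary distribution, which is proportional to node degree. The graph is an arbitrary example.

```python
import random

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
random.seed(42)
node, visits = 0, {v: 0 for v in graph}
steps = 100_000
for _ in range(steps):
    node = random.choice(graph[node])    # uniform step to a neighbour
    visits[node] += 1

freq = {v: visits[v] / steps for v in graph}
# degrees are 2, 3, 3, 2 -> stationary probabilities 0.2, 0.3, 0.3, 0.2
print({v: round(f, 2) for v, f in freq.items()})
```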
A classification algorithm that derives weighted sum scores for insight into disease
- Authors: Quinn, Anthony , Stranieri, Andrew , Yearwood, John , Hafen, Gaudenz
- Date: 2009
- Type: Text , Conference paper
- Relation: Paper presented at Third Australasian Workshop on Health Informatics and Knowledge Management (HIKM 2009), Wellington, New Zealand : Vol. 97, p. 13-17
- Full Text:
- Description: Data mining is often performed with datasets associated with diseases in order to increase insights that can ultimately lead to improved prevention or treatment. Classification algorithms can achieve high levels of predictive accuracy but have limited application for facilitating the insight that leads to deeper understanding of aspects of the disease. This is because the representation of knowledge that arises from classification algorithms is too opaque, too complex or too sparse to facilitate insight. Clustering, association and visualisation approaches give clinicians greater scope to be engaged in a way that leads to insight, but predictive accuracy is compromised or non-existent. This research investigates the practical applications of Automated Weighted Sum (AWSum), a classification algorithm that provides accuracy comparable to other techniques whilst providing some insight into the data. This is achieved by calculating a weight for each feature value that represents its influence on the class value. Clinicians are very familiar with weighted sum scoring scales, so the internal representation is intuitive and easily understood. This paper presents results from the use of the AWSum approach with data from patients suffering from Cystic Fibrosis.
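A simplified sketch of the AWSum idea described above: each feature value receives a weight representing its influence on the class, and a case is scored by summing the weights of its feature values. The influence measure used here (difference of conditional class frequencies) and the toy records are assumptions; the paper's exact formula may differ.

```python
from collections import defaultdict

# toy records: (feature values, class) — hypothetical data
train = [
    (("cough", "high"), 1), (("cough", "high"), 1),
    (("cough", "low"), 1),  (("clear", "low"), 0),
    (("clear", "low"), 0),  (("clear", "high"), 0),
]

counts = defaultdict(lambda: [0, 0])   # feature value -> [n_class0, n_class1]
for values, cls in train:
    for v in values:
        counts[v][cls] += 1

# weight: how strongly a feature value pulls toward class 1 vs class 0
weights = {v: n[1] / sum(n) - n[0] / sum(n) for v, n in counts.items()}

def score(values):
    """Weighted-sum score of a case; positive leans class 1."""
    return sum(weights.get(v, 0.0) for v in values)

print({v: round(w, 2) for v, w in weights.items()})
print(score(("cough", "high")), score(("clear", "low")))
```

Because the model is just a table of per-value weights, a clinician can read off which feature values drive a score, which is the insight property the abstract emphasises.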