High activity and high functional connectivity are mutually exclusive in resting state zebrafish and human brains
- Authors: Zarei, Mahdi , Xie, Dan , Jiang, Fei , Bagirov, Adil , Huang, Bo , Raj, Ashish , Nagarajan, Srikantan , Guo, Su
- Date: 2022
- Type: Text , Journal article
- Relation: BMC Biology Vol. 20, no. 1 (2022), p. 84-84
- Full Text:
- Reviewed:
- Description: The structural connectivity of neurons in the brain allows active neurons to impact the physiology of target neuron types with which they are functionally connected. While the structural connectome forms the basis of the functional connectome, it is the functional connectivity, measured through correlations between time series of individual neurophysiological events, that underlies behavioral and mental states. However, given the diverse neuronal cell types populating the brain and their unique connectivity properties, both neuronal activity and functional connectivity are heterogeneous across the brain, and the nature of their relationship is not clear. Here, we employ brain-wide calcium imaging at cellular resolution in larval zebrafish to understand the principles of resting state functional connectivity. We recorded the spontaneous activity of >12,000 neurons in the awake resting state forebrain. By classifying their activity (i.e., variances of ΔF/F across time) and functional connectivity into three levels (high, medium, low), we find that highly active neurons have low functional connectivity and highly connected neurons have low activity. This finding holds true when neuronal activity and functional connectivity data are classified into five levels instead of three, and in whole brain spontaneous activity datasets. Moreover, such an activity-connectivity relationship is not observed in randomly shuffled, noise-added, or simulated datasets, suggesting that it reflects an intrinsic brain network property. Intriguingly, deploying the same analytical tools on functional magnetic resonance imaging (fMRI) data from the resting state human brain, we uncover a similar relationship between activity (signal variance over time) and functional connectivity: regions of high activity do not overlap with those of high connectivity.
We found a mutually exclusive relationship between high activity (signal variance over time) and high functional connectivity of neurons in zebrafish and human brains. These findings reveal a previously unknown and evolutionarily conserved brain organizational principle, which has implications for understanding disease states and designing artificial neuronal networks.
Machine learning algorithms for analysis of DNA data sets
- Authors: Yearwood, John , Bagirov, Adil , Kelarev, Andrei
- Date: 2012
- Type: Text , Book chapter
- Relation: Machine Learning Algorithms for Problem Solving in Computational Applications: Intelligent Techniques p. 47-58
- Relation: http://purl.org/au-research/grants/arc/LP0990908
- Full Text: false
- Reviewed:
- Description: Applications of machine learning algorithms to the analysis of DNA sequence data sets are of great importance. The present chapter is devoted to an experimental investigation of several machine learning algorithms applied to a JLA data set consisting of DNA sequences derived from non-coding segments in the junction of the large single copy region and inverted repeat A of the chloroplast genome in Eucalyptus, collected by Australian biologists. Data sets of this sort represent a new situation, where sophisticated alignment scores have to be used as a measure of similarity. The alignment scores do not satisfy the properties of the Minkowski metric, and new machine learning approaches have to be investigated. The authors' experiments show that machine learning algorithms based on local alignment scores achieve very good agreement with known biological classes for this data set. A new machine learning algorithm based on graph partitioning performed best for clustering of the JLA data set, while the authors' novel k-committees algorithm produced the most accurate results for classification. Two new examples of synthetic data sets demonstrate that the k-committees algorithm can outperform both the Nearest Neighbour and k-medoids algorithms simultaneously.
Optimization methods and the k-committees algorithm for clustering of sequence data
- Authors: Yearwood, John , Bagirov, Adil , Kelarev, Andrei
- Date: 2009
- Type: Text , Journal article
- Relation: Applied and Computational Mathematics Vol. 8, no. 1 (2009), p. 92-101
- Relation: http://purl.org/au-research/grants/arc/DP0211866
- Relation: http://purl.org/au-research/grants/arc/DP0666061
- Full Text: false
- Description: The present paper is devoted to new algorithms for unsupervised clustering based on the optimization approaches due to [2], [3] and [4]. We consider a novel situation, where the datasets consist of nucleotide or protein sequences and rather sophisticated, biologically significant alignment scores have to be used as a measure of distance. Sequences of this kind cannot be regarded as points in a finite dimensional space. Moreover, the alignment scores do not satisfy the properties of Minkowski metrics. Nevertheless, the optimization approaches have made it possible to introduce a new k-committees algorithm and compare its performance with previous algorithms on two datasets. Our experimental results show that the k-committees algorithm achieves intermediate accuracy for a dataset of ITS sequences and can perform better than the discrete k-means and Nearest Neighbour algorithms for certain datasets. All three algorithms achieve good agreement with clusters previously published in the biological literature and can be used to obtain biologically significant clusterings.
A novel hybrid neural learning algorithm using simulated annealing and quasisecant method
- Authors: Yearwood, John , Bagirov, Adil , Seifollahi, Sattar
- Date: 2011
- Type: Text , Conference proceedings
- Full Text: false
- Description: In this paper, we propose a hybrid learning algorithm for single hidden layer feedforward neural networks (SLFNs) for data classification. The proposed hybrid algorithm is a two-phase learning algorithm based on the quasisecant and simulated annealing methods. First, the weights between the hidden layer and the output layer nodes (output layer weights) are adjusted by the quasisecant algorithm. Then simulated annealing is applied for global attribute weighting. The weights between the input layer and the hidden layer nodes are fixed in advance and are not included in the learning process. This two-phase learning of the network is a novel idea and differs from existing approaches. Numerical results on some benchmark data sets are also reported, and these results are promising. © 2011, Australian Computer Society, Inc.
Supervised data classification via max-min separability
- Authors: Ugon, Julien , Bagirov, Adil
- Date: 2005
- Type: Text , Book chapter
- Relation: Continuous Optimization: Current Trends and Modern Applications Chapter p. 175-208
- Full Text:
- Reviewed:
Truncated codifferential method for linearly constrained nonsmooth optimization
- Authors: Tor, Ali , Karasozen, Bulent , Bagirov, Adil
- Date: 2010
- Type: Text , Conference proceedings
- Full Text: false
- Description: In this paper a new algorithm is developed for linearly constrained nonsmooth optimization problems with convex objective functions. The algorithm is based on the concept of the codifferential. The convergence of the proposed minimization algorithm is proved, and results of numerical experiments on a set of test problems with nonsmooth convex objective functions are reported.
Aggregate codifferential method for nonsmooth DC optimization
- Authors: Tor, Ali , Bagirov, Adil , Karasozen, Bulent
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Computational and Applied Mathematics Vol. 259, no. Part B (2014), p. 851-867
- Full Text: false
- Reviewed:
- Description: A new algorithm is developed based on the concept of codifferential for minimizing the difference of convex nonsmooth functions. Since the computation of the whole codifferential is not always possible, we use a fixed number of elements from the codifferential to compute the search directions. The convergence of the proposed algorithm is proved. The efficiency of the algorithm is demonstrated by comparing it with the subgradient, the truncated codifferential and the proximal bundle methods using nonsmooth optimization test problems.
Multi-source cyber-attacks detection using machine learning
- Authors: Taheri, Sona , Gondal, Iqbal , Bagirov, Adil , Harkness, Greg , Brown, Simon , Chi, Chihung
- Date: 2019
- Type: Text , Conference proceedings , Conference paper
- Relation: 2019 IEEE International Conference on Industrial Technology, ICIT 2019; Melbourne, Australia; 13th-15th February 2019 Vol. 2019-February, p. 1167-1172
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) has significantly increased the number of devices connected to the Internet, ranging from sensors to multi-source data information. As the IoT continues to evolve with new technologies, the number of threats and attacks against IoT devices is on the increase. Analyzing and detecting these attacks originating from different sources requires machine learning models. These models provide proactive solutions for detecting attacks and their sources. In this paper, we propose to apply a supervised machine learning classification technique to identify cyber-attacks from each source. More precisely, we apply an incremental piecewise linear classifier that constructs the boundary between sources/classes incrementally, starting with one hyperplane and adding more hyperplanes at each iteration. The algorithm terminates when no further significant improvement of the separation of sources/classes is possible. The construction and use of piecewise linear boundaries allows us to avoid possible overfitting. We apply the incremental piecewise linear classifier to a multi-source real world cyber security data set to identify cyber-attacks and their sources.
Cyberattack triage using incremental clustering for intrusion detection systems
- Authors: Taheri, Sona , Bagirov, Adil , Gondal, Iqbal , Brown, Simon
- Date: 2020
- Type: Text , Journal article
- Relation: International Journal of Information Security Vol. 19, no. 5 (2020), p. 597-607
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text:
- Reviewed:
- Description: Intrusion detection systems (IDSs) are devices or software applications that monitor networks or systems for malicious activities and signal alerts/alarms when such activity is discovered. However, an IDS may generate many false alerts, which affect its accuracy. In this paper, we develop a cyberattack triage algorithm to detect these alerts (so-called outliers). The proposed algorithm is designed using clustering, optimization and distance-based approaches. An optimization-based incremental clustering algorithm is proposed to find clusters of different types of cyberattacks. Using a special procedure, the set of clusters is divided into two subsets: normal and stable clusters. Then, outliers are found among the stable clusters using the average distance between centroids of the normal clusters. The proposed algorithm is evaluated using the well-known IDS data sets (Knowledge Discovery and Data Mining Cup 1999 and UNSW-NB15) and compared with several existing algorithms. Results show that the proposed algorithm has high detection accuracy and a very low false negative rate. © 2019, Springer-Verlag GmbH Germany, part of Springer Nature.
- Description: This research was conducted in Internet Commerce Security Laboratory (ICSL) funded by Westpac Banking Corporation Australia. In addition, the research by Dr. Sona Taheri and A/Prof. Adil Bagirov was supported by the Australian Government through the Australian Research Council’s Discovery Projects funding scheme (DP190100580).
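The triage idea in the abstract (split clusters into normal and stable subsets, then flag stable clusters whose centroids lie far from the normal ones) can be roughly sketched as follows. The size-based split, the `min_size` threshold, and the distance `factor` are illustrative stand-ins, not the paper's actual procedure:

```python
import numpy as np

def triage_outlier_clusters(centroids, sizes, min_size=10, factor=2.0):
    """Toy sketch: treat large clusters as 'normal' and small ones as
    'stable'; flag stable clusters whose centroid is far (relative to
    the average spacing of normal centroids) from every normal cluster."""
    centroids = np.asarray(centroids, dtype=float)
    sizes = np.asarray(sizes)
    normal = centroids[sizes >= min_size]
    stable_idx = np.where(sizes < min_size)[0]
    # Reference scale: average pairwise distance among normal centroids.
    dists = [np.linalg.norm(a - b) for i, a in enumerate(normal)
             for b in normal[i + 1:]]
    ref = np.mean(dists) if dists else 0.0
    outliers = []
    for i in stable_idx:
        d = np.min([np.linalg.norm(centroids[i] - c) for c in normal])
        if d > factor * ref:
            outliers.append(int(i))
    return outliers
```

A stable cluster near the normal ones is kept; only the genuinely distant ones are reported as outliers.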
Improving Naive Bayes classifier using conditional probabilities
- Authors: Taheri, Sona , Mammadov, Musa , Bagirov, Adil
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: The Naive Bayes classifier is the simplest among Bayesian network classifiers. It has been shown to be very efficient on a variety of data classification problems. However, its strong assumption that all features are conditionally independent given the class is often violated in many real world applications. Therefore, improving the Naive Bayes classifier by alleviating the feature independence assumption has attracted much attention. In this paper, we develop a new version of the Naive Bayes classifier that does not assume independence of features. The proposed algorithm approximates the interactions between features by using conditional probabilities. We present results of numerical experiments on several real world data sets, where continuous features are discretized by applying two different methods. These results demonstrate that the proposed algorithm significantly improves the performance of the Naive Bayes classifier while maintaining its robustness. © 2011, Australian Computer Society, Inc.
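For context, here is a minimal sketch of the standard categorical Naive Bayes baseline whose feature-independence assumption the paper relaxes; the Laplace smoothing is an assumed implementation detail, not something taken from the abstract:

```python
from collections import Counter, defaultdict
import math

def train_nb(X, y):
    """Standard Naive Bayes for categorical features: the baseline.
    P(c | x) is scored as log P(c) + sum_j log P(x_j | c), i.e.
    features are treated as conditionally independent given the class."""
    classes = Counter(y)
    n = len(y)
    # cond[c][j] counts values of feature j observed within class c
    cond = {c: defaultdict(Counter) for c in classes}
    for xi, c in zip(X, y):
        for j, v in enumerate(xi):
            cond[c][j][v] += 1

    def predict(x):
        best, best_lp = None, -math.inf
        for c, nc in classes.items():
            lp = math.log(nc / n)
            for j, v in enumerate(x):
                vals = cond[c][j]
                # Laplace-smoothed estimate of P(x_j = v | c)
                lp += math.log((vals[v] + 1) / (nc + len(set(vals) | {v})))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
    return predict
```

The paper's contribution replaces the per-feature independent terms with approximations of feature interactions via conditional probabilities.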
Capped K-NN Editing in definition lacking environments
- Authors: Stranieri, Andrew , Yatsko, Andrew , Golden, Isaac , Mammadov, Musa , Bagirov, Adil
- Date: 2013
- Type: Text , Journal article
- Relation: Journal of Pattern Recognition Research Vol. 8, no. 1 (2013), p. 39-58
- Full Text: false
- Reviewed:
- Description: While any input feature may contribute noise, imprecise specification of the class labels in data subdivided into classes is identified as a rather common source of noise. The misrepresentation may be characteristic of the data or be caused by forcing a regression problem into the classification type. Consideration is given to examples of this nature, and an alternative is proposed. In the main part, the approach is based on a well-known k-NN technique for treating noisy data. The paper advances an editing technique designed around the idea of a variable number of authenticating instances. Test runs performed on publicly available and proprietary data demonstrate the high retention ability of the new procedure without loss of classification accuracy. Noise reduction methods in the broader classification context are also extensively surveyed.
A simulated annealing-based maximum-margin clustering algorithm
- Authors: Seifollahi, Sattar , Bagirov, Adil , Borzeshi, Ehsan , Piccardi, Massimo
- Date: 2019
- Type: Text , Journal article
- Relation: Computational Intelligence Vol. 35, no. 1 (2019), p. 23-41
- Full Text:
- Reviewed:
- Description: Maximum-margin clustering is an extension of the support vector machine (SVM) to clustering. It partitions a set of unlabeled data into multiple groups by finding hyperplanes with the largest margins. Although existing algorithms have shown promising results, there is no guarantee of convergence of these algorithms to global solutions due to the nonconvexity of the optimization problem. In this paper, we propose a simulated annealing-based algorithm that is able to mitigate the issue of local minima in the maximum-margin clustering problem. The novelty of our algorithm is twofold, i.e., (i) it comprises a comprehensive cluster modification scheme based on simulated annealing, and (ii) it introduces a new approach based on the combination of k-means++ and SVM at each step of the annealing process. More precisely, k-means++ is initially applied to extract subsets of the data points. Then, an unsupervised SVM is applied to improve the clustering results. Experimental results on various benchmark data sets (of up to over a million points) give evidence that the proposed algorithm is more effective at solving the clustering problem than a number of popular clustering algorithms.
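The annealing skeleton that such a cluster modification scheme builds on can be sketched generically. Here `cost` and `neighbor` are problem-specific stand-ins (in the paper they would involve the k-means++/SVM step), and the geometric cooling schedule is an illustrative choice, not the paper's:

```python
import math
import random

def anneal(init, cost, neighbor, t0=1.0, cooling=0.95, steps=300, seed=1):
    """Generic simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-delta/temperature),
    and cool the temperature geometrically."""
    rng = random.Random(seed)
    x, fx = init, cost(init)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

The uphill acceptance early on is what lets the method escape the local minima that plague purely greedy maximum-margin clustering updates.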
Optimization based clustering algorithms for authorship analysis of phishing emails
- Authors: Seifollahi, Sattar , Bagirov, Adil , Layton, Robert , Gondal, Iqbal
- Date: 2017
- Type: Text , Journal article
- Relation: Neural Processing Letters Vol. 46, no. 2 (2017), p. 411-425
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: Phishing has given attackers the power to masquerade as legitimate users of organizations, such as banks, to scam money and private information from victims. Phishing is so widespread that combating the phishing attacks could overwhelm the victim organization. It is important to group the phishing attacks to formulate effective defence mechanisms. In this paper, we use clustering methods to analyze and characterize phishing emails and perform their relative attribution. Emails are first tokenized to a bag-of-words space and then transformed to a numeric vector space using frequencies of words in documents. The WordNet vocabulary is used to take the effects of similar words into account and to reduce sparsity. The word similarity measure is combined with the term frequencies to introduce a novel text transformation into numeric features. To improve accuracy, we apply inverse document frequency weighting, which gives higher weights to features used by fewer authors. The k-means algorithm and three recently introduced optimization-based algorithms (MS-MGKM, INCA and DCClust) are applied for clustering purposes. The optimization-based algorithms indicate the existence of well separated clusters in the phishing emails dataset. © 2017, Springer Science+Business Media New York.
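The bag-of-words and inverse-document-frequency weighting steps described here can be sketched in a toy form (without the WordNet similarity smoothing the paper combines with the term frequencies):

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: weight each term by its frequency in the
    document, scaled down by how many documents contain it, so that
    terms used in fewer documents get higher weights."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))          # document frequency per term
    vocab = sorted(df)
    out = []
    for d in docs:
        tf = Counter(d)
        out.append([tf[w] / len(d) * math.log(n / df[w]) for w in vocab])
    return vocab, out
```

A term appearing in every document gets weight zero, while rarer terms dominate the vector, which is exactly the effect the abstract describes for features used by fewer authors.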
Lagrange-type functions in constrained optimization
- Authors: Rubinov, Alex , Yang, Xiao , Bagirov, Adil , Gasimov, Rafail
- Date: 2003
- Type: Text , Journal article
- Relation: Journal of Mathematical Sciences Vol. 115, no. 4 (2003), p. 2437-2505
- Full Text: false
- Reviewed:
- Description: We examine various kinds of nonlinear Lagrange-type functions for constrained optimization problems. In particular, we study the weak duality, the zero duality gap property, and the existence of an exact parameter for these functions. The paper contains a detailed survey of results in these directions and a comparison of methods proposed by various authors. Some new results are also given.
Penalty functions with a small penalty parameter
- Authors: Rubinov, Alex , Yang, Xiao , Bagirov, Adil
- Date: 2002
- Type: Text , Journal article
- Relation: Optimization Methods and Software Vol. 17, no. 5 (2002), p. 931-964
- Full Text: false
- Reviewed:
- Description: In this article, we study the nonlinear penalization of a constrained optimization problem and show that the least exact penalty parameter of an equivalent parametric optimization problem can be diminished. We apply the theory of increasing positively homogeneous (IPH) functions so as to derive a simple formula for computing the least exact penalty parameter for the classical penalty function through perturbation function. We establish that various equivalent parametric reformulations of constrained optimization problems lead to reduction of exact penalty parameters. To construct a Lipschitz penalty function with a small exact penalty parameter for a Lipschitz programming problem, we make a transformation to the objective function by virtue of an increasing concave function. We present results of numerical experiments, which demonstrate that the Lipschitz penalty function with a small penalty parameter is more suitable for solving some nonconvex constrained problems than the classical penalty function.
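As a point of reference (standard material, not the paper's reformulation), the classical penalty function studied here replaces the constrained problem by an unconstrained one:

```latex
\min_{x} \; f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \; i = 1, \dots, m
\qquad \longrightarrow \qquad
\min_{x} \; f(x) + c \sum_{i=1}^{m} \max\{0, g_i(x)\}
```

The least exact penalty parameter is the smallest $c \ge 0$ for which the two problems share their solutions; the paper's contribution is showing that equivalent parametric reformulations of the constrained problem can make this parameter smaller.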
A comparative study of unsupervised classification algorithms in multi-sized data sets
- Authors: Quddus, Syed , Bagirov, Adil
- Date: 2019
- Type: Text , Conference paper
- Relation: 2nd Artificial Intelligence and Cloud Computing Conference, AICCC 2019, Kobe, 21-23 December 2019 p. 26-32
- Full Text: false
- Reviewed:
- Description: The ability to automatically mine and extract useful information from large data sets has been a common concern for organizations over the last few decades. The volume of data on the internet is increasing rapidly, and consequently the capacity to collect and store very large data sets is significantly increasing. Existing clustering algorithms are not always efficient and accurate in solving clustering problems for large data sets, and the development of accurate and fast data classification algorithms for very large scale data sets remains a challenge. In this paper, we present an overview of various algorithms and approaches recently used for clustering of large data and e-documents, and carry out a comparative study, implemented in C++, of the performance of several algorithms: the global k-means algorithm (GKM), the multi-start modified global k-means algorithm (MS-MGKM), the multi-start k-means algorithm (MS-KM), the difference of convex clustering algorithm (DCA), and the clustering algorithm based on the difference of convex representation of the cluster function and nonsmooth optimization (DC-L2). © 2019 ACM.
An incremental piecewise linear classifier based on polyhedral conic separation
- Authors: Ozturk, Gurkan , Bagirov, Adil , Kasimbeyli, Refail
- Date: 2015
- Type: Text , Journal article
- Relation: Machine Learning Vol. 101, no. 1-3 (2015), p. 397-413
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: In this paper, a piecewise linear classifier based on polyhedral conic separation is developed. This classifier builds nonlinear boundaries between classes using polyhedral conic functions. Since the number of polyhedral conic functions separating classes is not known a priori, an incremental approach is proposed to build separating functions. These functions are found by minimizing an error function which is nonsmooth and nonconvex. A special procedure is proposed to generate starting points to minimize the error function and this procedure is based on the incremental approach. The discrete gradient method, which is a derivative-free method for nonsmooth optimization, is applied to minimize the error function starting from those points. The proposed classifier is applied to solve classification problems on 12 publicly available data sets and compared with some mainstream and piecewise linear classifiers. © 2014, The Author(s).
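The building block here is the polyhedral conic function from the polyhedral conic separation literature, which has the standard form g(x) = w·(x - c) + ξ‖x - c‖₁ - γ. A minimal evaluation sketch follows; fitting the parameters (w, ξ, γ) is the paper's nonsmooth, nonconvex optimization step and is not shown:

```python
import numpy as np

def pcf(x, w, c, xi, gamma):
    """Polyhedral conic function g(x) = <w, x - c> + xi * ||x - c||_1 - gamma.
    Its sublevel set {x : g(x) <= 0} is a polyhedron 'centered' at c;
    the classifier stacks several such functions to carve out a class."""
    x, w, c = map(np.asarray, (x, w, c))
    return float(w @ (x - c) + xi * np.abs(x - c).sum() - gamma)
```

Points with g(x) ≤ 0 fall inside the conic region around the apex c, so the incremental scheme can keep adding functions until the classes are separated well enough.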
A heuristic algorithm for solving the minimum sum-of-squares clustering problems
- Authors: Ordin, Burak , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 61, no. 2 (2015), p. 341-361
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: Clustering is an important task in data mining. It can be formulated as a global optimization problem which is challenging for existing global optimization techniques, even on medium size data sets. Various heuristics have been developed to solve the clustering problem. The global k-means and modified global k-means algorithms are among the most efficient heuristics for solving the minimum sum-of-squares clustering problem. However, these algorithms are not always accurate in finding global or near global solutions. In this paper, we introduce a new algorithm to improve the accuracy of the modified global k-means algorithm in finding global solutions. We use an auxiliary clustering problem to generate a set of initial points and apply the k-means algorithm starting from these points to find the global solution. Numerical results on 16 real-world data sets clearly demonstrate the superiority of the proposed algorithm over the global and modified global k-means algorithms in finding global solutions to clustering problems.
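For reference, the local routine that the global and modified global k-means heuristics restart from carefully chosen initial points is plain Lloyd iteration, sketched here in a minimal form:

```python
import numpy as np

def kmeans(X, centers, iters=100):
    """Lloyd's k-means: alternate between assigning each point to its
    nearest center and moving each center to the mean of its points,
    until the centers stop changing."""
    X = np.asarray(X, dtype=float)
    C = np.asarray(centers, dtype=float).copy()
    for _ in range(iters):
        # Squared distances from every point to every center
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        newC = np.array([X[labels == k].mean(0) if (labels == k).any()
                         else C[k] for k in range(len(C))])
        if np.allclose(newC, C):
            break
        C = newC
    return C, labels
```

Because this local search only descends, the quality of the initial points dominates the result, which is why the paper's auxiliary-problem initialization matters.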
An incremental nonsmooth optimization algorithm for clustering using L1 and L∞ norms
- Authors: Ordin, Burak , Bagirov, Adil , Mohebi, Ehsam
- Date: 2020
- Type: Text , Journal article
- Relation: Journal of Industrial and Management Optimization Vol. 16, no. 6 (2020), p. 2757-2779
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text: false
- Reviewed:
- Description: An algorithm is developed for solving clustering problems with the similarity measure defined using the L1 and L∞ norms. It is based on an incremental approach and applies nonsmooth optimization methods to find cluster centers. Computational results on 12 data sets are reported, and the proposed algorithm is compared with the X-means algorithm.
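A standard fact behind the L1 case (general, not specific to this paper's algorithm): with the L1 similarity measure, the best cluster center is the coordinate-wise median of the cluster's points, since the median minimizes the sum of absolute deviations in each coordinate:

```python
import numpy as np

def l1_center(points):
    """Coordinate-wise median: the L1 analogue of the mean used by
    k-means, minimizing sum_i ||p_i - c||_1 over centers c."""
    return np.median(np.asarray(points, dtype=float), axis=0)
```

The nonsmoothness of the L1 and L∞ objectives (the absolute values and the max) is what makes ordinary gradient-based updates inapplicable and motivates the paper's nonsmooth optimization methods.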
A server side solution for detecting webInject : A machine learning approach
- Authors: Moniruzzaman, Md , Bagirov, Adil , Gondal, Iqbal , Brown, Simon
- Date: 2018
- Type: Text , Conference proceedings , Conference paper
- Relation: 22nd Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2018; Melbourne, Australia; 3rd June 2018; published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 11154 LNAI, p. 162-167
- Full Text: false
- Reviewed:
- Description: With the advancement of client-side, on-the-fly web content generation techniques, it becomes easier for attackers to modify the content of a website dynamically and gain access to valuable information. A large portion of online attacks is now carried out via WebInject. End users are not always skilled enough to differentiate between injected content and the actual contents of a webpage. Some of the existing solutions are designed for the client side, and every user has to install them on their system, which is a challenging task. In addition, since individuals use various platforms and tools, different solutions need to be designed. Existing server side solutions often focus on sanitizing and filtering inputs and will fail to detect obfuscated and hidden scripts. In this paper, we propose a server side solution using a machine learning approach to detect WebInject in banking websites. Unlike other techniques, our method collects features of the Document Object Model (DOM) and classifies them with the help of a pre-trained model.