
Malware variant identification using incremental clustering

- Black, Paul, Gondal, Iqbal, Bagirov, Adil, Moniruzzaman, Md

**Authors:** Black, Paul; Gondal, Iqbal; Bagirov, Adil; Moniruzzaman, Md
**Date:** 2021
**Type:** Text; Journal article
**Relation:** Electronics Vol. 10, no. 14 (2021)
**Relation:** http://purl.org/au-research/grants/arc/DP190100580


Cyberattack triage using incremental clustering for intrusion detection systems

- Taheri, Sona, Bagirov, Adil, Gondal, Iqbal, Brown, Simon

**Authors:** Taheri, Sona; Bagirov, Adil; Gondal, Iqbal; Brown, Simon
**Date:** 2020
**Type:** Text; Journal article
**Relation:** International Journal of Information Security Vol. 19, no. 5 (2020), p. 597-607
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Description:** Intrusion detection systems (IDSs) are devices or software applications that monitor networks or systems for malicious activities and signal alerts/alarms when such activity is discovered. However, an IDS may generate many false alerts, which affect its accuracy. In this paper, we develop a cyberattack triage algorithm to detect these alerts (so-called outliers). The proposed algorithm is designed using clustering, optimization and distance-based approaches. An optimization-based incremental clustering algorithm is proposed to find clusters of different types of cyberattacks. Using a special procedure, a set of clusters is divided into two subsets: normal and stable clusters. Then, outliers are found among stable clusters using an average distance between centroids of normal clusters. The proposed algorithm is evaluated using the well-known IDS data sets, Knowledge Discovery and Data Mining Cup 1999 and UNSW-NB15, and compared with some other existing algorithms. Results show that the proposed algorithm has a high detection accuracy and its false negative rate is very low. © 2019, Springer-Verlag GmbH Germany, part of Springer Nature.
**Description:** This research was conducted in the Internet Commerce Security Laboratory (ICSL), funded by Westpac Banking Corporation Australia. In addition, the research by Dr. Sona Taheri and A/Prof. Adil Bagirov was supported by the Australian Government through the Australian Research Council's Discovery Projects funding scheme (DP190100580).

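
The outlier-detection idea in the abstract above, splitting clusters into normal and stable subsets and flagging stable clusters whose centroids lie far from the normal ones, can be sketched in Python. This is an illustrative reconstruction, not the paper's algorithm: `flag_outlier_clusters`, the `min_size` threshold, and the `factor` multiplier are all assumed names and parameters.

```python
import math

def _dist(a, b):
    """Euclidean distance between two points given as sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def flag_outlier_clusters(centroids, sizes, min_size=5, factor=2.0):
    """Split clusters into normal (large) and stable (small) subsets, then
    flag a stable cluster as an outlier when its average distance to the
    normal centroids exceeds `factor` times the typical normal-to-normal
    centroid distance. All thresholds here are illustrative choices."""
    normal = [i for i, s in enumerate(sizes) if s >= min_size]
    stable = [i for i, s in enumerate(sizes) if s < min_size]
    if not normal or not stable:
        return []
    # typical pairwise distance among normal centroids
    pairs = [(i, j) for i in normal for j in normal if i != j]
    typical = sum(_dist(centroids[i], centroids[j]) for i, j in pairs) / max(len(pairs), 1)
    outliers = []
    for i in stable:
        avg = sum(_dist(centroids[j], centroids[i]) for j in normal) / len(normal)
        if avg > factor * typical:
            outliers.append(i)
    return outliers
```

Here the small cluster far from the three tight normal clusters would be flagged, mirroring the triage of rare attack patterns described in the abstract.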

Aggregate subgradient method for nonsmooth DC optimization

- Bagirov, Adil, Taheri, Sona, Joki, Kaisa, Karmitsa, Napsu, Mäkelä, Marko

**Authors:** Bagirov, Adil; Taheri, Sona; Joki, Kaisa; Karmitsa, Napsu; Mäkelä, Marko
**Date:** 2021
**Type:** Text; Journal article
**Relation:** Optimization Letters Vol. 15, no. 1 (2021), p. 83-96
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Description:** The aggregate subgradient method is developed for solving unconstrained nonsmooth difference of convex (DC) optimization problems. The proposed method shares some similarities with both the subgradient and the bundle methods. Aggregate subgradients are defined as a convex combination of subgradients computed at null steps between two serious steps. At each iteration, search directions are found using only two subgradients: the aggregate subgradient and a subgradient computed at the current null step. It is proved that the proposed method converges to a critical point of the DC optimization problem and also that the number of null steps between two serious steps is finite. The new method is tested using some academic test problems and compared with several other nonsmooth DC optimization solvers. © 2020, Springer-Verlag GmbH Germany, part of Springer Nature.

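
The aggregation step described above, forming a convex combination of two subgradients, can be illustrated by the minimal-norm convex combination of a pair of vectors, which has a closed form. A hedged sketch: the function name and the use of this particular combination as a search-direction proxy are illustrative, not the paper's precise aggregation rule.

```python
def aggregate_direction(u, v):
    """Return the minimal-norm point on the segment between subgradients u
    and v, i.e. t*u + (1-t)*v with t in [0, 1] minimizing the norm.
    Illustrative of subgradient aggregation, not the paper's exact rule."""
    d = [a - b for a, b in zip(u, v)]            # u - v
    denom = sum(x * x for x in d)                # ||u - v||^2
    if denom == 0.0:
        return list(u)                           # identical subgradients
    # unconstrained minimizer of ||v + t*(u - v)||^2, then clipped to [0, 1]
    t = sum(b * (b - a) for a, b in zip(u, v)) / denom
    t = max(0.0, min(1.0, t))
    return [t * a + (1 - t) * b for a, b in zip(u, v)]
```

A near-zero aggregate direction signals approximate criticality, which is why bundle-type methods monitor the norm of this combination.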

An augmented subgradient method for minimizing nonsmooth DC functions

- Bagirov, Adil, Hoseini Monjezi, Najmeh, Taheri, Sona

**Authors:** Bagirov, Adil; Hoseini Monjezi, Najmeh; Taheri, Sona
**Date:** 2021
**Type:** Text; Journal article
**Relation:** Computational Optimization and Applications Vol. 80, no. 2 (2021), p. 411-438
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Full Text:** false
**Description:** A method, called an augmented subgradient method, is developed to solve unconstrained nonsmooth difference of convex (DC) optimization problems. At each iteration of this method, search directions are found by using several subgradients of the first DC component and one subgradient of the second DC component of the objective function. The developed method applies an Armijo-type line search procedure to find the next iteration point. It is proved that the sequence of points generated by the method converges to a critical point of the unconstrained DC optimization problem. The performance of the method is demonstrated using academic test problems with nonsmooth DC objective functions, and its performance is compared with that of two general nonsmooth optimization solvers and five solvers specifically designed for unconstrained DC optimization. Computational results show that the developed method is efficient and robust for solving nonsmooth DC optimization problems. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
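
The Armijo-type line search mentioned in this abstract is a standard backtracking rule; a generic sketch follows. The function name and constants are illustrative, and the paper's actual procedure is adapted to subgradient information rather than this textbook form.

```python
def armijo_step(f, x, d, fx, slope, t0=1.0, beta=0.5, sigma=1e-4, max_tries=50):
    """Backtracking Armijo line search: shrink the step t until the
    sufficient-decrease condition f(x + t*d) <= f(x) + sigma*t*slope holds,
    where `slope` is a (negative) directional derivative estimate along d."""
    t = t0
    for _ in range(max_tries):
        trial = [xi + t * di for xi, di in zip(x, d)]
        if f(trial) <= fx + sigma * t * slope:
            return t
        t *= beta                      # halve the step and retry
    return t
```

For f(x) = x² at x = 2 with descent direction d = -4 (slope -16), the full step overshoots and one halving suffices.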

Robust piecewise linear L1-regression via nonsmooth DC optimization

- Bagirov, Adil, Taheri, Sona, Karmitsa, Napsu, Sultanova, Nargiz, Asadi, Soodabeh

**Authors:** Bagirov, Adil; Taheri, Sona; Karmitsa, Napsu; Sultanova, Nargiz; Asadi, Soodabeh
**Date:** 2022
**Type:** Text; Journal article
**Relation:** Optimization Methods and Software Vol. 37, no. 4 (2022), p. 1289-1309
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Full Text:** false
**Description:** The piecewise linear L1-regression problem is formulated as an unconstrained difference of convex (DC) optimization problem, and an algorithm for solving this problem is developed. Auxiliary problems are introduced to design an adaptive approach to generate a suitable piecewise linear regression model and starting points for solving the underlying DC optimization problems. The performance of the proposed algorithm as both an approximation and a prediction tool is evaluated using synthetic and real-world data sets containing outliers. It is also compared with mainstream machine learning regression algorithms using various performance measures. Results demonstrate that the new algorithm is robust to outliers and, in general, provides better predictions than the other alternative regression algorithms for most data sets used in the numerical experiments. © 2020 Informa UK Limited, trading as Taylor & Francis Group.

Limited Memory Bundle Method for Clusterwise Linear Regression

- Karmitsa, Napsu, Bagirov, Adil, Taheri, Sona, Joki, Kaisa

**Authors:** Karmitsa, Napsu; Bagirov, Adil; Taheri, Sona; Joki, Kaisa
**Date:** 2022
**Type:** Text; Book chapter
**Relation:** Intelligent Systems, Control and Automation: Science and Engineering, p. 109-122
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Full Text:** false
**Description:** A clusterwise linear regression problem consists of finding a number of linear functions, each approximating a subset of the given data. In this paper, the limited memory bundle method is modified and combined with the incremental approach to solve this problem using its nonsmooth optimization formulation. The main contribution of the proposed method is to obtain a fast solution time for large-scale clusterwise linear regression problems. The proposed algorithm is tested on small and large real-world data sets and compared with other algorithms for clusterwise linear regression. Numerical results demonstrate that the proposed algorithm is especially efficient in data sets with large numbers of data points and input variables. © 2022, Springer Nature Switzerland AG.
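
The clusterwise linear regression objective described above, where each data point is approximated by the best of several linear functions, can be sketched as the following loss. This is illustrative only: `clr_loss` and the absolute-error choice are assumptions, not the chapter's exact nonsmooth formulation.

```python
def clr_loss(X, y, models):
    """Clusterwise linear regression loss (illustrative sketch): each point
    is charged the absolute error of its best-fitting linear model, where
    `models` is a list of (weights, intercept) pairs."""
    total = 0.0
    for x, yi in zip(X, y):
        total += min(abs(sum(wj * xj for wj, xj in zip(w, x)) + b - yi)
                     for w, b in models)
    return total
```

Minimizing this over the model parameters is nonsmooth and nonconvex, which is what motivates the bundle-method machinery in the chapter.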

An incremental nonsmooth optimization algorithm for clustering using L1 and L∞ norms

- Ordin, Burak, Bagirov, Adil, Mohebi, Ehsam

**Authors:** Ordin, Burak; Bagirov, Adil; Mohebi, Ehsam
**Date:** 2020
**Type:** Text; Journal article
**Relation:** Journal of Industrial and Management Optimization Vol. 16, no. 6 (2020), p. 2757-2779
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Full Text:** false
**Description:** An algorithm is developed for solving clustering problems with the similarity measure defined using the L1 and L∞ norms. It is based on an incremental approach and applies nonsmooth optimization methods to find cluster centers. Computational results on 12 data sets are reported, and the proposed algorithm is compared with the X-means algorithm.
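
The similarity measures used above can be made concrete: the clustering error charges each point its L1 or L∞ distance to the nearest cluster center. A minimal sketch (function names are illustrative):

```python
def l1_dist(a, b):
    """L1 (Manhattan) distance between two points."""
    return sum(abs(x - y) for x, y in zip(a, b))

def linf_dist(a, b):
    """L-infinity (Chebyshev) distance between two points."""
    return max(abs(x - y) for x, y in zip(a, b))

def clustering_error(points, centers, dist):
    """Clustering objective under a chosen norm: sum over points of the
    distance to the nearest center."""
    return sum(min(dist(p, c) for c in centers) for p in points)
```

The choice of norm changes both the objective value and the optimal centers, which is why the paper treats the L1 and L∞ cases with dedicated nonsmooth machinery.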

Methods and applications of clusterwise linear regression : a survey and comparison

- Long, Qiang, Bagirov, Adil, Taheri, Sona, Sultanova, Nargiz, Wu, Xue

**Authors:** Long, Qiang; Bagirov, Adil; Taheri, Sona; Sultanova, Nargiz; Wu, Xue
**Date:** 2023
**Type:** Text; Journal article
**Relation:** ACM Transactions on Knowledge Discovery from Data Vol. 17, no. 3 (2023)
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Full Text:** false
**Description:** Clusterwise linear regression (CLR) is a well-known technique for approximating data using more than one linear function. It is based on the combination of clustering and multiple linear regression methods. This article provides a comprehensive survey and comparative assessment of CLR, including model formulations, descriptions of algorithms, and their performance on small to large-scale synthetic and real-world datasets. Some applications of the CLR algorithms and possible future research directions are also discussed. © 2023 Association for Computing Machinery.

A novel optimization approach towards improving separability of clusters

- Bagirov, Adil, Hoseini-Monjezi, Najmeh, Taheri, Sona

**Authors:** Bagirov, Adil; Hoseini-Monjezi, Najmeh; Taheri, Sona
**Date:** 2023
**Type:** Text; Journal article
**Relation:** Computers and Operations Research Vol. 152 (2023)
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Full Text:** false
**Description:** The objective functions in optimization models of the sum-of-squares clustering problem reflect intra-cluster similarity and inter-cluster dissimilarities, and in general, optimal values of these functions can be considered as appropriate measures for compactness of clusters. However, the use of the objective function alone may not lead to the finding of separable clusters. To address this shortcoming in existing models for clustering, we develop a new optimization model where the objective function is represented as a sum of two terms reflecting the compactness and separability of clusters. Based on this model, we develop a two-phase incremental clustering algorithm. In the first phase, the clustering function is minimized to find compact clusters, and in the second phase, a new model is applied to improve the separability of clusters. The Davies–Bouldin cluster validity index is applied as an additional measure to compare the compactness of clusters, and silhouette coefficients are used to estimate the separability of clusters. The performance of the proposed algorithm is demonstrated and compared with that of four other algorithms using synthetic and real-world data sets. Numerical results clearly show that, in comparison with other algorithms, the new algorithm is able to find clusters with better separability and similar compactness. © 2022

Finding compact and well-separated clusters : clustering using silhouette coefficients

- Bagirov, Adil, Aliguliyev, Ramiz, Sultanova, Nargiz

**Authors:** Bagirov, Adil; Aliguliyev, Ramiz; Sultanova, Nargiz
**Date:** 2023
**Type:** Text; Journal article
**Relation:** Pattern Recognition Vol. 135 (2023)
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Full Text:** false
**Description:** Finding compact and well-separated clusters in data sets is a challenging task. Most clustering algorithms try to minimize certain clustering objective functions. These functions usually reflect the intra-cluster similarity and inter-cluster dissimilarity. However, the use of such functions alone may not lead to the finding of well-separated and, in some cases, compact clusters. Therefore, additional measures, called cluster validity indices, are used to estimate the true number of well-separated and compact clusters. Some of these indices are well suited to be included in the optimization model of the clustering problem. Silhouette coefficients are among such indices. In this paper, a new optimization model of the clustering problem is developed where the clustering function is used as an objective and silhouette coefficients are used to formulate constraints. Then an algorithm, called CLUSCO (CLustering Using Silhouette COefficients), is designed to construct clusters incrementally. Three schemes are discussed to reduce the computational complexity of the algorithm. Its performance is evaluated using fourteen real-world data sets and compared with that of three state-of-the-art clustering algorithms. Results show that CLUSCO is able to compute compact clusters which are significantly better separated than those obtained by other algorithms. © 2022 Elsevier Ltd
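
Silhouette coefficients, which the paper uses to formulate constraints, follow a standard definition: for point i, a is its mean distance to its own cluster and b the smallest mean distance to another cluster, with s = (b - a)/max(a, b). A minimal sketch of the coefficient itself (how CLUSCO embeds it in constraints is not reproduced here):

```python
def silhouette(i, points, labels):
    """Silhouette coefficient of point i under Euclidean distance; returns
    0.0 for a singleton cluster, by the usual convention."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    own = [j for j, l in enumerate(labels) if l == labels[i] and j != i]
    if not own:
        return 0.0
    a = sum(dist(points[i], points[j]) for j in own) / len(own)  # cohesion
    b = float("inf")                                             # separation
    for lab in set(labels) - {labels[i]}:
        other = [j for j, l in enumerate(labels) if l == lab]
        b = min(b, sum(dist(points[i], points[j]) for j in other) / len(other))
    return (b - a) / max(a, b)
```

Values near 1 indicate a point is much closer to its own cluster than to any other, which is the property the paper's constraints push toward.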

Bundle enrichment method for nonsmooth difference of convex programming problems

- Gaudioso, Manlio, Taheri, Sona, Bagirov, Adil, Karmitsa, Napsu

**Authors:** Gaudioso, Manlio; Taheri, Sona; Bagirov, Adil; Karmitsa, Napsu
**Date:** 2023
**Type:** Text; Journal article
**Relation:** Algorithms Vol. 16, no. 8 (2023)
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Description:** The Bundle Enrichment Method (BEM-DC) is introduced for solving nonsmooth difference of convex (DC) programming problems. The novelty of the method consists of the dynamic management of the bundle. More specifically, a DC model, being the difference of two convex piecewise affine functions, is formulated. The (global) minimization of the model is tackled by solving a set of convex problems whose cardinality depends on the number of linearizations adopted to approximate the second DC component function. The new bundle management policy distributes the information coming from previous iterations to separately model the DC components of the objective function. Such a distribution is driven by the sign of linearization errors. If the displacement suggested by the model minimization provides no sufficient decrease of the objective function, then a temporary enrichment of the cutting plane approximation of just the first DC component function takes place until either the termination of the algorithm is certified or a sufficient decrease is achieved. The convergence of the BEM-DC method is studied, and computational results on a set of academic test problems with nonsmooth DC objective functions are provided. © 2023 by the authors.


Nonsmooth optimization-based hyperparameter-free neural networks for large-scale regression

- Karmitsa, Napsu, Taheri, Sona, Joki, Kaisa, Paasivirta, Pauliina, Defterdarovic, J., Bagirov, Adil, Mäkelä, Marko

**Authors:** Karmitsa, Napsu; Taheri, Sona; Joki, Kaisa; Paasivirta, Pauliina; Defterdarovic, J.; Bagirov, Adil; Mäkelä, Marko
**Date:** 2023
**Type:** Text; Journal article
**Relation:** Algorithms Vol. 16, no. 9 (2023)
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Description:** In this paper, a new nonsmooth optimization-based algorithm for solving large-scale regression problems is introduced. The regression problem is modeled as a fully-connected feedforward neural network with one hidden layer, piecewise linear activation, and the (Formula presented.) -loss functions. A modified version of the limited memory bundle method is applied to minimize this nonsmooth objective. In addition, a novel constructive approach for automated determination of the proper number of hidden nodes is developed. Finally, large real-world data sets are used to evaluate the proposed algorithm and to compare it with some state-of-the-art neural network algorithms for regression. The results demonstrate the superiority of the proposed algorithm as a predictive tool in most data sets used in the numerical experiments. © 2023 by the authors.


Nonsmooth optimization-based model and algorithm for semisupervised clustering

- Bagirov, Adil, Taheri, Sona, Bai, Fusheng, Zheng, Fangying

**Authors:** Bagirov, Adil; Taheri, Sona; Bai, Fusheng; Zheng, Fangying
**Date:** 2023
**Type:** Text; Journal article
**Relation:** IEEE Transactions on Neural Networks and Learning Systems Vol. 34, no. 9 (2023), p. 5517-5530
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Full Text:** false
**Description:** Using a nonconvex nonsmooth optimization approach, we introduce a model for semisupervised clustering (SSC) with pairwise constraints. In this model, the objective function is represented as a sum of three terms: the first term reflects the clustering error for unlabeled data points, the second term expresses the error for data points with must-link (ML) constraints, and the third term represents the error for data points with cannot-link (CL) constraints. This function is nonconvex and nonsmooth. To find its optimal solutions, we introduce an adaptive SSC (A-SSC) algorithm. This algorithm is based on the combination of the nonsmooth optimization method and an incremental approach, which involves the auxiliary SSC problem. The algorithm constructs clusters incrementally, starting from one cluster and gradually adding one cluster center at each iteration. The solutions to the auxiliary SSC problem are utilized as starting points for solving the nonconvex SSC problem. The discrete gradient method (DGM) of nonsmooth optimization is applied to solve the underlying nonsmooth optimization problems. This method does not require subgradient evaluations and uses only function values. The performance of the A-SSC algorithm is evaluated and compared with four benchmarking SSC algorithms on one synthetic and 12 real-world datasets. Results demonstrate that the proposed algorithm outperforms the other four algorithms in identifying compact and well-separated clusters while satisfying most constraints. © 2021 IEEE.
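
The three-term objective described in this abstract can be sketched directly: a clustering error for all points plus penalties counting violated must-link and cannot-link constraints. This is an illustrative discrete sketch over fixed labels; the paper's continuous nonsmooth model and the A-SSC algorithm are not reproduced here, and `ssc_objective` and `lam` are assumed names.

```python
def ssc_objective(points, labels, centers, must_link, cannot_link, lam=1.0):
    """Illustrative three-term SSC objective: squared clustering error plus
    lam-weighted counts of violated must-link and cannot-link constraints."""
    def sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    err = sum(sq(p, centers[labels[i]]) for i, p in enumerate(points))
    ml = sum(1 for i, j in must_link if labels[i] != labels[j])   # ML violations
    cl = sum(1 for i, j in cannot_link if labels[i] == labels[j]) # CL violations
    return err + lam * (ml + cl)
```

A perfect assignment that honors all constraints scores zero, while splitting a must-link pair or merging a cannot-link pair adds a penalty alongside the clustering error.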
