Your selections:

Bagirov, Adil (17)
Webb, Dean (7)
Sukhorukova, Nadezda (5)
Mirzayeva, Hijran (3)
Kasimbeyli, Refail (2)
Ozturk, Gurkan (2)
Rubinov, Alex (2)
Stranieri, Andrew (2)
Tian, Jing (2)
Wu, Zhiyou (2)
Anderson, Neal (1)
Barton, Andrew (1)
Baynes, Timothy (1)
Boland, John (1)
Branch, Philip (1)
Briggs, Steven (1)
Creighton, Douglas (1)
Crouzeix, Jean-Pierre (1)
Daniels, Peter (1)


0102 Applied Mathematics (10)
0103 Numerical and Computational Mathematics (10)
Nonsmooth optimization (10)
0802 Computation Theory and Mathematics (6)
0801 Artificial Intelligence and Image Processing (4)
Nonconvex optimization (4)
0906 Electrical and Electronic Engineering (3)
Chebyshev approximation (3)
Cluster analysis (3)
Data mining (3)
Optimization (3)
Supervised learning (3)
Classification (2)
Clusterwise linear regression (2)
Clusterwise regression (2)
DC programming (2)
Data analysis (2)
Global optimization (2)
Global optimization method (2)
Incremental algorithm (2)


Analysis and comparison of co-occurrence matrix and pixel n-gram features for mammographic images

- Kulkarni, Pradnya, Stranieri, Andrew, Kulkarni, Sid, Ugon, Julien, Mittal, Manish

**Authors:** Kulkarni, Pradnya; Stranieri, Andrew; Kulkarni, Sid; Ugon, Julien; Mittal, Manish
**Date:** 2015
**Type:** Text; Conference paper
**Relation:** International Conference on Communication and Computing, p. 7-14
**Full Text:** false
**Description:** Mammography is a proven way of detecting breast cancer at an early stage. Various feature extraction techniques such as histograms, co-occurrence matrices, local binary patterns, Gabor filters and wavelet transforms are used for analysing mammograms. The novel pixel N-gram feature extraction technique is inspired by the character N-gram concept from text retrieval. In this paper, we compare the pixel N-gram technique with the co-occurrence matrix feature extraction technique. The experiments were conducted on the benchmark miniMIAS mammography database. Classification of mammograms into normal and abnormal categories using N-gram features showed promising results, with greater classification accuracy, sensitivity and specificity than classification using co-occurrence matrix features. Moreover, N-gram feature computation is found to be considerably faster than co-occurrence matrix feature computation.
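The pixel N-gram idea summarized above can be sketched in a few lines: quantize pixel intensities into a small number of levels, slide a length-n window along each image row, and histogram the resulting tuples into a feature vector. This is a minimal pure-Python reading of the concept; the function name, the row-wise scan, and the quantization scheme are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def pixel_ngrams(image, n=2, levels=8, max_intensity=255):
    """Histogram of length-n runs of quantized pixel intensities,
    counted along each image row (one plausible reading of the
    pixel N-gram idea; all parameters are illustrative)."""
    counts = Counter()
    for row in image:
        # Quantize 0..max_intensity into `levels` bins.
        q = [min(p * levels // (max_intensity + 1), levels - 1) for p in row]
        for i in range(len(q) - n + 1):
            counts[tuple(q[i:i + n])] += 1
    total = sum(counts.values()) or 1

    def index(gram):
        # Flatten an n-tuple of levels into a single vector index.
        idx = 0
        for g in gram:
            idx = idx * levels + g
        return idx

    # Dense, normalized feature vector over all levels**n possible n-grams.
    vec = [0.0] * (levels ** n)
    for gram, c in counts.items():
        vec[index(gram)] = c / total
    return vec
```

Because the counting is a single pass over the pixels, this is consistent with the abstract's observation that N-gram features are cheaper to compute than a co-occurrence matrix.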

Optimization in wireless local area network

- Kouhbor, Shahnaz, Ugon, Julien, Kruger, Alexander, Rubinov, Alex, Branch, Philip

**Authors:** Kouhbor, Shahnaz; Ugon, Julien; Kruger, Alexander; Rubinov, Alex; Branch, Philip
**Date:** 2004
**Type:** Text; Conference paper
**Relation:** Paper presented at ICOTA6: 6th International Conference on Optimization: Techniques and Applications, Ballarat, Victoria, 9 December 2004
**Full Text:** false

Workload coverage through nonsmooth optimization

- Sukhorukova, Nadezda, Ugon, Julien, Yearwood, John

**Authors:** Sukhorukova, Nadezda; Ugon, Julien; Yearwood, John
**Date:** 2009
**Type:** Text; Journal article
**Relation:** Optimization Methods and Software Vol. 24, no. 2 (2009), p. 285-298
**Full Text:** false
**Description:** In this paper, workload coverage is the problem of identifying a pattern of days worked and days off, along with the number of hours worked on each work day. This pattern must satisfy certain work-related constraints and fit best to a predefined workload. In our study, we formulate the problem of workload coverage as an optimization problem. We propose a number of models which take into consideration various staffing constraints. For each of these models, our study aims to find a compromise between an accurate workload coverage and the ability to solve the corresponding optimization problems in a reasonable time. Numerical experiments on each model are carried out and the results are presented. Interestingly, the nonlinear programming approaches are found to be competitive with linear programming ones. © 2009 Taylor & Francis.

- Sukhorukova, Nadezda, Ugon, Julien

**Authors:** Sukhorukova, Nadezda; Ugon, Julien
**Date:** 2016
**Type:** Text; Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 171, no. 2 (2016), p. 536-549
**Full Text:** false
**Description:** In this paper, we derive conditions for best uniform approximation by fixed knots polynomial splines with weighting functions. The theory of Chebyshev approximation for fixed knots polynomial functions is very elegant and complete. Necessary and sufficient optimality conditions have been developed, leading to efficient algorithms for constructing optimal spline approximations. The optimality conditions are based on the notion of alternance (maximal deviation points with alternating deviation signs). In this paper, we extend these results to the case when the model function is a product of fixed knots polynomial splines (whose parameters are subject to optimization) and other functions (whose parameters are predefined). This problem is nonsmooth, and therefore we make use of convex and nonsmooth analysis to solve it.

A new modified global k-means algorithm for clustering large data sets

- Bagirov, Adil, Ugon, Julien, Webb, Dean

**Authors:** Bagirov, Adil; Ugon, Julien; Webb, Dean
**Date:** 2009
**Type:** Text; Conference paper
**Relation:** Paper presented at the XIIIth International Conference on Applied Stochastic Models and Data Analysis (ASMDA 2009), Vilnius, Lithuania, 30 June - 3 July 2009, p. 1-5
**Full Text:** false
**Description:** The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and inefficient for solving clustering problems in large data sets. Recently, in order to resolve difficulties with the choice of starting points, incremental approaches have been developed. The modified global k-means algorithm is based on such an approach. It iteratively adds one cluster center at a time. Numerical experiments show that this algorithm considerably improves the k-means algorithm. However, it is not suitable for clustering very large data sets. In this paper, a new version of the modified global k-means algorithm is proposed. We introduce an auxiliary cluster function to generate a set of starting points spanning different parts of the data set. We exploit information gathered in previous iterations of the incremental algorithm to reduce its complexity.
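The incremental idea described in this abstract can be sketched as follows: start from the one-cluster solution (the data mean), and at each step seed the next center with the data point that most decreases the clustering objective given the current centers, then refine all centers with standard k-means (Lloyd) iterations. The seeding rule below is a simplified stand-in for the paper's auxiliary cluster function, and the whole sketch is an illustration of the incremental scheme, not the authors' algorithm.

```python
def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def lloyd(points, centers, iters=50):
    """Standard k-means (Lloyd) refinement on tuples of floats."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda c: sqdist(p, centers[c]))
            clusters[j].append(p)
        new = []
        for j, cl in enumerate(clusters):
            if cl:
                new.append(tuple(sum(x) / len(cl) for x in zip(*cl)))
            else:
                new.append(centers[j])  # keep an emptied center unchanged
        centers = new
    return centers

def incremental_kmeans(points, k):
    """Add one center at a time, seeding each new center with the
    data point that most reduces the objective (a simplified
    stand-in for the auxiliary cluster function)."""
    centers = [tuple(sum(x) / len(points) for x in zip(*points))]
    while len(centers) < k:
        def gain(p):
            # Total decrease in point-to-nearest-center cost if p
            # were added as a center.
            return sum(max(0.0, min(sqdist(q, c) for c in centers) - sqdist(q, p))
                       for q in points)
        best = max(points, key=gain)
        centers = lloyd(points, centers + [best])
    return centers
```

The expensive part is the `gain` scan over all points for each candidate; the paper's contribution is precisely about reusing information from earlier iterations to cut this cost on large data sets.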

An incremental approach for the construction of a piecewise linear classifier

- Bagirov, Adil, Ugon, Julien, Webb, Dean

**Authors:** Bagirov, Adil; Ugon, Julien; Webb, Dean
**Date:** 2009
**Type:** Text; Conference paper
**Relation:** Paper presented at the XIIIth International Conference on Applied Stochastic Models and Data Analysis (ASMDA 2009), Vilnius, Lithuania, 30 June - 3 July 2009, p. 507-511
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Description:** In this paper the problem of finding piecewise linear boundaries between sets is considered and is applied to solving supervised data classification problems. An algorithm for the computation of piecewise linear boundaries, consisting of two main steps, is proposed. In the first step sets are approximated by hyperboxes to find so-called "indeterminate" regions between sets. In the second step sets are separated inside these "indeterminate" regions by piecewise linear functions. These functions are computed incrementally, starting with a linear function. Results of numerical experiments are reported. These results demonstrate that the new algorithm requires a reasonable training time and produces consistently good test set accuracy on most data sets compared with mainstream classifiers.

Optimality conditions and optimization methods for quartic polynomial optimization

- Wu, Zhiyou, Tian, Jing, Quan, Jing, Ugon, Julien

**Authors:** Wu, Zhiyou; Tian, Jing; Quan, Jing; Ugon, Julien
**Date:** 2014
**Type:** Text; Journal article
**Relation:** Applied Mathematics and Computation Vol. 232 (2014), p. 968-982
**Full Text:** false
**Description:** In this paper, the multivariate quartic polynomial optimization problem (QPOP) is considered. Quartic optimization problems arise in various practical applications and are proved to be NP-hard. We discuss necessary global optimality conditions for the quartic problem (QPOP). We then present a new (strongly or ε-strongly) local optimization method based on these necessary global optimality conditions, which may escape and improve some KKT points. Finally, we design a global optimization method for problem (QPOP) by combining the new (strongly or ε-strongly) local optimization method with an auxiliary function. Numerical examples show that our algorithms are efficient and stable.

Nonsmooth optimization algorithm for solving clusterwise linear regression problems

- Bagirov, Adil, Ugon, Julien, Mirzayeva, Hijran

**Authors:** Bagirov, Adil; Ugon, Julien; Mirzayeva, Hijran
**Date:** 2015
**Type:** Text; Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 164, no. 3 (2015), p. 755-780
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** Clusterwise linear regression consists of finding a number of linear regression functions each approximating a subset of the data. In this paper, the clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem, and an algorithm based on an incremental approach and on the discrete gradient method of nonsmooth optimization is designed to solve it. This algorithm incrementally divides the whole dataset into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate good starting points for solving global optimization problems at each iteration of the incremental algorithm. The algorithm is compared with the multi-start Späth and the incremental algorithms on several publicly available datasets for regression analysis.
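The nonsmooth objective behind clusterwise linear regression is easy to state: each data point is charged the squared error of whichever linear model fits it best, and the pointwise minimum over models is what makes the function nonsmooth and nonconvex. A minimal sketch of evaluating that objective (the formulation, not the paper's solution algorithm):

```python
def clr_objective(hyperplanes, data):
    """Clusterwise linear regression fit function:
    sum over points of min over models (w, b) of the squared
    residual (w.x + b - y)**2. The inner min is the source of
    nonsmoothness and nonconvexity."""
    total = 0.0
    for x, y in data:
        errs = []
        for w, b in hyperplanes:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            errs.append((pred - y) ** 2)
        total += min(errs)
    return total
```

With two models that exactly match two underlying lines, the objective drops to zero even though no single line fits the data, which is the point of the clusterwise formulation.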

Generalised rational approximation and its application to improve deep learning classifiers

- Peiris, V, Sharon, Nir, Sukhorukova, Nadezda, Ugon, Julien

**Authors:** Peiris, V; Sharon, Nir; Sukhorukova, Nadezda; Ugon, Julien
**Date:** 2021
**Type:** Text; Journal article
**Relation:** Applied Mathematics and Computation Vol. 389 (2021)
**Relation:** http://purl.org/au-research/grants/arc/DP180100602
**Full Text:** false
**Description:** A rational approximation (that is, approximation by a ratio of two polynomials) is a flexible alternative to polynomial approximation. In particular, rational functions exhibit accurate estimations to nonsmooth and non-Lipschitz functions, where polynomial approximations are not efficient. We prove that the optimisation problems appearing in the best uniform rational approximation and its generalisation to a ratio of linear combinations of basis functions are quasiconvex even when the basis functions are not restricted to monomials. Then we show how this fact can be used in the development of computational methods. This paper presents a theoretical study of the arising optimisation problems and provides results of several numerical experiments. We apply our approximation as a preprocessing step to deep learning classifiers and demonstrate that the classification accuracy is significantly improved compared to the classification of the raw signals. © 2020
**Description:** This research was supported by the Australian Research Council (ARC), Solving hard Chebyshev approximation problems through nonsmooth analysis (Discovery Project DP180100602). This research was partially sponsored by a Tel Aviv-Swinburne Research Collaboration Grant (2019).

A modified parallel optimization system for updating large-size time-evolving flow matrix

- Yu, Ting, Ugon, Julien, Yu, Wei

**Authors:** Yu, Ting; Ugon, Julien; Yu, Wei
**Date:** 2011
**Type:** Text; Journal article
**Relation:** Information Sciences Vol. 194 (2011), p. 57-67
**Full Text:** false
**Description:** Flow matrices are widely used in many disciplines, but few methods can estimate them. This paper presents a knowledge-based system capable of estimating and updating a large-size time-evolving flow matrix. The system consists of two major components, for matrix estimation and parallel optimization. The matrix estimation algorithm interprets and follows users' query scripts, retrieves data from various sources and integrates them for the matrix estimation. The parallel optimization component is built upon a supercomputing facility to utilize its computational power to efficiently process a large amount of data and estimate a large-size complex matrix. The experimental results demonstrate its outstanding performance and acceptable accuracy by directly and indirectly comparing the estimated matrix with the actual matrix constructed from surveys. © 2011 Elsevier Inc. All rights reserved.

Codifferential method for minimizing nonsmooth DC functions

**Authors:** Bagirov, Adil; Ugon, Julien
**Date:** 2011
**Type:** Text; Journal article
**Relation:** Journal of Global Optimization Vol. 50, no. 1 (2011), p. 3-22
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Description:** In this paper, a new algorithm to locally minimize nonsmooth functions represented as a difference of two convex functions (DC functions) is proposed. The algorithm is based on the concept of codifferential. It is assumed that a DC decomposition of the objective function is known a priori. We develop an algorithm to compute descent directions using a few elements from the codifferential. The convergence of the minimization algorithm is studied and its comparison with different versions of the bundle methods using results of numerical experiments is given. © 2010 Springer Science+Business Media, LLC.
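To see how a DC decomposition f = g - h is exploited computationally, here is the textbook DCA iteration on the real line: replace the concave part -h by its linearization at the current point and minimize the remaining convex function exactly. This is a generic DC scheme for illustration only, not the codifferential method of the paper above; the example function and helper names are assumptions.

```python
def dca(g_argmin_linear, h_subgrad, x0, iters=100, tol=1e-10):
    """Generic DCA iteration for f = g - h on the real line:
    at each step take s in the subdifferential of h at x_k and
    set x_{k+1} = argmin_x g(x) - s*x (a convex subproblem)."""
    x = x0
    for _ in range(iters):
        s = h_subgrad(x)            # subgradient of the subtracted convex part
        x_new = g_argmin_linear(s)  # exact minimizer of g(x) - s*x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = x**2 - |x|, so g(x) = x**2 and h(x) = |x|.
# argmin_x (x**2 - s*x) is x = s/2, and sign(x) is a subgradient of |x|.
xstar = dca(lambda s: s / 2.0,
            lambda x: 1.0 if x > 0 else (-1.0 if x < 0 else 0.0),
            x0=1.0)
# xstar is a local minimizer of f; starting from x0 = 1 it is 0.5.
```

Note that DCA only guarantees a critical point of the DC function; which minimizer is reached depends on the starting point, as the two symmetric minimizers of x² - |x| show.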

Queueing programming models in telecommunication network maintenance

- Ugon, Julien, Jia, Long, Ouveysi, Iradj

**Authors:** Ugon, Julien; Jia, Long; Ouveysi, Iradj
**Date:** 2003
**Type:** Text; Conference paper
**Relation:** Paper presented at the Symposium on Industrial Optimisation and the 9th Australian Optimisation Day, Perth, 30 September 2002
**Full Text:** false

A feature selection approach for unsupervised classification based on clustering

- Rubinov, Alex, Soukhoroukova, Nadejda, Ugon, Julien

**Authors:** Rubinov, Alex; Soukhoroukova, Nadejda; Ugon, Julien
**Date:** 2004
**Type:** Text; Conference paper
**Relation:** Paper presented at the Sixth International Conference on Optimization: Techniques and Applications (ICOTA), University of Ballarat, Ballarat, Victoria, 9-11 December 2004
**Full Text:** false
**Description:** Data have been collected for many years in different scientific (industrial, medical) research groups. Very often these groups kept all the data they could collect. It is possible that the data contain a lot of noisy features which do not bring any information, but make the problem more complicated. The additional study of eliminating non-informative and selecting informative features is very important in the area of Data Mining. There are several feature selection methods which were developed for supervised classification. The area of feature selection for unsupervised classification is not as well developed. In this paper we present a new feature selection approach for unsupervised classification, based on clustering and nonsmooth optimisation techniques.

Global optimality conditions and optimization methods for polynomial programming problems

- Wu, Zhiyou, Tian, Jing, Ugon, Julien

**Authors:** Wu, Zhiyou; Tian, Jing; Ugon, Julien
**Date:** 2015
**Type:** Text; Journal article
**Relation:** Journal of Global Optimization Vol. 62, no. 4 (2015), p. 617-641
**Full Text:** false
**Description:** This paper is concerned with the general polynomial programming problem with box constraints, including global optimality conditions and optimization methods. First, a necessary global optimality condition for a general polynomial programming problem with box constraints is given. Then we design a local optimization method by using the necessary global optimality condition to obtain some strongly or ε-strongly local minimizers which substantially improve some KKT points. Finally, a global optimization method, combining the new local optimization method with an auxiliary function, is designed. Numerical examples show that our methods are efficient and stable.

A generalized subgradient method with piecewise linear subproblem

- Bagirov, Adil, Ganjehlou, Asef Nazari, Tor, Hakan, Ugon, Julien

**Authors:** Bagirov, Adil; Ganjehlou, Asef Nazari; Tor, Hakan; Ugon, Julien
**Date:** 2010
**Type:** Text; Journal article
**Relation:** Dynamics of Continuous, Discrete and Impulsive Systems Series B: Applications and Algorithms Vol. 17, no. 5 (2010), p. 621-638
**Full Text:** false
**Description:** In this paper, a new version of the quasisecant method for nonsmooth nonconvex optimization is developed. Quasisecants are overestimates to the objective function in some neighborhood of a given point. Subgradients are used to obtain quasisecants. We describe classes of nonsmooth functions where quasisecants can be computed explicitly. We show that a descent direction with sufficient decrease must satisfy a set of linear inequalities. In the proposed algorithm this set of linear inequalities is solved by applying the subgradient algorithm to minimize a piecewise linear function. We compare results of numerical experiments between the proposed algorithm and the subgradient method. Copyright © 2010 Watam Press.

Classification through incremental max-min separability

- Bagirov, Adil, Ugon, Julien, Webb, Dean, Karasozen, Bulent

**Authors:** Bagirov, Adil; Ugon, Julien; Webb, Dean; Karasozen, Bulent
**Date:** 2011
**Type:** Text; Journal article
**Relation:** Pattern Analysis and Applications Vol. 14, no. 2 (2011), p. 165-174
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Description:** Piecewise linear functions can be used to approximate non-linear decision boundaries between pattern classes. Piecewise linear boundaries are known to provide efficient real-time classifiers. However, they require a long training time. Finding piecewise linear boundaries between sets is a difficult optimization problem. Most approaches use heuristics to avoid solving this problem, which may lead to suboptimal piecewise linear boundaries. In this paper, we propose an algorithm for globally training hyperplanes using an incremental approach. Such an approach allows one to find a near global minimizer of the classification error function and to compute as few hyperplanes as needed for separating sets. We apply this algorithm to solving supervised data classification problems and report the results of numerical experiments on real-world data sets. These results demonstrate that the new algorithm requires a reasonable training time and its test set accuracy is consistently good on most data sets compared with mainstream classifiers. © 2010 Springer-Verlag London Limited.
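The max-min separability underlying this line of work uses a decision function of the form phi(x) = max over groups of the min over that group's hyperplanes of (w.x + b), with the sign of phi giving the class. A minimal sketch of evaluating such a classifier (the hyperplane layout in the test is illustrative; training those hyperplanes is the hard optimization problem the papers address):

```python
def maxmin_classify(x, groups):
    """Evaluate a max-min piecewise linear separating function:
    phi(x) = max_i min_j (w_ij . x + b_ij), where `groups` is a
    list of groups, each a list of (w, b) hyperplanes; classify
    by the sign of phi."""
    phi = max(min(sum(wi * xi for wi, xi in zip(w, x)) + b for w, b in grp)
              for grp in groups)
    return 1 if phi > 0 else -1
```

The min inside each group carves out an intersection of half-spaces (a polyhedral piece), and the outer max unions those pieces, which is why max-min functions can represent arbitrarily complex piecewise linear boundaries.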

- Crouzeix, Jean-Pierre, Sukhorukova, Nadezda, Ugon, Julien

**Authors:** Crouzeix, Jean-Pierre; Sukhorukova, Nadezda; Ugon, Julien
**Date:** 2020
**Type:** Text; Journal article
**Relation:** Set-Valued and Variational Analysis Vol. 28, no. 1 (2020), p. 123-147
**Relation:** http://purl.org/au-research/grants/arc/DP180100602
**Full Text:** false
**Description:** One of the purposes of this paper is to provide a better understanding of the alternance property which occurs in Chebyshev polynomial approximation and continuous piecewise polynomial approximation problems. In the first part of this paper, we prove that alternating sequences of any continuous function are finite in any given segment and then propose an original approach to obtain new proofs of the well-known necessary and sufficient optimality conditions. There are two main advantages of this approach. First of all, the proofs are intuitive and easy to understand. Second, these proofs are constructive and therefore they lead to new alternation-based algorithms. In the second part of this paper, we develop new local optimality conditions for free knot polynomial spline approximation. The proofs for free knot approximation rely on the techniques developed in the first part of this paper. The piecewise polynomials are required to be continuous on the approximation segment. © 2020, Springer Nature B.V.
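The alternance property referenced in several abstracts above can be checked numerically: scan the residual of a candidate approximation along the segment and count maximal-deviation points whose signs alternate. By the classical Chebyshev criterion, a best degree-n polynomial approximation of a continuous function exhibits at least n + 2 such points. The sketch below is a simple greedy counter over sampled residuals, offered as an illustration of the criterion rather than any algorithm from these papers.

```python
def alternance_count(residuals, tol=1e-9):
    """Greedy count of maximal-deviation points with alternating
    signs in a sampled residual sequence: a point counts if its
    absolute value is (numerically) maximal and its sign differs
    from the last counted point's sign."""
    m = max(abs(r) for r in residuals)
    count, last_sign = 0, 0
    for r in residuals:
        if abs(r) >= m - tol:       # a maximal-deviation point
            s = 1 if r > 0 else -1
            if s != last_sign:
                count += 1
                last_sign = s
    return count
```

For example, the residual of the best linear approximation to a convex function equioscillates three times, so a dense sampling of it should yield a count of at least 3.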

A novel piecewise linear classifier based on polyhedral conic and max-min separabilities

- Bagirov, Adil, Ugon, Julien, Webb, Dean, Ozturk, Gurkan, Kasimbeyli, Refail

**Authors:** Bagirov, Adil; Ugon, Julien; Webb, Dean; Ozturk, Gurkan; Kasimbeyli, Refail
**Date:** 2011
**Type:** Text; Journal article
**Relation:** TOP Vol. 21, no. 1 (2011), p. 1-22
**Full Text:** false
**Description:** In this paper, an algorithm for finding piecewise linear boundaries between pattern classes is developed. This algorithm consists of two main stages. In the first stage, a polyhedral conic set is used to identify data points which lie inside their classes, and in the second stage we exclude those points to compute a piecewise linear boundary using the remaining data points. Piecewise linear boundaries are computed incrementally, starting with one hyperplane. Such an approach allows one to significantly reduce the computational effort in many large data sets. Results of numerical experiments are reported. These results demonstrate that the new algorithm consistently produces a good test set accuracy on most data sets compared with a number of other mainstream classifiers. © 2011 Sociedad de Estadística e Investigación Operativa.

Nonsmooth nonconvex optimization approach to clusterwise linear regression problems

- Bagirov, Adil, Ugon, Julien, Mirzayeva, Hijran

**Authors:** Bagirov, Adil; Ugon, Julien; Mirzayeva, Hijran
**Date:** 2013
**Type:** Text; Journal article
**Relation:** European Journal of Operational Research Vol. 229, no. 1 (2013), p. 132-142
**Full Text:** false
**Description:** Clusterwise regression consists of finding a number of regression functions each approximating a subset of the data. In this paper, a new approach for solving the clusterwise linear regression problems is proposed based on a nonsmooth nonconvex formulation. We present an algorithm for minimizing this nonsmooth nonconvex function. This algorithm incrementally divides the whole data set into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate a good starting point for solving global optimization problems at each iteration of the incremental algorithm. Such an approach allows one to find a global or near global solution to the problem when the data sets are sufficiently dense. The algorithm is compared with the multistart Späth algorithm on several publicly available data sets for regression analysis. © 2013 Elsevier B.V. All rights reserved.

**Authors:** Bagirov, Adil; Ugon, Julien
**Date:** 2018
**Type:** Text; Journal article
**Relation:** Optimization Methods and Software Vol. 33, no. 1 (2018), p. 194-219
**Full Text:** false
**Description:** The clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem using the squared regression error function. The objective function in this problem is represented as a difference of convex functions. Optimality conditions are derived, and an algorithm is designed based on such a representation. An incremental approach is proposed to generate starting solutions. The algorithm is tested on small to large data sets.
