Your selections:

Ugon, Julien (14)
Rubinov, Alex (11)
Yearwood, John (11)
Karmitsa, Napsu (7)
Barton, Andrew (6)
Taheri, Sona (6)
Ghosh, Moumita (5)
Mala-Jetmarova, Helena (5)
Ghosh, Ranadhir (4)
Mohebi, Ehsan (4)
Ozturk, Gurkan (4)
Webb, Dean (4)
Beliakov, Gleb (3)
Clausen, Conny (3)
Ganjehlou, Asef Nazari (3)
Karasozen, Bulent (3)
Kasimbeyli, Refail (3)
Kohler, Michael (3)
Makela, Marko (3)


0103 Numerical and Computational Mathematics (32)
Nonsmooth optimization (30)
0102 Applied Mathematics (27)
0802 Computation Theory and Mathematics (18)
Nonconvex optimization (13)
0801 Artificial Intelligence and Image Processing (12)
Cluster analysis (11)
Algorithms (10)
Subdifferential (10)
Optimisation (9)
Data mining (8)
Global optimization (8)
Classification (7)
0906 Electrical and Electronic Engineering (6)
Derivative-free optimization (6)
Discrete gradient method (6)
Regression analysis (6)
0806 Information Systems (5)
1702 Cognitive Science (5)
Bundle methods (5)


An efficient algorithm for the incremental construction of a piecewise linear classifier

- Bagirov, Adil, Ugon, Julien, Webb, Dean

**Authors:**Bagirov, Adil , Ugon, Julien , Webb, Dean**Date:**2011**Type:**Text , Journal article**Relation:**Information Systems Vol. 36, no. 4 (2011), p. 782-790**Relation:**http://purl.org/au-research/grants/arc/DP0666061**Full Text:**false**Reviewed:****Description:**In this paper the problem of finding piecewise linear boundaries between sets is considered and is applied for solving supervised data classification problems. An algorithm for the computation of piecewise linear boundaries, consisting of two main steps, is proposed. In the first step sets are approximated by hyperboxes to find so-called "indeterminate" regions between sets. In the second step sets are separated inside these "indeterminate" regions by piecewise linear functions. These functions are computed incrementally starting with a linear function. Results of numerical experiments are reported. These results demonstrate that the new algorithm requires a reasonable training time and it produces consistently good test set accuracy on most data sets compared with mainstream classifiers. © 2010 Elsevier B.V. All rights reserved.
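The hyperbox step in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the two-class toy data are hypothetical. Each class is approximated by its axis-aligned bounding box, and points falling inside more than one box form the "indeterminate" region where piecewise linear separation would then be applied.

```python
import numpy as np

def hyperboxes(X, y):
    """Axis-aligned bounding box (min corner, max corner) for each class label."""
    return {c: (X[y == c].min(axis=0), X[y == c].max(axis=0)) for c in np.unique(y)}

def indeterminate_mask(X, boxes):
    """True for points lying inside more than one class's hyperbox."""
    inside = np.stack([np.all((X >= lo) & (X <= hi), axis=1)
                       for lo, hi in boxes.values()])
    return inside.sum(axis=0) > 1

# Toy data: class-0 and class-1 boxes overlap on the square [1,1]-[2,2].
X = np.array([[0.0, 0.0], [2.0, 2.0], [1.0, 1.0], [3.0, 3.0], [1.5, 1.5]])
y = np.array([0, 0, 1, 1, 0])
mask = indeterminate_mask(X, hyperboxes(X, y))
```

Points outside the overlap are already separated by the boxes; only the masked points need the (more expensive) piecewise linear stage.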

Classification through incremental max-min separability

- Bagirov, Adil, Ugon, Julien, Webb, Dean, Karasozen, Bulent

**Authors:**Bagirov, Adil , Ugon, Julien , Webb, Dean , Karasozen, Bulent**Date:**2011**Type:**Text , Journal article**Relation:**Pattern Analysis and Applications Vol. 14, no. 2 (2011), p. 165-174**Relation:**http://purl.org/au-research/grants/arc/DP0666061**Full Text:**false**Reviewed:****Description:**Piecewise linear functions can be used to approximate non-linear decision boundaries between pattern classes. Piecewise linear boundaries are known to provide efficient real-time classifiers. However, they require a long training time. Finding piecewise linear boundaries between sets is a difficult optimization problem. Most approaches use heuristics to avoid solving this problem, which may lead to suboptimal piecewise linear boundaries. In this paper, we propose an algorithm for globally training hyperplanes using an incremental approach. Such an approach allows one to find a near global minimizer of the classification error function and to compute as few hyperplanes as needed for separating sets. We apply this algorithm for solving supervised data classification problems and report the results of numerical experiments on real-world data sets. These results demonstrate that the new algorithm requires a reasonable training time and its test set accuracy is consistently good on most data sets compared with mainstream classifiers. © 2010 Springer-Verlag London Limited.
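A max-min separable boundary of the kind trained in the paper above can be evaluated as below. This is only a sketch of the function class (the weights shown are hypothetical, not learned): the classifier is f(x) = max over groups of the min over that group's hyperplanes, and the sign of f decides the class.

```python
import numpy as np

def maxmin(x, W, b):
    """Evaluate f(x) = max_i min_j (w_ij . x + b_ij).
    W and b are lists of groups; each group is a list of hyperplanes."""
    return max(min(float(w @ x + c) for w, c in zip(Wi, bi))
               for Wi, bi in zip(W, b))

# Hypothetical boundary: f(x) = max(x1 - 1, min(x2 - 1, -x1 + 3))
W = [[np.array([1.0, 0.0])],
     [np.array([0.0, 1.0]), np.array([-1.0, 0.0])]]
b = [[-1.0], [-1.0, 3.0]]
side = maxmin(np.array([2.0, 0.0]), W, b)  # positive: point on the "+" side
```

The incremental training in the paper adds hyperplanes to this structure one at a time, which the sketch does not attempt.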

Codifferential method for minimizing nonsmooth DC functions

**Authors:**Bagirov, Adil , Ugon, Julien**Date:**2011**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol. 50, no. 1 (2011), p. 3-22**Relation:**http://purl.org/au-research/grants/arc/DP0666061**Full Text:**false**Reviewed:****Description:**In this paper, a new algorithm to locally minimize nonsmooth functions represented as a difference of two convex functions (DC functions) is proposed. The algorithm is based on the concept of codifferential. It is assumed that DC decomposition of the objective function is known a priori. We develop an algorithm to compute descent directions using a few elements from codifferential. The convergence of the minimization algorithm is studied and its comparison with different versions of the bundle methods using results of numerical experiments is given. © 2010 Springer Science+Business Media, LLC.
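For readers unfamiliar with DC programming, the linearize-then-minimize pattern that codifferential-based methods refine can be seen in the standard DC algorithm (DCA); the sketch below is that simpler method on a toy function, not the paper's codifferential scheme. Here f(x) = x² − |x| with the decomposition g = x², h = |x| assumed known a priori, as in the paper's setting.

```python
def dca(x, iters=50):
    """Standard DCA on f(x) = x**2 - abs(x): linearize h at the current
    point via a subgradient s, then exactly minimize g(x) - s*x.
    (A toy stand-in for the codifferential method, for illustration only.)"""
    for _ in range(iters):
        s = 1.0 if x >= 0 else -1.0   # subgradient of h = |x| at x
        x = s / 2.0                   # argmin of x**2 - s*x
    return x
```

Starting from either side, the iterates settle at ±1/2, which are the local minimizers of x² − |x| (where 2x − sign(x) = 0).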

Fast modified global k-means algorithm for incremental cluster construction

- Bagirov, Adil, Ugon, Julien, Webb, Dean

**Authors:**Bagirov, Adil , Ugon, Julien , Webb, Dean**Date:**2011**Type:**Text , Journal article**Relation:**Pattern Recognition Vol. 44, no. 4 (2011), p. 866-876**Relation:**http://purl.org/au-research/grants/arc/DP0666061**Full Text:**false**Reviewed:****Description:**The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and are inefficient for solving clustering problems in large datasets. Recently, incremental approaches have been developed to resolve difficulties with the choice of starting points. The global k-means and the modified global k-means algorithms are based on such an approach. They iteratively add one cluster center at a time. Numerical experiments show that these algorithms considerably improve the k-means algorithm. However, they require storing the whole affinity matrix or computing this matrix at each iteration. This makes both algorithms time consuming and memory demanding for clustering even moderately large datasets. In this paper, a new version of the modified global k-means algorithm is proposed. We introduce an auxiliary cluster function to generate a set of starting points lying in different parts of the dataset. We exploit information gathered in previous iterations of the incremental algorithm to eliminate the need of computing or storing the whole affinity matrix and thereby to reduce computational effort and memory usage. Results of numerical experiments on six standard datasets demonstrate that the new algorithm is more efficient than the global and the modified global k-means algorithms. © 2010 Elsevier Ltd. All rights reserved.
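The auxiliary-cluster-function idea behind this family of incremental algorithms can be sketched as follows. This is a simplified illustration with hypothetical names: given the current centers, a starting point for the next center is a data point y minimizing f(y) = Σᵢ min(dᵢ, ‖y − aᵢ‖²), where dᵢ is the squared distance of point aᵢ to its nearest existing center. (The paper's contribution, reusing information across iterations to avoid the full affinity matrix, is not reproduced here.)

```python
import numpy as np

def next_center_start(X, centers):
    """Data point minimizing the auxiliary cluster function
    f(y) = sum_i min(d_i, ||y - a_i||^2), with candidates restricted
    to the data points themselves (a simplification)."""
    d = np.min(((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1),
               axis=1)                                  # d_i to nearest center
    vals = [np.minimum(d, ((X - y) ** 2).sum(-1)).sum() for y in X]
    return X[int(np.argmin(vals))]

# Two 1-D clusters; one center already placed in the left cluster.
X = np.array([[0.0], [1.0], [10.0], [11.0]])
start = next_center_start(X, [np.array([0.5])])  # lands in the far cluster
```

Placing the new center where the auxiliary function is small steers it toward regions poorly covered by the existing centers, which is what makes the incremental scheme robust to initialization.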

A generalized subgradient method with piecewise linear subproblem

- Bagirov, Adil, Ganjehlou, Asef Nazari, Tor, Hakan, Ugon, Julien

**Authors:**Bagirov, Adil , Ganjehlou, Asef Nazari , Tor, Hakan , Ugon, Julien**Date:**2010**Type:**Text , Journal article**Relation:**Dynamics of Continuous, Discrete and Impulsive Systems Series B: Applications and Algorithms Vol. 17, no. 5 (2010), p. 621-638**Full Text:**false**Reviewed:****Description:**In this paper, a new version of the quasisecant method for nonsmooth nonconvex optimization is developed. Quasisecants are overestimates to the objective function in some neighborhood of a given point. Subgradients are used to obtain quasisecants. We describe classes of nonsmooth functions where quasisecants can be computed explicitly. We show that a descent direction with sufficient decrease must satisfy a set of linear inequalities. In the proposed algorithm this set of linear inequalities is solved by applying the subgradient algorithm to minimize a piecewise linear function. We compare results of numerical experiments between the proposed algorithm and the subgradient method. Copyright © 2010 Watam Press.

A quasisecant method for minimizing nonsmooth functions

- Bagirov, Adil, Ganjehlou, Asef Nazari

**Authors:**Bagirov, Adil , Ganjehlou, Asef Nazari**Date:**2010**Type:**Text , Journal article**Relation:**Optimization Methods and Software Vol. 25, no. 1 (2010), p. 3-18**Relation:**http://purl.org/au-research/grants/arc/DP0666061**Full Text:**false**Reviewed:****Description:**We present an algorithm to locally minimize nonsmooth, nonconvex functions. In order to find descent directions, the notion of quasisecants, introduced in this paper, is applied. We prove that the algorithm converges to Clarke stationary points. Numerical results are presented demonstrating the applicability of the proposed algorithm to a wide variety of nonsmooth, nonconvex optimization problems. We also compare the proposed algorithm with the bundle method using numerical results.

Alexander Rubinov - An outstanding scholar

**Authors:**Bagirov, Adil**Date:**2010**Type:**Text , Journal article**Relation:**Pacific Journal of Optimization Vol. 6, no. 2, Suppl. 1 (2010), p. 203-209**Full Text:**false

An L-2-Boosting Algorithm for Estimation of a Regression Function

- Bagirov, Adil, Clausen, Conny, Kohler, Michael

**Authors:**Bagirov, Adil , Clausen, Conny , Kohler, Michael**Date:**2010**Type:**Text , Journal article**Relation:**IEEE Transactions on Information Theory Vol. 56, no. 3 (2010), p. 1417-1429**Full Text:****Reviewed:****Description:**An L-2-boosting algorithm for estimation of a regression function from random design is presented, which consists of fitting repeatedly a function from a fixed nonlinear function space to the residuals of the data by least squares and by defining the estimate as a linear combination of the resulting least squares estimates. Splitting of the sample is used to decide after how many iterations of smoothing of the residuals the algorithm terminates. The rate of convergence of the algorithm is analyzed in case of an unbounded response variable. The method is used to fit a sum of maxima of minima of linear functions to a given data set, and is compared with other nonparametric regression estimates using simulated data.
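The generic L2-boosting loop described in this abstract (fit a learner to the residuals by least squares, accumulate a shrunken combination) can be sketched as below. Assumptions to note: the paper's weak learner is a maximum of minima of linear functions and the stopping iteration is chosen by sample splitting; here a plain linear least-squares learner stands in and the step count is fixed, so this only conveys the boosting structure.

```python
import numpy as np

def l2_boost(X, y, fit, steps=10, nu=0.5):
    """Generic L2 boosting: repeatedly least-squares fit a learner to the
    current residuals and return the shrunken sum of the fitted learners."""
    learners, r = [], y.astype(float).copy()
    for _ in range(steps):
        f = fit(X, r)             # weak learner fitted to residuals
        learners.append(f)
        r = r - nu * f(X)         # shrink and update residuals
    return lambda Z: nu * sum(f(Z) for f in learners)

def linear_fit(X, r):
    """Stand-in weak learner: affine least-squares fit."""
    A = np.c_[X, np.ones(len(X))]
    w = np.linalg.lstsq(A, r, rcond=None)[0]
    return lambda Z: np.c_[Z, np.ones(len(Z))] @ w

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1.0, 3.0, 5.0])
model = l2_boost(X, y, linear_fit)
```

With an exactly realizable target, each pass removes a ν-fraction of the remaining residual, so the training error decays geometrically in the number of steps.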

Application of optimisation-based data mining techniques to tobacco control dataset

- Dzalilov, Zari, Zhang, J, Bagirov, Adil, Mammadov, Musa

**Authors:**Dzalilov, Zari , Zhang, J , Bagirov, Adil , Mammadov, Musa**Date:**2010**Type:**Text , Journal article**Relation:**International Journal of Lean Thinking Vol. 1, no. 1 (2010), p. 27-41**Full Text:**false**Reviewed:****Description:**Tobacco smoking is one of the leading causes of death around the world. Consequently, control of tobacco use is an important global public health issue. Tobacco control may be aided by development of theoretical and methodological frameworks for describing and understanding complex tobacco control systems. Linear regression and logistic regression are currently very popular statistical techniques for modeling and analyzing complex data in tobacco control systems. However, in tobacco markets, numerous interrelated factors nontrivially interact with tobacco control policies, such that policies and control outcomes are nonlinearly related.

Cluster analysis of a tobacco control data set

- Dzalilov, Zari, Bagirov, Adil

**Authors:**Dzalilov, Zari , Bagirov, Adil**Date:**2010**Type:**Text , Journal article**Relation:**International Journal of Lean Thinking Vol. 1, no. 2 (2010), p.**Full Text:**false**Reviewed:****Description:**Development of theoretical and methodological frameworks in data analysis is fundamental for modeling complex tobacco control systems. Following this idea, a new optimization-based approach was introduced through two distinct methods: a modified linear least-squares fit and a heuristic algorithm for feature selection. Optimization-based methods have the potential to detect nonlinearity and therefore to be more effective analysis tools for complex data sets. In this study we evaluate the modified global k-means clustering algorithm by applying it to a massive set of real-time tobacco control survey data. Cluster analysis identified fixed and stable clusters in the studied data. These clusters correspond to groups of smokers with similar behaviour, and their identification may allow us to give recommendations on modifying existing tobacco control systems and on the design of future data acquisition surveys.

A multidimensional descent method for global optimization

- Bagirov, Adil, Rubinov, Alex, Zhang, Jiapu

**Authors:**Bagirov, Adil , Rubinov, Alex , Zhang, Jiapu**Date:**2009**Type:**Text , Journal article**Relation:**Optimization Vol. 58, no. 5 (2009), p. 611-625**Full Text:**false**Reviewed:****Description:**This article presents a new multidimensional descent method for solving global optimization problems with box-constraints. This is a hybrid method in which a local search method is used for local descent and a global search is used for further multidimensional search on the subsets of intersection of cones generated by the local search method and the feasible region. The discrete gradient method is used for local search and the cutting angle method is used for global search. Two- and three-dimensional cones are used for the global search. Such an approach allows one, as a rule, to escape local minimizers which are not global ones. The proposed method is a local optimization method with strong global search properties. We present results of numerical experiments using both smooth and non-smooth global optimization test problems. These results demonstrate that the proposed algorithm allows one to find a global or a near global minimizer.

Comments on : Optimization and data mining in medicine

**Authors:**Bagirov, Adil**Date:**2009**Type:**Text , Journal article**Relation:**Top Vol. 17, no. 2 (2009), p. 1-3**Full Text:**false**Reviewed:**

Estimation of a regression function by maxima of minima of linear functions

- Bagirov, Adil, Clausen, Conny, Kohler, Michael

**Authors:**Bagirov, Adil , Clausen, Conny , Kohler, Michael**Date:**2009**Type:**Text , Journal article**Relation:**IEEE Transactions on Information Theory Vol. 55, no. 2 (2009), p. 833-845**Full Text:****Reviewed:****Description:**In this paper, estimation of a regression function from independent and identically distributed random variables is considered. Estimates are defined by minimization of the empirical L2 risk over a class of functions, which are defined as maxima of minima of linear functions. Results concerning the rate of convergence of the estimates are derived. In particular, it is shown that for smooth regression functions satisfying the assumption of single index models, the estimate is able to achieve (up to some logarithmic factor) the corresponding optimal one-dimensional rate of convergence. Hence, under these assumptions, the estimate is able to circumvent the so-called curse of dimensionality. The small sample behavior of the estimates is illustrated by applying them to simulated data. © 2009 IEEE.

Optimization methods and the k-committees algorithm for clustering of sequence data

- Yearwood, John, Bagirov, Adil, Kelarev, Andrei

**Authors:**Yearwood, John , Bagirov, Adil , Kelarev, Andrei**Date:**2009**Type:**Text , Journal article**Relation:**Applied and Computational Mathematics Vol. 8, no. 1 (2009), p. 92-101**Relation:**http://purl.org/au-research/grants/arc/DP0211866**Relation:**http://purl.org/au-research/grants/arc/DP0666061**Full Text:**false**Description:**The present paper is devoted to new algorithms for unsupervised clustering based on the optimization approaches due to [2], [3] and [4]. We consider a novel situation, where the datasets consist of nucleotide or protein sequences and rather sophisticated biologically significant alignment scores have to be used as a measure of distance. Sequences of this kind cannot be regarded as points in a finite dimensional space. Besides, the alignment scores do not satisfy properties of Minkowski metrics. Nevertheless the optimization approaches have made it possible to introduce a new k-committees algorithm and compare its performance with previous algorithms for two datasets. Our experimental results show that the k-committees algorithm achieves intermediate accuracy for a dataset of ITS sequences, and it can perform better than the discrete k-means and Nearest Neighbour algorithms for certain datasets. All three algorithms achieve good agreement with clusters previously published in the biological literature and can be used to obtain biologically significant clusterings.

- Bagirov, Adil, Beliakov, Gleb

**Authors:**Bagirov, Adil , Beliakov, Gleb**Date:**2009**Type:**Text , Journal article**Relation:**Optimization Vol. 58, no. 5 (2009), p. 479-481**Full Text:**false**Reviewed:**

An algorithm for the estimation of a regression function by continuous piecewise linear functions

- Bagirov, Adil, Clausen, Conny, Kohler, Michael

**Authors:**Bagirov, Adil , Clausen, Conny , Kohler, Michael**Date:**2008**Type:**Text , Journal article**Relation:**Computational Optimization and Applications Vol. 45, no. (2008), p. 159-179**Relation:**http://purl.org/au-research/grants/arc/DP0666061**Full Text:****Reviewed:****Description:**The problem of the estimation of a regression function by continuous piecewise linear functions is formulated as a nonconvex, nonsmooth optimization problem. Estimates are defined by minimization of the empirical L2 risk over a class of functions, which are defined as maxima of minima of linear functions. An algorithm for finding continuous piecewise linear functions is presented. We observe that the objective function in the optimization problem is semismooth, quasidifferentiable and piecewise partially separable. The use of these properties allows us to design an efficient algorithm for approximation of subgradients of the objective function and to apply the discrete gradient method for its minimization. We present computational results with some simulated data and compare the new estimator with a number of existing ones. © 2008 Springer Science+Business Media, LLC.

An approximate subgradient algorithm for unconstrained nonsmooth, nonconvex optimization

- Bagirov, Adil, Ganjehlou, Asef Nazari

**Authors:**Bagirov, Adil , Ganjehlou, Asef Nazari**Date:**2008**Type:**Text , Journal article**Relation:**Mathematical Methods of Operations Research Vol. 67, no. 2 (2008), p. 187-206**Relation:**http://purl.org/au-research/grants/arc/DP0666061**Full Text:****Reviewed:****Description:**In this paper a new algorithm for minimizing locally Lipschitz functions is developed. Descent directions in this algorithm are computed by solving a system of linear inequalities. The convergence of the algorithm is proved for quasidifferentiable semismooth functions. We present the results of numerical experiments with both regular and nonregular objective functions. We also compare the proposed algorithm with two different versions of the subgradient method using the results of numerical experiments. These results demonstrate the superiority of the proposed algorithm over the subgradient method. © 2007 Springer-Verlag.

Discrete gradient method : Derivative-free method for nonsmooth optimization

- Bagirov, Adil, Karasozen, Bulent, Sezer, Monsalve

**Authors:**Bagirov, Adil , Karasozen, Bulent , Sezer, Monsalve**Date:**2008**Type:**Text , Journal article**Relation:**Journal of Optimization Theory and Applications Vol. 137, no. 2 (2008), p. 317-334**Relation:**http://purl.org/au-research/grants/arc/DP0666061**Full Text:****Reviewed:****Description:**A new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that the discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that the discrete gradients can be applied to find descent directions of nonsmooth functions. The preliminary results of numerical experiments with unconstrained nonsmooth optimization problems as well as the comparison of the proposed method with the nonsmooth optimization solver DNLP from CONOPT-GAMS and the derivative-free optimization solver CONDOR are presented. © 2007 Springer Science+Business Media, LLC.
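The derivative-free flavor of the method above can be conveyed by a finite-difference subgradient estimate. This is a much-simplified stand-in: the actual discrete gradient perturbs along chosen directions and reconstructs one component so that a descent property is guaranteed, which the coordinate-wise sketch below does not do.

```python
import numpy as np

def fd_subgradient(f, x, h=1e-3):
    """Forward-difference estimate of a (sub)gradient of f at x,
    using only function values (no analytic derivatives)."""
    x = np.asarray(x, dtype=float)
    v = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        v[i] = (f(x + e) - f(x)) / h
    return v

# Nonsmooth example: f(z) = |z1| + |z2| away from the kinks.
g = fd_subgradient(lambda z: abs(z[0]) + abs(z[1]), [1.0, -1.0])
```

Away from the kinks this recovers the gradient; the value of discrete-gradient constructions is that they remain usable near and at the nonsmooth points, where plain finite differences can mislead a line search.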

Modified global k-means algorithm for minimum sum-of-squares clustering problems

**Authors:**Bagirov, Adil**Date:**2008**Type:**Text , Journal article**Relation:**Pattern Recognition Vol. 41, no. 10 (2008), p. 3192-3199**Relation:**http://purl.org/au-research/grants/arc/DP0666061**Full Text:****Reviewed:****Description:**The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and inefficient for solving clustering problems in large data sets. Recently, a new version of the k-means algorithm, the global k-means algorithm, has been developed. It is an incremental algorithm that dynamically adds one cluster center at a time and uses each data point as a candidate for the k-th cluster center. Results of numerical experiments show that the global k-means algorithm considerably outperforms the k-means algorithm. In this paper, a new version of the global k-means algorithm is proposed. A starting point for the k-th cluster center in this algorithm is computed by minimizing an auxiliary cluster function. Results of numerical experiments on 14 data sets demonstrate the superiority of the new algorithm; however, it requires more computational time than the global k-means algorithm. © 2008 Elsevier Ltd. All rights reserved.

Integrated production system optimization using global optimization techniques

- Mason, T. L., Emelle, C., Van Berkel, J., Bagirov, Adil, Kampas, F., Pinter, J. D.

**Authors:**Mason, T. L. , Emelle, C. , Van Berkel, J. , Bagirov, Adil , Kampas, F. , Pinter, J. D.**Date:**2007**Type:**Text , Journal article**Relation:**Journal of Industrial and Management Optimization Vol. 3, no. 2 (May 2007), p. 257-277**Full Text:****Reviewed:****Description:**Many optimization problems related to integrated oil and gas production systems are nonconvex and multimodal. Additionally, apart from the innate nonsmoothness of many optimization problems, nonsmooth functions such as minimum and maximum functions may be used to model flow/pressure controllers and cascade mass in the gas gathering and blending networks. In this paper we study the application of different versions of the derivative free Discrete Gradient Method (DGM) as well as the Lipschitz Global Optimizer (LGO) suite to production optimization in integrated oil and gas production systems and their comparison with various local and global solvers used with the General Algebraic Modeling System (GAMS). Four nonconvex and nonsmooth test cases were constructed from a small but realistic integrated gas production system optimization problem. The derivation of the system of equations for the various test cases is also presented. Results demonstrate that DGM is especially effective for solving nonsmooth optimization problems and its two versions are capable global optimization algorithms. We also demonstrate that LGO solves successfully the presented test (as well as other related real-world) problems.
