Your selections:

Bagirov, Adil (12)
Dao, Minh (6)
Usman, Muhammad (4)
Bauschke, Heinz (3)
Miller, Mirka (3)
Mohebi, Ehsan (3)
Murshed, Manzur (3)
Rubinov, Alex (3)
Yearwood, John (3)
Al Nuaimat, Alia (2)
Alazab, Mamoun (2)
Ali, Mortuza (2)
Bokhari, Syed (2)
Chen, Jinjun (2)
Gao, David (2)
He, Xiangjian (2)
Jan, Mian Ahmad (2)
Layton, Robert (2)
Lin, Yuqing (2)
Mammadov, Musa (2)


Optimization (9)
0103 Numerical and Computational Mathematics (6)
Global optimization (6)
0102 Applied Mathematics (5)
4901 Applied mathematics (5)
Mathematics (5)
Problem solving (5)
Artificial intelligence (4)
Mathematics and Statistics (4)
Operations Research/Decision Theory (4)
0801 Artificial Intelligence and Image Processing (3)
4903 Numerical and computational mathematics (3)
Clustering algorithms (3)
Clustering problems (3)
Computer simulation (3)
Datasets (3)
Iterative methods (3)
Learning algorithms (3)
Learning systems (3)



On augmented lagrangians for optimization problems with a single constraint

- Gasimov, Rafail, Rubinov, Alex

**Authors:** Gasimov, Rafail; Rubinov, Alex
**Date:** 2004
**Type:** Text; Journal article
**Relation:** Journal of Global Optimization Vol. 28, no. 2 (2004), p. 153-173
**Full Text:** false
**Description:** We examine augmented Lagrangians for optimization problems with a single (either inequality or equality) constraint. We establish some links between augmented Lagrangians and Lagrange-type functions, and propose a new kind of Lagrange-type function for a problem with a single inequality constraint. Finally, we discuss a supergradient algorithm for calculating optimal values of dual problems corresponding to some class of augmented Lagrangians.
**Description:** C1
**Description:** 2003000929

All (k;g)-cages are k-edge-connected

- Lin, Yuqing, Miller, Mirka, Rodger, Chris

**Authors:** Lin, Yuqing; Miller, Mirka; Rodger, Chris
**Date:** 2005
**Type:** Text; Journal article
**Relation:** Journal of Graph Theory Vol. 48, no. 3 (2005), p. 219-227
**Full Text:** false
**Description:** A (k;g)-cage is a k-regular graph with girth g and with the least possible number of vertices. In this paper, we prove that (k;g)-cages are k-edge-connected if g is even. Earlier, Wang, Xu, and Wang proved that (k;g)-cages are k-edge-connected if g is odd. Combining these results, we conclude that all (k;g)-cages are k-edge-connected. © 2005 Wiley Periodicals, Inc.
**Description:** C1

A survey on the connectivity of cages

**Authors:** Miller, Mirka; Lin, Yuqing
**Date:** 2005
**Type:** Text; Conference paper
**Relation:** Paper presented at the Sixteenth Australasian Workshop on Combinatorial Algorithms, Ballarat, Victoria, 18-21 September 2005
**Full Text:** false
**Description:** E1
**Description:** 2003001393

Statistical techniques for signal generation : The Australian experience

- Purcell, Patrick, Barty, Simon

**Authors:** Purcell, Patrick; Barty, Simon
**Date:** 2002
**Type:** Text; Journal article
**Relation:** Drug Safety Vol. 25, no. 6 (2002), p. 415-421
**Full Text:** false
**Description:** National voluntary reporting systems generate large volumes of clinical data pertinent to drug safety. Currently, descriptive statistical techniques are used to assist in the detection of drug safety 'signals'. Australian data have been coded according to guidelines formulated almost 30 years ago, which have resulted in many drugs that are not associated with an adverse drug reaction ('innocent bystander' drugs) being recorded as 'suspected' in individual reports. In this paper we explore the application of an iterative probability filtering algorithm titled 'PROFILE'. This serves to identify the 'signals' and remove the 'innocent bystander' drugs, thus providing a clearer view of the drugs most likely to have caused the reactions. Reaction terms analysed include neutropenia, agranulocytosis, hypotension, hypertension, myocardial infarction, neuroleptic malignant syndrome, and rectal haemorrhage. In this version of PROFILE, Fisher's exact test has been used as the statistical tool, but other methods could be used in future. Advantages and limitations of the method and its assumptions are discussed, together with the rationale underlying the method and suggestions for further enhancements.
**Description:** 2003000065
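The abstract above uses Fisher's exact test as the statistical filter inside PROFILE. As a rough illustration of that building block only (the PROFILE algorithm itself is not specified here), a two-sided Fisher's exact p-value for a 2x2 drug/reaction contingency table can be computed directly from the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table
    [[a, b], [c, d]], computed from the hypergeometric
    distribution with all margins fixed."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def p_table(x):
        # probability of the table whose top-left cell equals x
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))   # smallest feasible top-left cell
    hi = min(row1, col1)             # largest feasible top-left cell
    # two-sided: sum over all tables no more probable than the observed one
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs + 1e-12)
```

For the table [[8, 2], [1, 5]] this gives the same value as `scipy.stats.fisher_exact` (about 0.035), using only the standard library.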

An approximate subgradient algorithm for unconstrained nonsmooth, nonconvex optimization

- Bagirov, Adil, Ganjehlou, Asef Nazari

**Authors:** Bagirov, Adil; Ganjehlou, Asef Nazari
**Date:** 2008
**Type:** Text; Journal article
**Relation:** Mathematical Methods of Operations Research Vol. 67, no. 2 (2008), p. 187-206
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Description:** In this paper a new algorithm for minimizing locally Lipschitz functions is developed. Descent directions in this algorithm are computed by solving a system of linear inequalities. The convergence of the algorithm is proved for quasidifferentiable semismooth functions. We present the results of numerical experiments with both regular and nonregular objective functions. We also compare the proposed algorithm with two different versions of the subgradient method using the results of numerical experiments. These results demonstrate the superiority of the proposed algorithm over the subgradient method. © 2007 Springer-Verlag.
**Description:** C1


Reducible polytopes

**Authors:** Yost, David
**Date:** 2005
**Type:** Text; Conference paper
**Relation:** Paper presented at the Sixteenth Australasian Workshop on Combinatorial Algorithms, Ballarat, Victoria, 18-21 September 2005
**Description:** E1
**Description:** 2003001436


Repeats in graphs and digraphs

- Miller, Mirka, Nguyen, Minh Hoang, Simanjuntak, Rinovia

**Authors:** Miller, Mirka; Nguyen, Minh Hoang; Simanjuntak, Rinovia
**Date:** 2005
**Type:** Text; Conference paper
**Relation:** Paper presented at the Sixteenth Australasian Workshop on Combinatorial Algorithms, Ballarat, Victoria, 18-21 September 2005
**Full Text:** false
**Description:** E1
**Description:** 2003001394

Using global optimization to improve classification for medical diagnosis and prognosis

- Bagirov, Adil, Rubinov, Alex, Yearwood, John

**Authors:** Bagirov, Adil; Rubinov, Alex; Yearwood, John
**Date:** 2001
**Type:** Text; Journal article
**Relation:** Topics in Health Information Management Vol. 22, no. 1 (2001), p. 65-74
**Full Text:** false
**Description:** Global optimization-based techniques are studied in order to increase the accuracy of medical diagnosis and prognosis with data from various databases. First, we discuss feature selection, the problem of determining the most informative features for classification in the databases under consideration. Then, we apply a technique based on convex and global optimization for classification in these databases. The third application of this technique is a method that calculates centers of clusters to predict when breast cancer is likely to recur in patients from whom cancer has been removed. The technique achieves high accuracy with these databases. Better classifiers will lead to improved assistance in making medical diagnostic and prognostic decisions.
**Description:** 2003003662

- Ghosh, Ranadhir, Yearwood, John, Ghosh, Moumita, Bagirov, Adil

**Authors:** Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil
**Date:** 2006
**Type:** Text; Journal article
**Relation:** International Journal of Neural Systems Vol. 16, no. 3 (2006), p. 201-213
**Full Text:** false
**Description:** In this paper we investigate hybrid models based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed-forward artificial neural network, and discuss several variants of such models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are well suited to global optimisation problems, but they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model can be a suitable option. We propose different fusion strategies combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different hybrid models. © World Scientific Publishing Company.
**Description:** C1
**Description:** 2003001712

A heuristic algorithm for solving the minimum sum-of-squares clustering problems

**Authors:** Ordin, Burak; Bagirov, Adil
**Date:** 2015
**Type:** Text; Journal article
**Relation:** Journal of Global Optimization Vol. 61, no. 2 (2015), p. 341-361
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** Clustering is an important task in data mining. It can be formulated as a global optimization problem which is challenging for existing global optimization techniques even in medium-size data sets. Various heuristics have been developed to solve the clustering problem. The global k-means and modified global k-means are among the most efficient heuristics for solving the minimum sum-of-squares clustering problem. However, these algorithms are not always accurate in finding global or near-global solutions to the clustering problem. In this paper, we introduce a new algorithm to improve the accuracy of the modified global k-means algorithm in finding global solutions. We use an auxiliary cluster problem to generate a set of initial points and apply the k-means algorithm starting from these points to find the global solution to the clustering problems. Numerical results on 16 real-world data sets clearly demonstrate the superiority of the proposed algorithm over the global and modified global k-means algorithms in finding global solutions to clustering problems.
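The record above concerns better initial points for k-means on the minimum sum-of-squares clustering problem. The paper's auxiliary-problem construction is not reproduced here; as a baseline sketch, plain Lloyd's algorithm with multi-start random initialisation shows the structure that an improved initialisation plugs into (the starting centers are exactly what the paper replaces):

```python
import random

def kmeans(points, k, n_starts=10, iters=100, seed=0):
    """Multi-start Lloyd's algorithm for minimum sum-of-squares
    clustering. Random restarts stand in for the paper's auxiliary
    cluster problem, which generates starting points more carefully."""
    rng = random.Random(seed)

    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    best_cost, best_centers = float("inf"), None
    for _ in range(n_starts):
        centers = rng.sample(points, k)          # naive initialisation
        for _ in range(iters):
            # assignment step: nearest center for every point
            clusters = [[] for _ in range(k)]
            for p in points:
                j = min(range(k), key=lambda i: sq_dist(p, centers[i]))
                clusters[j].append(p)
            # update step: move each center to its cluster's centroid
            new_centers = [
                tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centers[i]
                for i, c in enumerate(clusters)
            ]
            if new_centers == centers:
                break
            centers = new_centers
        cost = sum(min(sq_dist(p, c) for c in centers) for p in points)
        if cost < best_cost:
            best_cost, best_centers = cost, centers
    return best_centers, best_cost
```

On well-separated data any reasonable start converges to the global optimum; the papers in this list are concerned with the many data sets where it does not.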

Improving Naive Bayes classifier using conditional probabilities

- Taheri, Sona, Mammadov, Musa, Bagirov, Adil

**Authors:** Taheri, Sona; Mammadov, Musa; Bagirov, Adil
**Date:** 2010
**Type:** Text; Conference proceedings
**Description:** The Naive Bayes classifier is the simplest among Bayesian network classifiers. It has been shown to be very efficient on a variety of data classification problems. However, the strong assumption that all features are conditionally independent given the class is often violated in many real-world applications. Therefore, improving the Naive Bayes classifier by alleviating the feature independence assumption has attracted much attention. In this paper, we develop a new version of the Naive Bayes classifier without assuming independence of features. The proposed algorithm approximates the interactions between features by using conditional probabilities. We present results of numerical experiments on several real-world data sets, where continuous features are discretized by applying two different methods. These results demonstrate that the proposed algorithm significantly improves the performance of the Naive Bayes classifier, yet at the same time maintains its robustness. © 2011, Australian Computer Society, Inc.
**Description:** 2003009505

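For context on what the paper above improves: the independence-assuming baseline it starts from is ordinary categorical Naive Bayes, which can be sketched in a few lines (the paper's conditional-probability correction for feature interactions is not reproduced here):

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels, alpha=1.0):
    """Categorical Naive Bayes with Laplace smoothing -- the baseline
    whose feature-independence assumption the paper sets out to relax."""
    classes = Counter(labels)
    n = len(labels)
    counts = defaultdict(Counter)       # (feature index, class) -> value counts
    values = [set() for _ in rows[0]]   # observed values per feature
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(i, y)][v] += 1
            values[i].add(v)

    def predict(row):
        def log_score(y):
            s = math.log(classes[y] / n)    # log prior
            for i, v in enumerate(row):     # log likelihood, one factor per feature
                num = counts[(i, y)][v] + alpha
                den = classes[y] + alpha * len(values[i])
                s += math.log(num / den)
            return s
        return max(classes, key=log_score)

    return predict
```

The per-feature product in `log_score` is exactly the independence assumption; replacing those factors with conditional probabilities that account for feature interactions is the direction the paper takes.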

Global optimization in the summarization of text documents

- Alyguliev, R. M., Bagirov, Adil

**Authors:** Alyguliev, R. M.; Bagirov, Adil
**Date:** 2005
**Type:** Text; Journal article
**Relation:** Automatic Control and Computer Sciences Vol. 39, no. 6 (2005), p. 42-47
**Full Text:** false
**Description:** In order to ensure minimal redundancy in the summary of a document and the greatest possible coverage of its content, a method for the construction of summaries (summarization) based on the clustering of sentences is proposed in this article. Clustering of sentences reduces to a determination of cluster centroids, the mathematical realization of which relies on a problem of global optimization. Determining the number of clusters is one of the difficult problems in the clustering procedure; therefore, an algorithm for stepwise determination of the number of clusters is also proposed. © 2006 by Allerton Press, Inc.
**Description:** C1

Local optimization method with global multidimensional search

- Bagirov, Adil, Rubinov, Alex, Zhang, Jiapu

**Authors:** Bagirov, Adil; Rubinov, Alex; Zhang, Jiapu
**Date:** 2005
**Type:** Text; Journal article
**Relation:** Journal of Global Optimization Vol. 32, no. 2 (2005), p. 161-179
**Description:** This paper presents a new method for solving global optimization problems. We use a local technique based on the notion of discrete gradients for finding a cone of descent directions, and then we use a global cutting angle algorithm for finding a global minimum within the intersection of the cone and the feasible region. We present results of numerical experiments with well-known test problems and with the so-called cluster function. These results confirm that the proposed algorithm allows one to find a global minimizer, or at least a deep local minimizer, of a function with a huge number of shallow local minima. © Springer 2005.
**Description:** C1
**Description:** 2003001351


A comparative study of state transition algorithm with harmony search and artificial bee colony

- Zhou, Xiaojun, Gao, David, Yang, Chunhua

**Authors:** Zhou, Xiaojun; Gao, David; Yang, Chunhua
**Date:** 2013
**Type:** Text; Journal article
**Relation:** Advances in Intelligent Systems and Computing Vol. 212 (2013), p. 651-659
**Full Text:** false
**Description:** We focus on a comparative study of three recently developed nature-inspired optimization algorithms: the state transition algorithm, harmony search and artificial bee colony. Their core mechanisms are introduced and their similarities and differences are described. Then, a suite of 27 well-known benchmark problems is used to investigate the performance of these algorithms, and finally we discuss their general applicability with respect to the structure of optimization problems. © Springer-Verlag Berlin Heidelberg 2013.
**Description:** 2003011220

Nonsmooth optimization models and algorithms for data clustering and visualization

**Authors:** Mohebi, Ehsan
**Date:** 2015
**Type:** Text; Thesis
**Description:** Cluster analysis deals with the problem of organizing a collection of patterns into clusters based on a similarity measure. Various distance functions can be used to define this measure. Clustering problems with the similarity measure defined by the squared Euclidean distance have been studied extensively over the last five decades. However, problems with other Minkowski norms have attracted significantly less attention. The use of different similarity measures may help to identify different cluster structures of a data set. This in turn may help to significantly improve the decision-making process. High-dimensional data visualization is another important task in the field of data mining and pattern recognition. To date, the principal component analysis and the self-organizing map techniques have been used to solve such problems. In this thesis we develop algorithms for solving clustering problems in large data sets using various similarity measures. Such similarity measures are based on the squared L
**Description:** Doctor of Philosophy


- Bagirov, Adil, Barton, Andrew, Mala-Jetmarova, Helena, Al Nuaimat, Alia, Ahmed, S. T., Sultanova, Nargiz, Yearwood, John

**Authors:** Bagirov, Adil; Barton, Andrew; Mala-Jetmarova, Helena; Al Nuaimat, Alia; Ahmed, S. T.; Sultanova, Nargiz; Yearwood, John
**Date:** 2013
**Type:** Text; Journal article
**Relation:** Mathematical and Computer Modelling Vol. 57, no. 3-4 (2013), p. 873-886
**Relation:** http://purl.org/au-research/grants/arc/LP0990908
**Full Text:** false
**Description:** The operation of a water distribution system is a complex task which involves scheduling of pumps, regulating water levels of storages, and providing satisfactory water quality to customers at the required flow and pressure. Pump scheduling is one of the most important tasks in the operation of a water distribution system, as it represents the major part of its operating costs. In this paper, a novel approach to modeling explicit pump scheduling to minimize energy consumption by pumps is introduced, which uses the pump start/end run times as continuous variables and binary integer variables to describe the pump status at the beginning of the scheduling period. This differs from other approaches, where binary integer variables for each hour are typically used, which is considered very impractical from an operational perspective. The problem is formulated as a mixed integer nonlinear programming problem, and a new algorithm is developed for its solution. This algorithm is based on the combination of grid search with the Hooke-Jeeves pattern search method. The performance of the algorithm is evaluated using literature test problems applying the hydraulic simulation model EPANet. © 2012 Elsevier Ltd.
**Description:** 2003010583
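The abstract above names Hooke-Jeeves pattern search as the local-search half of its algorithm. A textbook sketch of that derivative-free method (not the paper's grid-search combination, and on a toy quadratic rather than a hydraulic model) looks like this:

```python
def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-8):
    """Hooke-Jeeves pattern search: exploratory moves along each
    coordinate axis, then a pattern move extrapolating through any
    improved point; the step is halved when no move helps."""
    def explore(x, fx):
        x = list(x)
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    base, fbase = list(x0), f(list(x0))
    while step > tol:
        x, fx = explore(base, fbase)
        if fx < fbase:
            # pattern move: jump through the improved point and re-explore
            pattern = [2.0 * xi - bi for xi, bi in zip(x, base)]
            px, pfx = explore(pattern, f(pattern))
            base, fbase = (px, pfx) if pfx < fx else (x, fx)
        else:
            step *= shrink   # no axis helped: refine the mesh
    return base, fbase
```

The method needs only function evaluations, which is why it pairs naturally with a black-box hydraulic simulator such as EPANet.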

Comparative study of RPSALG algorithm for convex semi-infinite programming

- Auslender, Alfred, Ferrer, Albert, Goberna, Miguel, López, Marco

**Authors:** Auslender, Alfred; Ferrer, Albert; Goberna, Miguel; López, Marco
**Date:** 2014
**Type:** Text; Journal article
**Relation:** Computational Optimization and Applications Vol. 60, no. 1 (2014), p. 59-87
**Full Text:** false
**Description:** The Remez penalty and smoothing algorithm (RPSALG) is a unified framework for penalty and smoothing methods for solving min-max convex semi-infinite programming problems, whose convergence was analyzed in a previous paper by three of the authors. In this paper we consider a partial implementation of RPSALG for solving ordinary convex semi-infinite programming problems. Each iteration of RPSALG involves two types of auxiliary optimization problems: the first consists of obtaining an approximate solution of some discretized convex problem, while the second requires solving a non-convex optimization problem involving the parametric constraints as objective function, with the parameter as variable. We tackle the latter problem with a variant of the cutting angle method called ECAM, a global optimization procedure for solving Lipschitz programming problems. We implement different variants of RPSALG, which are compared with the only publicly available SIP solver, NSIPS, on a battery of test problems.

Skype Traffic Classification Using Cost Sensitive Algorithms

- Azab, Azab, Layton, Robert, Alazab, Mamoun, Watters, Paul

**Authors:** Azab, Azab; Layton, Robert; Alazab, Mamoun; Watters, Paul
**Date:** 2013
**Type:** Text; Conference paper
**Relation:** Proceedings - 4th Cybercrime and Trustworthy Computing Workshop, CTC 2013, p. 14-21
**Full Text:** false
**Description:** Voice over IP (VoIP) technologies such as Skype are becoming increasingly popular and widely used in different organisations, and therefore identifying the usage of this service at the network level becomes very important. Reasons for this include applying Quality of Service (QoS), network planning, prohibiting its use in some networks, and lawful interception of communications. Researchers have addressed VoIP traffic classification from different viewpoints, such as classifier accuracy, building time, classification time and online classification. Previous research tested models using the same version of a VoIP product as was used for training, giving generalizability only to that version of the product. This means that as new VoIP versions are released, these classifiers become obsolete. In this paper, we ask whether this approach is applicable to detecting new, untrained versions of Skype. We suggest that using cost-sensitive classifiers can help to improve the accuracy of detecting untrained versions, testing them against other algorithms. Our experiment demonstrates promising preliminary results: by building a cost-sensitive classifier on Skype version 3, we detect Skype version 4 with an F-measure of 0.57, a drastic improvement over not using cost sensitivity, which scores an F-measure of 0. This approach may be enhanced to improve the detection results and extended to improve detection for other applications whose protocols change from version to version.
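The abstract does not name the specific learners used, but the decision-theoretic core shared by cost-sensitive classifiers can be sketched in a few lines: when false negatives (missed Skype flows) cost more than false positives, the optimal threshold on the predicted class probability shifts below 0.5, so borderline flows are flagged more readily.

```python
def bayes_threshold(cost_fp, cost_fn):
    """Bayes-optimal threshold on P(positive | x) under asymmetric
    misclassification costs: predict positive once the expected cost
    of saying 'negative' exceeds that of saying 'positive'."""
    return cost_fp / (cost_fp + cost_fn)

def classify(p_positive, cost_fp=1.0, cost_fn=1.0):
    # with equal costs this reduces to the usual 0.5 threshold
    return p_positive >= bayes_threshold(cost_fp, cost_fn)
```

For example, weighting a missed detection nine times as heavily as a false alarm moves the threshold from 0.5 down to 0.1, which is the kind of shift that lets a classifier trained on one protocol version catch a drifted, untrained version.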

Nonsmooth nonconvex optimization approach to clusterwise linear regression problems

- Bagirov, Adil, Ugon, Julien, Mirzayeva, Hijran

**Authors:** Bagirov, Adil; Ugon, Julien; Mirzayeva, Hijran
**Date:** 2013
**Type:** Text; Journal article
**Relation:** European Journal of Operational Research Vol. 229, no. 1 (2013), p. 132-142
**Full Text:** false
**Description:** Clusterwise regression consists of finding a number of regression functions, each approximating a subset of the data. In this paper, a new approach for solving clusterwise linear regression problems is proposed, based on a nonsmooth nonconvex formulation. We present an algorithm for minimizing this nonsmooth nonconvex function. This algorithm incrementally divides the whole data set into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate a good starting point for solving global optimization problems at each iteration of the incremental algorithm. Such an approach allows one to find a global or near-global solution to the problem when the data sets are sufficiently dense. The algorithm is compared with the multistart Späth algorithm on several publicly available data sets for regression analysis. © 2013 Elsevier B.V. All rights reserved.
**Description:** 2003011018
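The comparison baseline named in the abstract, the Späth algorithm, is an alternating local search and is easy to sketch for one-dimensional regressors (the paper's incremental nonsmooth algorithm is not reproduced here; initialisation below is a simple deterministic round-robin rather than multistart):

```python
def fit_line(pts):
    # ordinary least squares for y = a*x + b on a list of (x, y) pairs
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    d = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / d if d else 0.0
    return a, (sy - a * sx) / n

def clusterwise_regression(pts, k=2, iters=100):
    """Spath-style alternating heuristic for clusterwise linear
    regression: assign each point to the line with the smallest squared
    residual, then refit each line by least squares, until the
    assignment stops changing (a local optimum)."""
    labels = [i % k for i in range(len(pts))]   # round-robin start
    lines = [(0.0, 0.0)] * k
    for _ in range(iters):
        for j in range(k):
            members = [p for p, l in zip(pts, labels) if l == j]
            if len(members) >= 2:
                lines[j] = fit_line(members)
        new_labels = [
            min(range(k), key=lambda j: (y - lines[j][0] * x - lines[j][1]) ** 2)
            for x, y in pts
        ]
        if new_labels == labels:
            break
        labels = new_labels
    return lines, labels
```

Like k-means, this converges only to a local optimum, which is exactly why the paper develops a global-optimization-based alternative.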

Hyperbolic smoothing in nonsmooth optimization and applications

**Authors:** Al Nuaimat, Alia
**Date:** 2014
**Type:** Text; Thesis; PhD
**Description:** Nonsmooth nonconvex optimization problems arise in many applications, including economics, business and data mining. In these applications objective functions are not necessarily differentiable or convex. Many algorithms have been proposed over the past three decades to solve such problems. In spite of the significant growth in this field, the development of efficient algorithms for solving this kind of problem is still a challenging task. The subgradient method is one of the simplest methods developed for solving these problems. Its convergence was proved only for convex objective functions. This method does not involve any subproblems, neither for finding search directions nor for computing step lengths, which are fixed ahead of time. Bundle methods and their various modifications are among the most efficient methods for solving nonsmooth optimization problems. These methods involve a quadratic programming subproblem to find search directions. The size of this subproblem may increase significantly with the number of variables, which makes bundle-type methods unsuitable for large-scale nonsmooth optimization problems. The implementation of bundle-type methods, which require quadratic programming solvers, is not as easy as that of subgradient methods. Therefore it is beneficial to develop algorithms for nonsmooth nonconvex optimization which are easy to implement and more efficient than the subgradient methods. In this thesis, we develop two new algorithms for solving nonsmooth nonconvex optimization problems based on the hyperbolic smoothing technique and apply them to the pumping cost minimization problem in water distribution. The first algorithm is designed for solving finite minimax problems. In order to apply the hyperbolic smoothing, we reformulate the objective function in the minimax problem and study the relationship between the original minimax and reformulated problems. We also study the main properties of the hyperbolic smoothing function. Based on these results, an algorithm for solving the finite minimax problem is proposed and implemented in GAMS. We present preliminary results of numerical experiments with well-known nonsmooth optimization test problems, and compare the proposed algorithm with the algorithm that uses the exponential smoothing function, as well as with the algorithm based on a nonlinear programming reformulation of the finite minimax problem. The second algorithm demonstrates how smooth optimization methods can be applied to general nonsmooth (nonconvex) optimization problems. We compute subgradients from some neighborhood of the current point and define a system of linear inequalities using these subgradients. Search directions are computed by solving this system, which is reduced to the minimization of a convex piecewise linear function over the unit ball. The hyperbolic smoothing function is then applied to approximate this minimization problem by a sequence of smooth problems, which are solved by smooth optimization methods. Such an approach allows one to apply powerful smooth optimization algorithms to nonsmooth optimization problems and extends smoothing techniques to general nonsmooth nonconvex optimization problems. The convergence of the algorithm based on this approach is studied. The proposed algorithm was implemented in Fortran 95. Preliminary results of numerical experiments are reported, and the proposed algorithm is compared with five other nonsmooth optimization algorithms. We also implement the algorithm in GAMS and compare it with GAMS solvers using results of numerical experiments.
**Description:** Doctor of Philosophy

