Your selections:

Yearwood, John (21)
Ugon, Julien (20)
Barton, Andrew (14)
Rubinov, Alex (14)
Mala-Jetmarova, Helena (12)
Ghosh, Ranadhir (10)
Ghosh, Moumita (9)
Karmitsa, Napsu (9)
Mohebi, Ehsan (7)
Sultanova, Nargiz (7)
Taheri, Sona (7)
Webb, Dean (6)
Al Nuaimat, Alia (5)
Beliakov, Gleb (5)
Makela, Marko (5)
Karasozen, Bulent (4)
Mammadov, Musa (4)
Ozturk, Gurkan (4)
Stranieri, Andrew (4)

Nonsmooth optimization (37)
0103 Numerical and Computational Mathematics (31)
0102 Applied Mathematics (26)
0802 Computation Theory and Mathematics (18)
Optimisation (17)
Nonconvex optimization (13)
Cluster analysis (12)
0801 Artificial Intelligence and Image Processing (11)
Algorithms (11)
Classification (11)
Data mining (10)
Subdifferential (10)
Global optimization (8)
Optimization (8)
Water distribution systems (8)
Discrete gradient (7)
0806 Information Systems (6)
0906 Electrical and Electronic Engineering (6)
Derivative-free optimization (6)
Discrete gradient method (6)

A simulated annealing-based maximum-margin clustering algorithm

- Seifollahi, Sattar, Bagirov, Adil, Borzeshi, Ehsan, Piccardi, Massimo

**Authors:**Seifollahi, Sattar , Bagirov, Adil , Borzeshi, Ehsan , Piccardi, Massimo**Date:**2019**Type:**Text , Journal article**Relation:**Computational Intelligence Vol. 35, no. 1 (2019), p. 23-41**Full Text:**false**Reviewed:****Description:**Maximum-margin clustering is an extension of the support vector machine (SVM) to clustering. It partitions a set of unlabeled data into multiple groups by finding hyperplanes with the largest margins. Although existing algorithms have shown promising results, there is no guarantee of convergence of these algorithms to global solutions due to the nonconvexity of the optimization problem. In this paper, we propose a simulated annealing-based algorithm that is able to mitigate the issue of local minima in the maximum-margin clustering problem. The novelty of our algorithm is twofold, i.e., (i) it comprises a comprehensive cluster modification scheme based on simulated annealing, and (ii) it introduces a new approach based on the combination of k-means++ and SVM at each step of the annealing process. More precisely, k-means++ is initially applied to extract subsets of the data points. Then, an unsupervised SVM is applied to improve the clustering results. Experimental results on various benchmark data sets (of up to over a million points) give evidence that the proposed algorithm is more effective at solving the clustering problem than a number of popular clustering algorithms.
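
The annealing component described in this abstract can be illustrated with a generic toy sketch; it anneals over cluster assignments with a plain sum-of-squares objective rather than the paper's max-margin criterion and k-means++/SVM steps, and all function and parameter names here are illustrative:

```python
import numpy as np

def sse(X, labels, k):
    """Sum of squared distances of points to their cluster centroids."""
    total = 0.0
    for j in range(k):
        pts = X[labels == j]
        if len(pts):
            total += ((pts - pts.mean(axis=0)) ** 2).sum()
    return total

def anneal_clustering(X, k, t0=1.0, cooling=0.95, steps=3000, seed=0):
    """Simulated annealing over cluster assignments (toy version)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))
    best = labels.copy()
    f_cur = f_best = sse(X, labels, k)
    t = t0
    for _ in range(steps):
        i = rng.integers(len(X))          # pick a random point
        old = labels[i]
        labels[i] = rng.integers(k)       # propose a new cluster for it
        f_new = sse(X, labels, k)
        # accept improvements always; uphill moves with probability exp(-delta/t)
        if f_new <= f_cur or rng.random() < np.exp(-(f_new - f_cur) / t):
            f_cur = f_new
            if f_new < f_best:
                f_best, best = f_new, labels.copy()
        else:
            labels[i] = old               # reject the move
        t *= cooling                      # cool the temperature
    return best, f_best
```

The early high-temperature phase accepts some uphill moves to escape poor configurations; once the temperature is small the loop reduces to single-point descent.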

A comparative assessment of models to predict monthly rainfall in Australia

- Bagirov, Adil, Mahmood, Arshad

**Authors:**Bagirov, Adil , Mahmood, Arshad**Date:**2018**Type:**Text , Journal article**Relation:**Water Resources Management Vol. 32, no. 5 (2018), p. 1777-1794**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**Accurate rainfall prediction is a challenging task. It is especially challenging in Australia where the climate is highly variable. Australia’s climatic zones range from high rainfall tropical regions in the north to the driest desert region in the interior. The performance of prediction models may vary depending on climatic conditions. It is, therefore, important to assess and compare the performance of these models in different climatic zones. This paper examines the performance of data driven models such as the support vector machines for regression, the multiple linear regression, the k-nearest neighbors and the artificial neural networks for monthly rainfall prediction in Australia depending on climatic conditions. Rainfall data with five meteorological variables over the period of 1970–2014 from 24 geographically diverse weather stations are used for this purpose. The prediction performance of each model was evaluated by comparing observed and predicted rainfall using various measures for prediction accuracy. © 2018, Springer Science+Business Media B.V., part of Springer Nature.
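
The evaluation step mentioned above, comparing observed and predicted rainfall, can be sketched generically; the abstract does not list the exact measures used, so the four below are common choices and the function name is illustrative:

```python
import numpy as np

def forecast_scores(observed, predicted):
    """Common accuracy measures for comparing an observed vs. a predicted series."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    err = pred - obs
    return {
        "MAE": np.abs(err).mean(),           # mean absolute error
        "RMSE": np.sqrt((err ** 2).mean()),  # root mean squared error
        "bias": err.mean(),                  # mean error (over/under-prediction)
        "r": np.corrcoef(obs, pred)[0, 1],   # Pearson correlation
    }
```

A perfect prediction gives MAE and RMSE of zero and correlation of one; bias exposes systematic over- or under-prediction that MAE alone hides.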

A server side solution for detecting WebInject : A machine learning approach

- Moniruzzaman, Md, Bagirov, Adil, Gondal, Iqbal, Brown, Simon

**Authors:**Moniruzzaman, Md , Bagirov, Adil , Gondal, Iqbal , Brown, Simon**Date:**2018**Type:**Text , Conference proceedings , Conference Paper**Relation:**22nd Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2018; Melbourne, Australia; 3rd June 2018; published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 11154 LNAI, p. 162-167**Full Text:**false**Reviewed:****Description:**With the advancement of client-side on-the-fly web content generation techniques, it becomes easier for attackers to modify the content of a website dynamically and gain access to valuable information. A majority of online attacks are now carried out via WebInject. End users are not always skilled enough to differentiate between injected content and the actual contents of a webpage. Some existing solutions are designed for the client side and require every user to install them, which is a challenging task; moreover, since individuals use a variety of platforms and tools, different solutions need to be designed for each. Existing server side solutions often focus on sanitizing and filtering inputs and fail to detect obfuscated and hidden scripts. In this paper, we propose a server side solution using a machine learning approach to detect WebInject in banking websites. Unlike other techniques, our method collects features of the Document Object Model (DOM) and classifies them with the help of a pre-trained model.

Clustering in large data sets with the limited memory bundle method

- Karmitsa, Napsu, Bagirov, Adil, Taheri, Sona

**Authors:**Karmitsa, Napsu , Bagirov, Adil , Taheri, Sona**Date:**2018**Type:**Text , Journal article**Relation:**Pattern Recognition Vol. 83, no. (2018), p. 245-259**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**The aim of this paper is to design an algorithm based on nonsmooth optimization techniques to solve the minimum sum-of-squares clustering problems in very large data sets. First, the clustering problem is formulated as a nonsmooth optimization problem. Then the limited memory bundle method [Haarala et al., 2007] is modified and combined with an incremental approach to design a new clustering algorithm. The algorithm is evaluated using real world data sets with both a large number of attributes and a large number of data points. It is also compared with some other optimization based clustering algorithms. The numerical results demonstrate the efficiency of the proposed algorithm for clustering in very large data sets.
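
The nonsmooth formulation referred to in this and several other abstracts is the minimum sum-of-squares cluster function; a minimal sketch, assuming squared Euclidean distance (the bundle and incremental machinery of the paper is not shown):

```python
import numpy as np

def mssc_objective(centers, points):
    """Minimum sum-of-squares clustering objective:
    f(x1, ..., xk) = (1/m) * sum_i min_j ||xj - ai||^2.
    The inner min makes f nonsmooth and nonconvex in the centers."""
    centers = np.asarray(centers, dtype=float)
    points = np.asarray(points, dtype=float)
    # pairwise squared distances, shape (m, k)
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()
```

Placing a center on every point drives the objective to zero; any clustering algorithm in these papers can be read as a scheme for minimizing this function over the center positions.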

Double bundle method for finding Clarke stationary points in nonsmooth DC programming

- Joki, Kaisa, Bagirov, Adil, Karmitsa, Napsu, Makela, Marko, Taheri, Sona

**Authors:**Joki, Kaisa , Bagirov, Adil , Karmitsa, Napsu , Makela, Marko , Taheri, Sona**Date:**2018**Type:**Text , Journal article**Relation:**SIAM Journal on Optimization Vol. 28, no. 2 (2018), p. 1892-1919**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:****Reviewed:****Description:**The aim of this paper is to introduce a new proximal double bundle method for unconstrained nonsmooth optimization, where the objective function is presented as a difference of two convex (DC) functions. The novelty in our method is a new escape procedure which enables us to guarantee approximate Clarke stationarity for solutions by utilizing the DC components of the objective function. This optimality condition is stronger than the criticality condition typically used in DC programming. Moreover, if a candidate solution is not approximate Clarke stationary, then the escape procedure returns a descent direction. With this escape procedure, we can avoid some shortcomings encountered when criticality is used. The finite termination of the double bundle method to an approximate Clarke stationary point is proved by assuming that the subdifferentials of DC components are polytopes. Finally, some encouraging numerical results are presented.

Minimizing nonsmooth DC functions via successive DC piecewise-affine approximations

- Gaudioso, Manlio, Giallombardo, Giovanni, Miglionico, Giovanna, Bagirov, Adil

**Authors:**Gaudioso, Manlio , Giallombardo, Giovanni , Miglionico, Giovanna , Bagirov, Adil**Date:**2018**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol. 71, no. 1 (2018), p. 37-55**Full Text:**false**Reviewed:****Description:**We introduce a proximal bundle method for the numerical minimization of a nonsmooth difference-of-convex (DC) function. Exploiting some classic ideas coming from cutting-plane approaches for the convex case, we iteratively build two separate piecewise-affine approximations of the component functions, grouping the corresponding information in two separate bundles. In the bundle of the first component, only information related to points close to the current iterate are maintained, while the second bundle only refers to a global model of the corresponding component function. We combine the two convex piecewise-affine approximations, and generate a DC piecewise-affine model, which can also be seen as the pointwise maximum of several concave piecewise-affine functions. Such a nonconvex model is locally approximated by means of an auxiliary quadratic program, whose solution is used to certify approximate criticality or to generate a descent search-direction, along with a predicted reduction, that is next explored in a line-search setting. To improve the approximation properties at points that are far from the current iterate a supplementary quadratic program is also introduced to generate an alternative more promising search-direction. We discuss the main convergence issues of the line-search based proximal bundle method, and provide computational results on a set of academic benchmark test problems. © 2017, Springer Science+Business Media, LLC.

**Authors:**Bagirov, Adil , Ugon, Julien**Date:**2018**Type:**Text , Journal article**Relation:**Optimization Methods and Software Vol. 33, no. 1 (2018), p. 194-219**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**The clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem using the squared regression error function. The objective function in this problem is represented as a difference of convex functions. Optimality conditions are derived, and an algorithm is designed based on such a representation. An incremental approach is proposed to generate starting solutions. The algorithm is tested on small to large data sets. © 2017 Informa UK Limited, trading as Taylor & Francis Group.

Solving minimax problems : Local smoothing versus global smoothing

- Bagirov, Adil, Sultanova, Nargiz, Al Nuaimat, Alia, Taheri, Sona

**Authors:**Bagirov, Adil , Sultanova, Nargiz , Al Nuaimat, Alia , Taheri, Sona**Date:**2018**Type:**Text , Conference proceedings**Relation:**4th International Conference on Numerical Analysis and Optimization, NAO-IV 2017; Muscat, Oman; 2nd-5th January 2017; published in Numerical Analysis and Optimization NAO-IV (part of the Springer Proceedings in Mathematics and Statistics book series PROMS, volume 235) Vol. 235, p. 23-43**Full Text:**false**Reviewed:****Description:**The aim of this chapter is to compare different smoothing techniques for solving finite minimax problems. We consider the local smoothing technique which approximates the function in some neighborhood of a point of nondifferentiability and also global smoothing techniques such as the exponential and hyperbolic smoothing which approximate the function in the whole domain. Computational results on the collection of academic test problems are used to compare different smoothing techniques. Results show the superiority of the local smoothing technique for convex problems and global smoothing techniques for nonconvex problems. © 2018, Springer International Publishing AG, part of Springer Nature.**Description:**Springer Proceedings in Mathematics and Statistics
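
The global (exponential) smoothing compared above replaces a finite max by a log-sum-exp term; a generic sketch, with illustrative parameter names (the chapter's precise smoothing functions may differ):

```python
import math

def exp_smooth_max(values, p=100.0):
    """Exponential (log-sum-exp) smoothing of max(values):
    max_i f_i  ~  (1/p) * ln( sum_i exp(p * f_i) ).
    Smooth for p > 0 and overestimates the true max by at most ln(n)/p."""
    m = max(values)  # subtract the max before exponentiating, for stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p
```

Larger p tightens the approximation at the cost of steeper, harder-to-minimize gradients, which is exactly the trade-off such smoothing methods tune.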

A proximal bundle method for nonsmooth DC optimization utilizing nonconvex cutting planes

- Joki, Kaisa, Bagirov, Adil, Karmitsa, Napsu, Makela, Marko

**Authors:**Joki, Kaisa , Bagirov, Adil , Karmitsa, Napsu , Makela, Marko**Date:**2017**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol. 68, no. 3 (2017), p. 501-535**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**In this paper, we develop a version of the bundle method to solve unconstrained difference of convex (DC) programming problems. It is assumed that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point and it is used to build separately an approximation for each component in the DC representation. By combining these approximations we obtain a new nonconvex cutting plane model of the original objective function, which takes into account explicitly both the convex and the concave behavior of the objective function. We design the proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an ε-critical point. The algorithm is tested using some academic test problems and the preliminary numerical results show the good performance of the new bundle method. An interesting fact is that the new algorithm nearly always finds the global solution in our test problems.

Batch clustering algorithm for big data sets

- Alguliyev, Rasim, Aliguliyev, Ramiz, Bagirov, Adil, Karimov, Rafael

**Authors:**Alguliyev, Rasim , Aliguliyev, Ramiz , Bagirov, Adil , Karimov, Rafael**Date:**2017**Type:**Text , Conference proceedings**Relation:**10th IEEE International Conference on Application of Information and Communication Technologies, AICT 2016; Baku, Azerbaijan; 12th-14th October 2016 p. 1-4**Full Text:**false**Reviewed:****Description:**The vast spread of computing technologies has led to an abundance of large data sets. Today tech companies like Google, Facebook, Twitter and Amazon handle big data sets and log terabytes, if not petabytes, of data per day. Thus, there is a need to find similarities and define groupings among the elements of these big data sets. One of the ways to find these similarities is data clustering. Several data clustering algorithms currently exist, differing in their application area and efficiency. Increases in computational power and algorithmic improvements have reduced the time for clustering of big data sets. However, big data sets often cannot be processed whole due to hardware and computational restrictions. In this paper, the classic k-means clustering algorithm is compared to the proposed batch clustering (BC) algorithm in terms of required computation time and objective function value. The BC algorithm is designed to cluster large data sets in batches while maintaining efficiency and quality. Several experiments confirm that the batch clustering algorithm is more efficient in its use of computational power and data storage, and produces better clusterings than the k-means algorithm. The experiments are conducted with a data set of two million two-dimensional data points. © 2016 IEEE.

DC programming algorithm for clusterwise linear L1 regression

- Bagirov, Adil, Taheri, Sona

**Authors:**Bagirov, Adil , Taheri, Sona**Date:**2017**Type:**Text , Journal article**Relation:**Journal of the Operations Research Society of China Vol. 5, no. 2 (2017), p. 233-256**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**The aim of this paper is to develop an algorithm for solving the clusterwise linear least absolute deviations regression problem. This problem is formulated as a nonsmooth nonconvex optimization problem, and the objective function is represented as a difference of convex functions. Optimality conditions are derived by using this representation. An algorithm is designed based on the difference of convex representation and an incremental approach. The proposed algorithm is tested using small to large artificial and real-world data sets. © 2017, Operations Research Society of China, Periodicals Agency of Shanghai University, Science Press, and Springer-Verlag Berlin Heidelberg.
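
The least absolute deviations fit function described above can be sketched as follows, assuming k linear models with their coefficients stacked row-wise; the function and variable names are illustrative, and the paper's incremental solver is not shown:

```python
import numpy as np

def clr_l1_objective(coefs, X, y):
    """Clusterwise linear L1 regression fit function:
    f = sum_i min_j | x_i . w_j + b_j - y_i |   (least absolute deviations).
    coefs has shape (k, n+1); each row is [w_j, b_j]."""
    coefs = np.asarray(coefs, dtype=float)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    preds = X @ coefs[:, :-1].T + coefs[:, -1]     # predictions, shape (m, k)
    return np.abs(preds - y[:, None]).min(axis=1).sum()
```

Each data point is charged only for the best-fitting of the k linear models, which is what makes the objective nonsmooth and nonconvex in the coefficients.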

New diagonal bundle method for clustering problems in large data sets

- Karmitsa, Napsu, Bagirov, Adil, Taheri, Sona

**Authors:**Karmitsa, Napsu , Bagirov, Adil , Taheri, Sona**Date:**2017**Type:**Text , Journal article**Relation:**European Journal of Operational Research Vol. 263, no. 2 (2017), p. 367-379**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**Clustering is one of the most important tasks in data mining. Recent developments in computer hardware allow us to store in random access memory (RAM) and repeatedly read data sets with hundreds of thousands and even millions of data points. This makes it possible to use conventional clustering algorithms in such data sets. However, these algorithms may need prohibitively large computational time and fail to produce accurate solutions. Therefore, it is important to develop clustering algorithms which are accurate and can provide real time clustering in large data sets. This paper introduces one of them. Using a nonsmooth optimization formulation of the clustering problem, the objective function is represented as a difference of two convex (DC) functions. Then a new diagonal bundle algorithm that explicitly uses this structure is designed and combined with an incremental approach to solve this problem. The method is evaluated using real world data sets with both a large number of attributes and a large number of data points. The proposed method is compared with two other clustering algorithms using numerical results. © 2017 Elsevier B.V.

Optimization based clustering algorithms for authorship analysis of phishing emails

- Seifollahi, Sattar, Bagirov, Adil, Layton, Robert, Gondal, Iqbal

**Authors:**Seifollahi, Sattar , Bagirov, Adil , Layton, Robert , Gondal, Iqbal**Date:**2017**Type:**Text , Journal article**Relation:**Neural Processing Letters Vol. 46, no. 2 (2017), p. 411-425**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**Phishing has given attackers power to masquerade as legitimate users of organizations, such as banks, to scam money and private information from victims. Phishing is so widespread that combating the phishing attacks could overwhelm the victim organization. It is important to group the phishing attacks to formulate an effective defence mechanism. In this paper, we use clustering methods to analyze and characterize phishing emails and perform their relative attribution. Emails are first tokenized to a bag-of-words space and, then, transformed to a numeric vector space using frequencies of words in documents. The Wordnet vocabulary is used to take effects of similar words into account and to reduce sparsity. The word similarity measure is combined with the term frequencies to introduce a novel text transformation into numeric features. To improve the accuracy, we apply inverse document frequency weighting, which gives higher weights to features used by fewer authors. The k-means algorithm and three recently introduced optimization based algorithms, MS-MGKM, INCA and DCClust, are applied for clustering purposes. The optimization based algorithms indicate the existence of well separated clusters in the phishing emails dataset. © 2017, Springer Science+Business Media New York.
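
The tokenize → bag-of-words → term-frequency → inverse-document-frequency pipeline described above, in a minimal generic form; the Wordnet similarity step and the clustering algorithms themselves are omitted, and the function name is illustrative:

```python
import math
from collections import Counter

def tfidf(documents):
    """Term frequency * inverse document frequency for a list of token lists.
    Terms appearing in fewer documents get a higher weight; a term present
    in every document gets weight zero (log(n/n) = 0)."""
    n = len(documents)
    df = Counter()                       # document frequency per term
    for doc in documents:
        df.update(set(doc))
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        total = len(doc)
        vectors.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors
```

On a toy corpus a rare token such as "meeting" outweighs a common one such as "verify", which is the down-weighting effect the abstract relies on.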

Prediction of monthly rainfall in Victoria, Australia : Clusterwise linear regression approach

- Bagirov, Adil, Mahmood, Arshad, Barton, Andrew

**Authors:**Bagirov, Adil , Mahmood, Arshad , Barton, Andrew**Date:**2017**Type:**Text , Journal article**Relation:**Atmospheric Research Vol. 188, no. (2017), p. 20-29**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**This paper develops the Clusterwise Linear Regression (CLR) technique for prediction of monthly rainfall. The CLR is a combination of clustering and regression techniques. It is formulated as an optimization problem and an incremental algorithm is designed to solve it. The algorithm is applied to predict monthly rainfall in Victoria, Australia using rainfall data with five input meteorological variables over the period of 1889–2014 from eight geographically diverse weather stations. The prediction performance of the CLR method is evaluated by comparing observed and predicted rainfall values using four measures of forecast accuracy. The proposed method is also compared with the CLR using the maximum likelihood framework by the expectation-maximization algorithm, multiple linear regression, artificial neural networks and the support vector machines for regression models using computational results. The results demonstrate that the proposed algorithm outperforms other methods in most locations. © 2017 Elsevier B.V.

An algorithm for clustering using L1-norm based on hyperbolic smoothing technique

- Bagirov, Adil, Mohebi, Ehsan

**Authors:**Bagirov, Adil , Mohebi, Ehsan**Date:**2016**Type:**Text , Journal article**Relation:**Computational Intelligence Vol. 32, no. 3 (2016), p. 439-457**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**Cluster analysis deals with the problem of organization of a collection of objects into clusters based on a similarity measure, which can be defined using various distance functions. The use of different similarity measures allows one to find different cluster structures in a data set. In this article, an algorithm is developed to solve clustering problems where the similarity measure is defined using the L1-norm. The algorithm is designed using the nonsmooth optimization approach to the clustering problem. Smoothing techniques are applied to smooth both the clustering function and the L1-norm. The algorithm computes clusters sequentially and finds global or near global solutions to the clustering problem. Results of numerical experiments using 12 real-world data sets are reported, and the proposed algorithm is compared with two other clustering algorithms. ©2015 Wiley Periodicals, Inc.
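
Smoothing the L1-norm as described is commonly done by replacing each absolute value with a hyperbolic approximation; a sketch under that assumption (the paper's exact smoothing functions may differ, and the name below is illustrative):

```python
import numpy as np

def smooth_l1_distance(x, a, tau=1e-3):
    """Smooth approximation of the L1 distance ||x - a||_1:
    each |t| is replaced by sqrt(t^2 + tau^2), which is differentiable
    for tau > 0 and converges to |t| from above as tau -> 0."""
    d = np.asarray(x, dtype=float) - np.asarray(a, dtype=float)
    return np.sqrt(d * d + tau * tau).sum()
```

The error is at most tau per coordinate, so driving tau to zero along the iterations recovers the original nonsmooth clustering function.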

Constrained self organizing maps for data clusters visualization

- Mohebi, Ehsan, Bagirov, Adil

**Authors:**Mohebi, Ehsan , Bagirov, Adil**Date:**2016**Type:**Text , Journal article**Relation:**Neural Processing Letters Vol. 43, no. 3 (2016), p. 849-869**Full Text:**false**Reviewed:****Description:**High dimensional data visualization is one of the main tasks in the field of data mining and pattern recognition. The self organizing map (SOM) is a topology visualizing tool that contains a set of neurons that gradually adapt to input data space by competitive learning and form clusters. The topology preservation of the SOM strongly depends on the learning process. Due to this limitation one cannot guarantee the convergence of the SOM in data sets with clusters of arbitrary shape. In this paper, we introduce the Constrained SOM (CSOM), a new version of the SOM with a modified learning algorithm. The idea is to introduce an adaptive constraint parameter to the learning process to improve the topology preservation and mapping quality of the basic SOM. The computational complexity of the CSOM is less than that of the SOM. The proposed algorithm is compared with similar topology preservation algorithms and the numerical results on eight small to large real-world data sets demonstrate the efficiency of the proposed algorithm. © 2015, Springer Science+Business Media New York.

Nonsmooth DC programming approach to the minimum sum-of-squares clustering problems

- Bagirov, Adil, Taheri, Sona, Ugon, Julien

**Authors:**Bagirov, Adil , Taheri, Sona , Ugon, Julien**Date:**2016**Type:**Text , Journal article**Relation:**Pattern Recognition Vol. 53, no. (2016), p. 12-24**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**This paper introduces an algorithm for solving the minimum sum-of-squares clustering problems using their difference of convex representations. A non-smooth non-convex optimization formulation of the clustering problem is used to design the algorithm. Characterizations of critical points, stationary points in the sense of generalized gradients and inf-stationary points of the clustering problem are given. The proposed algorithm is tested and compared with other clustering algorithms using large real world data sets. © 2015 Elsevier Ltd. All rights reserved.
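
The difference-of-convex representation used here is standard for the sum-of-squares cluster function and can be written out explicitly (with $a_1,\dots,a_m$ the data points and $x_1,\dots,x_k$ the cluster centers):

```latex
f(x_1,\dots,x_k)
  = \frac{1}{m}\sum_{i=1}^{m}\min_{1\le j\le k}\|x_j-a_i\|^2
  = f_1(x) - f_2(x),
\qquad
f_1(x) = \frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{k}\|x_j-a_i\|^2,
\qquad
f_2(x) = \frac{1}{m}\sum_{i=1}^{m}\max_{1\le j\le k}\sum_{s\ne j}\|x_s-a_i\|^2.
```

The identity follows from $\min_j d_j = \sum_j d_j - \max_j \sum_{s\ne j} d_s$; both $f_1$ and $f_2$ are convex, since $f_2$ is a pointwise maximum of sums of convex functions.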

A heuristic algorithm for solving the minimum sum-of-squares clustering problems

- Ordin, Burak, Bagirov, Adil

**Authors:**Ordin, Burak , Bagirov, Adil**Date:**2015**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol. 61, no. 2 (2015), p. 341-361**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**Clustering is an important task in data mining. It can be formulated as a global optimization problem which is challenging for existing global optimization techniques even in medium size data sets. Various heuristics have been developed to solve the clustering problem. The global k-means and modified global k-means are among the most efficient heuristics for solving the minimum sum-of-squares clustering problem. However, these algorithms are not always accurate in finding global or near global solutions to the clustering problem. In this paper, we introduce a new algorithm to improve the accuracy of the modified global k-means algorithm in finding global solutions. We use an auxiliary cluster problem to generate a set of initial points and apply the k-means algorithm starting from these points to find the global solution to the clustering problems. Numerical results on 16 real-world data sets clearly demonstrate the superiority of the proposed algorithm over the global and modified global k-means algorithms in finding global solutions to clustering problems.
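
The final step described above, running k-means from a generated set of starting points, corresponds to plain Lloyd iterations; a minimal sketch (the auxiliary problem that produces the starting centers is not shown, and the function name is illustrative):

```python
import numpy as np

def kmeans_refine(points, centers, max_iter=100):
    """Lloyd's k-means iterations starting from the supplied centers."""
    points = np.asarray(points, dtype=float)
    centers = np.asarray(centers, dtype=float).copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(max_iter):
        # assign each point to its nearest center
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_centers = centers.copy()
        for j in range(len(centers)):
            members = points[labels == j]
            if len(members):              # empty clusters keep their old center
                new_centers[j] = members.mean(axis=0)
        if np.allclose(new_centers, centers):
            break                         # assignments stable: converged
        centers = new_centers
    return centers, labels
```

Because Lloyd's iterations only descend to the nearest local minimum, the quality of the supplied starting centers, the focus of this paper, decides which solution is reached.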

A history of water distribution systems and their optimisation

- Mala-Jetmarova, Helena, Barton, Andrew, Bagirov, Adil

**Authors:**Mala-Jetmarova, Helena , Barton, Andrew , Bagirov, Adil**Date:**2015**Type:**Text , Journal article**Relation:**Water Science and Technology-Water Supply Vol. 15, no. 2 (2015), p. 224-235**Relation:**http://purl.org/au-research/grants/arc/LP0990908**Full Text:**false**Reviewed:****Description:**Water distribution systems have a very long and rich history dating back to the third millennium B.C. Advances in water supply and distribution were followed in parallel by discoveries and inventions in other related fields. Therefore, it is the aim of this paper to review both the history of water distribution systems and those related fields in order to present a coherent summary of the complex multi-stranded discipline of water engineering. Related fields reviewed in this paper include devices for raising water and water pumps, water quality and water treatment, hydraulics, network analysis, and optimisation of water distribution systems. The review is brief and concise and allows the reader to quickly gain an understanding of the history and advancements of water distribution systems and analysis. Furthermore, the paper gives details of other existing publications where more information can be found.
