A new solar power prediction method based on feature clustering and hybrid-classification-regression forecasting
- Authors: Nejati, Maryam , Amjady, Nima
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Sustainable Energy Vol. 13, no. 2 (2022), p. 1188-1198
- Full Text: false
- Reviewed:
- Description: Solar generation systems are globally extending in terms of scale and number, which highlights the increasing importance of solar power forecasting. In this paper, a day-ahead solar power prediction method is proposed, including 1) a novel feature selecting/clustering approach based on relevancy and redundancy criteria and 2) an innovative hybrid-classification-regression forecasting engine. The proposed feature selecting/clustering approach filters out irrelevant features and partitions relevant features into two separate subsets to decrease the redundancy of features. Each of these two subsets is separately trained by one forecasting engine, and the final solar power prediction of the proposed method is obtained by a relevancy-based combination of these two forecasts. The proposed forecasting engine classifies the historical data based on the learnability of its constituent regression models and assigns each class of training samples to one regression model. Each regression model predicts the outputs of the test samples that belong to its class. The effectiveness of the proposed solar power prediction method is illustrated by testing on two real-world solar farms.
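The classify-then-regress routing described in this abstract can be sketched as follows. This is a minimal illustration under assumed toy data (two synthetic weather regimes with different input-output maps), not the authors' implementation: training samples are partitioned into classes, one least-squares model is fitted per class, and each test sample is routed to its class's regressor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for historical solar data: two regimes (e.g. clear vs. cloudy)
# with different linear input-output relationships. All data hypothetical.
X0 = rng.uniform([0.0, 0.0], [0.4, 1.0], size=(100, 2))
X1 = rng.uniform([0.6, 0.0], [1.0, 1.0], size=(100, 2))
y = np.r_[2.0 * X0[:, 1], 5.0 - 3.0 * X1[:, 1]]
X = np.vstack([X0, X1])

# "Classification" step: partition the training samples (plain k-means here,
# seeded at the extreme points so the split is deterministic).
centroids = X[[np.argmin(X[:, 0]), np.argmax(X[:, 0])]].copy()
for _ in range(20):
    labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([X[labels == j].mean(axis=0) for j in range(2)])

# "Regression" step: one least-squares model per class of training samples.
models = [np.linalg.lstsq(np.c_[X[labels == j], np.ones((labels == j).sum())],
                          y[labels == j], rcond=None)[0]
          for j in range(2)]

def predict(x):
    """Route a test sample to its nearest class, then apply that class's regressor."""
    j = int(np.argmin(((centroids - x) ** 2).sum(-1)))
    return float(np.r_[x, 1.0] @ models[j])
```

Because each regressor only has to fit one regime, a test point deep inside a regime is predicted by the model that actually learned that regime, which is the motivation for assigning each class of training samples to its own regression model.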
Subgraph adaptive structure-aware graph contrastive learning
- Authors: Chen, Zhikui , Peng, Yin , Yu, Shuo , Cao, Chen , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: Mathematics (Basel) Vol. 10, no. 17 (2022), p. 3047
- Full Text:
- Reviewed:
- Description: Graph contrastive learning (GCL) has attracted increasing attention and been widely applied to numerous graph learning tasks such as node classification and link prediction. Although it has achieved great success and even performed better than supervised methods in some tasks, most existing methods depend on node-level comparison, ignoring the rich semantic information contained in graph topology, especially for social networks. However, a higher-level comparison requires subgraph construction and encoding, which remain unsolved. To address this problem, we propose a subgraph adaptive structure-aware graph contrastive learning method (PASCAL) in this work, which is a subgraph-level GCL method. In PASCAL, we construct subgraphs by merging all motifs that contain the target node. Then we encode them on the basis of motif number distribution to capture the rich information hidden in subgraphs. By incorporating motif information, PASCAL can capture richer semantic information hidden in local structures compared with other GCL methods. Extensive experiments on six benchmark datasets show that PASCAL outperforms state-of-the-art graph contrastive learning and supervised methods in most cases.
Impact of node deployment and routing for protection of critical infrastructures
- Authors: Subhan, Fazli , Noreen, Madiha , Imran, Muhammad , Tariq, Moeenuddin , Khan, Asfandyar , Shoaib, Muhammad
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 11502-11514
- Full Text:
- Reviewed:
- Description: Recently, linear wireless sensor networks (LWSNs) have been attracting increasing attention because of their suitability for applications such as the protection of critical infrastructures. Most of these applications require an LWSN to remain operational for a longer period. However, the non-replenishable limited battery power of sensor nodes does not allow them to meet these expectations. Therefore, a shorter network lifetime is one of the most prominent barriers to large-scale deployment of LWSNs. Unlike most existing studies, in this paper, we analyze the impact of node placement and clustering on LWSN network lifetime. First, we categorize and classify existing node placement and clustering schemes for LWSNs and introduce various topologies for disparate applications. Then, we highlight the peculiarities of LWSN applications and discuss their unique characteristics. Several application domains of LWSNs are described. We present three node placement strategies (i.e., linear sequential, linear parallel, and grid) and various deployment methods such as random, uniform, decreasing distance, and triangular. Extensive simulation experiments are conducted to analyze the performance of three state-of-the-art routing protocols in the context of node deployment strategies and methods. The experimental results demonstrate that the node deployment strategies and methods significantly affect LWSN lifetime. © 2013 IEEE.
Machine learning in mental health: a scoping review of methods and applications
- Authors: Shatte, Adrian , Hutchinson, Delyse , Teague, Samantha
- Date: 2019
- Type: Text , Journal article
- Relation: Psychological Medicine Vol. 49, no. 9 (2019), p. 1426-1448
- Full Text: false
- Reviewed:
- Description: This paper aims to synthesise the literature on machine learning (ML) and big data applications for mental health, highlighting current research and applications in practice. We employed a scoping review methodology to rapidly map the field of ML in mental health. Eight health and information technology research databases were searched for papers covering this domain. Articles were assessed by two reviewers, and data were extracted on the article's mental health application, ML technique, data type, and study results. Articles were then synthesised via narrative review. Three hundred papers focusing on the application of ML to mental health were identified. Four main application domains emerged in the literature, including: (i) detection and diagnosis; (ii) prognosis, treatment and support; (iii) public health; and (iv) research and clinical administration. The most common mental health conditions addressed included depression, schizophrenia, and Alzheimer's disease. ML techniques used included support vector machines, decision trees, neural networks, latent Dirichlet allocation, and clustering. Overall, the application of ML to mental health has demonstrated a range of benefits across the areas of diagnosis, treatment and support, research, and clinical administration. With the majority of studies identified focusing on the detection and diagnosis of mental health conditions, it is evident that there is significant room for the application of ML to other areas of psychology and mental health. The challenges of using ML techniques are discussed, as well as opportunities to improve and advance the field.
VANET–LTE based heterogeneous vehicular clustering for driving assistance and route planning applications
- Authors: Ahmad, Iftikhar , Noor, Rafidah , Ahmedy, Ismail , Shah, Syed , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: Computer Networks Vol. 145, no. (2018), p. 128-140
- Full Text: false
- Reviewed:
- Description: The Internet of vehicles incorporates multiple access networks and technologies to connect vehicles on roads. These vehicles usually require the use of individual long-term evolution (LTE) connections to send/receive data to/from a remote server to make smart decisions regarding route planning and driving. An increasing number of vehicles on the roads may not only overwhelm LTE network usage but also incur added cost. Clustering helps minimize LTE usage, but the high speed of vehicles renders connections unstable and unreliable not only among vehicles but also between vehicles and the LTE network. Moreover, non-cooperative behavior among vehicles within a cluster is a bottleneck in sharing costly data acquired from the Internet. To address these issues, we propose a novel destination- and interest-aware clustering (DIAC) mechanism. DIAC primarily incorporates a strategic game-theoretic algorithm and a self-location calculation algorithm. The former allows vehicles to participate/cooperate and enforces a fair-use policy among the cluster members (CMs), whereas the latter enables CMs to calculate their location coordinates in the absence of a global positioning system under an urban topography. DIAC strives to reduce the frequency of link failures not only among vehicles but also between each vehicle and the 3G/LTE network. The mechanism also considers vehicle mobility and LTE link quality and exploits common interests among vehicles in the cluster formation phase. The performance of the DIAC mechanism is validated through extensive simulations, whose results demonstrate that the performance of the proposed mechanism is superior to that of similar and existing approaches. © 2018 Elsevier B.V.
A new optimal power flow approach for wind energy integrated power systems
- Authors: Rahmani, Shima , Amjady, Nima
- Date: 2017
- Type: Text , Journal article
- Relation: Energy Vol. 134, no. (2017), p. 349-359
- Full Text: false
- Reviewed:
- Description: Penetration of wind generation into power systems in recent years has greatly affected optimal power flow (OPF) because of the uncertain behavior of this new energy resource. In this research work, at first, a novel scenario generation approach is proposed to model wind power (WP) uncertainty. The proposed scenario generation approach includes construction of the probability density function (PDF) pertaining to WP forecast error, segmentation of the PDF by an efficient clustering approach to obtain both the optimal number and the optimal arrangement of the clusters, and the generation of WP scenarios from the optimized clusters through a roulette wheel mechanism. Secondly, this paper presents a new OPF framework based on DC network modeling for wind generation integrated power systems. Thirdly, a new out-of-sample analysis is presented to evaluate the long-run performance of the proposed OPF approach encountering various realizations of uncertain WPs. Finally, the performance of the proposed method for solving the WP-integrated OPF problem is extensively illustrated on the IEEE 30-bus and the IEEE 118-bus test systems and compared with the performance of the deterministic method and the Weibull PDF method. These comparisons illustrate better performance of the proposed method, while it has reasonable computation times. Highlights: • A new scenario generation approach is presented. • A new wind power integrated optimal power flow model is proposed. • A new out-of-sample analysis is presented. • The effectiveness of the proposed model is extensively illustrated.
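The scenario-generation chain summarised above (error PDF → optimized clusters → roulette wheel) can be sketched as follows. The cluster centres and probability masses below are hypothetical stand-ins for the optimized clusters, not values from the paper; the point is the roulette-wheel draw, which selects each cluster in proportion to its probability mass.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical clusters of the wind-power forecast-error PDF: each cluster
# has a representative error (its centre, in per-unit) and a probability mass.
cluster_errors = np.array([-0.15, -0.05, 0.00, 0.05, 0.20])
cluster_probs = np.array([0.10, 0.25, 0.30, 0.25, 0.10])

def roulette_wheel(values, probs, n_scenarios, rng):
    """Draw scenario values with probability proportional to cluster mass."""
    cum = np.cumsum(probs)                         # the "wheel" segments
    spins = rng.uniform(0.0, cum[-1], size=n_scenarios)
    idx = np.searchsorted(cum, spins)              # which segment each spin lands in
    return values[idx]

forecast = 50.0  # MW point forecast (illustrative)
scenarios = forecast * (1.0 + roulette_wheel(cluster_errors, cluster_probs, 5000, rng))
```

With enough draws, the empirical scenario mean converges to the forecast scaled by the expected error, so the scenario set reproduces the clustered error distribution that feeds the stochastic OPF.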
An overview of geospatial methods used in unintentional injury epidemiology
- Authors: Singh, Himalaya , Fortington, Lauren , Thompson, Helen , Finch, Caroline
- Date: 2016
- Type: Text , Journal article
- Relation: Injury Epidemiology Vol. 3, no. 32 (2016), p. 1-12
- Relation: http://purl.org/au-research/grants/nhmrc/1058737
- Full Text:
- Reviewed:
- Description: BACKGROUND: Injuries are a leading cause of death and disability around the world. Injury incidence is often associated with socio-economic and physical environmental factors. The application of geospatial methods has been recognised as important to gain greater understanding of the complex nature of injury and the diverse range of geographically varying risk factors. Therefore, the aim of this paper is to provide an overview of geospatial methods applied in unintentional injury epidemiological studies. METHODS: Nine electronic databases were searched for papers published in 2000-2015, inclusive. Included were papers reporting unintentional injuries using geospatial methods for one or more categories of spatial epidemiological methods (mapping; clustering/cluster detection; and ecological analysis). Results describe the included injury cause categories, types of data and details relating to the applied geospatial methods. RESULTS: From over 6,000 articles, 67 studies met all inclusion criteria. The major categories of injury data reported with geospatial methods were road traffic (n = 36), falls (n = 11), burns (n = 9), drowning (n = 4), and others (n = 7). Grouped by categories, mapping was the most frequently used method, with 62 (93%) studies applying this approach independently or in conjunction with other geospatial methods. Clustering/cluster detection methods were less common, applied in 27 (40%) studies. Three studies (4%) applied spatial regression methods (one study using a conditional autoregressive model and two studies using geographically weighted regression) to examine the relationship between injury incidence (drowning, road deaths) with aggregated data in relation to explanatory factors (socio-economic and environmental). CONCLUSION: The number of studies using geospatial methods to investigate unintentional injuries has increased over recent years.
While the majority of studies have focused on road traffic injuries, other injury cause categories, particularly falls and burns, have also demonstrated the application of these methods. Geospatial investigations of injury have largely been limited to mapping of data to visualise spatial structures. Use of more sophisticated approaches will help to understand a broader range of spatial risk factors, which remain under-explored when using traditional epidemiological approaches.
Constrained self organizing maps for data clusters visualization
- Authors: Mohebi, Ehsan , Bagirov, Adil
- Date: 2016
- Type: Text , Journal article
- Relation: Neural Processing Letters Vol. 43, no. 3 (2016), p. 849-869
- Full Text: false
- Reviewed:
- Description: High dimensional data visualization is one of the main tasks in the field of data mining and pattern recognition. The self organizing map (SOM) is a topology-visualizing tool that contains a set of neurons that gradually adapt to the input data space by competitive learning and form clusters. The topology preservation of the SOM strongly depends on the learning process. Due to this limitation one cannot guarantee the convergence of the SOM in data sets with clusters of arbitrary shape. In this paper, we introduce the Constrained SOM (CSOM), a new version of the SOM with a modified learning algorithm. The idea is to introduce an adaptive constraint parameter to the learning process to improve the topology preservation and mapping quality of the basic SOM. The computational complexity of the CSOM is lower than that of the SOM. The proposed algorithm is compared with similar topology preservation algorithms, and the numerical results on eight small to large real-world data sets demonstrate the efficiency of the proposed algorithm. © 2015, Springer Science+Business Media New York.
Functional specialisation and socio-economic factors in population change : A clustering study in non-metropolitan Australia
- Authors: Mardaneh, Karim
- Date: 2015
- Type: Text , Journal article
- Relation: Urban Studies Vol. 53, no. 8 (2015), p. 1591-1616
- Full Text: false
- Reviewed:
- Description: Although research has examined population growth and decline using functional specialisation, little attention has been paid to the possible combined effects of functional specialisation and socio-economic factors on population change. Using the Australian Bureau of Statistics Census Data 2001–2006 for statistical local areas, this study presents an investigation of the role of both functional specialisation and socio-economic factors in population change in non-metropolitan areas under the sustenance framework. The uniqueness of the study is twofold. Conceptually it develops a framework to compare the combined role of functional specialisation and socio-economic factors on population change; and, empirically it uses data mining (cluster analysis) techniques to investigate the extent of this combined role. The results show the significance of both functional specialisation and socio-economic factors. Policy implications of the study indicate the need to examine regional development and population change in relation to functional specialisation and socio-economic factors and their impact on viability of non-metropolitan areas. © Urban Studies Journal Limited 2015.
REPLOT : REtrieving Profile Links on Twitter for malicious campaign discovery
- Authors: Perez, Charles , Birregah, Babiga , Layton, Robert , Lemercier, Marc , Watters, Paul
- Date: 2015
- Type: Text , Journal article
- Relation: AI Communications Vol. 29, no. 1 (2015), p. 107-122
- Full Text:
- Reviewed:
- Description: Social networking sites are increasingly subject to malicious activities such as self-propagating worms, confidence scams and drive-by-download malware. The high number of users, associated with the presence of sensitive data such as personal or professional information, is certainly an unprecedented opportunity for attackers. These attackers are moving away from previous platforms of attack, such as email, towards social networking websites. In this paper, we present a full stack methodology for the identification of campaigns of malicious profiles on social networking sites, composed of maliciousness classification, campaign discovery and attack profiling. The methodology, named REPLOT for REtrieving Profile Links On Twitter, contains three major phases. First, profiles are analysed to determine whether they are more likely to be malicious or benign. Second, connections between suspected malicious profiles are retrieved using a late data fusion approach consisting of temporal and authorship analysis based models to discover campaigns. Third, the analysis of the discovered campaigns is performed to investigate the attacks. In this paper, we apply this methodology to a real world dataset, with a view to understanding the links between malicious profiles, their attack methods and their connections. Our analysis identifies a cluster of linked profiles focusing on propagating malicious links, as well as profiling two other major clusters of attacking campaigns. © 2016 - IOS Press and the authors. All rights reserved.
LiNearN : A new approach to nearest neighbour density estimator
- Authors: Wells, Jonathan , Ting, Kaiming , Washio, Takashi
- Date: 2014
- Type: Text , Journal article
- Relation: Pattern Recognition Vol. 47, no. 8 (2014), p. 2702-2720
- Full Text: false
- Reviewed:
- Description: Despite their widespread use, nearest neighbour density estimators have two fundamental limitations: O(n^2) time complexity and O(n) space complexity. Both limitations constrain nearest neighbour density estimators to small data sets only. Recent progress using indexing schemes has improved to near linear time complexity only. We propose a new approach, called LiNearN for Linear time Nearest Neighbour algorithm, that yields the first nearest neighbour density estimator having O(n) time complexity and constant space complexity, as far as we know. This is achieved without using any indexing scheme because LiNearN uses a subsampling approach in which the subsample size is significantly smaller than the data size. Like existing density estimators, our asymptotic analysis reveals that the new density estimator has a parameter to trade off between bias and variance. We show that algorithms based on the new nearest neighbour density estimator can easily scale up to data sets with millions of instances in anomaly detection and clustering tasks. Highlights: • Reject the premise that a NN algorithm must find the NN for every instance. • The first NN density estimator that has O(n) time complexity and O(1) space complexity. • These complexities are achieved without using any indexing scheme. • Our asymptotic analysis reveals that it trades off between bias and variance. • Easily scales up to large data sets in anomaly detection and clustering tasks.
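The subsampling idea summarised above can be sketched as follows. This is a simplified illustration of the principle (nearest-neighbour distance to small fixed-size subsamples, averaged over a few trials), with hypothetical parameter names; it is not the authors' exact estimator. Because each query only searches ψ subsampled points rather than all n, the cost is linear in n with constant model memory.

```python
import numpy as np

rng = np.random.default_rng(2)

def nn_density_score(data, queries, psi=256, t=10, rng=rng):
    """Average NN distance from each query to t random subsamples of size psi.

    Searching psi points per trial instead of all of `data` gives linear total
    time and O(psi * t) model memory, with no index over the full data set.
    A small score indicates a dense region; a large score, a sparse one.
    """
    scores = np.zeros(len(queries))
    for _ in range(t):
        sub = data[rng.choice(len(data), size=psi, replace=False)]
        dists = np.sqrt(((queries[:, None, :] - sub[None, :, :]) ** 2).sum(-1))
        scores += dists.min(axis=1)   # distance to nearest subsampled point
    return scores / t

# A dense Gaussian cluster plus a far-away query point (an "anomaly").
data = rng.standard_normal((5000, 2))
queries = np.array([[0.0, 0.0], [8.0, 8.0]])
dense_score, outlier_score = nn_density_score(data, queries)
```

The subsample size ψ and the number of trials t play the bias-variance trade-off role mentioned in the abstract: larger ψ tightens the NN distance (less bias), more trials t average out subsampling noise (less variance).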
Sensor selection for tracking multiple groups of targets
- Authors: Armaghani, Farzaneh , Gondal, Iqbal , Kamruzzaman, Joarder , Green, David
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Network and Computer Applications Vol. 46, no. (2014), p. 36-47
- Full Text: false
- Reviewed:
- Description: Group target tracking is a challenge for sensor networks. It occurs where large numbers of closely spaced targets move together in different groups. In these applications, the sensor selection scheme plays a vital role in extending network lifetime while providing high tracking accuracy. Existing schemes cause an extreme imbalance between energy usage and tracking accuracy. They are capable of tracking only individual groups and without using prior knowledge about the groups. These problems make them impractical for group target tracking. With the aim of balancing the trade-off between lifetime and accuracy, we present a novel Multi-Sensor Group Tracking (MSGT) scheme. MSGT comprises the following steps to accomplish concurrent tracking of multiple groups: (1) Clustering to capture changes in the behavioural properties of groups, such as formation, merging, and splitting; (2) Sensor selection to activate the contributory sensors for the estimated group regions; and (3) Group tracking using the activated sensors. We develop a probabilistic decision-making strategy that triggers the clustering step adaptively with any detected change in group behavioural patterns. The sensor selection step coordinates periodic selection of leader and tracking sensor nodes in a distributed manner. We introduce cost metrics that include the sensor's energy parameters in the selection of active sensors that fully cover the group regions. The tracking step is a Bayesian modelling of the target groups which uses a particle filtering algorithm to estimate the group locations. Simulation results show that our scheme achieves substantial improvements over existing approaches in terms of network lifetime and tracking accuracy.
Applications of functional data analysis : A systematic review
- Authors: Ullah, Shahid , Finch, Caroline
- Date: 2013
- Type: Text , Journal article
- Relation: BMC Medical Research Methodology Vol. 13, no. 43 (2013), p. 1-12
- Relation: http://purl.org/au-research/grants/nhmrc/565900
- Full Text:
- Reviewed:
- Description: Background: Functional data analysis (FDA) is increasingly being used to better analyze, model and predict time series data. Key aspects of FDA include the choice of smoothing technique, data reduction, adjustment for clustering, functional linear modeling and forecasting methods. Methods: A systematic review using 11 electronic databases was conducted to identify FDA application studies published in the peer-review literature during 1995–2010. Papers reporting methodological considerations only were excluded, as were non-English articles. Results: In total, 84 FDA application articles were identified; 75.0% of the reviewed articles have been published since 2005. Application of FDA has appeared in a large number of publications across various fields of sciences; the majority is related to biomedicine applications (21.4%). Overall, 72 studies (85.7%) provided information about the type of smoothing techniques used, with B-spline smoothing (29.8%) being the most popular. Functional principal component analysis (FPCA) for extracting information from functional data was reported in 51 (60.7%) studies. One-quarter (25.0%) of the published studies used functional linear models to describe relationships between explanatory and outcome variables and only 8.3% used FDA for forecasting time series data. Conclusions: Despite its clear benefits for analyzing time series data, full appreciation of the key features and value of FDA has been limited to date, though the applications show its relevance to many public health and biomedical problems. Wider application of FDA to all studies involving correlated measurements should allow better modeling of, and predictions from, such data in the future, especially as FDA makes no a priori age and time effects assumptions.
Application of rank correlation, clustering and classification in information security
- Authors: Beliakov, Gleb , Yearwood, John , Kelarev, Andrei
- Date: 2012
- Type: Text , Journal article
- Relation: Journal of Networks Vol. 7, no. 6 (2012), p. 935-945
- Full Text:
- Reviewed:
- Description: This article is devoted to experimental investigation of a novel application of a clustering technique recently introduced by the authors, which uses robust and stable consensus functions in information security, where it is often necessary to process large data sets and monitor outcomes in real time, as is required, for example, for intrusion detection. Here we concentrate on a particular case of application to profiling of phishing websites. First, we apply several independent clustering algorithms to a randomized sample of data to obtain independent initial clusterings. The Silhouette index is used to determine the number of clusters. Second, rank correlation is used to select a subset of features for dimensionality reduction. We investigate the effectiveness of the Pearson Linear Correlation Coefficient, the Spearman Rank Correlation Coefficient and the Goodman-Kruskal Correlation Coefficient in this application. Third, we use a consensus function to combine the independent initial clusterings into one consensus clustering. Fourth, we train fast supervised classification algorithms on the resulting consensus clustering in order to enable them to process the whole large data set as well as new data. The precision and recall of classifiers at the final stage of this scheme are critical for the effectiveness of the whole procedure. We investigated various combinations of several correlation coefficients, consensus functions, and a variety of supervised classification algorithms. © 2012 Academy Publisher.
Small-to-medium enterprises and economic growth : A comparative study of clustering techniques
- Authors: Mardaneh, Karim
- Date: 2012
- Type: Text , Journal article
- Relation: Journal of Modern Applied Statistical Methods Vol. 11, no. 2 (2012), p. 469-478
- Full Text:
- Reviewed:
- Description: Small-to-medium enterprises (SMEs) in regional (non-metropolitan) areas are considered when economic planning may require large data sets and sophisticated clustering techniques. The economic growth of regional areas was investigated using four clustering algorithms. Empirical analysis demonstrated that the modified global k-means algorithm outperformed the other algorithms. © 2012 JMASM, Inc.
A general stochastic clustering method for automatic cluster discovery
- Authors: Tan, Swee , Ting, Kaiming , Teng, Shyh
- Date: 2011
- Type: Text , Journal article
- Relation: Pattern Recognition Vol. 44, no. 10-11 (2011), p. 2786-2799
- Full Text: false
- Reviewed:
- Description: Finding clusters in data is a challenging problem. Given a dataset, we usually do not know the number of natural clusters hidden in the dataset. The problem is exacerbated when there is little or no additional information except the data itself. This paper proposes a general stochastic clustering method that is a simplification of the nature-inspired ant-based clustering approach. It begins with a basic solution and then performs stochastic search to incrementally improve the solution until the underlying clusters emerge, resulting in automatic cluster discovery in datasets. This method differs from several recent methods in that it does not require users to input the number of clusters and it makes no explicit assumption about the underlying distribution of a dataset. Our experimental results show that the proposed method performs better than several existing methods in terms of clustering accuracy and efficiency in the majority of the datasets used in this study. Our theoretical analysis shows that the proposed method has linear time and space complexities, and our empirical study shows that it can accurately and efficiently discover clusters in large datasets in which many existing methods fail to run.
The choice of a similarity measure with respect to its sensitivity to outliers
- Authors: Rubinov, Alex , Sukhorukova, Nadezda , Ugon, Julien
- Date: 2010
- Type: Text , Journal article
- Relation: Dynamics of Continuous, Discrete and Impulsive Systems Series B: Applications and Algorithms Vol. 17, no. 5 (2010), p. 709-721
- Full Text:
- Reviewed:
- Description: This paper examines differences in the choice of similarity measures with respect to their sensitivity to outliers in clustering problems, formulated as mathematical programming problems. Namely, we focus on the study of norms (norm-based similarity measures) and convex functions of norms (function-norm-based similarity measures). The study consists of two parts: the study of theoretical models and numerical experiments. The main result of this study is a criterion for sensitivity to outliers with respect to the corresponding similarity measure. In particular, the obtained results show that norm-based similarity measures are not sensitive to outliers, whilst the very widely used squared Euclidean norm similarity measure (least squares) is sensitive to outliers. Copyright © 2010 Watam Press.
Rees matrix constructions for clustering of data
- Authors: Kelarev, Andrei , Watters, Paul , Yearwood, John
- Date: 2009
- Type: Journal article
- Relation: Journal of the Australian Mathematical Society Vol. 87, no. 3 (2009), p. 377-393
- Relation: http://purl.org/au-research/grants/arc/DP0211866
- Full Text:
- Reviewed:
- Description: This paper continues the investigation of semigroup constructions motivated by applications in data mining. We give a complete description of the error-correcting capabilities of a large family of clusterers based on Rees matrix semigroups well known in semigroup theory. This result strengthens and complements previous formulas recently obtained in the literature. Examples show that our theorems do not generalize to other classes of semigroups.
Statistical tests conducted with school environment data : The effect of teachers being clustered in schools
- Authors: Dorman, Jeffrey
- Date: 2009
- Type: Text , Journal article
- Relation: Learning Environments Research Vol. 12, no. 2 (2009), p. 85-99
- Full Text: false
- Reviewed:
- Description: This article discusses the effect of clustering on statistical tests conducted with school environment data. Because most school environment studies involve the collection of data from teachers nested within schools, the hierarchical nature of these data cannot be ignored. In particular, this article considers the influence of intraschool correlations on tests of statistical significance conducted with the individual teacher as the unit of analysis. Theory that adjusts t test scores for nested data in two-group comparisons is presented and applied to school environment data. This article demonstrates that Type I error rates inflate greatly as the intraschool correlation increases. Because data analysis techniques that recognise the clustering of teachers in schools are essential, it is recommended that either multilevel analysis or adjustments to statistical parameters be undertaken in school environment studies involving nested data.
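The inflation this article demonstrates can be quantified with the standard design-effect formula, DEFF = 1 + (m − 1)ρ, where m is the cluster (school) size and ρ the intraschool correlation. The sketch below uses illustrative numbers, not values from the article, to show how modest intraschool correlations erode the effective sample size:

```python
import math

def design_effect(m, icc):
    """Variance inflation for m teachers per school with intraschool correlation icc."""
    return 1.0 + (m - 1) * icc

def effective_sample_size(n_total, m, icc):
    """Number of independent teachers equivalent to the clustered sample."""
    return n_total / design_effect(m, icc)

# Illustrative study: 40 schools x 20 teachers, intraschool correlation 0.15.
deff = design_effect(20, 0.15)                # 3.85
n_eff = effective_sample_size(800, 20, 0.15)  # roughly 208 of the 800 teachers
se_inflation = math.sqrt(deff)                # naive t statistics are this factor too large
```

Dividing naive t statistics by the square root of the design effect (or, better, fitting a multilevel model) restores the nominal Type I error rate that teacher-level analysis inflates.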
Methods for global optimization of nonsmooth functions with applications
- Authors: Rubinov, Alex
- Date: 2006
- Type: Text , Journal article
- Relation: Applied and Computational Mathematics Vol. 5, no. 1 (2006), p. 3-15
- Full Text: false
- Reviewed:
- Description: In this survey paper we present some results obtained in the Centre for Informatics and Applied Optimization (CIAO) at the University of Ballarat, Australia, in the area of numerical global optimization. We describe a conceptual scheme of two methods developed in CIAO and present results of numerical experiments with some real world problems. The paper is based on a plenary lecture given by the author at the First International Conference on Control and Optimization with Industrial Applications, Baku, Azerbaijan, 2005.