
**Authors:** Chand, Savin; Walsh, Kevin. **Date:** 2011. **Type:** Text; Journal article. **Relation:** Weather and Forecasting Vol. 26, no. 2 (2011), p. 150-165. **Full Text:** false. **Description:** An objective methodology for forecasting the probability of tropical cyclone (TC) formation in the Fiji, Samoa, and Tonga regions (collectively the FST region) using antecedent large-scale environmental conditions is investigated. Three separate probabilistic forecast schemes are developed using a probit regression approach in which model parameters are determined via Bayesian fitting. These schemes provide forecasts of TC formation from an existing system (i) within the next 24 h (W24h), (ii) within the next 48 h (W48h), and (iii) within the next 72 h (W72h). To assess the performance of the three forecast schemes in practice, verification methods such as the posterior expected error, Brier skill scores, and relative operating characteristic skill scores are applied. Results suggest that the W24h scheme formulated using large-scale environmental parameters, on average, performs better than that formulated using climatology and persistence (CLIPER) variables. In contrast, the W48h (W72h) scheme formulated using large-scale environmental parameters performs similarly to (more poorly than) that formulated using CLIPER variables. Therefore, large-scale environmental parameters (CLIPER variables) are preferred as predictors when forecasting TC formation in the FST region within 24 h (at least 48 h) using models formulated in the present investigation. © 2011 American Meteorological Society.
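A probit scheme of the kind described maps a linear combination of antecedent predictors to a formation probability through the standard normal CDF. A minimal sketch, with entirely hypothetical predictor values and coefficients (the paper's fitted Bayesian parameters are not reproduced here):

```python
import math

def probit_probability(beta0, betas, x):
    """P(formation) = Phi(beta0 + beta . x), where Phi is the standard normal CDF."""
    z = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical standardized environmental predictors (e.g. vorticity,
# shear, humidity anomalies) and illustrative coefficients
x = [0.8, -0.3, 1.1]
p = probit_probability(-1.0, [0.9, -0.5, 0.6], x)
```

In the actual schemes the coefficients would be posterior estimates obtained from the Bayesian fitting, not fixed constants as here.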

Structure learning of Bayesian networks using a new unrestricted dependency algorithm

- Taheri, Sona, Mammadov, Musa

**Authors:** Taheri, Sona; Mammadov, Musa. **Date:** 2012. **Type:** Text; Conference proceedings. **Description:** Bayesian Networks have received extensive attention in data mining due to their efficiency and reasonable predictive accuracy. A Bayesian Network is a directed acyclic graph in which each node represents a variable and each arc a probabilistic dependency between two variables. Constructing a Bayesian Network from data is a learning process divided into two steps: structure learning and parameter learning. In many domains, the structure is not known a priori and must be inferred from data. This paper presents an iterative unrestricted dependency algorithm for learning the structure of Bayesian Networks for binary classification problems. Numerical experiments are conducted on several real-world data sets, where continuous features are discretized using two different methods. The performance of the proposed algorithm is compared with the Naive Bayes, the Tree Augmented Naive Bayes, and the k-nearest neighbor classifiers.

Learning the naive bayes classifier with optimization models

- Taheri, Sona, Mammadov, Musa

**Authors:** Taheri, Sona; Mammadov, Musa. **Date:** 2013. **Type:** Text; Journal article. **Relation:** International Journal of Applied Mathematics and Computer Science Vol. 23, no. 4 (2013), p. 787-795. **Description:** Naive Bayes is among the simplest probabilistic classifiers. It often performs surprisingly well in many real-world applications, despite the strong assumption that all features are conditionally independent given the class. In the learning process of this classifier with a known structure, class probabilities and conditional probabilities are calculated using training data, and the values of these probabilities are then used to classify new observations. In this paper, we introduce three novel optimization models for the naive Bayes classifier in which both class probabilities and conditional probabilities are treated as variables. The values of these variables are found by solving the corresponding optimization problems. Numerical experiments are conducted on several real-world binary classification data sets, where continuous features are discretized using three different methods. The performance of these models is compared with the naive Bayes classifier, tree augmented naive Bayes, the SVM, C4.5, and the nearest neighbor classifier. The results demonstrate that the proposed models can significantly improve the performance of the naive Bayes classifier while maintaining its simple structure.
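As the abstract notes, learning a naive Bayes classifier with known structure amounts to estimating class priors and per-feature conditional probabilities from training data. A minimal sketch for binary features on a toy data set (not from the paper), using Laplace smoothing:

```python
import math

def train_nb(X, y, alpha=1.0):
    """Estimate class priors and P(x_j = 1 | class) with Laplace smoothing."""
    classes = sorted(set(y))
    n_feat = len(X[0])
    prior = {c: sum(1 for yi in y if yi == c) / len(y) for c in classes}
    cond = {}
    for c in classes:
        rows = [xi for xi, yi in zip(X, y) if yi == c]
        for j in range(n_feat):
            ones = sum(r[j] for r in rows)
            cond[(c, j)] = (ones + alpha) / (len(rows) + 2 * alpha)
    return classes, prior, cond

def predict_nb(model, x):
    """Pick the class maximizing log prior + sum of log conditionals."""
    classes, prior, cond = model
    def log_score(c):
        s = math.log(prior[c])
        for j, xj in enumerate(x):
            p = cond[(c, j)]
            s += math.log(p if xj == 1 else 1.0 - p)
        return s
    return max(classes, key=log_score)

# Toy binary data set for illustration
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = [1, 1, 0, 0]
model = train_nb(X, y)
```

The paper's contribution is to treat these probabilities as decision variables of an optimization problem rather than fixing them at the frequency estimates computed here.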

Modeling seasonal tropical cyclone activity in the Fiji region as a binary classification problem

**Authors:** Chand, Savin; Walsh, Kevin. **Date:** 2012. **Type:** Text; Journal article. **Relation:** Journal of Climate Vol. 25, no. 14 (2012), p. 5057-5071. **Full Text:** false. **Description:** This study presents a binary classification model for the prediction of tropical cyclone (TC) activity in the Fiji, Samoa, and Tonga regions (the FST region), using the accumulated cyclone energy (ACE) as a proxy of TC activity. A probit regression model, which is a suitable probability model for describing binary response data, is developed to determine at least a few months in advance (by July in this case) the probability that an upcoming TC season may have high or low TC activity. Years of "high TC activity" are defined as those years when ACE values exceeded the sample climatology (i.e., the 1985-2008 mean value). Model parameters are determined using the Bayesian method. Various combinations of the El Niño-Southern Oscillation (ENSO) indices and large-scale environmental conditions that are known to affect TCs in the FST region are examined as potential predictors. It was found that a set of predictors comprising low-level relative vorticity, upper-level divergence, and midtropospheric relative humidity provided the best skill in terms of minimum hindcast error. Results based on hindcast verification clearly suggest that the model predicts TC activity in the FST region with substantial skill up to the May-July preseason for all years considered in the analysis, in particular for ENSO-neutral years, when TC activity is known to show large variations. © 2012 American Meteorological Society.
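The binary response variable described above is simply an indicator of whether a season's ACE exceeds the sample climatology. A sketch with hypothetical ACE values (not the paper's 1985-2008 record):

```python
# Hypothetical seasonal ACE values; units and numbers are illustrative only
ace = {2001: 30.5, 2002: 12.1, 2003: 44.0, 2004: 9.8}

climatology = sum(ace.values()) / len(ace)  # sample mean over the period
# 1 = "high TC activity" season (ACE above climatology), 0 = "low"
labels = {year: int(v > climatology) for year, v in ace.items()}
```

These 0/1 labels are what the probit model is then fitted to predict from the antecedent environmental predictors.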

A Bayesian regression approach to seasonal prediction of tropical cyclones affecting the Fiji region

- Chand, Savin, Walsh, Kevin, Chan, Johnny

**Authors:** Chand, Savin; Walsh, Kevin; Chan, Johnny. **Date:** 2010. **Type:** Text; Journal article. **Relation:** Journal of Climate Vol. 23, no. 13 (2010), p. 3425-3445. **Full Text:** false. **Description:** This study presents seasonal prediction schemes for tropical cyclones (TCs) affecting the Fiji, Samoa, and Tonga (FST) region. Two separate Bayesian regression models are developed: (i) for cyclones forming within the FST region (FORM) and (ii) for cyclones entering the FST region (ENT). Predictors examined include various El Niño-Southern Oscillation (ENSO) indices and large-scale environmental parameters. Only those predictors that showed significant correlations with FORM and ENT are retained. Significant preseason correlations are found as early as May-July (approximately three months in advance). Therefore, May-July predictors are used to make initial predictions, and updated predictions are issued later using October-December early-cyclone-season predictors. A number of predictor combinations are evaluated through a cross-validation technique. Results suggest that a model based on relative vorticity and the Niño-4 index is optimal for predicting the annual number of TCs associated with FORM, as it has the smallest RMSE among its hindcasts (RMSE = 1.63). Similarly, the all-parameter-combined model, which includes the Niño-4 index and some large-scale environmental fields over the East China Sea, appears appropriate for predicting the annual number of TCs associated with ENT (RMSE = 0.98). While the all-parameter-combined ENT model appears to have good skill over all years, the May-July prediction of the annual number of TCs associated with FORM has two limitations. First, it underestimates (overestimates) formation for years where the onset of El Niño (La Niña) events is after the May-July preseason or where a previous La Niña (El Niño) event continued through May-July during its decay phase. Second, its performance in neutral conditions is quite variable. Overall, no significant skill can be achieved for neutral conditions even after an October-December update. This is contrary to the performance during El Niño or La Niña events, where model performance is improved substantially after an October-December early-cyclone-season update. © 2010 American Meteorological Society.
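Hindcast skill of the kind reported above (e.g. RMSE = 1.63) is typically assessed by refitting the model with each year withheld and scoring the out-of-sample predictions. A sketch of leave-one-out cross-validated RMSE for a one-predictor least-squares fit, standing in for the Bayesian regression, on made-up data:

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (yv - my) for x, yv in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loocv_rmse(xs, ys):
    """Leave each point out, refit, predict it, and collect the errors."""
    errs = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append(ys[i] - (a + b * xs[i]))
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```

A perfectly linear predictor-response relationship yields a cross-validated RMSE of essentially zero; any scatter about the line shows up directly in the score.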

Improving Naive Bayes classifier using conditional probabilities

- Taheri, Sona, Mammadov, Musa, Bagirov, Adil

**Authors:** Taheri, Sona; Mammadov, Musa; Bagirov, Adil. **Date:** 2010. **Type:** Text; Conference proceedings. **Description:** The Naive Bayes classifier is the simplest among Bayesian Network classifiers. It has been shown to be very efficient on a variety of data classification problems. However, the strong assumption that all features are conditionally independent given the class is often violated in many real-world applications. Therefore, improving the Naive Bayes classifier by alleviating the feature independence assumption has attracted much attention. In this paper, we develop a new version of the Naive Bayes classifier that does not assume independence of features. The proposed algorithm approximates the interactions between features by using conditional probabilities. We present results of numerical experiments on several real-world data sets, where continuous features are discretized using two different methods. These results demonstrate that the proposed algorithm significantly improves the performance of the Naive Bayes classifier while maintaining its robustness. © 2011, Australian Computer Society, Inc.

Learning Bayesian networks based on optimization approaches

**Authors:** Taheri, Sona. **Date:** 2012. **Type:** Text; Thesis; PhD. **Full Text:** false. **Description:** Learning accurate classifiers from preclassified data is a very active research topic in machine learning and artificial intelligence. There are numerous classifier paradigms, among which Bayesian Networks are very effective and well known in domains with uncertainty. Bayesian Networks are widely used representation frameworks for reasoning with probabilistic information. These models use graphs to capture dependence and independence relationships between feature variables, allowing a concise representation of the knowledge as well as efficient graph-based query processing algorithms. This representation is defined by two components: structure learning and parameter learning. The structure of the model is a directed acyclic graph. The nodes in the graph correspond to the feature variables in the domain, and the arcs (edges) show the causal relationships between them. A directed edge relates the variables so that the variable corresponding to the terminal node (child) is conditioned on the variable corresponding to the initial node (parent). Parameter learning represents probabilities and conditional probabilities based on prior information or past experience. The set of probabilities is represented in the conditional probability table. Once the network structure is constructed, probabilistic inferences are readily calculated and can be performed to predict the outcome of some variables based on observations of others. However, structure learning is a complex problem, since the number of candidate structures grows exponentially as the number of feature variables increases. This thesis is devoted to the development of methods for learning structures and parameters of Bayesian Networks. Different models based on optimization techniques are introduced to construct an optimal structure of a Bayesian Network. These models also consider the improvement of the Naive Bayes structure by developing new algorithms to alleviate the independence assumptions. We present various models to learn parameters of Bayesian Networks; in particular, we propose optimization models for the Naive Bayes and the Tree Augmented Naive Bayes by considering different objective functions. To solve the corresponding optimization problems in Bayesian Networks, we develop new optimization algorithms. Local optimization methods are introduced based on a combination of the gradient and Newton methods. It is proved that the proposed methods are globally convergent and have superlinear convergence rates. As a global search we use the global optimization method AGOP, implemented in the open software library GANSO. We apply the proposed local methods in combination with AGOP. Therefore, the main contributions of this thesis include (a) new algorithms for learning an optimal structure of a Bayesian Network; (b) new models for learning the parameters of Bayesian Networks with given structures; and (c) new optimization algorithms for optimizing the proposed models in (a) and (b). To validate the proposed methods, we conduct experiments across a number of real-world problems. Print version is available at: http://library.federation.edu.au/record=b1804607~S4 **Description:** Doctor of Philosophy
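The thesis combines gradient and Newton steps in its local methods. A one-dimensional sketch of that general idea (not the thesis's actual algorithm): take the Newton step when the second derivative is safely positive, and fall back to a plain gradient step otherwise.

```python
import math

def minimize_1d(df, d2f, x0, lr=0.1, tol=1e-10, max_iter=200):
    """Hybrid descent: Newton step where curvature is positive, gradient step otherwise."""
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:          # gradient small enough: converged
            break
        h = d2f(x)
        if h > 1e-12:
            x -= g / h            # Newton step (fast near the minimum)
        else:
            x -= lr * g           # gradient step (safe when curvature is unhelpful)
    return x

# Convex test objective f(x) = cosh(x - 1), minimized at x = 1
x_min = minimize_1d(lambda t: math.sinh(t - 1.0),
                    lambda t: math.cosh(t - 1.0), x0=3.0)
```

On a convex objective such as this, the Newton branch always applies and convergence is rapid; the gradient branch exists for regions of zero or negative curvature.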
