A scaled boundary polygon formulation for elasto-plastic analyses
- Authors: Ooi, Ean Tat , Song, Chongmin , Tin-Loi, Francis
- Date: 2014
- Type: Text , Journal article
- Relation: Computer Methods in Applied Mechanics and Engineering Vol. 268 (January 2014), p. 905-937
- Full Text:
- Reviewed:
- Description: This study presents a novel scaled boundary polygon formulation to model elasto-plastic material responses in structures. The polygons have flexible mesh generation capabilities and are more accurate than standard finite elements, especially for problems with cracks and notches. Shape functions of arbitrary n-sided polygons are constructed using the scaled boundary finite element method. These shape functions are conforming and linearly complete. When modeling a crack, strain singularities are analytically modeled without enrichment. Standard finite element procedures are used to formulate the stiffness matrix and residual load vector. The nonlinear material constitutive matrix and the internal stresses are approximated locally in each polygon by a polynomial function. The stiffness matrix and the residual load vector are matrix power integrals that can be evaluated analytically even when a strain singularity is present. Standard nonlinear equation solvers, e.g. the modified Newton–Raphson algorithm, are used to obtain the nonlinear response of the structure. The proposed formulation is validated using several numerical benchmarks.
Detecting K-complexes for sleep stage identification using nonsmooth optimization
- Authors: Moloney, David , Sukhorukova, Nadezda , Vamplew, Peter , Ugon, Julien , Li, Gang , Beliakov, Gleb , Philippe, Carole , Amiel, Hélène , Ugon, Adrien
- Date: 2012
- Type: Text , Journal article
- Relation: ANZIAM Journal Vol. 52, no. 4 (2012), p. 319-332
- Full Text:
- Reviewed:
- Description: The process of sleep stage identification is a labour-intensive task that involves the specialized interpretation of the polysomnographic signals captured from a patient's overnight sleep session. Automating this task has proven to be challenging for data mining algorithms because of noise, complexity and the extreme size of data. In this paper we apply nonsmooth optimization to extract key features that lead to better accuracy. We develop a specific procedure for identifying K-complexes, a special type of brain wave crucial for distinguishing sleep stages. The procedure contains two steps. We first extract "easily classified" K-complexes, and then apply nonsmooth optimization methods to extract features from the remaining data and refine the results from the first step. Numerical experiments show that this procedure is efficient for detecting K-complexes. It is also found that most classification methods perform significantly better on the extracted features. © 2012 Australian Mathematical Society.
Gene regulatory network modeling via global optimization of high-order dynamic Bayesian network
- Authors: Nguyen, Vinh , Chetty, Madhu , Coppel, Ross , Wangikar, Pramod
- Date: 2012
- Type: Text , Journal article
- Relation: BMC Bioinformatics Vol. 13, no. 131 (2012), p. 1-16
- Full Text:
- Reviewed:
- Description: Background Dynamic Bayesian network (DBN) is among the mainstream approaches for modeling various biological networks, including the gene regulatory network (GRN). Most current methods for learning DBN employ either local search such as hill-climbing, or a meta stochastic global optimization framework such as genetic algorithm or simulated annealing, which are only able to locate sub-optimal solutions. Further, current DBN applications have essentially been limited to small sized networks. Results To overcome the above difficulties, we introduce here a deterministic global optimization based DBN approach for reverse engineering genetic networks from time course gene expression data. For such DBN models that consist only of inter time slice arcs, we show that there exists a polynomial time algorithm for learning the globally optimal network structure. The proposed approach, named GlobalMIT+, employs the recently proposed information theoretic scoring metric named mutual information test (MIT). GlobalMIT+ is able to learn high-order time delayed genetic interactions, which are common to most biological systems. Evaluation of the approach using both synthetic and real data sets, including a 733 cyanobacterial gene expression data set, shows significantly improved performance over other techniques. Conclusions Our studies demonstrate that deterministic global optimization approaches can infer large scale genetic networks.
A model of the circadian clock in the cyanobacterium Cyanothece sp. ATCC 51142
- Authors: Nguyen, Vinh , Chetty, Madhu , Coppel, Ross , Gaudana, Sandeep , Wangikar, Pramod
- Date: 2013
- Type: Text , Journal article
- Relation: BMC Bioinformatics Vol. 14, Supplement 2 (2013), p. S14-1-S14-9
- Full Text:
- Reviewed:
- Description: Background The overconsumption of fossil fuels has led to growing concerns over climate change and global warming. Increasing research activity has been directed towards viable alternative biofuel sources. Of the several biofuel platforms, cyanobacteria possess great potential, owing to their ability to accumulate biomass tens of times faster than traditional oilseed crops. The cyanobacterium Cyanothece sp. ATCC 51142 has recently attracted considerable research interest as a model organism for such research. Cyanothece can perform both photosynthesis and nitrogen fixation efficiently within the same cell, and has recently been shown to produce biohydrogen--a byproduct of nitrogen fixation--at rates several-fold higher than previously described hydrogen-producing photosynthetic microbes. Since the key enzyme for nitrogen fixation is very sensitive to the oxygen produced by photosynthesis, Cyanothece employs a sophisticated temporal separation scheme, in which nitrogen fixation occurs at night and photosynthesis by day. At the core of this temporal separation scheme is a robust clocking mechanism, which so far has not been thoroughly studied. Understanding how this circadian clock interacts with and harmonizes the global transcription of key cellular processes is one of the keys to realizing the inherent potential of this organism. Results In this paper, we employ several state-of-the-art bioinformatics techniques to study the core circadian clock in Cyanothece sp. ATCC 51142 and its interactions with other key cellular processes. We employ comparative genomics techniques to map the circadian clock genes and genetic interactions from another cyanobacterial species, namely Synechococcus elongatus PCC 7942, whose circadian clock has been much more thoroughly investigated.
Using time series gene expression data for Cyanothece, we employ gene regulatory network reconstruction techniques to learn this network de novo, and compare the reconstructed network against the interactions currently reported in the literature. Next, we build a computational model of the interactions between the core clock and other cellular processes, and show how this model can predict the behaviour of the system under changing environmental conditions. The constructed models significantly advance our understanding of the Cyanothece circadian clock functional mechanisms.
Optimization and matrix constructions for classification of data
- Authors: Kelarev, Andrei , Yearwood, John , Vamplew, Peter , Abawajy, Jemal , Chowdhury, Morshed
- Date: 2011
- Type: Journal article
- Relation: New Zealand Journal of Mathematics Vol. 41 (2011), p. 65-73
- Full Text:
- Reviewed:
- Description: Max-plus algebras and more general semirings have many useful applications and have been actively investigated. On the other hand, structural matrix rings are also well known and have been considered by many authors. The main theorem of this article completely describes all optimal ideals in the more general structural matrix semirings. Originally, our investigation of these ideals was motivated by applications in data mining for the design of multiple classification systems combining several individual classifiers.
Chemical characterization of MEA degradation in PCC pilot plants operating in Australia
- Authors: Cruickshank, Alicia , Verheyen, Vincent , Adeloju, Samuel , Meuleman, Erik , Chaffee, Alan , Cottrell, Aaron , Feron, Paul
- Date: 2013
- Type: Text , Journal article
- Relation: Energy Procedia Vol. 37 (2013), p. 877-882
- Full Text:
- Reviewed:
- Description: An important step towards commercial-scale post-combustion CO2 capture from coal-fired power stations is understanding solvent degradation. Laboratory-scale trials have identified three main solvent degradation pathways for 30% MEA: oxidative degradation, carbamate polymerization and formation of heat stable salts. This paper probes the semi-volatile organic compounds produced from a single batch of 30% MEA which was used to capture CO2 from a black coal-fired power station (Tarong, Queensland, Australia) for approximately 700 hours, followed by 500 hours at a brown coal-fired power station (Loy Yang, Victoria, Australia). Comparisons are made between the compounds identified in this aged solvent system and the MEA degradation reactions described in the literature. Most of the semi-volatile compounds tentatively identified by GC/MS have previously been reported in laboratory-scale degradation trials. Our preliminary results show that low levels of degradation products were present in samples after use in the pilot plant at Tarong (black coal) and subsequent 13 months of storage, but much higher concentrations were later found in the same solvent during its use in the pilot plant at Loy Yang Power (brown coal). Further work includes identifying the cause of poor GC/MS repeatability and investigating the relative rates of the reactions described in the literature. The impact of inorganic anions and dissolved metals on MEA degradation will also be explored.
Learning the naive Bayes classifier with optimization models
- Authors: Taheri, Sona , Mammadov, Musa
- Date: 2013
- Type: Text , Journal article
- Relation: International Journal of Applied Mathematics and Computer Science Vol. 23, no. 4 (2013), p. 787-795
- Full Text:
- Reviewed:
- Description: Naive Bayes is among the simplest probabilistic classifiers. It often performs surprisingly well in many real world applications, despite the strong assumption that all features are conditionally independent given the class. In the learning process of this classifier with the known structure, class probabilities and conditional probabilities are calculated using training data, and then values of these probabilities are used to classify new observations. In this paper, we introduce three novel optimization models for the naive Bayes classifier where both class probabilities and conditional probabilities are considered as variables. The values of these variables are found by solving the corresponding optimization problems. Numerical experiments are conducted on several real world binary classification data sets, where continuous features are discretized by applying three different methods. The performances of these models are compared with the naive Bayes classifier, tree augmented naive Bayes, the SVM, C4.5 and the nearest neighbor classifier. The obtained results demonstrate that the proposed models can significantly improve the performance of the naive Bayes classifier, yet at the same time maintain its simple structure.
Calibration of an articulated CMM using stochastic approximations
- Authors: Sultan, Ibrahim , Puthiyaveettil, Prajeesh
- Date: 2012
- Type: Text , Journal article
- Relation: International Journal of Advanced Manufacturing Technology Vol. 63, no. 1-4 (2012), p. 201-207
- Full Text:
- Reviewed:
- Description: A coordinate measuring machine (CMM) is meant to digitise the spatial locations of points and feed the resulting measurements to a CAD system for storing and processing. For reliable utilisation of a CMM, a calibration procedure is often undertaken to eliminate the inaccuracies which result from manufacturing, assembly and installation errors. In this paper, an Immersion digitizer coordinate measuring machine has been calibrated using an accurately manufactured master cuboid fixture. This CMM has been designed as an articulated manipulator to enhance its dexterity and versatility. As such, the calibration problem is tackled with the aid of a kinematic model similar to those employed for the analysis of serial robots. In addition, a stochastic optimisation technique is used to identify the parameters of the kinematic model so that accurate performance can be achieved. The experimental results demonstrate the effectiveness of this method, whereby the measuring accuracy has been improved considerably. © 2012 Springer-Verlag London Limited.
Incorporating time-delays in S-System model for reverse engineering genetic networks
- Authors: Chowdhury, Ahsan , Chetty, Madhu , Nguyen, Vinh
- Date: 2013
- Type: Text , Journal article
- Relation: BMC Bioinformatics Vol. 14 (2013), p. 1-22
- Full Text:
- Reviewed:
- Description: Background In any gene regulatory network (GRN), the complex interactions occurring amongst transcription factors and target genes can be either instantaneous or time-delayed. However, many existing modeling approaches currently applied for inferring GRNs are unable to represent both these interactions simultaneously. As a result, all these approaches fail to detect important interactions of the other type. The S-System model, a differential equation-based approach that has been increasingly applied for modeling GRNs, also suffers from this limitation. In fact, all existing S-System-based modeling approaches have been designed to capture only instantaneous interactions, and are unable to infer time-delayed interactions. Results In this paper, we propose a novel Time-Delayed S-System (TDSS) model which uses a set of delay differential equations to represent the system dynamics. The ability to incorporate time-delay parameters in the proposed S-System model enables simultaneous modeling of both instantaneous and time-delayed interactions. Furthermore, the delay parameters are not limited to just positive integer values (corresponding to time stamps in the data), but can also take fractional values. Moreover, we also propose a new criterion for model evaluation exploiting the sparse and scale-free nature of GRNs to effectively narrow down the search space, which not only reduces the computation time significantly but also improves model accuracy. The evaluation criterion systematically adapts the max-min in-degrees and also systematically balances the effect of network accuracy and complexity during optimization. Conclusion The four well-known performance measures applied to the experimental studies on synthetic networks with various time-delayed regulations clearly demonstrate that the proposed method can capture both instantaneous and delayed interactions correctly with high precision. 
The experiments carried out on two well-known real-life networks, namely IRMA and the SOS DNA repair network in Escherichia coli, show a significant improvement compared with other state-of-the-art approaches for GRN modeling.
The impact of handwriting difficulties on compositional quality in children with developmental coordination disorder
- Authors: Prunty, Mellissa , Barnett, Anna , Wilmut, Kate , Plumb, Mandy
- Date: 2016
- Type: Text , Journal article
- Relation: British Journal of Occupational Therapy Vol. 79, no. 10 (2016), p. 591-597
- Full Text:
- Reviewed:
- Description: Introduction There is substantial evidence to support the relationship between transcription skills (handwriting and spelling) and compositional quality. For children with developmental coordination disorder, handwriting can be particularly challenging. While recent research has aimed to investigate their handwriting difficulties in more detail, the impact of transcription on their compositional quality has not previously been examined. The aim of this exploratory study was to examine compositional quality in children with developmental coordination disorder and to ascertain whether their transcription skills influence writing quality. Method Twenty-eight children with developmental coordination disorder participated in the study, along with 28 typically developing age- and gender-matched controls. The children completed the 'free-writing' task from the Detailed Assessment of Speed of Handwriting tool, which was evaluated for compositional quality using the Wechsler objective language dimensions. Results The children with developmental coordination disorder performed significantly below their typically developing peers on five of the six Wechsler objective language dimensions items. They also had a higher percentage of misspelled words. Regression analyses indicated that the number of words produced per minute and the percentage of misspelled words explained 55% of the variance for compositional quality. Conclusion The handwriting difficulties so commonly reported in children with developmental coordination disorder have wider repercussions for the quality of written composition.
DRfit : A Java tool for the analysis of discrete data from multi-well plate assays
- Hofmann, Andreas, Preston, Sarah, Cross, Megan, Herath, Dilrukshi, Simon, Anne, Gasser, Robin
- Authors: Hofmann, Andreas , Preston, Sarah , Cross, Megan , Herath, Dilrukshi , Simon, Anne , Gasser, Robin
- Date: 2019
- Type: Text , Journal article
- Relation: BMC Bioinformatics Vol. 20, no. (2019), p. 1-6
- Full Text:
- Reviewed:
- Description: Background: The analysis of replicates in sets of discrete data, typically acquired in multi-well plate formats, is a recurring task in many contemporary areas of the life sciences. The availability of accessible cross-platform data analysis tools for such fundamental tasks in varied projects and environments is an important prerequisite to ensuring a reliable and timely turnaround, as well as to providing practical analytical tools for student training. Results: We have developed an easy-to-use, interactive software tool for the analysis of multiple data sets comprising replicates of discrete bivariate data points. For each dataset, the software identifies the replicate data points from a defined matrix layout and calculates their means and standard errors. The averaged values are then automatically fitted using either a linear or a logistic dose response function. Conclusions: DRfit is a practical and convenient tool for the analysis of one or multiple sets of discrete data points acquired as replicates from multi-well plate assays. The design of the graphical user interface and the built-in analysis features make it a flexible and useful tool for a wide range of different assays.
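The workflow this abstract describes (group replicate readings, compute means and standard errors, fit the averaged points) can be sketched in a few lines. This is an illustrative re-implementation of the general idea, not DRfit's Java code; function and variable names are the sketch's own, and only the linear fit option is shown.

```python
# Sketch of the replicate-averaging + fitting workflow from the abstract.
# All names are illustrative; DRfit itself is a Java tool.
import math

def summarize_replicates(replicates):
    """replicates: dict mapping dose -> list of replicate readings.
    Returns dose -> (mean, standard error of the mean)."""
    summary = {}
    for dose, values in replicates.items():
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / (n - 1) if n > 1 else 0.0
        summary[dose] = (mean, math.sqrt(var / n) if n > 1 else 0.0)
    return summary

def fit_linear(points):
    """Ordinary least-squares line through (dose, mean) pairs."""
    xs, ys = zip(*points)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

data = {0.0: [0.1, 0.2], 1.0: [1.1, 0.9], 2.0: [2.0, 2.2]}
summary = summarize_replicates(data)
slope, intercept = fit_linear([(d, m) for d, (m, _) in summary.items()])
```

The logistic option mentioned in the abstract would replace `fit_linear` with a nonlinear least-squares fit of a sigmoid; that step needs an iterative optimizer and is omitted here.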
On topology optimization and canonical duality method
- Authors: Gao, David
- Date: 2018
- Type: Text , Journal article
- Relation: Computer Methods in Applied Mechanics and Engineering Vol. 341, no. (2018), p. 249-277
- Full Text:
- Reviewed:
- Description: Topology optimization for general materials is correctly formulated as a bi-level knapsack problem, which is considered to be NP-hard in global optimization and computer science. By using canonical duality theory (CDT) developed by the author, the linear knapsack problem can be solved analytically to obtain a global optimal solution at each design iteration. Uniqueness, existence, and NP-hardness are discussed. The novel CDT method for general topology optimization is refined and tested on both 2-D and 3-D benchmark problems. Numerical results show that, without using filters or any other artificial technique, the CDT method can produce exactly 0-1 optimal density distributions with almost no checkerboard pattern. Its performance and novelty are compared with the popular SIMP and BESO approaches. Additionally, some mathematical and conceptual mistakes in the literature are explicitly addressed. A brief review of the canonical duality theory for modeling multi-scale complex systems and for solving general nonconvex/discrete problems is given in the Appendix. This paper demonstrates a simple truth: elegant designs come from a correct model and theory. © 2018
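The abstract's key observation is that the per-iteration density update reduces to a *linear* 0-1 knapsack problem. As a hedged illustration only (this is not the paper's CDT derivation): for a linear objective with equal element volumes, the common uniform-mesh case, the knapsack is solved exactly by keeping the elements with the largest sensitivities until the volume bound is met.

```python
# Illustrative sketch: exact solution of the linear 0-1 knapsack that arises
# per design iteration, in the special case of equal element volumes.
# Not the canonical duality (CDT) method of the paper.
def linear_knapsack_uniform(sensitivities, max_solid):
    """Return a 0-1 density vector rho that keeps the max_solid elements
    with the largest sensitivity coefficients (all volumes assumed equal)."""
    order = sorted(range(len(sensitivities)),
                   key=lambda i: sensitivities[i], reverse=True)
    rho = [0] * len(sensitivities)
    for i in order[:max_solid]:
        rho[i] = 1
    return rho

rho = linear_knapsack_uniform([0.9, 0.1, 0.5, 0.7], max_solid=2)
```

With unequal volumes the problem is the general 0-1 knapsack; the paper's point is that CDT still yields an analytic global optimum there, which the greedy rule above does not guarantee.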
Characterizations of minimal elements of topical functions on semimodules with applications
- Hassani, Sara, Mohebi, Hossein
- Authors: Hassani, Sara , Mohebi, Hossein
- Date: 2017
- Type: Text , Journal article
- Relation: Linear Algebra and Its Applications Vol. 520, no. (2017), p. 104-124
- Full Text:
- Reviewed:
- Description: In this paper, we first give characterizations of the superdifferential of extended valued topical functions defined on a semimodule with values in a semifield. Next, we characterize minimal elements of the upper support set of extended valued topical functions. Finally, as an application, we present a necessary and sufficient condition for global maximum of the difference of two strictly topical functions defined on a semimodule. (C) 2017 Elsevier Inc. All rights reserved.
Modelling optimal warranty price for lifetime policies taking into account the uncertainties in life measures
- Rahman, Anisur, Chattopadhyay, Gopinath
- Authors: Rahman, Anisur , Chattopadhyay, Gopinath
- Date: 2018
- Type: Text , Journal article
- Relation: International Journal of Management Science and Engineering Management Vol. 13, no. 2 (2018), p. 84-90
- Full Text:
- Reviewed:
- Description: Owing to the assurance of longer reliable service life and greater customer peace of mind, products with a lifetime warranty are becoming more and more popular. Under such policies, both the manufacturer and the buyer are exposed to uncertainties and risks of warranty pricing and product performance since product lifetimes are uncertain and are not well defined in these policies. Considering the uncertainties in the measure of lifetime (useful life), this paper extends previous work of the authors [Rahman, A., & Chattopadhay, G. N. (2010). Modelling risks to manufacturer and buyer for lifetime warranty policies. International Journal of Management Science and Engineering Management, 5, 203–209] to determine the optimal warranty price. Risk preference models are developed to find the optimal warranty price through the use of the manufacturer’s utility function for profit and the buyer’s utility function for repair costs. The sensitivity of the risk preference models is analysed using numerical examples with respect to factors such as the buyer’s and the manufacturer/dealer’s risk preferences, the buyer’s anticipated and the manufacturer’s estimated product failure intensity, the buyer’s loyalty to the original manufacturer/dealer in repairing failed products, and the buyer’s repair costs for non-warrantied products. Analysis of the developed models reveals that the manufacturer’s decisions on warranty price are strictly related to useful life, failure intensity of the product, and risk preferences. On the other hand, the buyer’s acceptance of a lifetime warranty depends on the expected lifetime of the product, the buyer’s anticipated product failure intensity, anticipated repair costs, and most importantly the buyer’s risk preference. © 2017 International Society of Management Science and Engineering Management.
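As a hedged illustration of the kind of calculation underlying such pricing models (not the authors' own formulation): if failures follow a power-law intensity over the useful life, the expected repair cost gives a risk-neutral lower bound on the warranty price, which the paper's utility functions would then adjust for risk preferences. All parameter values and names below are hypothetical.

```python
# Illustrative sketch, not the paper's model: expected warranty cost under a
# power-law (Weibull-type) failure intensity lambda(t) = a*b*t**(b-1), whose
# cumulative intensity over a useful life L is Lambda(L) = a * L**b.
def expected_failures(a, b, life):
    """Expected number of failures over [0, life] for the power-law process."""
    return a * life ** b

def risk_neutral_price(a, b, life, cost_per_repair):
    """Lower bound on the lifetime-warranty price: expected repair cost.
    Risk preferences (utility functions) would scale this up or down."""
    return cost_per_repair * expected_failures(a, b, life)

# Hypothetical numbers: 10-year useful life, $120 per repair.
price = risk_neutral_price(a=0.05, b=1.4, life=10.0, cost_per_repair=120.0)
```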
Background concentrations of mercury in Australian freshwater sediments : the effect of catchment characteristics on mercury deposition
- Lintern, Anna, Schneider, Larissa, Beck, Kristen, Mariani, Michela, Gell, Peter
- Authors: Lintern, Anna , Schneider, Larissa , Beck, Kristen , Mariani, Michela , Gell, Peter
- Date: 2020
- Type: Text , Journal article
- Relation: Elementa Vol. 8, no. 1 (2020), p.
- Full Text:
- Reviewed:
- Description: Waterways in the Southern Hemisphere, including on the Australian continent, are facing increasing levels of mercury contamination due to industrialization, agricultural intensification, energy production, urbanization, and mining. Mercury contamination undermines the use of waterways as a source of potable water and also has a deleterious effect on aquatic organisms. When developing management strategies to reduce mercury levels in waterways, it is crucial to set appropriate targets for the mitigation of these contaminated waterways. These mitigation targets could be (1) trigger values or default guideline values provided by water and sediment quality guidelines or (2) background (pre-industrialization) levels of mercury in waterways or sediments. The aims of this study were to (1) quantify the differences between existing environmental guideline values for mercury in freshwater lakes and background mercury concentrations and (2) determine the key factors affecting the spatial differences in background mercury concentrations in freshwater lake systems in Australia. Mercury concentrations were measured in background sediments from 21 lakes in Australia. These data indicate that background mercury concentrations in lake sediments can vary significantly across the continent and are up to nine times lower than current sediment quality guidelines in Australia and New Zealand. This indicates that if waterway managers are aiming to restore systems to ‘pre-industrialization’ mercury levels, it is highly important to quantify the site-specific background mercury concentration. Organic matter and precipitation were the main factors correlating with background mercury concentrations in lake sediments.
We also found that the geology of the lake catchment correlates to the background mercury concentration of lake sediments. The highest mercury background concentrations were found in lakes in igneous mafic intrusive regions and the lowest in areas underlain by regolith. Taking into account these findings, we provide a preliminary map of predicted background mercury sediment concentrations across Australia that could be used by waterway managers for determining management targets. Copyright: © 2020 The Author(s). **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Peter Gell” is provided in this record**
Atmospheric mercury in the Latrobe Valley, Australia : case study June 2013
- Schofield, Robyn, Utembe, Steven, Gionfriddo, Caitlin, Tate, Michael, Keywood, Melita
- Authors: Schofield, Robyn , Utembe, Steven , Gionfriddo, Caitlin , Tate, Michael , Keywood, Melita
- Date: 2021
- Type: Text , Journal article
- Relation: Elementa Vol. 9, no. 1 (2021), p.
- Full Text:
- Reviewed:
- Description: Gaseous elemental mercury observations were conducted at Churchill, Victoria, in Australia from April to July 2013, using a Tekran 2537 analyzer. A strong diurnal variation with daytime average values of 1.2–1.3 ng m–3 and nighttime average values of 1.6–1.8 ng m–3 was observed. These values are significantly higher than the Southern Hemisphere average of 0.85–1.05 ng m–3. Churchill is in the Latrobe Valley, approximately 150 km east of Melbourne, where approximately 80% of Victoria’s electricity is generated from low-rank brown coal at four major power stations: Loy Yang A, Loy Yang B, Hazelwood, and Yallourn. These aging generators do not have any sulfur, nitrogen oxide, or mercury air pollution controls. Mercury emitted in the 2015–2016 year in the Latrobe Valley is estimated to have had an externalized health cost of AUD$88 million. Air pollution mercury simulations were conducted using the Weather Research and Forecast model with Chemistry at 3 × 3 km resolution. Electrical power generation emissions were added using mercury emissions created from the National Energy Market’s 5-min energy distribution data. The strong diurnal cycle in the observed mercury was well simulated (R2 = .49 and P value = 0.00) when soil mercury emissions arising from several years of wet and dry deposition in a radius around the power generators were included in the model, as has been observed around aging lignite coal power generators elsewhere. These results indicate that long-term air and soil sampling in power generation regions, even after the closure of coal fired power stations, will have important implications for understanding airborne mercury emissions sources. Copyright: © 2021 The Author(s). **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Melita Keywood” is provided in this record**
Advances in the theory of compact groups and pro-lie groups in the last quarter century
- Hofmann, Karl, Morris, Sidney
- Authors: Hofmann, Karl , Morris, Sidney
- Date: 2021
- Type: Text , Journal article , Review
- Relation: Axioms Vol. 10, no. 3 (2021), p.
- Full Text:
- Reviewed:
- Description: This article surveys the development of the theory of compact groups and pro-Lie groups, contextualizing the major achievements over 125 years and focusing on some progress in the last quarter century. It begins with developments in the 18th and 19th centuries. Next is from Hilbert’s Fifth Problem in 1900 to its solution in 1952 by Montgomery, Zippin, and Gleason and Yamabe’s important structure theorem on almost connected locally compact groups. This half century included profound contributions by Weyl and Peter, Haar, Pontryagin, van Kampen, Weil, and Iwasawa. The focus in the last quarter century has been structure theory, largely resulting from extending Lie Theory to compact groups and then to pro-Lie groups, which are projective limits of finite-dimensional Lie groups. The category of pro-Lie groups is the smallest complete category containing Lie groups and includes all compact groups, locally compact abelian groups, and connected locally compact groups. Amongst the structure theorems is that each almost connected pro-Lie group G is homeomorphic to R^I × C for a suitable set I and some compact subgroup C. Finally, there is a perfect generalization to compact groups G of the age-old natural duality of the group algebra R[G] of a finite group G to its representation algebra R(G, R), via the natural duality of the topological vector space R^I to the vector space R^(I), for any set I, thus opening a new approach to the Hochschild-Tannaka duality of compact groups. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
The separable quotient problem for topological groups
- Leiderman, Arkady, Morris, Sidney, Tkachenko, Mikhail
- Authors: Leiderman, Arkady , Morris, Sidney , Tkachenko, Mikhail
- Date: 2019
- Type: Text , Journal article
- Relation: Israel Journal of Mathematics Vol. 234, no. 1 (Oct 2019), p. 331-369
- Full Text:
- Reviewed:
- Description: The famous Banach-Mazur problem, which asks if every infinite-dimensional Banach space has an infinite-dimensional separable quotient Banach space, has remained unsolved for 85 years, though it has been answered in the affirmative for reflexive Banach spaces and even Banach spaces which are duals. The analogous problem for locally convex spaces has been answered in the negative, but has been shown to be true for large classes of locally convex spaces including all non-normable Fréchet spaces. For a topological group G there are four natural analogous problems: Does G have a separable quotient group which is (i) non-trivial; (ii) infinite; (iii) metrizable; (iv) infinite metrizable. Positive answers to all four questions are proved for groups G which belong to the important classes of (a) all compact groups; (b) all locally compact abelian groups; (c) all sigma-compact locally compact groups; (d) all abelian pro-Lie groups; (e) all sigma-compact pro-Lie groups; (f) all pseudocompact groups. However, a surprising example of an uncountable precompact group G is produced which has no non-trivial separable quotient group other than the trivial group. Indeed G^τ has the same property, for every cardinal number τ ≥ 1.
Use of stochastic XFEM in the investigation of heterogeneity effects on the tensile strength of intermediate geotechnical materials
- Dyson, Ashley, Tang, Zhan, Tolooiyan, Ali
- Authors: Dyson, Ashley , Tang, Zhan , Tolooiyan, Ali
- Date: 2018
- Type: Text , Journal article
- Relation: Finite Elements in Analysis and Design Vol. 145, no. (2018), p. 1-9
- Full Text:
- Reviewed:
- Description: The numerical simulation of an Unconfined Expansion Test (UET) is presented with tensile strength fracture criteria assigned by stochastic methods to take into account material heterogeneity. Tests are performed by producing radial cavity expansion models of thinly sliced cylindrical specimens. The introduction of element-wise allocation of fracture parameters generates instances of specimen failure without the requirement of predefined fracture zones, permitting discontinuities to form naturally within zones containing weak strength parameters. The parallel application of in-house Python scripts and the eXtended Finite Element Method (XFEM) facilitates the investigation of heterogeneity effects on the tensile strength of intermediate geotechnical materials.
Reusing artifact-centric business process models : a behavioral consistent specialization approach
- Yongchareon, Sira, Liu, Chengfei, Zhao, Xiaohui
- Authors: Yongchareon, Sira , Liu, Chengfei , Zhao, Xiaohui
- Date: 2020
- Type: Text , Journal article
- Relation: Computing Vol. 102, no. 8 (2020), p. 1843-1879
- Full Text:
- Reviewed:
- Description: Process reuse is one of the important research areas that address efficiency issues in business process modeling. Similar to software reuse, business processes should be able to be componentized and specialized in order to enable flexible process expansion and customization. Current activity/control-flow-centric workflow modeling approaches face difficulty in supporting highly flexible process reuse, limited by their procedural nature. In comparison, the emerging artifact-centric workflow modeling approach fits these reuse requirements well. Beyond the classic class-level reuse in existing object-oriented approaches, process reuse faces the challenge of handling synchronization dependencies among artifact lifecycles as parts of a business process. In this article, we propose a theoretical framework for business process specialization that comprises an artifact-centric business process model, a set of methods to design and construct a specialized business process model from a base model, and a set of behavioral consistency criteria to help check the consistency between the two process models. © 2020, Springer-Verlag GmbH Austria, part of Springer Nature.