Critique of the article "Preparation and application of 2,4,6-tribromo-[13C6]-anisole for the quantitative determination of 2,4,6-tribromoanisole in wine" by Giannikopoulos and Whitfield
- Authors: Varelis, Peter
- Date: 2009
- Type: Text , Journal article
- Relation: Food Chemistry Vol. 116, no. 3 (2009), p. 816-817
- Full Text:
- Reviewed:
Classification through incremental max-min separability
- Authors: Bagirov, Adil , Ugon, Julien , Webb, Dean , Karasozen, Bulent
- Date: 2011
- Type: Text , Journal article
- Relation: Pattern Analysis and Applications Vol. 14, no. 2 (2011), p. 165-174
- Relation: http://purl.org/au-research/grants/arc/DP0666061
- Full Text: false
- Reviewed:
- Description: Piecewise linear functions can be used to approximate non-linear decision boundaries between pattern classes. Piecewise linear boundaries are known to provide efficient real-time classifiers. However, they require a long training time. Finding piecewise linear boundaries between sets is a difficult optimization problem. Most approaches use heuristics to avoid solving this problem, which may lead to suboptimal piecewise linear boundaries. In this paper, we propose an algorithm for globally training hyperplanes using an incremental approach. Such an approach allows one to find a near global minimizer of the classification error function and to compute as few hyperplanes as needed for separating sets. We apply this algorithm for solving supervised data classification problems and report the results of numerical experiments on real-world data sets. These results demonstrate that the new algorithm requires a reasonable training time and its test set accuracy is consistently good on most data sets compared with mainstream classifiers. © 2010 Springer-Verlag London Limited.
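The decision function described in the abstract can be sketched in a few lines. This is a hypothetical illustration of a max-min of affine pieces with made-up weights, not the paper's trained classifier:

```python
import numpy as np

def max_min_decision(x, W, b):
    """Evaluate a max-min of affine functions,
    f(x) = max_i min_j (W[i, j] . x + b[i, j]);
    assign class A if f(x) > 0, class B otherwise."""
    inner = W @ x + b                 # all affine values, shape (groups, planes)
    return np.max(np.min(inner, axis=1))

# Toy setup: two groups of two hyperplanes in 2-D (illustrative values only)
W = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[-1.0, 0.0], [0.0, -1.0]]])
b = np.array([[-0.5, -0.5], [1.5, 1.5]])

x = np.array([1.0, 1.0])
score = max_min_decision(x, W, b)
label = "A" if score > 0 else "B"
```

The incremental algorithm in the paper adds hyperplanes one at a time while minimising a classification error function; the sketch above only shows how a fixed piecewise linear boundary is evaluated.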
Researching rural-regional (teacher) education in Australia
- Authors: Lock, Graeme , Reid, Joanne , Green, Bill , Hastings, Wendy , Cooper, Maxine , White, Simone
- Date: 2009
- Type: Text , Journal article
- Relation: Education in Rural Australia Vol. 19, no. 2 (2009), p. 31-44
- Full Text: false
- Reviewed:
- Description: In 2007 a group of researchers from four Australian universities was awarded an ARC Discovery grant to undertake a longitudinal study into the nature of successful teacher education strategies aimed at making rural teaching an attractive long-term career option. This paper presents descriptive insights into how the research team, located in three Australian states (New South Wales, Victoria and Western Australia), is able to maintain a sustained cohesive approach to achieving the project's aim. The initial section of the paper introduces each team member prior to discussing the importance of taking a national perspective on rural education. The second section considers the research design and shows how the main objective of the investigation will be achieved. Emerging trends from the quantitative and qualitative data collected in 2008 are revealed in the third section. The discussion in the fourth section centres on how the trends emerging from the collected data require a reconceptualisation of preparing pre-service teachers for non-metropolitan placements. In doing so, the project's emerging conceptual framework, which emphasizes that preparation of teachers for rural and regional appointments needs to be considered beyond the terms and forms of traditional professional practice, is explored. [Author abstract, ed]
- Description: 1301 Education Systems
We're here to help: Agencies dealing with apprenticeships in Australia
- Authors: Smith, Erica
- Date: 2010
- Type: Text , Book chapter
- Relation: Rediscovering apprenticeship p. 113-124
- Full Text: false
- Reviewed:
- Description: In Australia, approximately 3.5% of the working population is employed in apprenticeships and their newer counterparts, traineeships (both of these are combined under the title of 'Australian apprenticeships'). While apprenticeships were originally intended for young school leavers, they are now open to people of all ages and to part-time as well as full-time workers. The huge growth in numbers, over 300% since the mid-1990s, has been the result of very conscious planning and financial investment by the Australian Government. This paper, using data drawn from a series of research projects, analyses the different agencies that help to promote and manage the apprenticeship system. The paper points out both positive and negative effects of the large numbers of agencies involved.
DRfit : A Java tool for the analysis of discrete data from multi-well plate assays
- Authors: Hofmann, Andreas , Preston, Sarah , Cross, Megan , Herath, Dilrukshi , Simon, Anne , Gasser, Robin
- Date: 2019
- Type: Text , Journal article
- Relation: BMC Bioinformatics Vol. 20, no. (2019), p. 1-6
- Full Text:
- Reviewed:
- Description: Background: Analysis of replicates in sets of discrete data, typically acquired in multi-well plate formats, is a recurring task in many contemporary areas of the life sciences. The availability of accessible cross-platform data analysis tools for such fundamental tasks in varied projects and environments is an important prerequisite to ensuring a reliable and timely turnaround as well as to provide practical analytical tools for student training. Results: We have developed an easy-to-use, interactive software tool for the analysis of multiple data sets comprising replicates of discrete bivariate data points. For each dataset, the software identifies the replicate data points from a defined matrix layout and calculates their means and standard errors. The averaged values are then automatically fitted using either a linear or a logistic dose response function. Conclusions: DRfit is a practical and convenient tool for the analysis of one or multiple sets of discrete data points acquired as replicates from multi-well plate assays. The design of the graphical user interface and the built-in analysis features make it a flexible and useful tool for a wide range of different assays.
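DRfit itself is a Java GUI tool; as a rough, hypothetical sketch of the workflow the abstract describes (averaging replicate wells, then fitting a logistic dose-response curve), using invented plate values and SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Hypothetical plate: 3 replicate wells per concentration (made-up values)
conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
plate = np.array([[0.05, 0.04, 0.06],
                  [0.10, 0.12, 0.11],
                  [0.48, 0.52, 0.50],
                  [0.90, 0.88, 0.91],
                  [0.99, 1.00, 0.98]])

# Mean and standard error of the replicates, as the tool computes per dataset
means = plate.mean(axis=1)
sems = plate.std(axis=1, ddof=1) / np.sqrt(plate.shape[1])

# Weighted fit of the averaged values to the logistic model
popt, _ = curve_fit(logistic4, conc, means, p0=[0.0, 1.0, 1.0, 1.0],
                    sigma=sems, absolute_sigma=True)
bottom, top, ec50, hill = popt
```

The function name `logistic4` and all plate values are assumptions for illustration; DRfit's own parameterisation may differ.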
Development and evaluation of optimization based data mining techniques for the analysis of brain data
- Authors: Zarei, Mahdi
- Date: 2015
- Type: Text , Thesis , PhD
- Full Text:
- Description: Neuroscience is an interdisciplinary science which deals with the study of the structure and function of the brain and nervous system, drawing on disciplines such as computer science, mathematics, engineering, and linguistics. The structure of the healthy brain and the representation of information by neural activity are among the most challenging problems in neuroscience. Neuroscience is experiencing exponentially growing volumes of data obtained using different technologies. The investigation of such data has a tremendous impact on developing new, and improving existing, models of both healthy and diseased brains. Various techniques have been used to collect brain data sets for addressing neuroscience problems. These data sets can be categorized into two main groups: resting-state and state-dependent data sets. Resting-state data is based on recording brain activity when a subject does not think about any specific concept, while state-dependent data is based on recording brain activity related to specific tasks. In general, brain data sets contain a large number of features (e.g. tens of thousands) and significantly fewer samples (e.g. several hundred). Such data sets are sparse and noisy. In addition, brain data sets typically involve only a small number of subjects. The brain is a very complex system, and data about any brain activity reflect very complex relationships between neurons as well as between different parts of the brain. Such relationships are highly nonlinear, and general-purpose data mining algorithms are not always efficient for their study. The development of machine learning techniques for brain data sets is an emerging research area in neuroscience. Over the last decade, various machine learning techniques have been developed for application to brain data sets. In the meantime, some well-known algorithms such as feature selection and supervised classification have been modified for the analysis of brain data sets. 
Support vector machines, logistic regression, and Gaussian naive Bayes classifiers are widely applied to brain data sets. However, support vector machine and logistic regression algorithms are not efficient for sparse and noisy data sets, and Gaussian naive Bayes classifiers do not give high accuracy. The aim of this study is to develop new, and modify existing, data mining algorithms for the analysis of brain data sets. The contributions of this thesis are as follows: 1. Development of new algorithms: 1.1. development of new voxel (feature) selection algorithms for functional magnetic resonance imaging (fMRI) data sets, and evaluation of these algorithms on the Haxby and Science 2008 data sets; 1.2. development of a new feature selection algorithm based on the catastrophe model for regression analysis problems. 2. Development and evaluation of different versions of the adaptive neuro-fuzzy model for the analysis of spike discharge as a function of other neuronal parameters. 3. Development and evaluation of the modified global k-means clustering algorithm for investigation of the structure of the healthy brain. 4. Development and evaluation of a region of interest (ROI) method for analysis of brain functional connectivity in healthy subjects and schizophrenia patients.
- Description: Doctor of Philosophy
The effect of simplification based on words and their role in approving or rejecting articles
- Authors: Mirzaei, Bahareh , Jafarabad, Mohammad , Karimi, Hossein , Khastavaneh, Mohammad , Parveh, Abdolkarim , Dehno, Shahin , Khandelwal, Manoj , Safari, Amirali
- Date: 2015
- Type: Text , Conference proceedings , Conference paper
- Relation: 3rd IEEE International Conference on Progress in Informatics and Computing, PIC 2015; Nanjing, China; 18th-20th December 2015; Published in Proceedings of 2015 IEEE International Conference on Progress in Informatics and Computing, PIC 2015 p. 71-76
- Full Text: false
- Reviewed:
- Description: One of the main causes of text complexity is the use of uncommon vocabulary. This issue can be overcome by suggesting simpler words as replacements. This paper analyzed texts from Simple English Wikipedia, Wikipedia and IEEE articles. After running the tests, the results indicated that removing complex words and replacing them with simpler phrases is the fastest practical algorithm for simplifying a complex scientific text. The results of our data analysis tests, checking inputs against a database of 2000 words, show that IEEE articles use about 15% uncommon words. © 2015 IEEE.
- Description: Proceedings of 2015 IEEE International Conference on Progress in Informatics and Computing, PIC 2015
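The replacement strategy the abstract reports (swap words missing from a common-word list for simpler alternatives, and measure the uncommon-word ratio) can be sketched as follows; the word list and synonym map here are toy stand-ins for the paper's 2000-word database:

```python
import re

# Hypothetical resources: a tiny common-word list standing in for the
# paper's 2000-word database, and a toy synonym map.
COMMON_WORDS = {"the", "use", "of", "words", "makes", "text", "hard", "to", "read"}
SIMPLER = {"utilisation": "use", "lexemes": "words", "renders": "makes",
           "arduous": "hard", "peruse": "read"}

def simplify(text):
    """Replace words absent from the common-word list with a simpler
    alternative when one is known; otherwise keep the original word."""
    def repl(m):
        w = m.group(0)
        return w if w.lower() in COMMON_WORDS else SIMPLER.get(w.lower(), w)
    return re.sub(r"[A-Za-z]+", repl, text)

def uncommon_ratio(text):
    """Fraction of words not found in the common-word list."""
    words = re.findall(r"[A-Za-z]+", text.lower())
    return sum(w not in COMMON_WORDS for w in words) / len(words)

sentence = "The utilisation of arduous lexemes renders text arduous to peruse"
simple = simplify(sentence)
```

Against a realistic 2000-word database, `uncommon_ratio` would give figures comparable to the ~15% the paper reports for IEEE articles.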
Significance level of a query for enterprise data
- Authors: Thi Ngoc Dinh, Loan , Karmakar, Gour , Kamruzzaman, Joarder , Stranieri, Andrew , Das, Rajkumar
- Date: 2017
- Type: Text , Conference proceedings
- Relation: 30th International Business Information Management Association Conference - Vision 2020: Sustainable Economic development, Innovation Management, and Global Growth, IBIMA 2017; Madrid, Spain; 8th-9th November 2017 Vol. 2017-January, p. 4494-4504
- Full Text: false
- Reviewed:
- Description: To operate enterprise activities, a large number of queries need to be processed every day through an enterprise system. Consequently, such a system is frequently overloaded with information and incurs high delay in producing query responses for big data, because traditional queries are normally treated with equal importance. With the advent of big data and its use in enterprise systems, and the growth of process complexity, the traditional approach to query processing is no longer suitable: it does not consider semantic information and captures all data irrespective of their relevance to a business organization, which eventually increases the computational time in both big data collection and analysis. The significance level of a query can make a trade-off between query response delay and the extent of data collection and analysis. This motivates us to concentrate on determining the significance level of a query considering its importance to an enterprise system. To our knowledge, no such approach is available in the literature. To bridge this research gap, this paper, for the first time, proposes an approach to determine the significance level of a query in order to prioritize queries by their relevance to a business organization. As business processes play key roles in any enterprise system, and not all business processes are equally important, this is done by determining the semantic similarity between a query and the processes of a business organization, together with the importance of each business process to that organization. In a case study on the enterprise system of a retail company, the results produced by our proposed approach show that the significance level is higher for more important queries compared with less important ones.
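A minimal sketch of the idea, not the authors' actual method: score a query by its semantic similarity to each business process, weighted by that process's importance, here using a simple bag-of-words cosine and invented process descriptions:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def significance(query, processes):
    """Significance = max over processes of (semantic similarity to the
    process description) x (importance of that process to the organization)."""
    return max(cosine(query, desc) * weight for desc, weight in processes)

# Hypothetical retail-company processes with importance weights in [0, 1]
processes = [("process customer sales orders", 0.9),
             ("manage warehouse stock levels", 0.6),
             ("schedule staff annual leave", 0.2)]

high = significance("customer sales report", processes)
low = significance("staff leave calendar", processes)
```

A query touching an important process (sales) scores higher than one touching a minor process (leave scheduling), which is the prioritisation behaviour the abstract describes; the paper's semantic similarity measure is presumably richer than this cosine.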
Assessing healthcare providers' performance with and without risk adjustment
- Authors: Morales-Silva, Daniel
- Date: 2020
- Type: Text , Journal article
- Relation: Bulletin of the Australian Mathematical Society Vol. 102, no. 1 (AUG 2020), p. 172-173
- Full Text:
- Reviewed:
Modeling neurocognitive reaction time with gamma distribution
- Authors: Santhanagopalan, Meena , Chetty, Madhu , Foale, Cameron , Aryal, Sunil , Klein, Britt
- Date: 2018
- Type: Text , Conference proceedings
- Relation: ACSW'18. Proceedings of the Australasian Computer Science Week Multiconference; Brisbane, QLD; January 2018; Article 28 p. 1-10
- Full Text: false
- Reviewed:
- Description: As part of a broader effort to build a holistic biopsychosocial health metric, reaction time data obtained from participants undertaking neurocognitive tests have been examined using Exploratory Data Analysis (EDA) to assess their distribution. Many existing methods assume that reaction time data follow a Gaussian distribution and thus commonly use statistical measures such as Analysis of Variance (ANOVA) for analysis. However, reaction time data do not necessarily follow a Gaussian distribution and in many instances can be better modeled by other representations such as the Gamma distribution. Unlike the Gaussian distribution, which is defined by its mean and variance, the Gamma distribution is defined by shape and scale parameters, which also capture higher-order moments of the data such as skewness and kurtosis. Generalized Linear Models (GLM) based on exponential-family distributions such as the Gamma distribution, which have been used to model reaction time in other domains, have not been fully explored for modeling reaction time data in the psychology domain. While limited use of the Gamma distribution has been reported [5, 17, 21] for analyzing response times, its application has been somewhat ad-hoc rather than systematic. For this proposed research, we use a real-life biopsychosocial dataset generated from the 'digital health' intervention programs conducted by the Faculty of Health, Federation University, Australia. The two digital intervention programs were the 'Mindfulness' program and the 'Physical Activity' program. The neurocognitive tests were carried out as part of the 'Mindfulness' program. In this paper, we investigate the participants' reaction time distributions in neurocognitive tests such as the Psychology Experiment Building Language (PEBL) Go/No-Go test [19], which is a subset of the larger biopsychosocial data set. 
PEBL is an open source software system for designing and running psychological experiments. Analysis of participants' reaction times in the PEBL Go/No-Go test shows that the reaction time data are more compatible with a Gamma distribution and demonstrates that they can be better modeled by a Gamma distribution.
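The Gamma-versus-Gaussian comparison the paper describes can be sketched with SciPy; the reaction times below are simulated right-skewed data, not the PEBL measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical reaction times (seconds): right-skewed, as Go/No-Go data tend to be
rt = rng.gamma(shape=4.0, scale=0.1, size=500)

# Fit both candidate models by maximum likelihood
shape, loc, scale = stats.gamma.fit(rt, floc=0)   # fix the location at zero
mu, sigma = stats.norm.fit(rt)

# Compare fits by total log-likelihood (higher is better)
ll_gamma = stats.gamma.logpdf(rt, shape, loc=0, scale=scale).sum()
ll_norm = stats.norm.logpdf(rt, mu, sigma).sum()
gamma_wins = ll_gamma > ll_norm
```

For skewed data like these, the Gamma log-likelihood exceeds the Gaussian one, which mirrors the paper's conclusion that reaction times are better modeled by a Gamma distribution.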
Data Praxis : Teacher educators using data to inform and enhance pre-service teacher mathematics
- Authors: Sellings, Peter , Brandenburg, Robyn
- Date: 2018
- Type: Text , Journal article
- Relation: Mathematics teacher education & development Vol. 20, no. 3 (2018), p. 61-79
- Full Text:
- Reviewed:
- Description: This paper explores how data can shape and enhance mathematics learning and teaching in an initial teacher education Learning and Teaching Mathematics course for first-year Bachelor of Education students at a regional university. The conduct of the study was underpinned by the implementation of a 'data praxis' approach to research, which required the development of a custom-designed suite of data-gathering tools and approaches to inform our mathematics teaching and enhance pre-service teachers' mathematical learning. Praxis required the teacher educators to constantly and systematically interact with the data sets and refine their pedagogical approaches to mathematics teaching and learning. The results of this research highlight the gains that students made and the challenges for teacher educators who choose a data-based approach. [Author abstract]
P2DCA: A Privacy-preserving-based data collection and analysis framework for IoMT applications
- Authors: Usman, Muhammad , Jan, Mian Ahmad , He, Xiangjian , Chen, Jinjun
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE journal on selected areas in communications Vol. 37, no. 6 (2019), p. 1222-1230
- Full Text: false
- Reviewed:
- Description: The concept of Internet of Multimedia Things (IoMT) is becoming popular nowadays and can be used in various smart city applications, e.g., traffic management, healthcare, and surveillance. In the IoMT, the devices, e.g., Multimedia Sensor Nodes (MSNs), are capable of generating both multimedia and non-multimedia data. The generated data are forwarded to a cloud server via a Base Station (BS). However, it is possible that the Internet connection between the BS and the cloud server may be temporarily down. The limited computational resources restrict the MSNs from holding the captured data for a longer time. In this situation, mobile sinks can be utilized to collect data from MSNs and upload to the cloud server. However, this data collection may create privacy issues, such as revealing identities and location information of MSNs. Therefore, there is a need to preserve the privacy of MSNs during mobile data collection. In this paper, we propose an efficient privacy-preserving-based data collection and analysis (P2DCA) framework for IoMT applications. The proposed framework partitions an underlying wireless multimedia sensor network into multiple clusters. Each cluster is represented by a Cluster Head (CH). The CHs are responsible for protecting the privacy of member MSNs through aggregation of data and location coordinates. Later, the aggregated multimedia data are analyzed on the cloud server using a counter-propagation artificial neural network to extract meaningful information through segmentation. Experimental results show that the proposed framework outperforms the existing privacy-preserving schemes, and can be used to collect multimedia data in various IoMT applications.
Intermediary organizations in apprenticeship systems
- Authors: Smith, Erica
- Date: 2019
- Type: Text , Report
- Full Text:
- Description: Intermediary organizations in apprenticeships are those which act on behalf of, link, or mediate between the main parties - apprentices and employers. An intermediary organization in apprenticeship systems is thus one that undertakes one or more of the following activities: employs apprentices as a third-party employer; trains apprentices as part of a specific arrangement with groups of employers; or undertakes other apprentice support activities on behalf of an employer or a specified group of employers. This discussion paper highlights different ways of classifying intermediary organizations, provides examples of different types of intermediary organizations and examines the different roles they can play to support the effective operation of apprenticeship systems. In particular, the report includes brief case studies of intermediary organizations in Australia, India and England.
What impacts do behaviour-based and buffer-based management mechanisms have on enterprise agility?
- Authors: Fayezi, Sajad , O’Loughlin, Andrew , Zutshi, Ambika , Sohal, Amrik , Das, Ajay
- Date: 2020
- Type: Text , Journal article
- Relation: Journal of manufacturing technology management Vol. 31, no. 1 (2020), p. 169-192
- Full Text: false
- Reviewed:
- Description: Purpose: The purpose of this paper is to investigate the impact of behaviour-based and buffer-based management mechanisms on enterprise agility using the lens of agency theory. Design/methodology/approach: This study is based on data collected from 185 manufacturing enterprises using a survey instrument. The authors employ structural equation modelling for data analysis. Findings: The results of this study show that buffer-based mechanisms used for dealing with agency uncertainty of the supplier/buyer not only have a positive impact on the agility of enterprises, but are also contingent on the behavioural interventions used in the relationship with a supplier/buyer. Behaviour-based mechanisms also positively impact enterprise agility by mitigating the likelihood of supplier/buyer opportunism. Practical implications: This study demonstrates that buffer- and behaviour-based management mechanisms can be used as complementary approaches against agency uncertainties for enhancing enterprise agility. Therefore, for enterprises to boost their agility, it is vital that their resources and capabilities are fairly distributed across entities responsible for creating buffers through functional flexibility, as well as individuals and teams dealing with stakeholder engagement, in particular, suppliers and buyers. Originality/value: The authors use the lens of agency theory to assimilate and model characteristic agency uncertainties and management mechanisms that enhance enterprise agility.
Algorithm development for the non-destructive testing of structural damage
- Authors: Noori Hoshyar, Azadeh , Rashidi, Maria , Liyanapathirana, Ranjith , Samali, Bijan
- Date: 2019
- Type: Text , Journal article
- Relation: Applied sciences Vol. 9, no. 14 (2019), p. 2810
- Full Text:
- Reviewed:
- Description: Monitoring of structures to identify the types of damage that occur under loading is essential in practical applications of civil infrastructure. In this paper, we detect and visualize damage based on several non-destructive testing (NDT) methods. A machine learning (ML) approach based on the Support Vector Machine (SVM) method is developed to prevent misinterpretation of the events occurring in the material. The objective is to identify cracks in the early stages, to reduce the risk of failure in structures. Theoretical and experimental analyses are derived by computing performance indicators on the smart aggregate (SA)-based sensor data for concrete and reinforced-concrete (RC) beams. Validity assessment of the proposed indices was addressed through a comparative analysis with traditional SVM. The developed ML algorithms are shown to recognize cracks with higher accuracy than the traditional SVM. Additionally, we propose different algorithms for microwave- or millimeter-wave imaging of steel plates, composite materials, and metal plates, to identify and visualize cracks. The proposed algorithm for steel plates is based on the gradient magnitude in four directions of an image, followed by an edge detection technique. Three algorithms each were proposed for composite materials and for metal plates, based on 2D fast Fourier transform (FFT) and hybrid fuzzy c-means techniques, respectively. The proposed algorithms were able to recognize and visualize cracking incurred in the structure more efficiently than traditional techniques. The reported results are expected to be beneficial for NDT-based applications, particularly in civil engineering.
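The SVM classification step can be sketched with scikit-learn (assuming it is available); the two features and their values below are invented stand-ins for the smart-aggregate performance indicators:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical performance indicators from SA sensor signals:
# two features (e.g. signal energy, damage index), two classes
intact = rng.normal([1.0, 0.2], 0.1, size=(100, 2))
cracked = rng.normal([0.6, 0.5], 0.1, size=(100, 2))
X = np.vstack([intact, cracked])
y = np.array([0] * 100 + [1] * 100)   # 0 = intact, 1 = cracked

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # RBF-kernel SVM on the indicators
accuracy = clf.score(X_te, y_te)
```

On such well-separated synthetic indicators the classifier is near-perfect; the paper's contribution lies in computing indicators from real SA sensor data so that the classes separate this cleanly.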
Experiences of COVID-19 patients in a Fangcang shelter hospital in China during the first wave of the COVID-19 pandemic: a qualitative descriptive study
- Authors: Zhong, Yaping , Zhao, Huan , Lee, Tsorng-Yeh , Yu, Tianchi , Liu, Ming , Ji, Ji
- Date: 2022
- Type: Text , Journal article
- Relation: BMJ open Vol. 12, no. 9 (2022), p. e065799-e065799
- Full Text: false
- Reviewed:
- Description: Objectives: This study aimed to examine COVID-19 patients’ experiences in a Fangcang shelter hospital in China, to provide insights into the effectiveness of this centralised isolation strategy as a novel solution to patient management during emerging infectious disease outbreaks. Design: This study adopted a qualitative descriptive design. Data were collected by individual semistructured interviews and analysed using thematic analysis. Setting: This study was undertaken in 1 of the 16 Fangcang shelter hospitals in Wuhan, China between 28 February 2020 and 7 March 2020. Fangcang shelter hospitals were temporary healthcare facilities intended for large-scale centralised isolation, treatment and disease monitoring of mild-to-moderate COVID-19 cases. These hospitals were an essential component of China’s response to the first wave of the COVID-19 pandemic. Participants: A total of 27 COVID-19 patients were recruited by purposive sampling. Eligible participants were (1) COVID-19 patients (2) above 18 years of age and (3) able to communicate effectively. Exclusion criteria were (1) being clinically or emotionally unstable and (2) experiencing communication difficulties. Results: Three themes and nine subthemes were identified. First, COVID-19 patients experienced a range of psychological reactions during hospitalisation, including fear, uncertainty, helplessness and concerns. Second, there were positive and negative experiences associated with communal living. While COVID-19 patients’ evaluation of essential services in the hospital was overall positive, privacy and hygiene issues were highlighted as stressors during their hospital stay. 
Third, positive peer support and a trusting patient–healthcare professional relationship served as a birthplace for resilience, trust and gratitude in COVID-19 patients. Conclusions: Our findings suggest that, while sacrificing privacy, centralised isolation has the potential to mitigate negative psychological impacts of social isolation in COVID-19 patients by promoting meaningful peer connections, companionship and support within the shared living space. To our knowledge, this is the first study bringing patients’ perspectives into healthcare service appraisal in emergency shelter hospitals.
Generalised rational approximation and its application to improve deep learning classifiers
- Authors: Peiris, V , Sharon, Nir , Sukhorukova, Nadezda , Ugon, Julien
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Mathematics and Computation Vol. 389, no. (2021), p.
- Relation: https://purl.org/au-research/grants/arc/DP180100602
- Full Text: false
- Reviewed:
- Description: A rational approximation (that is, approximation by a ratio of two polynomials) is a flexible alternative to polynomial approximation. In particular, rational functions exhibit accurate estimations to nonsmooth and non-Lipschitz functions, where polynomial approximations are not efficient. We prove that the optimisation problems appearing in the best uniform rational approximation and its generalisation to a ratio of linear combinations of basis functions are quasiconvex even when the basis functions are not restricted to monomials. Then we show how this fact can be used in the development of computational methods. This paper presents a theoretical study of the arising optimisation problems and provides results of several numerical experiments. We apply our approximation as a preprocessing step to deep learning classifiers and demonstrate that the classification accuracy is significantly improved compared to the classification of the raw signals. © 2020
- Description: This research was supported by the Australian Research Council (ARC), Solving hard Chebyshev approximation problems through nonsmooth analysis (Discovery Project DP180100602 ). This research was partially sponsored by Tel Aviv-Swinburne Research Collaboration Grant (2019).
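A hedged sketch of rational approximation of a nonsmooth target: a linearised least-squares fit of p(x)/q(x) to |x|. This is a simpler surrogate for the paper's uniform (Chebyshev) quasiconvex formulation, shown only to illustrate why a ratio of polynomials handles nonsmooth targets well:

```python
import numpy as np

def fit_rational(x, y, deg_p=2, deg_q=2):
    """Linearised least-squares fit of y ~ p(x)/q(x) with q's constant
    term fixed at 1: solve min || p(x) - y * q(x) || over the coefficients
    of p and the non-constant coefficients of q."""
    Vp = np.vander(x, deg_p + 1, increasing=True)          # basis for p
    Vq = np.vander(x, deg_q + 1, increasing=True)[:, 1:]   # basis for q, no constant
    A = np.hstack([Vp, -y[:, None] * Vq])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    p = coef[:deg_p + 1]
    q = np.concatenate([[1.0], coef[deg_p + 1:]])
    return p, q

x = np.linspace(-1, 1, 201)
y = np.abs(x)                        # nonsmooth target
p, q = fit_rational(x, y)
approx = np.polyval(p[::-1], x) / np.polyval(q[::-1], x)
max_err = np.max(np.abs(approx - y))
```

Even a degree-(2,2) ratio tracks |x| reasonably well across [-1, 1]; the paper's contribution is proving quasiconvexity of the uniform version of this problem for general basis functions and exploiting that in computation.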
Pairwise approach for analysis and reporting of child's free sugars intake from a birth cohort study
- Authors: Nguyen, Huy , Ha, Diep , Dao, An , Golley, Rebecca , Scott, Jane , Spencer, John , Bell, Lucinda , Devenish-Coleman, Gemma , Do, Loc
- Date: 2023
- Type: Text , Journal article
- Relation: Community Dentistry and Oral Epidemiology Vol. 51, no. 5 (2023), p. 820-828
- Full Text:
- Reviewed:
- Description: Objectives: The prospective cohort design is an important research design, but a common challenge is missing data. The purpose of this study is to compare three approaches to managing missing data, the pairwise (n = 1386 children), the partial or modified pairwise (n = 1019) and the listwise (n = 546), to characterize the trajectories of children's free sugars intake (FSI) across early childhood. Methods: By applying the Group-based Trajectory Model Technique to three waves of data collected from a prospective cohort study of South Australian children, this study examined the three approaches to managing missing data to validate and discuss children's FSI trajectories. Results: Each approach identified three distinct trajectories of children's FSI from 1 to 5 years of age: (1) ‘low and fast increasing’, (2) ‘moderate and increasing’ and (3) ‘high and increasing’. The trajectory memberships were consistent across the three approaches, and were for the pairwise scenario (1) 15.1%, (2) 68.3% and (3) 16.6%; the partial or modified pairwise (1) 15.9%, (2) 64.1% and (3) 20.0%; and the listwise (1) 14.9%, (2) 64.9% and (3) 20.2% of children. Conclusions: Given the comparability of the findings across the analytical approaches and the samples' characteristics between baseline and across different data collection waves, it is recommended that the pairwise approach be used in future analyses to optimize the sample size and statistical power when examining the relationship between FSI in the first years of life and health outcomes such as dental caries. © 2022 The Authors. Community Dentistry and Oral Epidemiology published by John Wiley & Sons Ltd.
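The difference between the pairwise and listwise approaches can be sketched with pandas on an invented miniature of the cohort data (real trajectory modelling would use group-based trajectory software, not simple means):

```python
import numpy as np
import pandas as pd

# Hypothetical free-sugars-intake (g/day) at three waves; NaN = missed wave
df = pd.DataFrame({
    "wave1": [20.0, 25.0, np.nan, 30.0, 22.0],
    "wave2": [35.0, np.nan, 40.0, 45.0, 33.0],
    "wave3": [50.0, 55.0, 60.0, np.nan, 48.0],
})

# Pairwise: each wave's estimate uses every child observed at that wave
pairwise_means = df.mean()           # NaNs are skipped per column

# Listwise: only children observed at all three waves contribute
complete = df.dropna()
listwise_means = complete.mean()

n_pairwise = df.notna().sum()        # per-wave sample sizes
n_listwise = len(complete)           # complete-case sample size
```

The pairwise columns retain four of the five children at wave 1, while the listwise analysis keeps only the two complete cases, illustrating the sample-size and power advantage that leads the paper to recommend the pairwise approach.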