Deep Reinforcement Learning for Vehicular Edge Computing: An Intelligent Offloading System
- Authors: Ning, Zhaolong, Dong, Peiran, Wang, Xiaojie, Rodrigues, Joel, Xia, Feng
- Date: 2019
- Type: Text , Journal article
- Relation: ACM Transactions on Intelligent Systems and Technology Vol. 10, no. 6 (Dec 2019), p. 24
- Full Text:
- Reviewed:
- Description: The development of smart vehicles brings drivers and passengers a comfortable and safe environment. Various emerging applications promise to enrich users' traveling experiences and daily life. However, executing computation-intensive applications on resource-constrained vehicles remains a major challenge. In this article, we construct an intelligent offloading system for vehicular edge computing by leveraging deep reinforcement learning. First, both the communication and computation states are modelled by finite Markov chains. Moreover, the task scheduling and resource allocation strategy is formulated as a joint optimization problem to maximize users' Quality of Experience (QoE). Due to its complexity, the original problem is further divided into two sub-optimization problems. A two-sided matching scheme and a deep reinforcement learning approach are developed to schedule offloading requests and allocate network resources, respectively. Performance evaluations illustrate the effectiveness and superiority of our constructed system.
The evolution of Turing Award Collaboration Network : bibliometric-level and network-level metrics
- Authors: Kong, Xiangjie, Shi, Yajie, Wang, Wei, Ma, Kai, Wan, Liangtian, Xia, Feng
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Computational Social Systems Vol. 6, no. 6 (2019), p. 1318-1328
- Full Text:
- Reviewed:
- Description: The year 2017 marked the 50th anniversary of the Turing Award, the top award in the computer science field. We study the long-term evolution of the Turing Award Collaboration Network, which can be considered a microcosm of the computer science field from 1974 to 2016. First, scholars tended to publish articles by themselves in the early stages and began to focus on tight collaboration from the late 1980s. Second, compared with a random network of the same scale, the Turing Award Collaboration Network has small-world properties but is not a scale-free network. The reason may be that the number of collaborators per scholar is limited: scholars cannot connect to others freely (preferential attachment) as in a scale-free network. Third, to measure how far a scholar is from the Turing Award, we propose a metric called the Turing Number (TN) and find that the TN decreases gradually over time. Meanwhile, we observe that scholars increasingly prefer to gather into groups to do research as computer science develops. This article presents a new way to explore the evolution of academic collaboration networks in computer science by building and analyzing the Turing Award Collaboration Network over several decades. © 2014 IEEE.
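The Turing Number described in the abstract above is, by analogy with the Erdős number, naturally read as the shortest-path distance in the collaboration graph to the nearest Turing laureate. A minimal sketch of that reading (graph data hypothetical; the paper's exact definition may differ):

```python
from collections import deque

def turing_numbers(coauthors, laureates):
    """Multi-source breadth-first search starting from all laureates.

    coauthors: dict mapping each scholar to a set of co-authors.
    laureates: scholars holding the Turing Award (TN = 0).
    Returns scholar -> Turing Number; scholars with no collaboration
    path to a laureate are absent (TN effectively infinite).
    """
    tn = {name: 0 for name in laureates}
    queue = deque(laureates)
    while queue:
        person = queue.popleft()
        for neighbor in coauthors.get(person, ()):
            if neighbor not in tn:
                tn[neighbor] = tn[person] + 1
                queue.append(neighbor)
    return tn

# Hypothetical toy network: A is a laureate, B co-authored with A,
# C co-authored only with B.
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(turing_numbers(graph, ["A"]))  # {'A': 0, 'B': 1, 'C': 2}
```

A BFS over the full network gives every scholar's TN in one pass, which is how the abstract's observation that TN decreases over time could be recomputed per year.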
Local contrast as an effective means to robust clustering against varying densities
- Authors: Chen, Bo, Ting, Kaiming, Washio, Takashi, Zhu, Ye
- Date: 2018
- Type: Text , Journal article
- Relation: Machine Learning Vol. 107, no. 8-10 (2018), p. 1621-1645
- Full Text:
- Reviewed:
- Description: Most density-based clustering methods have difficulty detecting clusters of hugely different densities in a dataset. A recent density-based clustering method, CFSFDP, appears to have mitigated the issue. However, by formalising the condition under which it fails, we reveal that CFSFDP still has the same issue. To address it, we propose a new measure called Local Contrast, as an alternative to density, to find cluster centers and detect clusters. We then apply Local Contrast to CFSFDP to create a new clustering method, LC-CFSFDP, which is robust in the presence of varying densities. Our empirical evaluation shows that LC-CFSFDP outperforms CFSFDP and three other state-of-the-art variants of CFSFDP. © 2018, The Author(s).
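As a rough illustration of the Local Contrast idea in the abstract above: instead of ranking points by raw density, each point is scored by how many of its k nearest neighbours have lower density than itself, so the score is bounded by k regardless of a cluster's absolute density. A sketch under that reading (the paper's exact estimator may differ):

```python
import numpy as np

def knn_density(points, k):
    """Crude density estimate: inverse of the distance to the k-th
    nearest neighbour (larger value = denser neighbourhood)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # exclude self-distances
    kth = np.sort(d, axis=1)[:, k - 1]
    return 1.0 / kth, d

def local_contrast(points, k):
    """Local Contrast: for each point, the number of its k nearest
    neighbours with strictly lower density (a value in 0..k). Cluster
    centres score near k whatever their cluster's absolute density."""
    density, d = knn_density(points, k)
    neighbors = np.argsort(d, axis=1)[:, :k]
    return np.array([(density[i] > density[neighbors[i]]).sum()
                     for i in range(len(points))])

# Unit-square corners plus a centre point: the centre is densest, so it
# dominates both of its 2 nearest neighbours; the corners dominate none.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
print(local_contrast(pts, k=2))  # [0 0 0 0 2]
```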
A comparison of bidding strategies for online auctions using fuzzy reasoning and negotiation decision functions
- Authors: Kaur, Preetinder, Goyal, Madhu, Lu, Jie
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Transactions on Fuzzy Systems Vol. 25, no. 2 (2017), p. 425-438
- Full Text:
- Reviewed:
- Description: Bidders often feel challenged when looking for the best bidding strategies to excel in the competitive environment of multiple, simultaneous online auctions for the same or similar items. Bidders face complicated decisions about which auction to participate in, whether to bid early or late, and how much to bid. In this paper, we present the design of bidding strategies that aim to forecast the bid amounts for buyers at a particular moment in time based on their bidding behavior and their valuation of an auctioned item. The agent develops a comprehensive methodology for final price estimation that designs bidding strategies to address buyers' different bidding behaviors using two approaches: the Mamdani method with regression analysis, and negotiation decision functions. The experimental results show that agents who follow fuzzy reasoning with a regression approach outperform other existing agents in most settings in terms of success rate and expected utility.
A count data model for heart rate variability forecasting and premature ventricular contraction detection
- Authors: Allami, Ragheed, Stranieri, Andrew, Balasubramanian, Venki, Jelinek, Herbert
- Date: 2017
- Type: Text , Journal article
- Relation: Signal Image and Video Processing Vol. 11, no. 8 (2017), p. 1427-1435
- Full Text:
- Reviewed:
- Description: Heart rate variability (HRV) measures including the standard deviation of inter-beat variations (SDNN) require at least 5 min of ECG recordings to accurately measure HRV. In this paper, we predict, using count data derived from a 3-min ECG recording, the 5-min SDNN and also detect premature ventricular contraction (PVC) beats with a high degree of accuracy. The approach uses count data combined with a Poisson-generated function that requires minimal computational resources and is well suited to remote patient monitoring with wearable sensors that have limited power, storage and processing capacity. The ease of use and accuracy of the algorithm provide an opportunity for accurate assessment of HRV and reduce the time taken to review patients in real time. The PVC beat detection is implemented using the same count data model together with knowledge-based rules derived from clinical knowledge.
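For context on the abstract above: SDNN itself is simply the standard deviation of the normal-to-normal (NN) inter-beat intervals over a recording window. A minimal sketch of that statistic (the paper's Poisson count model for predicting the 5-min value from 3 minutes of data is not reproduced here):

```python
import statistics

def sdnn(nn_intervals_ms):
    """SDNN: standard deviation of the normal-to-normal (NN) inter-beat
    intervals, in milliseconds, conventionally computed over a 5-minute
    or longer ECG segment. Sample standard deviation is used here."""
    return statistics.stdev(nn_intervals_ms)

# Five NN intervals around an 800 ms mean (75 bpm); deviations of
# 0, +10, -10, +5, -5 ms give a sample variance of 62.5 ms^2.
print(round(sdnn([800, 810, 790, 805, 795]), 2))  # 7.91
```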
A logical approach to experience-based reasoning
- Authors: Sun, Zhaohao
- Date: 2017
- Type: Text , Journal article , Review
- Relation: New Mathematics and Natural Computation Vol. 13, no. 1 (2017), p. 21-40
- Full Text:
- Reviewed:
- Description: Experience-based reasoning (EBR) is a paradigm used in almost every human activity as a part of human reasoning. However, EBR has not been seriously studied from a logical viewpoint. This paper will attempt to fill this gap by providing a unified logical approach to EBR. More specifically, this paper first examines EBR and inference rules. Then it proposes eight different rules of inference for EBR, which cover all possible EBRs from a logical viewpoint. These eight different rules of inference constitute the fundamentals for all EBR paradigms, and therefore will be the theoretical foundation for EBR. The proposed approach will facilitate research and development of EBR, human reasoning, and common sense reasoning. © 2017 World Scientific Publishing Company.
Data-analytically derived flexible HbA1c thresholds for type 2 diabetes mellitus diagnostic
- Authors: Stranieri, Andrew, Yatsko, Andrew, Jelinek, Herbert, Venkatraman, Sitalakshmi
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 5, no. 1 (2015), p. 111-134
- Full Text:
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is now more commonly used as an alternative to the fasting plasma glucose and oral glucose tolerance tests for identifying Type 2 Diabetes Mellitus (T2DM) because it is easily obtained with point-of-care technology and reflects long-term blood sugar levels. According to WHO guidelines, HbA1c values of 6.5% or above are required for a diagnosis of T2DM. However, outcomes of a large number of trials with HbA1c have been inconsistent across the clinical spectrum, and further research is required to determine the efficacy of HbA1c testing in identifying T2DM. Medical records from a diabetes screening program in Australia show that many patients could be classified as diabetic if other clinical indicators are included, even though their HbA1c result does not exceed 6.5%. This suggests that a single cutoff of 6.5% for the general population may be too simple and may miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms are applied to identify markers that can be used with HbA1c. The results indicate that T2DM is best classified by HbA1c at 6.2%, a cutoff lower than the currently recommended one. Under the flexible-threshold assumption, the cutoff can be lower still when, in addition to HbA1c being high, the rule is conditioned on oxidative stress or inflammation being present, atherogenicity or adiposity being high, or hypertension being diagnosed.
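The flexible-threshold idea in the abstract above reads as a conditional rule: HbA1c at 6.2% alone flags T2DM, and a lower value suffices when a supporting clinical marker is present. A hypothetical sketch (the relaxed cutoff value and marker encodings are illustrative assumptions, not the paper's fitted rules):

```python
def t2dm_flag(hba1c_pct, risk_markers=frozenset()):
    """Flexible HbA1c cutoff in the spirit of the abstract: 6.2% alone
    (the abstract's data-derived cutoff) classifies as T2DM-positive;
    a lower, hypothetical cutoff of 6.0% suffices when a supporting
    clinical marker accompanies it."""
    supporting = {"oxidative_stress", "inflammation", "high_atherogenicity",
                  "high_adiposity", "hypertension"}
    if hba1c_pct >= 6.2:
        return True
    return hba1c_pct >= 6.0 and bool(supporting & set(risk_markers))

print(t2dm_flag(6.1))                    # False
print(t2dm_flag(6.1, {"hypertension"}))  # True
```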
Diagnostic with incomplete nominal/discrete data
- Authors: Jelinek, Herbert, Yatsko, Andrew, Stranieri, Andrew, Venkatraman, Sitalakshmi, Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic/classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and helps in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist, or can be reformulated, for this representation. Some classifiers can be reinvented as data completion methods. In this work, the Decision Tree, Nearest Neighbour, and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match to the true data; rather, they are intended to cause the least hindrance to classification. The proposed techniques find application particularly in medical diagnostics. Where clinical data represents a number of related conditions, taking the Cartesian product of class values of the underlying sub-problems narrows down the selection of missing-value substitutes. Real-world data examples, some publicly available, are enlisted for testing. The proposed and benchmark methods are compared by classifying the data before and after missing-value imputation, indicating a significant improvement.
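One simple reading of "classifiers reinvented as data completion methods" from the abstract above: treat the attribute with missing entries as a class to be predicted from the rows where it is observed. The sketch below uses a per-class mode (a degenerate Naive Bayes); the column names and data are illustrative only:

```python
from collections import Counter, defaultdict

def impute_by_class_mode(rows, target, label):
    """Fill missing nominal values (None) in column `target` with the
    most frequent value observed among rows sharing the same `label`
    class -- a simplified stand-in for classifier-based completion
    (e.g. Naive Bayes trained on the complete rows). Falls back to the
    overall mode when a class has no observed values."""
    per_class = defaultdict(Counter)
    overall = Counter()
    for row in rows:
        if row[target] is not None:
            per_class[row[label]][row[target]] += 1
            overall[row[target]] += 1
    completed = []
    for row in rows:
        if row[target] is None:
            counts = per_class.get(row[label]) or overall
            row = dict(row, **{target: counts.most_common(1)[0][0]})
        completed.append(row)
    return completed

# Hypothetical nominal data: the missing smoker status is filled with
# the mode among rows of the same diagnostic class.
rows = [{"smoker": "no", "t2dm": 0}, {"smoker": "no", "t2dm": 0},
        {"smoker": "yes", "t2dm": 1}, {"smoker": None, "t2dm": 0}]
print(impute_by_class_mode(rows, "smoker", "t2dm")[-1]["smoker"])  # no
```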
Multimodal image registration technique based on improved local feature descriptors
- Authors: Teng, Shyh, Hossain, Tanvir, Lu, Guojun
- Date: 2015
- Type: Text , Journal article
- Relation: Journal of Electronic Imaging Vol. 24, no. 1 (2015), p.
- Full Text:
- Reviewed:
- Description: Multimodal image registration has received significant research attention over the past decade, and the majority of the techniques are global in nature. Although local techniques are widely used for general image registration, there are only limited studies on them for multimodal image registration. Scale invariant feature transform (SIFT) is a well-known general image registration technique. However, SIFT descriptors are not invariant to multimodality. We propose a SIFT-based technique that is modality invariant and still retains the strengths of local techniques. Moreover, our proposed histogram weighting strategies also improve the accuracy of descriptor matching, which is an important image registration step. As a result, our proposed strategies can not only improve the multimodal registration accuracy but also have the potential to improve the performance of all SIFT-based applications, e.g., general image registration and object recognition.
REPLOT : REtrieving Profile Links on Twitter for malicious campaign discovery
- Authors: Perez, Charles, Birregah, Babiga, Layton, Robert, Lemercier, Marc, Watters, Paul
- Date: 2015
- Type: Text , Journal article
- Relation: AI Communications Vol. 29, no. 1 (2015), p. 107-122
- Full Text:
- Reviewed:
- Description: Social networking sites are increasingly subject to malicious activities such as self-propagating worms, confidence scams and drive-by-download malware. The large number of users, combined with the presence of sensitive data such as personal or professional information, is certainly an unprecedented opportunity for attackers, who are moving away from previous platforms of attack, such as email, towards social networking websites. In this paper, we present a full-stack methodology for the identification of campaigns of malicious profiles on social networking sites, composed of maliciousness classification, campaign discovery and attack profiling. The methodology, named REPLOT for REtrieving Profile Links On Twitter, contains three major phases. First, profiles are analysed to determine whether they are more likely to be malicious or benign. Second, connections between suspected malicious profiles are retrieved using a late data fusion approach consisting of temporal and authorship analysis based models to discover campaigns. Third, the discovered campaigns are analysed to investigate the attacks. In this paper, we apply this methodology to a real-world dataset with a view to understanding the links between malicious profiles, their attack methods and their connections. Our analysis identifies a cluster of linked profiles focused on propagating malicious links, and profiles two other major clusters of attacking campaigns. © 2016 - IOS Press and the authors. All rights reserved.
A computing perspective on scientific Chinese trinity
- Authors: Sun, Zhaohao, Wang, Paul
- Date: 2013
- Type: Text , Journal article
- Relation: New Mathematics and Natural Computation Vol. 9, no. 2 (2013), p. 129-152
- Full Text:
- Reviewed:
- Description: The unprecedented and rapid development of the Chinese economy has been vividly displayed for the whole world to see. The attention has been particularly acute in the academic community and among career politicians alike. Ironically, this rapid economic miracle of China has been built on an unsound and often even questionable foundation of Chinese words, language and culture, which we call the "Chinese trinity". This paper deals with the Chinese trinity from a computing science perspective. It argues that reform of the scientific Chinese trinity, with emphasis on the word "scientific", ought to play a key role in further Chinese economic development and in launching a much-improved contemporary Chinese society on a solid foundation. In addition, this paper proposes ten specific computing paradigms and critically examines their potential impacts on the scientific Chinese trinity. Finally, we feel the focused approaches proposed here might inspire, as well as provide, a much-needed road map toward the goal of the scientific Chinese trinity. Judiciously chosen, vigorous research projects appear to be indispensable. The well-known and long-overdue reform has finally been rescued by the pressure of the information revolution coming of age. © 2013 World Scientific Publishing Company.
A performance review of recent corner detectors
- Authors: Awrangjeb, Mohammad, Lu, Guojun
- Date: 2013
- Type: Text , Conference paper
- Relation: International Conference on Digital Image Computing: Techniques and Applications, 26 November 2013 to 28 November 2013 p. 157-164
- Full Text:
- Reviewed:
- Description: Contour-based corner detectors directly or indirectly estimate a significance measure (e.g., curvature) at the points of a planar curve and select the curvature extrema as corners. A number of promising contour-based corner detectors have recently been proposed. They differ mainly in how the curvature is estimated at each point of the given curve. As the curvature of a digital curve can only be approximated, it is important to estimate a curvature measure that remains stable against significant noise on the curve, for example from geometric transformations and compression. Moreover, in many applications, for instance content-based image retrieval, a fast corner detector is a prerequisite; the time a detector takes to find corners in a given image is therefore also a primary characteristic. In addition, different authors have evaluated their detectors on different platforms using different evaluation systems. Evaluation systems that depend on human judgement and visual identification of corners are manual and too subjective, and applying a manual system to a large test database would be expensive. Therefore, it is important to evaluate the detectors on a common platform using an automatic evaluation system. This paper first reviews six of the most recent and best-performing corner detectors and analyses their theoretical running times. It then uses an automatic evaluation system to analyse their performance, estimating both robustness to noise and efficiency to rank the detectors.
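The contour-based recipe summarised in the abstract above (estimate a curvature-like significance measure at each curve point, then keep local extrema as corners) can be sketched as follows. This is an illustrative toy, not any of the reviewed detectors; the turn-angle measure and the threshold value are assumptions:

```python
import math

# Toy contour-based corner detector: significance = turning angle at each
# point; corners = local maxima of that measure above a threshold.

def turn_angle(prev, cur, nxt):
    # angle between the incoming and outgoing segments at `cur`
    a1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
    a2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)

def detect_corners(curve, threshold=0.5):
    corners = []
    for i in range(1, len(curve) - 1):
        k = turn_angle(curve[i - 1], curve[i], curve[i + 1])
        left = turn_angle(curve[i - 2], curve[i - 1], curve[i]) if i >= 2 else 0.0
        right = turn_angle(curve[i], curve[i + 1], curve[i + 2]) if i + 2 < len(curve) else 0.0
        if k > threshold and k >= left and k >= right:  # local curvature maximum
            corners.append(i)
    return corners

# an L-shaped digital curve: the bend at index 3 is the only corner
curve = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
corners = detect_corners(curve)
```

Real detectors replace the raw turn angle with a smoothed or multi-scale curvature estimate precisely because, as the abstract notes, curvature on a digital curve can only be approximated and must stay stable under noise.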
Attribute weighted Naive Bayes classifier using a local optimization
- Taheri, Sona, Yearwood, John, Mammadov, Musa, Seifollahi, Sattar
- Authors: Taheri, Sona , Yearwood, John , Mammadov, Musa , Seifollahi, Sattar
- Date: 2013
- Type: Text , Journal article
- Relation: Neural Computing & Applications Vol.24, no.5 (2013), p.995-1002
- Full Text:
- Reviewed:
- Description: The Naive Bayes classifier is a popular classification technique for data mining and machine learning. It has been shown to be very effective on a variety of data classification problems. However, the strong assumption that all attributes are conditionally independent given the class is often violated in real-world applications, and violation of the independence assumption can increase the expected error. Numerous methods have been proposed to improve the performance of the Naive Bayes classifier by alleviating the attribute independence assumption; another alternative is to assign weights to attributes. In this paper, we propose a novel attribute-weighted Naive Bayes classifier that applies weights to the conditional probabilities. An objective function based on the structure of the Naive Bayes classifier and the attribute weights is modelled, and the optimal weights are determined by a local optimization method using the quasisecant method, with the standard Naive Bayes classifier taken as the starting point. We report the results of numerical experiments on several real-world data sets in binary classification, which show the efficiency of the proposed method.
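A minimal sketch of the weighting idea described in the abstract above, assuming the weights enter as exponents on the conditional probabilities (so a weight of 1 recovers standard Naive Bayes and a weight below 1 discounts an attribute that violates the independence assumption). The probabilities and weights here are made up, and the authors' quasisecant optimisation of the weights is not shown:

```python
import math

# Weighted Naive Bayes scoring: log P(c) + sum_j w_j * log P(x_j | c).
# With all w_j = 1 this is the standard Naive Bayes log-posterior (up to
# the evidence term, which is constant across classes).

def weighted_nb_log_score(prior, cond_probs, weights):
    return math.log(prior) + sum(w * math.log(p)
                                 for w, p in zip(weights, cond_probs))

# two classes, two binary attributes observed as (x1=1, x2=1);
# attribute 2 is down-weighted (w2 = 0.5) -- hypothetical numbers
score_pos = weighted_nb_log_score(0.5, [0.8, 0.7], [1.0, 0.5])
score_neg = weighted_nb_log_score(0.5, [0.3, 0.4], [1.0, 0.5])
predicted = "pos" if score_pos > score_neg else "neg"
```

The paper's contribution is choosing the weight vector by local optimization of an objective built on this structure, rather than fixing the weights by hand as done here.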
Automated unsupervised authorship analysis using evidence accumulation clustering
- Layton, Robert, Watters, Paul, Dazeley, Richard
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2013
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 19, no. 1 (2013), p. 95-120
- Full Text:
- Reviewed:
- Description: Authorship Analysis aims to extract information about the authorship of documents from features within those documents. Typically, this is performed as a classification task with the aim of identifying the author of a document, given a set of documents of known authorship. Alternatively, unsupervised methods have been developed primarily as visualisation tools to assist the manual discovery of clusters of authorship within a corpus by analysts. However, there is a need in many fields for more sophisticated unsupervised methods to automate the discovery, profiling and organisation of related information through clustering of documents by authorship. An automated and unsupervised methodology for clustering documents by authorship is proposed in this paper. The methodology is named NUANCE, for n-gram Unsupervised Automated Natural Cluster Ensemble. Testing indicates that the derived clusters have a strong correlation to the true authorship of unseen documents. © 2011 Cambridge University Press.
- Description: 2003010584
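The evidence-accumulation step behind NUANCE, as described in the abstract above, can be sketched as follows: several base clusterings vote into a co-association matrix, and documents that are co-clustered in a majority of runs are merged into the final authorship clusters. The base partitions, the majority threshold, and the union-find merge are illustrative assumptions, not the paper's exact procedure:

```python
# Evidence accumulation clustering: combine several base partitions into a
# co-association matrix, then merge items co-clustered in > majority of runs.

def coassociation(partitions, n):
    co = [[0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    co[i][j] += 1
    return co

def final_clusters(partitions, n, majority=0.5):
    co = coassociation(partitions, n)
    need = majority * len(partitions)
    parent = list(range(n))  # union-find over documents
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if co[i][j] > need:  # linked in a majority of runs
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# three noisy base partitions of five documents (labels per document)
parts = [[0, 0, 0, 1, 1], [0, 0, 1, 1, 1], [2, 2, 2, 3, 3]]
clusters = final_clusters(parts, 5)
```

In NUANCE the base partitions would come from clustering character n-gram profiles; here they are supplied directly to keep the sketch self-contained.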
Building roof plane extraction from LIDAR data
- Awrangjeb, Mohammad, Lu, Guojun
- Authors: Awrangjeb, Mohammad , Lu, Guojun
- Date: 2013
- Type: Text , Conference paper
- Relation: 2013 International Conference on Digital Image Computing: Techniques and Applications (DICTA)
- Full Text:
- Reviewed:
- Description: This paper presents a new segmentation technique that uses LIDAR point cloud data for automatic extraction of building roof planes. The raw LIDAR points are first classified into two major groups: ground and non-ground points. The ground points are used to generate a 'building mask', in which the black areas represent the ground where there are no laser returns below a certain height. The non-ground points are segmented to extract the planar roof segments. First, the building mask is divided into small grid cells. The cells containing black pixels are clustered such that each cluster represents an individual building or tree. Second, the non-ground points within a cluster are segmented based on their coplanarity and neighbourhood relations. Third, the planar segments are refined using a rule-based procedure that assigns the points common to several planar segments to the appropriate segments. Finally, another rule-based procedure is applied to remove tree planes, which are generally small in size and randomly oriented. Experimental results on three Australian sites show that the proposed method offers high building detection and roof plane extraction rates.
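The first step described in the abstract above, splitting raw LIDAR points into ground and non-ground, might be sketched as a per-cell height threshold. The cell size, the threshold, and the use of the per-cell minimum as the ground estimate are assumptions for illustration; the paper's actual classification is more involved:

```python
# Toy ground / non-ground split for LIDAR points (x, y, z):
# estimate ground height per grid cell as the minimum z in that cell,
# then flag points well above that estimate as non-ground (roofs, trees).

def classify_points(points, cell=5.0, height_thresh=2.5):
    ground_z = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        ground_z[key] = min(z, ground_z.get(key, z))
    ground, non_ground = [], []
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if z - ground_z[key] > height_thresh:
            non_ground.append((x, y, z))
        else:
            ground.append((x, y, z))
    return ground, non_ground

pts = [(0, 0, 0.1), (1, 1, 0.2), (1.5, 0.5, 6.0),  # roof return at 6 m
       (6, 6, 0.0), (7, 6, 5.5)]
ground, non_ground = classify_points(pts)
```

In the paper, the ground points then rasterise into the 'building mask' and the non-ground points feed the coplanarity-based roof segmentation.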
Evaluating authorship distance methods using the positive Silhouette coefficient
- Layton, Robert, Watters, Paul, Dazeley, Richard
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2013
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 19, no. 4 (2013), p. 517-535
- Full Text:
- Reviewed:
- Description: Unsupervised Authorship Analysis (UAA) aims to cluster documents by authorship without knowing the authorship of any documents. An important factor in UAA is the method for calculating the distance between documents; this choice of authorship distance method is considered more critical to the end result than the choice of cluster analysis algorithm. One method for measuring the correlation between a distance metric and a labelling (such as class values or clusters) is the Silhouette Coefficient (SC). The SC can be leveraged by measuring the correlation between the authorship distance method and the true authorship, evaluating the quality of the distance method. However, we show that the SC can be severely affected by outliers. To address this issue, we introduce the Positive Silhouette Coefficient (PSC), given as the proportion of instances with a positive SC value. This metric is not easily altered by outliers and is therefore more robust. A large number of authorship distance methods are then compared using the PSC, and the findings are presented. This research provides insight into the efficacy of methods for UAA and presents a framework for testing authorship distance methods.
- Description: C1
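The Positive Silhouette Coefficient as described in the abstract above, the proportion of instances with a positive silhouette value, follows directly from the standard silhouette definition. The toy points, clustering, and Euclidean distance below are illustrative; any authorship distance method could be plugged in for `dist`:

```python
# Silhouette value for point i: (b - a) / max(a, b), where a is the mean
# distance to its own cluster and b the mean distance to the nearest other
# cluster. PSC = fraction of points whose silhouette value is positive.

def silhouette(i, points, labels, dist):
    same = [dist(points[i], points[j]) for j in range(len(points))
            if j != i and labels[j] == labels[i]]
    a = sum(same) / len(same)
    b = min(
        sum(dist(points[i], points[j]) for j in range(len(points))
            if labels[j] == c) / labels.count(c)
        for c in set(labels) if c != labels[i])
    return (b - a) / max(a, b)

def positive_silhouette_coefficient(points, labels, dist):
    scores = [silhouette(i, points, labels, dist) for i in range(len(points))]
    return sum(s > 0 for s in scores) / len(scores)

euclid = lambda p, q: sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
# cluster 1 contains an outlier at (30, 30) that drags its neighbours' scores
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (30, 30)]
lbl = [0, 0, 0, 1, 1, 1]
psc = positive_silhouette_coefficient(pts, lbl, euclid)
```

Counting only the sign of each silhouette value, rather than averaging the values, is what keeps the PSC from being dominated by a few extreme outliers.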
Extraction and processing of real time strain of embedded FBG sensors using a fixed filter FBG circuit and an artificial neural network
- Kahandawa, Gayan, Epaarachchi, Jayantha, Wang, Hao, Canning, John, Lau, Alan
- Authors: Kahandawa, Gayan , Epaarachchi, Jayantha , Wang, Hao , Canning, John , Lau, Alan
- Date: 2013
- Type: Text , Journal article
- Relation: Measurement: Journal of the International Measurement Confederation Vol. 46, no. 10 (2013), p. 4045-4051
- Full Text:
- Reviewed:
- Description: Fibre Bragg Grating (FBG) sensors have been used in the development of structural health monitoring (SHM) and damage detection systems for advanced composite structures over several decades. Unfortunately, to date only a handful of configurations and algorithms appropriate for use in SHM systems have been developed. This paper presents a novel configuration of FBG sensors to acquire strain readings, together with an integrated statistical approach to analyse the data in real time. The proposed configuration has proven its capability to overcome the practical constraints and engineering challenges associated with FBG-based SHM systems. A fixed-filter decoding system and an integrated artificial neural network algorithm for extracting strain from embedded FBG sensors were proposed and experimentally validated. Furthermore, laboratory-level experimental data were used to verify the accuracy of the system, and the error levels were found to be less than 0.3% in predictions. The developed SHM system using this technology has been submitted to the US patent office and will be available for use in aerospace applications in due course. © 2013 Elsevier Ltd. All rights reserved.
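As a hedged illustration of the decoding idea in the abstract above (the paper's actual network architecture, training data, and fixed-filter outputs are not reproduced here), a single linear neuron trained by stochastic gradient descent can stand in for the artificial neural network that maps filter intensity readings to a strain estimate:

```python
# Hypothetical sketch: one linear "neuron" fitted by stochastic gradient
# descent, mapping synthetic fixed-filter FBG intensities to strain.
# All numbers are made up; the relationship is linear by construction.

def train(samples, targets, lr=0.05, epochs=2000):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = y - t                                  # prediction error
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# synthetic filter intensities and the strain (microstrain) they encode
X = [[0.2, 0.8], [0.4, 0.6], [0.6, 0.4], [0.8, 0.2]]
strain = [100.0, 200.0, 300.0, 400.0]
w, b = train(X, strain)
predict = lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b
```

A real decoding network would be nonlinear and trained on measured intensity-strain pairs; the point here is only the input-to-strain regression structure.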
Rhythmic and sustained oscillations in metabolism and gene expression of Cyanothece sp. ATCC 51142 under constant light
- Gaudana, Sandeep, Krishnakumar, S., Alagesan, Swathi, Digmurti, Madhuri, Viswanathan, Ganesh, Chetty, Madhu, Wangikar, Pramod
- Authors: Gaudana, Sandeep , Krishnakumar, S. , Alagesan, Swathi , Digmurti, Madhuri , Viswanathan, Ganesh , Chetty, Madhu , Wangikar, Pramod
- Date: 2013
- Type: Text , Journal article
- Relation: Frontiers in Microbiology Vol. 4, no. Article 374 (2013), p. 1-11
- Full Text:
- Reviewed:
- Description: Cyanobacteria, a group of photosynthetic prokaryotes, oscillate between day-time and night-time metabolisms, with concomitant oscillations in gene expression in response to light/dark (LD) cycles. The oscillations in gene expression have been shown to persist in constant light (LL) with a free-running period of 24 h in the model cyanobacterium Synechococcus elongatus PCC 7942. However, equivalent oscillations in metabolism have not been reported under LL in this non-nitrogen-fixing cyanobacterium. Here we focus on Cyanothece sp. ATCC 51142, a unicellular, nitrogen-fixing cyanobacterium known to temporally separate the processes of oxygenic photosynthesis and oxygen-sensitive nitrogen fixation. In a recent report, the metabolism of Cyanothece 51142 was shown to oscillate between photosynthetic and respiratory phases under LL, with free-running periods that are temperature dependent but significantly shorter than the circadian period. Further, the oscillations shift to a circadian pattern at moderate cell densities, concomitant with slower growth rates. Here we take this understanding forward and demonstrate that the ultradian rhythm under LL persists at much higher cell densities when cells are grown under turbulent regimes that simulate the flashing-light effect. Our results suggest that the ultradian rhythm in metabolism may be needed to support the higher carbon and nitrogen requirements of rapidly growing cells under LL. With a comprehensive real-time PCR-based gene expression analysis, we account for key regulatory interactions and demonstrate the interplay between clock genes and the genes of key metabolic pathways. Further, we observe that several genes that peak at dusk in Synechococcus peak at dawn in Cyanothece, and vice versa. The circadian rhythm of this organism appears to be more robust, with peaking of genes in anticipation of the ensuing photosynthetic and respiratory metabolic phases.
Recentred local profiles for authorship attribution
- Layton, Robert, Watters, Paul, Dazeley, Richard
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2012
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 18, no. 3 (2012), p. 293-312
- Full Text:
- Reviewed:
- Description: Authorship attribution methods aim to determine the author of a document using information gathered from a set of documents with known authors. One way of performing this task is to create profiles containing distinctive features known to be used by each author. In this paper, a new method of creating an author or document profile is presented that detects features considered distinctive compared to normal language usage. This recentring approach creates more accurate profiles than previous methods, as demonstrated empirically on a known corpus of authorship problems. The method, named recentred local profiles, determines authorship accurately using a simple 'best matching author' approach to classification, and is shown to be more stable than related methods as parameter values change. Using a weighted voting scheme, recentred local profiles outperforms other methods in authorship attribution, with an overall accuracy of 69.9% on the ad-hoc authorship attribution competition corpus, representing a significant improvement over related methods. Copyright © Cambridge University Press 2011.
- Description: 2003010688
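A hedged sketch of the recentring idea described in the abstract above: n-gram frequencies are measured relative to a corpus-wide base profile, so only usage that deviates from normal language counts as distinctive, and an unseen document is assigned to the best-matching author. The trigram size, squared-difference distance, and toy texts are assumptions, not the paper's exact formulation:

```python
from collections import Counter

# Recentred profiles: per-document n-gram frequencies with the corpus-wide
# base frequency subtracted, so common n-grams contribute near-zero signal.

def ngram_freqs(text, n=3):
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    total = len(grams)
    return {g: c / total for g, c in Counter(grams).items()}

def recentred_profile(text, base, n=3):
    return {g: f - base.get(g, 0.0) for g, f in ngram_freqs(text, n).items()}

def distance(p, q):
    keys = set(p) | set(q)
    return sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys)

# hypothetical two-author corpus
corpus = {"alice": "the cat sat on the mat. the cat naps.",
          "bob": "stocks rallied today as markets rose on the news."}
base = ngram_freqs(" ".join(corpus.values()))
profiles = {a: recentred_profile(t, base) for a, t in corpus.items()}

# 'best matching author': nearest profile to the unseen document
unknown = recentred_profile("the cat sat near the mat again.", base)
best = min(profiles, key=lambda a: distance(profiles[a], unknown))
```

The paper's evaluation uses a much larger corpus and a weighted voting scheme on top of this matching step; the sketch shows only the profile construction and the nearest-profile decision.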
Sustaining the future through virtual worlds
- Gregory, Sue, Gregory, Brent, Hillier, Mathew, Miller, Charlynn, Meredith, Grant
- Authors: Gregory, Sue , Gregory, Brent , Hillier, Mathew , Miller, Charlynn , Meredith, Grant
- Date: 2012
- Type: Text , Conference paper
- Relation: Future Challenges, Sustainable Futures p. 361-368
- Full Text:
- Reviewed:
- Description: Virtual worlds (VWs) continue to be used extensively in Australia and New Zealand higher education institutions although the tendency towards making unrealistic claims of efficacy and popularity appears to be over. Some educators at higher education institutions continue to use VWs in the same way as they have done in the past; others are exploring a range of different VWs or using them in new ways; whilst some are opting out altogether. This paper presents an overview of how 46 educators from some 26 institutions see VWs as an opportunity to sustain higher education. The positives and negatives of using VWs are discussed.