The gene of scientific success
- Authors: Kong, Xiangjie , Zhang, Jun , Zhang, Da , Bu, Yi , Ding, Ying , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: ACM Transactions on Knowledge Discovery from Data Vol. 14, no. 4 (2020), p.
- Full Text:
- Reviewed:
- Description: This article elaborates on how to identify and evaluate causal factors to improve scientific impact. Currently, analyzing scientific impact can benefit various academic activities including funding applications, mentor recommendation, discovering potential collaborators, and the like. It is universally acknowledged that high-impact scholars often have more opportunities to receive awards as encouragement for their hard work. Therefore, scholars expend great effort in making scientific achievements and improving their scientific impact during their academic lives. However, what are the determining factors that control scholars' academic success? The answer to this question can help scholars conduct their research more efficiently. Under this consideration, our article presents and analyzes the causal factors that are crucial for scholars' academic success. We first propose five major factors: article-centered factors, author-centered factors, venue-centered factors, institution-centered factors, and temporal factors. Then, we apply recent advanced machine learning algorithms and the jackknife method to assess the importance of each causal factor. Our empirical results show that author-centered and article-centered factors have the highest relevance to scholars' future success in the computer science area. Additionally, we discover an interesting phenomenon: the h-indices of scholars within the same institution or university are actually very close to each other. © 2020 ACM.
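The jackknife-style importance assessment described in the abstract can be sketched as follows: re-fit a model with each factor group left out and take the drop in cross-validated score as that group's importance. The model choice, feature grouping, and function names below are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def jackknife_importance(X, y, feature_groups, model=None):
    """Leave-one-group-out importance: baseline score minus the score
    obtained after dropping each group's feature columns.

    X: ndarray of shape (n_samples, n_features); feature_groups maps a
    group name to the list of column indices belonging to that group.
    """
    model = model or RandomForestRegressor(n_estimators=50, random_state=0)
    baseline = cross_val_score(model, X, y, cv=3).mean()
    importance = {}
    for name, cols in feature_groups.items():
        keep = [c for c in range(X.shape[1]) if c not in cols]
        score = cross_val_score(model, X[:, keep], y, cv=3).mean()
        # a large drop means the group carried information the model needed
        importance[name] = baseline - score
    return importance
```

Groups whose removal barely changes the score receive importance near zero, mirroring the paper's ranking of article-, author-, venue-, institution-centered, and temporal factors.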
Evaluating authorship distance methods using the positive Silhouette coefficient
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2013
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 19, no. 4 (2013), p. 517-535
- Full Text:
- Reviewed:
- Description: Unsupervised Authorship Analysis (UAA) aims to cluster documents by authorship without knowing the authorship of any documents. An important factor in UAA is the method for calculating the distance between documents; this choice of authorship distance method is considered more critical to the end result than the choice of cluster analysis algorithm. One method for measuring the correlation between a distance metric and a labelling (such as class values or clusters) is the Silhouette Coefficient (SC). The SC can be leveraged by measuring the correlation between the authorship distance method and the true authorship, evaluating the quality of the distance method. However, we show that the SC can be severely affected by outliers. To address this issue, we introduce the Positive Silhouette Coefficient (PSC), given as the proportion of instances with a positive SC value. This metric is not easily altered by outliers and is therefore more robust. A large number of authorship distance methods are then compared using the PSC, and the findings are presented. This research provides insight into the efficacy of methods for UAA and presents a framework for testing authorship distance methods.
- Description: C1
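The Positive Silhouette Coefficient defined in the abstract (the proportion of instances with a positive per-instance silhouette value) is straightforward to compute; below is a minimal sketch using scikit-learn's `silhouette_samples`, with a hypothetical function name.

```python
import numpy as np
from sklearn.metrics import silhouette_samples

def positive_silhouette_coefficient(X, labels):
    # per-instance silhouette values in [-1, 1]
    s = silhouette_samples(X, labels)
    # PSC: fraction of instances whose silhouette value is positive,
    # so a few extreme outliers cannot drag the score down
    return float(np.mean(s > 0))
```

Unlike the mean silhouette, each instance contributes at most 1/n to the PSC regardless of how extreme its value is, which is the robustness property the paper exploits.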
Recentred local profiles for authorship attribution
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2012
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 18, no. 3 (2012), p. 293-312
- Full Text:
- Reviewed:
- Description: Authorship attribution methods aim to determine the author of a document by using information gathered from a set of documents with known authors. One method of performing this task is to create profiles containing distinctive features known to be used by each author. In this paper, a new method of creating an author or document profile is presented that detects features considered distinctive compared to normal language usage. This recentring approach creates more accurate profiles than previous methods, as demonstrated empirically using a known corpus of authorship problems. The method, named recentred local profiles, determines authorship more accurately than other methods in the literature using a simple 'best matching author' approach to classification. The proposed method is shown to be more stable than related methods as parameter values change. Using a weighted voting scheme, recentred local profiles is shown to outperform other methods in authorship attribution, with an overall accuracy of 69.9% on the ad-hoc authorship attribution competition corpus, representing a significant improvement over related methods. Copyright © Cambridge University Press 2011.
- Description: 2003010688
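The general idea of a recentred profile (an author's n-gram frequencies minus the corpus-wide "normal usage" frequencies, so only distinctive features remain) can be sketched as below. This is a simplified illustration under assumed design choices, not the authors' exact recentred local profiles algorithm.

```python
from collections import Counter

def ngram_profile(text, n=3):
    # relative frequencies of character n-grams in the text
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def recentre(profile, background):
    # distinctiveness relative to normal usage: author frequency
    # minus background (whole-corpus) frequency for each n-gram
    grams = set(profile) | set(background)
    return {g: profile.get(g, 0.0) - background.get(g, 0.0) for g in grams}

def profile_distance(p, q):
    # Euclidean distance between two (recentred) profiles
    grams = set(p) | set(q)
    return sum((p.get(g, 0.0) - q.get(g, 0.0)) ** 2 for g in grams) ** 0.5
```

A 'best matching author' classifier then assigns a test document to the author whose recentred profile is nearest under `profile_distance`.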
Automated unsupervised authorship analysis using evidence accumulation clustering
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2013
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 19, no. 1 (2013), p. 95-120
- Full Text:
- Reviewed:
- Description: Authorship Analysis aims to extract information about the authorship of documents from features within those documents. Typically, this is performed as a classification task with the aim of identifying the author of a document, given a set of documents of known authorship. Alternatively, unsupervised methods have been developed primarily as visualisation tools to assist the manual discovery of clusters of authorship within a corpus by analysts. However, there is a need in many fields for more sophisticated unsupervised methods to automate the discovery, profiling and organisation of related information through clustering of documents by authorship. An automated and unsupervised methodology for clustering documents by authorship is proposed in this paper. The methodology is named NUANCE, for n-gram Unsupervised Automated Natural Cluster Ensemble. Testing indicates that the derived clusters have a strong correlation to the true authorship of unseen documents. © 2011 Cambridge University Press.
- Description: 2003010584
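Evidence accumulation clustering, the core technique named in the title, can be sketched as follows: run a base clusterer many times, accumulate a co-association matrix counting how often each pair of documents lands in the same cluster, and then cluster that matrix. The base clusterer, parameters, and use of plain feature vectors here are illustrative assumptions; the paper's NUANCE methodology operates on n-gram authorship features.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def evidence_accumulation(X, n_runs=20, k=3, final_k=2, seed=0):
    rng = np.random.RandomState(seed)
    n = len(X)
    coassoc = np.zeros((n, n))
    for _ in range(n_runs):
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=rng.randint(1 << 30)).fit_predict(X)
        # count pairs that co-occur in the same base cluster
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= n_runs
    # pairs that rarely co-occur are far apart in the consensus view
    dist = 1.0 - coassoc
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=final_k, criterion="maxclust")
```

Pairs grouped together in most base runs end up in the same final cluster, which is how the ensemble stabilises an otherwise noisy single clustering.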
Effects of a proper feature selection on prediction and optimization of drilling rate using intelligent techniques
- Authors: Liao, Xiufeng , Khandelwal, Manoj , Yang, Haiqing , Koopialipoor, Mohammadreza , Murlidhar, Bhatawdekar
- Date: 2020
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 36, no. 2 (Apr 2020), p. 499-510
- Full Text:
- Reviewed:
- Description: One of the most important factors during drilling is the rate of penetration (ROP), which is controlled by several variables; the factors affecting different drilling operations are of paramount importance. In the current research, an attempt was made to better recognize drilling parameters and optimize them with an optimization algorithm. For this purpose, 618 data sets, including RPM, flushing media, and compressive strength parameters, were measured and collected. After an initial investigation, the compressive strength of the samples, an important rock parameter, was used as the criterion for classification. Then, using intelligent systems, three different levels of rock strength and all data were modeled. The results showed that systems classified by compressive strength performed better for ROP assessment due to the proximity of features, so these three levels were used for classification. A new artificial bee colony algorithm was used to solve this problem: optimizations were applied to the selected models under different optimization conditions, and optimal states were determined. Because determining drilling machine parameters is important, these parameters were determined based on the optimal conditions. The obtained results showed that this intelligent system can substantially improve drilling conditions and increase the ROP value for all three rock strength levels. This modeling system can be used in different drilling operations.
Tracing the Pace of COVID-19 research : topic modeling and evolution
- Authors: Liu, Jiaying , Nie, Hansong , Li, Shihao , Ren, Jing , Xia, Feng
- Date: 2021
- Type: Text , Journal article
- Relation: Big Data Research Vol. 25, no. (2021), p.
- Full Text:
- Reviewed:
- Description: COVID-19 has been spreading rapidly around the world. With growing attention on the deadly pandemic, discussions and research on COVID-19 are rapidly increasing to exchange the latest findings in the hope of accelerating the pace of finding a cure. As a branch of information technology, artificial intelligence (AI) has greatly expedited the development of human society. In this paper, we investigate and visualize the ongoing advancements of early scientific research on COVID-19 from the perspective of AI. By adopting the Latent Dirichlet Allocation (LDA) model, this paper allocates the research articles into 50 key research topics pertinent to COVID-19 according to their abstracts. We present an overview of early studies of the COVID-19 crisis at different scales, including referencing/citation behavior, topic variation, and their inner interactions. We also identify innovative papers that are regarded as cornerstones in the development of COVID-19 research. The results unveil the focus of scientific research, thereby giving deep insights into how the academic community contributes to combating the COVID-19 pandemic. © 2021 Elsevier Inc.
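The LDA-based topic allocation described in the abstract can be sketched with scikit-learn: fit a topic model on bag-of-words abstract vectors and assign each article to its highest-probability topic. The toy corpus and two-topic setting are assumptions for illustration; the paper uses 50 topics over real COVID-19 abstracts.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# stand-in abstracts; the real study uses full COVID-19 paper abstracts
abstracts = [
    "covid transmission model epidemic spread",
    "vaccine immune response antibody trial",
    "covid spread model lockdown policy",
    "antibody vaccine trial efficacy immune",
]

# bag-of-words term counts over the abstracts
vec = CountVectorizer()
X = vec.fit_transform(abstracts)

# fit LDA; n_components would be 50 in the paper, 2 for this toy corpus
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic distributions

# allocate each abstract to its highest-probability topic
assignments = doc_topics.argmax(axis=1)
```

The per-document distributions (rather than just the hard assignments) are what allow the paper to trace how topic weight shifts over time.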
Structured reasoning to support deliberative dialogue
- Authors: Macfadyen, Alyx , Stranieri, Andrew , Yearwood, John
- Date: 2005
- Type: Text , Journal article
- Relation: Lecture Notes in Artificial Intelligence 3681: Knowledge-Based Intelligent Information and Engineering Systems, 9th International Conference, KES 2005, Melbourne, Australia, September 2005, Proceedings, Part 1 Vol. 1, no. (2005), p. 283-289
- Full Text:
- Reviewed:
- Description: Deliberative dialogue is a form of dialogue in which participants advance claims and, without power plays or posturing, deliberate on the claims of others until a consensus decision is reached. This paper describes a deliberative support system to facilitate and encourage participants to engage in a discussion deliberatively. A knowledge representation framework is deployed to generate a strong domain model of reasoning structure. The structure, coupled with a deliberative dialogue protocol, results in a web-based system that regulates a discussion to avoid combative, non-deliberative exchanges. The system has been designed for online dispute resolution between husband and wife in divorce proceedings involving property.
- Description: C1
- Description: 2003001381
Mobile robotic sensors for environmental monitoring using gaussian markov random field
- Authors: Nguyen, Linh , Kodagoda, Sarath , Ranasinghe, Ravindra , Dissanayake, Gamini
- Date: 2021
- Type: Text , Journal article
- Relation: Robotica Vol. 39, no. 5 (2021), p. 862-884
- Full Text:
- Reviewed:
- Description: This paper addresses the issue of monitoring spatial environmental phenomena of interest utilizing information collected by a network of mobile, wireless, and noisy sensors that can take discrete measurements as they navigate through the environment. It is proposed to employ a Gaussian Markov random field (GMRF), represented on an irregular discrete lattice by using the stochastic partial differential equations method, to model the physical spatial field. A GMRF-based approach is then derived to effectively predict the field at unmeasured locations, given available observations, in both centralized and distributed manners. Furthermore, a novel but efficient optimality criterion is proposed to design centralized and distributed adaptive sampling strategies for the mobile robotic sensors to find the most informative sampling paths for future measurements. By taking advantage of the conditional independence property of the GMRF, the adaptive sampling optimization problem is proven to be solvable in deterministic time. The effectiveness of the proposed approach is compared and demonstrated using previously published data sets, with appealing results. Copyright © The Author(s), 2020. Published by Cambridge University Press.
Deep Reinforcement Learning for Vehicular Edge Computing: An Intelligent Offloading System
- Authors: Ning, Zhaolong , Dong, Peiran , Wang, Xiaojie , Rodrigues, Joel , Xia, Feng
- Date: 2019
- Type: Text , Journal article
- Relation: ACM Transactions on Intelligent Systems and Technology Vol. 10, no. 6 (Dec 2019), p. 24
- Full Text:
- Reviewed:
- Description: The development of smart vehicles brings drivers and passengers a comfortable and safe environment. Various emerging applications are promising to enrich users' traveling experiences and daily life. However, how to execute computing-intensive applications on resource-constrained vehicles still faces huge challenges. In this article, we construct an intelligent offloading system for vehicular edge computing by leveraging deep reinforcement learning. First, both the communication and computation states are modelled by finite Markov chains. Moreover, the task scheduling and resource allocation strategy is formulated as a joint optimization problem to maximize users' Quality of Experience (QoE). Due to its complexity, the original problem is further divided into two sub-optimization problems. A two-sided matching scheme and a deep reinforcement learning approach are developed to schedule offloading requests and allocate network resources, respectively. Performance evaluations illustrate the effectiveness and superiority of our constructed system.
A numerical control algorithm for navigation of an operator-driven snake-like robot with 4WD-4WS segments
- Authors: Percy, Andrew , Spark, Ian
- Date: 2010
- Type: Text , Journal article
- Relation: Robotica Vol. 29, no. 3 (2010), p. 471-482
- Full Text:
- Reviewed:
- Description: This paper presents a new algorithm for the control of a snake-like robot with passive joints and active wheels. Each segment has four autonomously driven and steered wheels. The algorithm approximates the ideal solution in which all wheels on a segment have the same centre of curvature with wheel speeds, providing cooperative redundancy. Each hitch point joining segments traverses the same path, which is determined by an operator, prescribing the path curvature and front hitch speed. The numerical algorithm developed in this paper is simulation tested against a previously derived analytical solution for a predetermined path. Further simulations are carried out to show the effects of changing curvature and front hitch speed on hitch path, wheel angles and wheel speeds for a one, two and three segment robot.
REPLOT : REtrieving Profile Links on Twitter for malicious campaign discovery
- Perez, Charles, Birregah, Babiga, Layton, Robert, Lemercier, Marc, Watters, Paul
- Authors: Perez, Charles , Birregah, Babiga , Layton, Robert , Lemercier, Marc , Watters, Paul
- Date: 2015
- Type: Text , Journal article
- Relation: AI Communications Vol. 29, no. 1 (2015), p. 107-122
- Full Text:
- Reviewed:
- Description: Social networking sites are increasingly subject to malicious activities such as self-propagating worms, confidence scams and drive-by-download malware. The large number of users, combined with the presence of sensitive data such as personal or professional information, presents an unprecedented opportunity for attackers. These attackers are moving away from previous platforms of attack, such as email, towards social networking websites. In this paper, we present a full-stack methodology for the identification of campaigns of malicious profiles on social networking sites, composed of maliciousness classification, campaign discovery and attack profiling. The methodology, named REPLOT for REtrieving Profile Links On Twitter, contains three major phases. First, profiles are analysed to determine whether they are more likely to be malicious or benign. Second, connections between suspected malicious profiles are retrieved using a late data fusion approach consisting of temporal and authorship analysis based models to discover campaigns. Third, the discovered campaigns are analysed to investigate the attacks. In this paper, we apply this methodology to a real-world dataset, with a view to understanding the links between malicious profiles, their attack methods and their connections. Our analysis identifies a cluster of linked profiles focused on propagating malicious links, and profiles two other major clusters of attacking campaigns. © 2016 - IOS Press and the authors. All rights reserved.
Feature weighting and retrieval methods for dynamic texture motion features
- Rahman, Ashfaqur, Murshed, Manzur
- Authors: Rahman, Ashfaqur , Murshed, Manzur
- Date: 2010
- Type: Text , Journal article
- Relation: International Journal of Computational Intelligence Systems Vol. 2, no. 1 (2010), p. 27-38
- Full Text:
- Reviewed:
- Description: Feature weighting methods are commonly used to find the relative significance among a set of features, which retrieval methods then use to search image sequences efficiently in large databases. As evidenced in the current literature, dynamic textures (image sequences with regular motion patterns) can be effectively modelled by a set of spatial and temporal motion distribution features such as the motion co-occurrence matrix. The aim of this paper is to develop effective feature weighting and retrieval methods for a set of dynamic textures characterized by motion co-occurrence matrices.
Novel spectral descriptor for object shape
- Sajjanhar, Atul, Lu, Guojun, Zhang, Dengsheng
- Authors: Sajjanhar, Atul , Lu, Guojun , Zhang, Dengsheng
- Date: 2010
- Type: Text , Book chapter
- Relation: Proceedings of the 11th Pacific Rim Conference on Multimedia: Advances in Multimedia Information Processing p. 58-67
- Full Text:
- Reviewed:
- Description: In this paper, we propose a novel descriptor for shapes. The proposed descriptor is obtained from 3D spherical harmonics. The inadequacy of 2D spherical harmonics is addressed and the method to obtain 3D spherical harmonics is described. Computing 3D spherical harmonics requires construction of a 3D model, which implicitly represents rich features of objects. Spherical harmonics are used to obtain descriptors from the 3D models. The performance of the proposed method is compared against the CSS approach, which is the MPEG-7 descriptor for shape contours. The MPEG-7 dataset of shape contours, namely CE-1, is used to perform the experiments. It is shown that the proposed method is effective.
Image preprocessing in classification and identification of diabetic eye diseases
- Sarki, Rubina, Ahmed, Khandakar, Wang, Hua, Zhang, Yanchun, Ma, Jiangang, Wang, Kate
- Authors: Sarki, Rubina , Ahmed, Khandakar , Wang, Hua , Zhang, Yanchun , Ma, Jiangang , Wang, Kate
- Date: 2021
- Type: Text , Journal article
- Relation: Data Science and Engineering Vol. 6, no. 4 (2021), p. 455-471
- Full Text:
- Reviewed:
- Description: Diabetic eye disease (DED) is a cluster of eye problems that affects diabetic patients. Identifying DED in retinal fundus images is a crucial activity because early diagnosis and treatment can ultimately minimize the risk of visual impairment. The retinal fundus image plays a significant role in early DED classification and identification. The development of an accurate diagnostic model using retinal fundus images depends highly on image quality and quantity. This paper presents a methodical study on the significance of image processing for DED classification. The proposed automated classification framework for DED was achieved in several steps: image quality enhancement, image segmentation (region of interest), image augmentation (geometric transformation), and classification. The optimal results were obtained using traditional image processing methods with a newly built convolutional neural network (CNN) architecture. The new CNN combined with the traditional image processing approach presented the best performance for DED classification problems. The results of the experiments conducted showed adequate accuracy, specificity, and sensitivity. © 2021, The Author(s).
Heuristic non parametric collateral missing value imputation : A step towards robust post-genomic knowledge discovery
- Sehgal, Muhammad Shoaib B, Gondal, Iqbal, Dooley, Laurence, Coppel, Ross
- Authors: Sehgal, Muhammad Shoaib B , Gondal, Iqbal , Dooley, Laurence , Coppel, Ross
- Date: 2008
- Type: Text , Conference paper
- Relation: Third IAPR International Conference on Pattern Recognition in Bioinformatics (PRIB 2008) Vol. 5625
- Full Text:
- Reviewed:
- Description: Microarrays are able to measure the expression patterns of thousands of genes in a genome, giving profiles that facilitate much faster analysis of biological processes for diagnosis, prognosis and tailored drug discovery. Microarray data, however, commonly contain missing values, and various imputation algorithms have been proposed, including Collateral Missing Value Estimation (CMVE), Bayesian Principal Component Analysis (BPCA), Least Square Impute (LSImpute), Local Least Square Impute (LLSImpute) and K-Nearest Neighbour (KNN).
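The KNN scheme named in this abstract can be sketched as follows. This is a generic illustration of neighbour-based imputation, not the collateral missing value estimator the paper proposes; the function name and parameters are our own:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill missing entries (NaN) in each row with the average of the
    k most similar fully observed rows, compared over the columns the
    incomplete row actually has. A minimal sketch of KNN imputation."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]  # rows with no missing values
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        # Euclidean distance to complete rows over observed columns only
        d = np.linalg.norm(complete[:, ~miss] - row[~miss], axis=1)
        nearest = complete[np.argsort(d)[:k]]
        X[i, miss] = nearest[:, miss].mean(axis=0)  # neighbours' average
    return X
```

In a gene-expression setting the rows would be genes (or samples) and the similarity over observed columns stands in for the "transactional behaviour of functionally correlated genes" that the more sophisticated estimators exploit.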
Computational modelling strategies for gene regulatory network reconstruction
- Sehgal, Muhammad Shoaib B, Gondal, Iqbal, Dooley, Laurence
- Authors: Sehgal, Muhammad Shoaib B , Gondal, Iqbal , Dooley, Laurence
- Date: 2008
- Type: Text , Book chapter
- Relation: Studies in Computational Intelligence p. 207-220
- Full Text:
- Reviewed:
- Description: Gene Regulatory Network (GRN) modelling infers genetic interactions between different genes and other cellular components to elucidate cellular functionality. GRN modelling has numerous applications in biology, from diagnosis through to drug target identification. Several GRN modelling methods have been proposed in the literature, and it is important to study the relative merits and demerits of each method. This chapter provides a comprehensive comparative study of GRN reconstruction algorithms. The methods discussed in this chapter are diverse and vary from simple similarity-based methods to state-of-the-art hybrid and probabilistic methods. In addition, the chapter underlines the need for strategies that can model the stochastic behaviour of gene regulation in the presence of a limited number of samples, noisy data, and multi-collinearity among a high number of genes.
How to improve postgenomic knowledge discovery using imputation
- Sehgal, Muhammad Shoaib B, Gondal, Iqbal, Dooley, Laurence, Coppel, Ross
- Authors: Sehgal, Muhammad Shoaib B , Gondal, Iqbal , Dooley, Laurence , Coppel, Ross
- Date: 2009
- Type: Text , Journal article
- Relation: Eurasip Journal on Bioinformatics and Systems Biology Vol. 2009, no. 1 (2009), p. 1-14
- Full Text:
- Reviewed:
- Description: While microarrays make it feasible to rapidly investigate many complex biological problems, their multistep fabrication is prone to error at every stage. The standard tactic has been to either ignore erroneous gene readings or regard them as missing values, though this choice can exert a major influence upon postgenomic knowledge discovery methods such as gene selection and gene regulatory network (GRN) reconstruction. This has been the catalyst for a raft of new flexible imputation algorithms, including local least square impute and the recent heuristic collateral missing value imputation, which exploit the biological transactional behaviour of functionally correlated genes to afford accurate missing value estimation. This paper examines the influence of missing value imputation techniques upon postgenomic knowledge inference methods, with results for various algorithms consistently corroborating that, instead of ignoring missing values, recycling microarray data by flexible and robust imputation can provide substantial performance benefits for subsequent downstream procedures.
An efficient RANSAC hypothesis evaluation using sufficient statistics for RGB-D pose estimation
- Senthooran, Ilankalkone, Murshed, Manzur, Barca, Jan, Kamruzzaman, Joarder, Chung, Hoam
- Authors: Senthooran, Ilankalkone , Murshed, Manzur , Barca, Jan , Kamruzzaman, Joarder , Chung, Hoam
- Date: 2019
- Type: Text , Journal article
- Relation: Autonomous Robots Vol. 43, no. 5 (2019), p. 1257-1270
- Full Text:
- Reviewed:
- Description: Achieving autonomous flight in GPS-denied environments begins with pose estimation in three-dimensional space, and this is much more challenging for an MAV in a swarm robotic system due to limited computational resources. In vision-based pose estimation, outlier detection is the most time-consuming step. This usually involves a RANSAC procedure using the reprojection-error method for hypothesis evaluation. The realignment-based hypothesis evaluation method is observed to be more accurate, but its considerably slower speed makes it unsuitable for robots with limited resources. We use sufficient statistics of least-squares minimisation to speed up this process. The additive nature of these sufficient statistics makes it possible to compute pose estimates in each evaluation by reusing previously computed statistics, so estimates need not be calculated from scratch each time. The proposed method is tested on standard RANSAC, Preemptive RANSAC and R-RANSAC using benchmark datasets. The results show that the use of sufficient statistics speeds up the outlier detection process with realignment hypothesis evaluation for all RANSAC variants, achieving a speed-up of up to 6.72 times.
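The additive-sufficient-statistics idea can be illustrated in a far simpler setting than 6-DOF pose estimation. For least-squares line fitting, the statistics (n, Σx, Σy, Σx², Σxy) are additive across point sets, so the fit for a union of subsets is recovered from previously computed statistics without revisiting the raw points; all names here are illustrative:

```python
import numpy as np

def stats(x, y):
    """Sufficient statistics for least-squares line fitting:
    (n, Sx, Sy, Sxx, Sxy). Additive: stats of a union of point sets
    equals the elementwise sum of the subsets' statistics."""
    return np.array([len(x), x.sum(), y.sum(), (x * x).sum(), (x * y).sum()])

def fit_from_stats(s):
    """Recover slope and intercept from the statistics alone,
    i.e. without touching the raw points again."""
    n, sx, sy, sxx, sxy = s
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n
```

In a RANSAC loop this means each hypothesis evaluation can reuse the accumulated statistics of points already classified as inliers, adding only the statistics of newly considered points, which is the essence of the reuse the abstract describes.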
Depth sequence coding with hierarchical partitioning and spatial-domain quantization
- Shahriyar, Shampa, Murshed, Manzur, Ali, Mortuza, Paul, Manoranjan
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Circuits and Systems for Video Technology Vol. 30, no. 3 (2020), p. 835-849
- Full Text:
- Reviewed:
- Description: Depth coding in 3D-HEVC deforms object shapes due to block-level edge-approximation and lacks efficient techniques to exploit the statistical redundancy, due to the frame-level clustering tendency in depth data, for higher coding gain at near-lossless quality. This paper presents a standalone mono-view depth sequence coder, which preserves edges implicitly by limiting quantization to the spatial domain and exploits the frame-level clustering tendency efficiently with a novel binary tree-based decomposition (BTBD) technique. The BTBD can exploit the statistical redundancy in frame-level syntax, motion components, and residuals efficiently with fewer block-level prediction/coding modes and simpler context modeling for context-adaptive arithmetic coding. Compared with the depth coder in 3D-HEVC, the proposed one achieves a significantly lower bitrate in the lossless to near-lossless quality range for mono-view coding and renders superior-quality synthetic views from the depth maps, compressed at the same bitrate, and the corresponding texture frames. © 1991-2012 IEEE.
Investment decision model via an improved BP neural network
- Shen, Jihong, Zhang, Canxin, Lian, Chunbo, Hu, Hao, Mammadov, Musa
- Authors: Shen, Jihong , Zhang, Canxin , Lian, Chunbo , Hu, Hao , Mammadov, Musa
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 2010 IEEE International Conference on Information and Automation, ICIA 2010, Harbin, Heilongjiang 20th-23rd June 2010 p. 2092-2096
- Full Text:
- Description: In macro investment, an investment decision model is established using an improved back propagation (BP) artificial neural network (ANN). In this paper, the relations between the elements of investment and the output of products are determined, and the optimal distribution of investment is then found by adjusting the distributions rationally. This model can reflect the highly nonlinear mapping relations among the elements of investment by using nonlinear utility functions to improve the architecture of the artificial neural network, and it can be widely applied to investment problems. ©2010 IEEE.
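For context, standard back propagation (the baseline the paper improves on) can be sketched as a one-hidden-layer sigmoid network trained by full-batch gradient descent on squared error. The paper's nonlinear utility functions are not reproduced here, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(X):
    return np.hstack([X, np.ones((len(X), 1))])  # constant bias input

def train(X, t, hidden=4, lr=0.5, epochs=5000):
    """Train a one-hidden-layer sigmoid network by back propagation."""
    X = add_bias(X)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1)                     # forward pass: hidden layer
        y = sigmoid(h @ W2)                     # forward pass: output
        delta2 = (y - t) * y * (1 - y)          # output-layer error signal
        delta1 = (delta2 @ W2.T) * h * (1 - h)  # back-propagated to hidden
        W2 -= lr * h.T @ delta2                 # gradient-descent updates
        W1 -= lr * X.T @ delta1
    return W1, W2

def predict(X, W1, W2):
    return sigmoid(sigmoid(add_bias(X) @ W1) @ W2)
```

The improved architecture described in the abstract would replace or augment the plain sigmoid activations with nonlinear utility functions capturing the investment-to-output mapping.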