Integrated generalized zero-shot learning for fine-grained classification
- Authors: Shermin, Tasfia , Teng, Shyh , Sohel, Ferdous , Murshed, Manzur , Lu, Guojun
- Date: 2022
- Type: Text , Journal article
- Relation: Pattern Recognition Vol. 122, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Embedding learning (EL) and feature synthesizing (FS) are two of the popular categories of fine-grained GZSL methods. EL or FS using global features cannot discriminate fine details in the absence of local features. On the other hand, EL or FS methods exploiting local features either neglect direct attribute guidance or global information. Consequently, neither method performs well. In this paper, we propose to explore global and direct attribute-supervised local visual features for both EL and FS categories in an integrated manner for fine-grained GZSL. The proposed integrated network has an EL sub-network and an FS sub-network, so it can be tested in two ways. We propose a novel two-step dense attention mechanism to discover attribute-guided local visual features. We introduce new mutual learning between the sub-networks to exploit mutually beneficial information for optimization. Moreover, we propose to compute source-target class similarity based on mutual information and transfer-learn the target classes to reduce bias towards the source domain during testing. We demonstrate that our proposed method outperforms contemporary methods on benchmark datasets. © 2021 Elsevier Ltd
Assessing trust level of a driverless car using deep learning
- Authors: Karmakar, Gour , Chowdhury, Abdullahi , Das, Rajkumar , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 7 (2021), p. 4457-4466
- Full Text: false
- Reviewed:
- Description: The increasing adoption of driverless cars is already driving a shift away from traditional transportation systems towards automated ones in many industrial and commercial applications. Recent research suggests that driverless vehicles will considerably reduce traffic congestion, accidents, and carbon emissions, and enhance the accessibility of driving to a wider cross-section of people and lifestyle choices. However, at present, people's main concerns are privacy and security. Since traditional protocol-layer security mechanisms are not very effective for a distributed system, trust value-based security mechanisms, a type of pervasive security, are emerging as popular and promising techniques. A few statistical, non-learning-based models for measuring the trust level of a driverless car are available in the current literature. These are not very effective because they cannot capture the extremely distributed, dynamic, and complex nature of traffic systems. To bridge this research gap, in this paper, for the first time, we propose two deep learning-based models that measure the trustworthiness of a driverless car and its major On-Board Unit (OBU) components. The second model also determines which OBU components were breached during the driving operation. Results produced using real and simulated traffic data demonstrate that our proposed DNN-based deep learning models outperform other machine learning models in assessing the trustworthiness of an individual car as well as its OBU components. The average precision of detection for the car, LiDAR, camera, and radar is 0.99, 0.96, 0.81, and 0.83, respectively, which indicates the potential real-life application of our models in assessing the trust level of a driverless car. © 2000-2011 IEEE.
Can intelligent agents improve data quality in online questionnaires? A pilot study
- Authors: Söderström, Arne , Shatte, Adrian , Fuller-Tyszkiewicz, Matthew
- Date: 2021
- Type: Text , Journal article
- Relation: Behavior Research Methods Vol. 53, no. 5 (2021), p. 2238-2251
- Full Text: false
- Reviewed:
- Description: We explored the utility of chatbots for improving the quality of data collected via online surveys. Three hundred Australian adults sampled via Prolific Academic were randomized across chatbot-supported or unassisted online questionnaire conditions. The questionnaire comprised validated measures, along with challenge items formulated to be confusing yet aligned with the validated targets. The chatbot condition provided optional assistance with item clarity via a virtual support agent. Chatbot use and user satisfaction were measured through session logs and user feedback. Data quality was operationalized as between-group differences in relationships among validated and challenge measures. Findings broadly supported chatbot utility for online surveys, showing that most participants with chatbot access utilized it, found it helpful, and demonstrated modestly improved data quality (vs. controls). Findings show that assistive chatbots can enhance data quality, will be utilized by many participants if available, and are perceived as beneficial by most users. Absence of confusion for one challenge item, together with the scope constraints of this pilot study, is believed to have led to underestimated effects. Future testing with longer-form questionnaires incorporating expanded item difficulty may further understanding of chatbot utility for survey completion and data quality. © 2021, The Psychonomic Society, Inc.
Detecting outlier patterns with query-based artificially generated searching conditions
- Authors: Yu, Shuo , Xia, Feng , Sun, Yuchen , Tang, Tao , Yan, Xiaoran , Lee, Ivan
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Computational Social Systems Vol. 8, no. 1 (2021), p. 134-147
- Full Text:
- Reviewed:
- Description: In the age of social computing, finding interesting network patterns or motifs is significant and critical for various areas, such as decision intelligence, intrusion detection, medical diagnosis, social network analysis, fake news identification, and national security. However, subgraph matching remains a computationally challenging problem, let alone identifying special motifs among them. This is especially the case in large heterogeneous real-world networks. In this article, we propose an efficient solution for discovering and ranking human behavior patterns based on network motifs by exploring a user's query in an intelligent way. Our method takes advantage of the semantics provided by a user's query, which in turn provides the mathematical constraint that is crucial for faster detection. We propose an approach to generate query conditions based on the user's query. In particular, we use meta paths between the nodes to define target patterns as well as their similarities, leading to efficient motif discovery and ranking at the same time. The proposed method is examined in a real-world academic network using different similarity measures between the nodes. The experiment result demonstrates that our method can identify interesting motifs and is robust to the choice of similarity measures. © 2014 IEEE.
Educational big data : predictions, applications and challenges
- Authors: Bai, Xiaomei , Zhang, Fuli , Li, Jinzhou , Guo, Teng , Xia, Feng
- Date: 2021
- Type: Text , Journal article , Review
- Relation: Big Data Research Vol. 26, no. (2021), p.
- Full Text:
- Reviewed:
- Description: Educational big data is becoming a strategic educational asset, exceptionally significant in advancing educational reform. The term stems from the rapidly growing body of educational data, including students' inherent attributes, learning behavior, and psychological state. Educational big data has many applications in educational administration, teaching innovation, and research management; representative examples are student academic performance prediction, employment recommendation, and financial support for low-income students. Different empirical studies have shown that it is possible to predict student performance in courses during the next term. Predictive research for the higher education stage has become an attractive area of study since it allows us to predict student behavior. In this survey, we review predictive research, its applications, and its challenges. We first introduce the significance and background of educational big data. Second, we review research on predicting students' academic performance, including factors influencing academic performance, prediction models, and evaluation indices. Third, we introduce the applications of educational big data, such as prediction, recommendation, and evaluation. Finally, we investigate challenging research issues in this area. This discussion aims to provide a comprehensive overview of educational big data. © 2021 Elsevier Inc. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Feng Xia” is provided in this record**
Efficient deterministic algorithm for huge-sized noisy sensor localization problems via canonical duality theory
- Authors: Latorre, Vittorio , Gao, David
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Cybernetics Vol. 51, no. 10 (2021), p. 5069-5081
- Full Text: false
- Reviewed:
- Description: This paper presents a new deterministic method and a polynomial-time algorithm for solving general huge-sized sensor network localization problems. The problem is first formulated as a nonconvex minimization, which is considered NP-hard by conventional theories. However, by canonical duality theory, this challenging problem can be equivalently converted into a convex dual problem. By introducing a new optimality measure, a powerful canonical primal-dual interior-point (CPDI) algorithm is developed which can efficiently solve huge-sized problems with hundreds of thousands of sensors. The new method is compared with popular methods in the literature. Results show that the CPDI algorithm is not only faster than the benchmarks but also much more accurate on networks affected by noise in the distance measurements. © 2013 IEEE.
How does saline backflow affect the treatment of saline-infused radiofrequency ablation?
- Authors: Kho, Antony , Ooi, Ean H. , Foo, Ji , Ooi, Ean Tat
- Date: 2021
- Type: Text , Journal article
- Relation: Computer Methods and Programs in Biomedicine Vol. 211, no. (2021), p.
- Full Text: false
- Reviewed:
- Description: Background and objective: Saline infusion is applied together with radiofrequency ablation (RFA) to enlarge the ablation zone. However, one of the issues with saline-infused RFA is backflow, which spreads saline along the insertion track. This raises the concern of not only thermally ablating the tissue within the backflow region, but also losing saline from the targeted tissue, which may affect treatment efficacy. Methods: In the present study, 2D axisymmetric models were developed to investigate how saline backflow influences saline-infused RFA and whether the aforementioned concerns are warranted. Saline-infused RFA was described using the dual porosity-Joule heating model. The hydrodynamics of backflow was described using Poiseuille's law by assuming the flow to be similar to that in a thin annulus. Backflow lengths of 3, 4.5, 6 and 9 cm were considered. Results: Results showed that there is no concern of thermally ablating the tissue in the backflow region, because Joule heating is inversely proportional to the fourth power of the distance from the electrode. Results also indicated that larger backflow lengths led to larger growth of thermal damage along the backflow region and a greater decrease in coagulation volume. Hence, backflow needs to be controlled to ensure an effective treatment with saline-infused RFA. Conclusions: There is no risk of ablating tissues around the needle insertion track due to backflow. Instead, the risk of underablation as a result of the loss of saline due to backflow was found to be of greater concern. © 2021 Elsevier B.V.
How to optimize an academic team when the outlier member is leaving?
- Authors: Yu, Shuo , Liu, Jiaying , Wei, Haoran , Xia, Feng , Tong, Hanghang
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Intelligent Systems Vol. 36, no. 3 (May-Jun 2021), p. 23-30
- Full Text:
- Reviewed:
- Description: An academic team is a highly cohesive collaboration group of scholars, which has been recognized as an effective way to improve scientific output in terms of both quality and quantity. However, high staff turnover brings about a series of problems that may negatively influence team performance. To address this challenge, we first detect the member who may potentially leave. Here, outlierness is defined with respect to familiarity, which is quantified using collaboration intensity. It is assumed that if a team member has higher familiarity with scholars outside the team, this member is likely to leave the team. To minimize the impact of such an outlier member leaving, we propose an optimization solution that finds a proper candidate to replace the outlier member. Based on random walk with graph kernel, our solution involves familiarity matching, skill matching, as well as structure matching. The proposed approach proves to be effective and outperforms existing methods when applied to computer science academic teams.
Image preprocessing in classification and identification of diabetic eye diseases
- Authors: Sarki, Rubina , Ahmed, Khandakar , Wang, Hua , Zhang, Yanchun , Ma, Jiangang , Wang, Kate
- Date: 2021
- Type: Text , Journal article
- Relation: Data Science and Engineering Vol. 6, no. 4 (2021), p. 455-471
- Full Text:
- Reviewed:
- Description: Diabetic eye disease (DED) is a cluster of eye problems that affect diabetic patients. Identifying DED in retinal fundus images is a crucial activity because early diagnosis and treatment can ultimately minimize the risk of visual impairment. The retinal fundus image plays a significant role in early DED classification and identification. The development of an accurate diagnostic model from retinal fundus images depends highly on image quality and quantity. This paper presents a methodical study on the significance of image processing for DED classification. The proposed automated classification framework for DED comprises several steps: image quality enhancement, image segmentation (region of interest), image augmentation (geometric transformation), and classification. The optimal results were obtained using traditional image processing methods with a newly built convolutional neural network (CNN) architecture. The new CNN combined with the traditional image processing approach achieved the best accuracy on the DED classification problems. The results of the experiments conducted showed adequate accuracy, specificity, and sensitivity. © 2021, The Author(s).
Intelligent energy prediction techniques for fog computing networks
- Authors: Farooq, Umar , Shabir, Muhammad , Javed, Muhammad , Imran, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 111, no. (2021), p.
- Full Text:
- Reviewed:
- Description: Energy efficiency is a key concern for the future fog-enabled Internet of Things (IoT). Since Fog Nodes (FNs) are energy-constrained devices, task offloading techniques must consider the energy consumption of the FNs to maximize the performance of IoT applications. In this context, accurate energy prediction can enable the development of intelligent energy-aware task offloading techniques. In this paper, we present two energy prediction techniques: the first is based on the Recursive Least Squares (RLS) filter and the second uses an Artificial Neural Network (ANN). Both techniques use inputs such as the number of tasks and the size of the tasks to predict the energy consumption at different fog nodes. Simulation results show that both techniques have a root mean square error of less than 3%. However, the ANN-based technique shows up to 20% lower root mean square error compared with the RLS-based technique. © 2021 Elsevier B.V.
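The RLS-based prediction summarized in this record can be illustrated with a minimal sketch. The recursive least squares update below is the textbook filter; the input features (task count, total task size), the linear energy model, and the synthetic data are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal recursive least squares (RLS) sketch for fog-node energy
# prediction. Inputs (task count, task size) follow the record; the
# linear ground-truth model and all data are illustrative assumptions.

class RLSPredictor:
    def __init__(self, n_features, lam=0.99, delta=1000.0):
        self.lam = lam                      # forgetting factor
        self.w = [0.0] * n_features         # weight vector
        # inverse correlation matrix, initialised to delta * I
        self.P = [[delta if i == j else 0.0 for j in range(n_features)]
                  for i in range(n_features)]

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, x, d):
        n = len(x)
        Px = [sum(self.P[i][j] * x[j] for j in range(n)) for i in range(n)]
        denom = self.lam + sum(xi * pxi for xi, pxi in zip(x, Px))
        k = [pxi / denom for pxi in Px]     # gain vector
        err = d - self.predict(x)           # a priori prediction error
        self.w = [wi + ki * err for wi, ki in zip(self.w, k)]
        xP = [sum(x[i] * self.P[i][j] for i in range(n)) for j in range(n)]
        # P <- (P - k x^T P) / lam
        self.P = [[(self.P[i][j] - k[i] * xP[j]) / self.lam
                   for j in range(n)] for i in range(n)]
        return err

rls = RLSPredictor(n_features=2)
# synthetic fog-node data: energy = 2.0 * n_tasks + 0.5 * task_size
for n_tasks, size in [(1, 4), (2, 6), (3, 2), (5, 8), (4, 3), (6, 10)] * 20:
    rls.update([float(n_tasks), float(size)], 2.0 * n_tasks + 0.5 * size)
print(round(rls.predict([3.0, 4.0]), 2))   # converges towards 8.0
```

On noiseless linear data the filter recovers the true weights quickly, which is why RLS is attractive for lightweight on-node prediction.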
Levels of explainable artificial intelligence for human-aligned conversational explanations
- Authors: Dazeley, Richard , Vamplew, Peter , Foale, Cameron , Young, Cameron , Aryal, Sunil , Cruz, Francisco
- Date: 2021
- Type: Text , Journal article
- Relation: Artificial Intelligence Vol. 299, no. (2021), p.
- Full Text:
- Reviewed:
- Description: Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investments by industry and governments, along with increased concern from the general public. People are affected by autonomous decisions every day and the public need to understand the decision-making process to accept the outcomes. However, the vast majority of the applications of XAI/IML are focused on providing low-level ‘narrow’ explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insights into an agent's: beliefs and motivations; hypotheses of other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or, processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making. This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, this paper will survey current approaches and discuss the integration of different technologies to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), and thereby move towards high-level ‘strong’ explanations. © 2021 Elsevier B.V.
Malware detection in edge devices with fuzzy oversampling and dynamic class weighting
- Authors: Khoda, Mahbub , Kamruzzaman, Joarder , Gondal, Iqbal , Imam, Tasadduq , Rahman, Ashfaqur
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 112, no. (2021), p.
- Full Text: false
- Reviewed:
- Description: In the Internet-of-Things (IoT) domain, edge devices are increasingly used for data accumulation, preprocessing, and analytics. Intelligent integration of edge devices with Artificial Intelligence (AI) facilitates real-time analysis and decision making. However, these devices simultaneously provide additional attack opportunities for malware developers, potentially leading to information and financial loss. Machine learning approaches can detect such attacks, but their performance degrades when benign samples substantially outnumber malware samples in the training data. Existing approaches for such imbalanced data assume samples are represented as continuous features and can thus generate invalid samples when malware applications are represented by binary features. We propose a novel malware oversampling technique that addresses this issue. Further, we propose two approaches for malware detection: the first uses fuzzy set theory, while the second dynamically assigns higher priority to malware samples using a novel loss function. Combining our oversampling technique with these approaches attains over 9% improvement over competing methods in terms of F1-score. Our approaches can, therefore, result in enhanced privacy and security in edge computing services. © 2021 Elsevier B.V.
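The problem this record highlights — interpolation-style oversampling producing invalid values for binary malware features — can be sketched with one common remedy: generating synthetic minority samples by per-feature majority vote over a sample and its nearest minority neighbours. This is an illustration of the issue, not the authors' exact oversampling algorithm, which the record does not specify.

```python
import random

# Sketch of oversampling for binary-feature malware samples. SMOTE-style
# continuous interpolation would yield fractional (invalid) features; a
# per-feature majority vote over a minority sample and its k nearest
# minority neighbours (Hamming distance) keeps every feature in {0, 1}.
# Illustrative only; not the paper's exact technique.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def oversample_binary(minority, n_new, k=3, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((s for s in minority if s is not base),
                            key=lambda s: hamming(base, s))[:k]
        group = [base] + neighbours
        # strict majority of 1s per feature, else 0 -> stays binary
        sample = [1 if sum(col) * 2 > len(group) else 0
                  for col in zip(*group)]
        synthetic.append(sample)
    return synthetic

malware = [[1, 0, 1, 1, 0], [1, 1, 1, 0, 0],
           [1, 0, 1, 0, 1], [0, 0, 1, 1, 0]]
new = oversample_binary(malware, n_new=5)
print(all(v in (0, 1) for s in new for v in s))   # True
```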
Mobile robotic sensors for environmental monitoring using gaussian markov random field
- Authors: Nguyen, Linh , Kodagoda, Sarath , Ranasinghe, Ravindra , Dissanayake, Gamini
- Date: 2021
- Type: Text , Journal article
- Relation: Robotica Vol. 39, no. 5 (2021), p. 862-884
- Full Text:
- Reviewed:
- Description: This paper addresses the issue of monitoring spatial environmental phenomena of interest utilizing information collected by a network of mobile, wireless, and noisy sensors that can take discrete measurements as they navigate through the environment. It is proposed to model the physical spatial field by a Gaussian Markov random field (GMRF) represented on an irregular discrete lattice using the stochastic partial differential equations method. A GMRF-based approach is then derived to effectively predict the field at unmeasured locations, given available observations, in both centralized and distributed manners. Furthermore, a novel and efficient optimality criterion is proposed to design centralized and distributed adaptive sampling strategies for the mobile robotic sensors to find the most informative sampling paths for future measurements. By taking advantage of the conditional independence property of the GMRF, the adaptive sampling optimization problem is proven to be solvable in deterministic time. The effectiveness of the proposed approach is compared and demonstrated using pre-published data sets with appealing results. Copyright © The Author(s), 2020. Published by Cambridge University Press.
Online dual dictionary learning for visual object tracking
- Authors: Cheng, Xu , Zhang, Yifeng , Zhou, Lin , Lu, Guojun
- Date: 2021
- Type: Text , Journal article
- Relation: Journal of Ambient Intelligence and Humanized Computing Vol. 12, no. 12 (2021), p. 10881-10896
- Full Text: false
- Reviewed:
- Description: Sparse representation methods have been widely applied to visual tracking. Most existing tracking algorithms based on sparse representation exploit the l0 or l1-norm for solving the sparse coefficients; however, this makes the solution very time-consuming. In this paper, we propose an effective dual dictionary learning model for visual tracking. The dictionary model is composed of a discriminative dictionary and an analytic dictionary, which work together to perform representation and discrimination simultaneously. First, we exploit the object states of the first ten frames of a video to initialize the dual dictionary; in the tracking phase, the dual dictionary model is updated alternately. Second, the local and global information of the object are integrated into the dual dictionary learning model. Sparse coefficients of each patch encode the local structural information of the object, and all the sparse coefficients within one object state form a global object representation. We develop a likelihood function that takes an adaptive threshold into consideration to de-noise the global representation. In addition, the object template is updated via an online scheme to adapt to object appearance changes. Experiments on a number of common benchmark test sets show that our approach is more effective than existing methods. © 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH, DE part of Springer Nature.
Scholar2vec : vector representation of scholars for lifetime collaborator prediction
- Authors: Wang, Wei , Xia, Feng , Wu, Jian , Gong, Zhiguo , Tong, Hanghang , Davison, Brian
- Date: 2021
- Type: Text , Journal article
- Relation: ACM Transactions on Knowledge Discovery from Data Vol. 15, no. 3 (2021), p.
- Full Text:
- Reviewed:
- Description: While scientific collaboration is critical for a scholar, some collaborators can be more significant than others, e.g., lifetime collaborators. It has been shown that lifetime collaborators are more influential on a scholar's academic performance. However, little research has been done on predicting such special relationships in academic networks. To this end, we propose Scholar2vec, a novel neural network embedding for representing scholar profiles. First, our approach creates scholars' research interest vectors from textual information, such as demographics, research, and influence. After bridging research interests with a collaboration network, vector representations of scholars can be gained with graph learning. Meanwhile, since scholars have various attributes, we propose to incorporate four types of scholar attributes for learning scholar vectors. Finally, the early-stage similarity sequence based on Scholar2vec is used to predict lifetime collaborators with machine learning methods. Extensive experiments on two real-world datasets show that Scholar2vec outperforms state-of-the-art methods in lifetime collaborator prediction. Our work presents a new way to measure the similarity between two scholars by vector representation, which bridges network embedding and academic relationship mining. © 2021 Association for Computing Machinery.
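The final step in this record ranks candidate collaborators by similarity between learned scholar vectors. A minimal cosine-similarity sketch follows; the 4-dimensional vectors are purely illustrative stand-ins for Scholar2vec's learned embeddings.

```python
import math

# Cosine similarity between scholar vectors, the kind of pairwise
# comparison used when ranking candidate lifetime collaborators.
# Vectors below are illustrative; real Scholar2vec embeddings are
# learned from research-interest text plus the collaboration graph.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

scholar_a = [0.9, 0.1, 0.4, 0.0]
scholar_b = [0.8, 0.2, 0.5, 0.1]   # close research profile
scholar_c = [0.0, 0.9, 0.0, 0.8]   # distant research profile

# scholar_b ranks above scholar_c as a likely lifetime collaborator of a
print(cosine(scholar_a, scholar_b) > cosine(scholar_a, scholar_c))  # True
```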
Siamese network for object tracking with multi-granularity appearance representations
- Authors: Zhang, Zhuoyi , Zhang, Yifeng , Cheng, Xu , Lu, Guojun
- Date: 2021
- Type: Text , Journal article
- Relation: Pattern Recognition Vol. 118, no. (2021), p.
- Full Text: false
- Reviewed:
- Description: A reliable tracker has the ability to adapt to changes in objects over time, and is robust and accurate. We build such a tracker by extracting semantic features using robust Siamese networks and multi-granularity color features. It incorporates a semantic model that can capture high-quality semantic features and an appearance model that can effectively describe objects at the pixel, local, and global levels. Furthermore, we propose a novel selective traverse algorithm to dynamically allocate weights to the semantic and appearance models for better tracking performance. During tracking, our tracker updates appearance representations for objects based on recent tracking results. The proposed tracker operates at speeds that exceed the real-time requirement, and outperforms nearly all other state-of-the-art trackers on the OTB-2013/2015 and VOT-2016/2017 benchmarks. © 2021 Elsevier Ltd
SPEED: A deep learning assisted privacy-preserved framework for intelligent transportation systems
- Authors: Usman, Muhammad , Jan, Mian , Jolfaei, Alireza
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 7 (2021), p. 4376-4384
- Full Text: false
- Reviewed:
- Description: Roadside cameras in an Intelligent Transportation System (ITS) are used for various purposes, e.g., monitoring the speed of vehicles, violations of laws, and detection of suspicious activities in parking lots, streets, and side roads. These cameras generate big multimedia data, and as a result, the ITS faces challenges such as data management, redundancy, and privacy breaches in end-to-end communication. To address these challenges, we propose a framework, called SPEED, based on a multi-level edge computing architecture and machine learning algorithms. In this framework, data captured by end-devices, e.g., smart cameras, is distributed among multiple Level-One Edge Devices (LOEDs) to deal with the data management issue and minimize packet drops due to buffer overflow on end-devices and LOEDs. The data is forwarded from LOEDs to Level-Two Edge Devices (LTEDs) in a compressed sensed format. The LTEDs use an online Least-Squares Support-Vector Machines (LS-SVMs) model to determine distribution characteristics and index values of the compressed sensed data to preserve its privacy during transmission between LTEDs and High-Level Edge Devices (HLEDs). The HLEDs estimate the redundancy in the forwarded data using a deep learning architecture, i.e., a Convolutional Neural Network (CNN). The CNN is used to detect the presence of moving objects in the forwarded data. If a movement is detected, the data is forwarded to cloud servers for further analysis; otherwise it is discarded. Experimental results show that the use of a multi-level edge computing architecture helps in managing the generated data, while the machine learning algorithms help in addressing issues such as data redundancy and privacy preservation in end-to-end communication. © 2000-2011 IEEE.
Tracing the Pace of COVID-19 research : topic modeling and evolution
- Authors: Liu, Jiaying , Nie, Hansong , Li, Shihao , Ren, Jing , Xia, Feng
- Date: 2021
- Type: Text , Journal article
- Relation: Big Data Research Vol. 25, no. (2021), p.
- Full Text:
- Reviewed:
- Description: COVID-19 has been spreading rapidly around the world. With growing attention on the deadly pandemic, discussions and research on COVID-19 are rapidly increasing to exchange the latest findings in the hope of accelerating the pace of finding a cure. As a branch of information technology, artificial intelligence (AI) has greatly expedited the development of human society. In this paper, we investigate and visualize the ongoing advancements of early scientific research on COVID-19 from the perspective of AI. By adopting the Latent Dirichlet Allocation (LDA) model, this paper allocates the research articles into 50 key research topics pertinent to COVID-19 according to their abstracts. We present an overview of early studies of the COVID-19 crisis at different scales, including referencing/citation behavior, topic variation, and their inner interactions. We also identify innovative papers that are regarded as cornerstones in the development of COVID-19 research. The results unveil the focus of scientific research, thereby giving deep insights into how the academic community contributes to combating the COVID-19 pandemic. © 2021 Elsevier Inc. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliates “Jing Ren” and “Feng Xia” is provided in this record**
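The LDA model named in this record can be sketched with a toy collapsed Gibbs sampler. The four-document corpus, two topics, and hyperparameters are illustrative; the paper fits 50 topics to real COVID-19 abstracts.

```python
import random
from collections import defaultdict

# Toy collapsed Gibbs sampler for LDA, the model used in the record
# above to group abstracts into topics. Corpus, topic count, and
# hyperparameters are illustrative assumptions.

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})           # vocabulary size
    ndk = [[0] * n_topics for _ in docs]            # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                             # topic totals
    z = []                                          # assignment per token
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                          # unassign token
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional p(z = t | everything else)
                probs = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                         / (nk[t] + V * beta) for t in range(n_topics)]
                r = rng.random() * sum(probs)
                k, acc = 0, probs[0]
                while acc < r:
                    k += 1; acc += probs[k]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    # smoothed per-document topic distributions
    return [[(c + alpha) / (len(doc) + n_topics * alpha) for c in ndk[d]]
            for d, doc in enumerate(docs)]

docs = [["virus", "vaccine", "trial"], ["vaccine", "dose", "trial"],
        ["network", "training", "model"], ["model", "network", "layers"]]
theta = lda_gibbs(docs, n_topics=2)
print([round(sum(row), 3) for row in theta])   # each row sums to 1.0
```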
Treatment of multiple input uncertainties using the scaled boundary finite element method
- Authors: Dsouza, Shaima , Varghese, Tittu , Ooi, Ean Tat , Natarajan, Sundararajan , Bordas, Stephane
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Mathematical Modelling Vol. 99, no. (2021), p. 538-554
- Full Text: false
- Reviewed:
- Description: This paper presents a non-intrusive scaled boundary finite element method to consider multiple input uncertainties, viz., material and geometry. The types of geometric uncertainties considered include the shape and size of inclusions. The inclusions are implicitly defined, and a robust framework is presented to treat the interfaces, which does not require explicit generation of a conforming mesh or special enrichment techniques. A polynomial chaos expansion is used to represent the input and the output uncertainties. The efficiency and the accuracy of the proposed framework are elucidated in detail with a few problems by comparing the results with the conventional Monte Carlo method. A sensitivity analysis based on Sobol’ indices using the developed framework is presented to identify the critical input parameter that has a higher influence on the output response. © 2021 Elsevier Inc.
Vehicle trajectory clustering based on dynamic representation learning of internet of vehicles
- Authors: Wang, Wei , Xia, Feng , Nie, Hansong , Chen, Zhikui , Gong, Zhiguo
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 6 (2021), p. 3567-3576
- Full Text:
- Reviewed:
- Description: With the widespread use of Internet of Things, 5G, and smart city technologies, we are able to acquire a variety of vehicle trajectory data. These trajectory data are of great significance and can be used to extract relevant information in order to, for instance, calculate the optimal path from one position to another, detect abnormal behavior, monitor the traffic flow in a city, and predict the next position of an object. One of the key technologies is vehicle trajectory clustering. However, existing methods mainly rely on manually designed metrics, which may lead to biased results. Meanwhile, the large scale of vehicle trajectory data has become a challenge because calculating these manually designed metrics costs more time and space. To address these challenges, we propose to employ network representation learning to achieve accurate vehicle trajectory clustering. Specifically, we first construct the k-nearest neighbor-based internet of vehicles in a dynamic manner. Then we learn low-dimensional representations of vehicles by performing dynamic network representation learning on the constructed network. Finally, using the learned vehicle vectors, vehicle trajectories are clustered with machine learning methods. Experimental results on a real-world dataset show that our method achieves the best performance compared against baseline methods. © 2000-2011 IEEE. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Feng Xia” is provided in this record**
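The first step this record describes, constructing a k-nearest-neighbor internet of vehicles, can be sketched as below. The positions and k are illustrative; the paper builds the graph dynamically from streaming trajectories and then runs dynamic network representation learning, which is omitted here.

```python
import math

# Sketch of k-nearest-neighbor graph construction over vehicle
# positions, the first step before representation learning and
# clustering. Positions and k are illustrative assumptions.

def knn_graph(positions, k):
    edges = {}
    for i, p in enumerate(positions):
        # sort all other vehicles by Euclidean distance, keep k nearest
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(positions) if j != i)
        edges[i] = [j for _, j in dists[:k]]
    return edges

vehicles = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),   # cluster A
            (5.0, 5.0), (5.1, 5.2), (5.2, 4.9)]   # cluster B
g = knn_graph(vehicles, k=2)
print(g[0])   # vehicle 0's two nearest neighbors stay inside cluster A
```

On these toy positions each vehicle's neighbors fall inside its own spatial cluster, which is the structure the downstream representation learning would exploit.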