A survey on context awareness in big data analytics for business applications
- Authors: Dinh, Loan , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: Knowledge and Information Systems Vol. 62, no. 9 (2020), p. 3387-3415
- Full Text:
- Reviewed:
- Description: The concept of context awareness has been in existence since the 1990s. Though initially applied exclusively in computer science, over time it has increasingly been adopted by many different application domains such as business, health and the military. Contexts change continuously for objective reasons, such as economic situations, political matters and social issues. The adoption of big data analytics by businesses is driving such change at an even faster rate and in more complicated ways. The potential benefits of embedding contextual information into an application are already evidenced by the improved outcomes of existing context-aware methods in those applications. Since big data is growing very rapidly, context awareness in big data analytics has become more important and timely because of its proven efficiency in big data understanding and preparation, contributing to extracting more, and more accurate, value from big data. Many surveys have been published on context-based methods such as context modelling and reasoning, workflow adaptations, computational intelligence techniques and mobile ubiquitous systems. However, to our knowledge, no survey of context-aware methods on big data analytics for business applications supported by enterprise-level software has been published to date. To bridge this research gap, in this paper we first present a definition of context, its modelling and evaluation techniques, and highlight the importance of contextual information for big data analytics. Second, works in three key business application areas that are context-aware and/or exploit big data analytics are thoroughly reviewed. Finally, the paper concludes by highlighting a number of contemporary research challenges, including issues concerning modelling, managing and applying business contexts to big data analytics. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
Assessing cohesion of the rocks proposing a new intelligent technique namely group method of data handling
- Authors: Chen, Wusi , Khandelwal, Manoj , Murlidhar, Bhatawdekar , Bui, Dieu , Tahir, Mahmood , Katebi, Javad
- Date: 2020
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 36, no. 2 (2020), p. 783-793
- Full Text:
- Reviewed:
- Description: In this study, the evaluation and prediction of rock cohesion is assessed using multiple regression as well as the group method of data handling (GMDH). It is well known that cohesion is the most crucial rock shear strength parameter, a key parameter for the stability evaluation of geotechnical structures such as rock slopes. To fulfill the aim of this study, a database of three model input parameters, i.e., P-wave velocity, uniaxial compressive strength and Brazilian tensile strength, and one model output, the cohesion of limestone samples, was prepared and utilized by GMDH. Different GMDH models, with varying numbers of neurons and layers and varying selection pressures, were tested and assessed. It was found that GMDH model number 4 (with 8 layers) shows the best performance among all tested models for the prediction and assessment of rock cohesion, with coefficient of determination (R2) values of 0.928 and 0.929 and root mean square error values of 0.3545 and 0.3154 for the training and testing datasets, respectively. Multiple regression analysis was also performed on the same database, and R2 values of 0.8173 and 0.8313 were obtained between input and output parameters for the training and testing of the models, respectively. The GMDH technique developed in this study is introduced as a new model in the field of rock shear strength parameters. © 2019, Springer-Verlag London Ltd., part of Springer Nature.
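The abstract does not spell out the GMDH mechanics. As a rough sketch of the general idea only (layers of pairwise quadratic "neurons" fitted by least squares, with the best candidates feeding the next layer), the following uses synthetic data; all settings are illustrative and not the authors' model:

```python
import numpy as np
from itertools import combinations

def fit_neuron(a, b, y):
    """One GMDH 'neuron': y ~ w0 + w1*a + w2*b + w3*a^2 + w4*b^2 + w5*a*b."""
    X = np.column_stack([np.ones_like(a), a, b, a * a, b * b, a * b])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ w

def gmdh_layer(features, y, keep=4):
    """Fit a neuron for every feature pair; the `keep` best predictions
    (lowest training RMSE) become the next layer's features."""
    cands = []
    for i, j in combinations(range(features.shape[1]), 2):
        pred = fit_neuron(features[:, i], features[:, j], y)
        cands.append((float(np.sqrt(np.mean((pred - y) ** 2))), pred))
    cands.sort(key=lambda c: c[0])
    return np.column_stack([p for _, p in cands[:keep]]), cands[0][0]

# Synthetic stand-in for (P-wave velocity, UCS, tensile strength) -> cohesion
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] * X[:, 2] + 0.1

feats, rmse = X, float("inf")
for _ in range(4):                      # grow four layers
    feats, rmse = gmdh_layer(feats, y)
print(round(rmse, 4))
```

Because each layer's neurons can reproduce the previous layer's best prediction, the training RMSE never increases as layers are added; the paper's model uses 8 layers and real rock data.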
Data-driven computational social science : A survey
- Authors: Zhang, Jun , Wang, Wei , Xia, Feng , Lin, Yu-Ru , Tong, Hanghang
- Date: 2020
- Type: Text , Journal article
- Relation: Big Data Research Vol. 21, no. (2020), p. 1-22
- Full Text:
- Reviewed:
- Description: Social science concerns issues of individuals, relationships, and society as a whole. The complexity of research topics in social science makes it an amalgamation of multiple disciplines, such as economics, political science, and sociology. For centuries, scientists have conducted many studies to understand the mechanisms of society. However, due to the limitations of traditional research methods, many critical social issues remain to be explored. To address them, computational social science has emerged from the rapid advancement of computing technologies and the profound body of work in social science. With the aid of advanced research techniques, various kinds of data from diverse areas can now be acquired, and they can help us look at social problems from a new perspective. As a result, utilizing various data to investigate issues in computational social science has attracted more and more attention. In this paper, to the best of our knowledge, we present the first survey on data-driven computational social science, which primarily focuses on reviewing application domains involving human dynamics. The state-of-the-art research on human dynamics is reviewed from three aspects: individuals, relationships, and collectives. Specifically, the research methodologies used to address research challenges in the aforementioned application domains are summarized. In addition, some important open challenges with respect to both emerging research topics and research methods are discussed.
Depth sequence coding with hierarchical partitioning and spatial-domain quantization
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Circuits and Systems for Video Technology Vol. 30, no. 3 (2020), p. 835-849
- Full Text:
- Reviewed:
- Description: Depth coding in 3D-HEVC deforms object shapes due to block-level edge approximation and lacks efficient techniques to exploit the statistical redundancy arising from the frame-level clustering tendency in depth data for higher coding gain at near-lossless quality. This paper presents a standalone mono-view depth sequence coder, which preserves edges implicitly by limiting quantization to the spatial domain and exploits the frame-level clustering tendency efficiently with a novel binary tree-based decomposition (BTBD) technique. BTBD can exploit the statistical redundancy in frame-level syntax, motion components, and residuals efficiently with fewer block-level prediction/coding modes and simpler context modeling for context-adaptive arithmetic coding. Compared with the depth coder in 3D-HEVC, the proposed coder achieved significantly lower bitrates in the lossless to near-lossless quality range for mono-view coding and, from depth maps compressed at the same bitrate together with the corresponding texture frames, rendered synthetic views of superior quality. © 1991-2012 IEEE.
Effects of a proper feature selection on prediction and optimization of drilling rate using intelligent techniques
- Authors: Liao, Xiufeng , Khandelwal, Manoj , Yang, Haiqing , Koopialipoor, Mohammadreza , Murlidhar, Bhatawdekar
- Date: 2020
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 36, no. 2 (Apr 2020), p. 499-510
- Full Text:
- Reviewed:
- Description: One of the important factors during drilling operations is the rate of penetration (ROP), which is controlled by different variables. The factors affecting different drilling operations are of paramount importance. In the current research, an attempt was made to better recognize drilling parameters and optimize them using an optimization algorithm. For this purpose, 618 data sets, including RPM, flushing media, and compressive strength parameters, were measured and collected. After an initial investigation, the compressive strength of the samples, which is an important rock parameter, was used as a proper criterion for classification. Then, using intelligent systems, models were built for three different levels of rock strength as well as for all the data. The results showed that systems classified based on compressive strength performed better for ROP assessment due to the proximity of features. Therefore, these three levels were used for classification. A new artificial bee colony algorithm was used to solve this problem. Optimizations were applied to the selected models under different optimization conditions, and optimal states were determined. As determining drilling machine parameters is important, these parameters were determined based on optimal conditions. The obtained results showed that this intelligent system can substantially improve drilling conditions and increase the ROP value for the three strength levels of the rocks. This modeling system can be used in different drilling operations.
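The artificial bee colony (ABC) algorithm named in the abstract is not described further. A minimal generic ABC sketch (not the authors' variant; the objective, bounds and parameters are illustrative) minimizing a toy function:

```python
import random

def abc_minimise(f, dim, bounds, n_sources=10, limit=20, iters=200, seed=1):
    """Minimal artificial bee colony: employed bees perturb each food source,
    onlooker bees re-sample good sources, scouts reset exhausted ones.
    Assumes f is non-negative (used in the onlooker fitness weights)."""
    rng = random.Random(seed)
    lo, hi = bounds
    new = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    sources = [new() for _ in range(n_sources)]
    trials = [0] * n_sources

    def neighbour(x):
        k = rng.randrange(dim)               # perturb one coordinate
        partner = rng.choice(sources)        # towards/away from a random partner
        y = x[:]
        y[k] = min(hi, max(lo, x[k] + rng.uniform(-1, 1) * (x[k] - partner[k])))
        return y

    def try_improve(i):
        cand = neighbour(sources[i])
        if f(cand) < f(sources[i]):
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):           # employed bees
            try_improve(i)
        fits = [1.0 / (1.0 + f(s)) for s in sources]
        for _ in range(n_sources):           # onlooker bees
            i = rng.choices(range(n_sources), weights=fits)[0]
            try_improve(i)
        for i in range(n_sources):           # scout bees
            if trials[i] > limit:
                sources[i], trials[i] = new(), 0
    return min(sources, key=f)

best = abc_minimise(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
print(sum(v * v for v in best))
```

In the paper's setting the decision variables would be drilling machine parameters and the objective the (negated) predicted ROP; here a sphere function simply exercises the search loop.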
MODEL : motif-based deep feature learning for link prediction
- Authors: Wang, Lei , Ren, Jing , Xu, Bo , Li, Jianxin , Luo, Wei , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Computational Social Systems Vol. 7, no. 2 (2020), p. 503-516
- Full Text:
- Reviewed:
- Description: Link prediction plays an important role in network analysis and applications. Recently, approaches for link prediction have evolved from traditional similarity-based algorithms into embedding-based algorithms. However, most existing approaches fail to exploit the fact that real-world networks are different from random networks. In particular, real-world networks are known to contain motifs: natural network building blocks reflecting the underlying network-generating processes. In this article, we propose a novel embedding algorithm that incorporates network motifs to capture higher-order structures in the network. To evaluate its effectiveness for link prediction, experiments were conducted on three types of networks: social networks, biological networks, and academic networks. The results demonstrate that our algorithm outperforms both the traditional similarity-based algorithms (by 20%) and the state-of-the-art embedding-based algorithms (by 19%). © 2014 IEEE.
Multimodal memetic framework for low-resolution protein structure prediction
- Authors: Nazmul, Rumana , Chetty, Madhu , Chowdhury, Ashan
- Date: 2020
- Type: Text , Journal article
- Relation: Swarm and Evolutionary Computation Vol. 52, no. (Feb 2020), p. 14
- Full Text: false
- Reviewed:
- Description: In this paper, we propose a systematic design of evolutionary optimization, namely the Multimodal Memetic Framework (MMF), to effectively search the vast complex energy landscape. Our proposed memetic framework is implemented in hierarchical stages, with the optimization of each stage performed in parallel in three different states: Exploratory, Exploitative and Central. Each state, with its own set of sub-populations, either explores or exploits by beneficial mixing of potential solutions to direct the search towards a global solution. Instead of implementing identical genetic operators, the proposed approach employs different selection and survival criteria in each state according to its designated task. The Exploratory state employs a knowledge-based initial population generation technique with appropriately tuned genetic operators to guide the search to the "nearest peak". The Exploitative state fine-tunes the individuals representing different regions by applying a building-block-based local search. Finally, by utilizing the knowledge gathered from different peaks, the Central state carries out information exchange among the highly fit solutions for exploring the undiscovered regions. The information exchange employs a novel non-random parental selection technique to distribute the reproduction opportunity intelligently among the individuals, making crossover more effective. The method has been tested on a set of benchmark protein sequences for 2D and 3D lattice models. The experimental results demonstrate the superiority of the proposed method over other state-of-the-art algorithms.
Simple supervised dissimilarity measure : bolstering iForest-induced similarity with class information without learning
- Authors: Wells, Jonathan , Aryal, Sunil , Ting, Kai
- Date: 2020
- Type: Text , Journal article
- Relation: Knowledge and Information Systems Vol. 62, no. 8 (2020), p. 3203-3216
- Full Text: false
- Reviewed:
- Description: Existing distance metric learning methods require optimisation to learn a feature space to transform data—this makes them computationally expensive in large datasets. In classification tasks, they make use of class information to learn an appropriate feature space. In this paper, we present a simple supervised dissimilarity measure which does not require learning or optimisation. It uses class information to measure dissimilarity of two data instances in the input space directly. It is a supervised version of an existing data-dependent dissimilarity measure called me. Our empirical results in k-NN and LVQ classification tasks show that the proposed simple supervised dissimilarity measure generally produces predictive accuracy better than or at least as good as existing state-of-the-art supervised and unsupervised dissimilarity measures. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
The effects of the no-touch gap on the no-touch bipolar radiofrequency ablation treatment of liver cancer : a numerical study using a two compartment model
- Authors: Yap, Shelley , Cheong, Jason , Foo, Ji , Ooi, Ean Tat , Ooi, Ean Hin
- Date: 2020
- Type: Text , Journal article
- Relation: Applied Mathematical Modelling Vol. 78, no. (2020), p. 134-147
- Full Text: false
- Reviewed:
- Description: The no-touch bipolar radiofrequency ablation (RFA) for cancer treatment is advantageous primarily because of its capability to prevent tumour track seeding (TTS). In this technique, the RF probes are placed at a distance (the no-touch gap) away from the tumour boundary. Ideally, the RF probes should be placed sufficiently far from the tumour to avoid TTS. However, having a gap that is too large can lead to ineffective ablation. This paper investigates how the selection of the no-touch gap can affect the tissue electrical and thermal responses during the no-touch bipolar RFA treatment. Simulations were carried out on a two-compartment model using the finite element method. The results indicated that a gap that is too large may lead to incomplete ablation and failure to achieve a significant ablation margin. However, keeping the gap too small may not be clinically practical. It was suggested that the incomplete ablation and insufficient ablation margin observed in some of the cases may require the placement of additional probes around the tumour. The present study stresses the importance of identifying the optimal no-touch gap that avoids TTS without compromising the treatment outcome. © 2019 Elsevier Inc.
The gene of scientific success
- Authors: Kong, Xiangjie , Zhang, Jun , Zhang, Da , Bu, Yi , Ding, Ying , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: ACM Transactions on Knowledge Discovery from Data Vol. 14, no. 4 (2020), p.
- Full Text:
- Reviewed:
- Description: This article elaborates how to identify and evaluate causal factors to improve scientific impact. Analyzing scientific impact can benefit various academic activities, including funding applications, mentor recommendation, discovering potential collaborators, and the like. It is universally acknowledged that high-impact scholars often have more opportunities to receive awards as encouragement for their hard work. Therefore, scholars spend great effort making scientific achievements and improving their scientific impact during their academic life. But what are the determining factors that control scholars' academic success? The answer to this question can help scholars conduct their research more efficiently. With this in mind, our article presents and analyzes the causal factors that are crucial for scholars' academic success. We first propose five major factors: article-centered factors, author-centered factors, venue-centered factors, institution-centered factors, and temporal factors. Then, we apply recent advanced machine learning algorithms and the jackknife method to assess the importance of each causal factor. Our empirical results show that author-centered and article-centered factors have the highest relevance to scholars' future success in the computer science area. Additionally, we discover an interesting phenomenon: the h-indices of scholars within the same institution or university are actually very close to each other. © 2020 ACM.
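The jackknife mentioned in the abstract works by recomputing a statistic with each observation left out, yielding a bias-corrected estimate and a standard error. A minimal generic sketch (the data and statistic below are illustrative, not the paper's):

```python
import math

def jackknife(stat, sample):
    """Leave-one-out jackknife: bias-corrected estimate and standard error
    of a statistic computed on the sample."""
    n = len(sample)
    full = stat(sample)
    loo = [stat(sample[:i] + sample[i + 1:]) for i in range(n)]  # leave-one-out
    loo_mean = sum(loo) / n
    bias = (n - 1) * (loo_mean - full)
    se = math.sqrt((n - 1) / n * sum((v - loo_mean) ** 2 for v in loo))
    return full - bias, se

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = lambda xs: sum(xs) / len(xs)
est, se = jackknife(mean, data)
print(est, se)
```

For the sample mean, the jackknife bias is zero and the standard error reduces to the usual standard error of the mean; for more complex importance scores (as in the paper) the same recipe applies unchanged.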
UniFlexView : a unified framework for consistent construction of BPMN and BPEL process views
- Authors: Yongchareon, Sira , Liu, Chengfei , Zhao, Xiaohui
- Date: 2020
- Type: Text , Journal article
- Relation: Concurrency Computation Vol. 32, no. 11 (2020), p.
- Full Text:
- Reviewed:
- Description: Process view technologies allow organizations to create abstractions of their business processes at different levels of granularity, thereby enabling more effective business process management, analysis, interoperation, and privacy control. Existing research has proposed view construction and abstraction techniques for block-based (i.e., BPEL) and graph-based (i.e., BPMN) process models. However, the existing techniques treat the two types of models separately. In particular, this creates challenges for achieving a consistent process view for a BPEL model that derives from a BPMN model. In this paper, we propose a unified framework, namely UniFlexView, for supporting automatic and consistent process view construction. With our framework, process modelers can use our proposed View Definition Language to specify their view construction requirements regardless of the type of process model. A system prototype of UniFlexView has been developed as a proof of concept and a demonstration of the usability and feasibility of our framework. © 2019 John Wiley & Sons, Ltd.
A computational model to investigate the influence of electrode lengths on the single probe bipolar radiofrequency ablation of the liver
- Authors: Cheong, Jason , Yap, Shelley , Ooi, Ean Tat , Ooi, Ean Hin
- Date: 2019
- Type: Text , Journal article
- Relation: Computer Methods and Programs in Biomedicine Vol. 176, no. (2019), p. 17-32
- Full Text: false
- Reviewed:
- Description: Background and objectives: Recently, there have been calls for RFA to be implemented in the bipolar mode for cancer treatment due to the benefits it offers over the monopolar mode. These include the ability to prevent skin burns at the grounding pad and to avoid tumour track seeding. The use of bipolar RFA in clinical practice remains uncommon, however, as few research studies have been carried out on bipolar RFA. As such, there is still uncertainty in understanding the effects of different RF probe configurations on the treatment outcome of RFA. This paper demonstrates that the electrode lengths have a strong influence on the mechanics of bipolar RFA. The information obtained here may lead to further optimization of the system for subsequent use in hospitals. Methods: A 2D model in axisymmetric coordinates was developed to simulate the electro-thermophysiological responses of the tissue during single probe bipolar RFA. Two different probe configurations were considered, namely the configuration where the active electrode is longer than the ground electrode and the configuration where the ground electrode is longer than the active electrode. The mathematical model was first verified against an existing experimental study in the literature. Results: The simulations showed that heating is confined to the region around the shorter electrode, regardless of whether the shorter electrode is the active or the ground. Consequently, thermal coagulation also occurs in the region surrounding the shorter electrode. This opens up the possibility of better customized treatment through the development of RF probes with adjustable electrode lengths. Conclusions: The electrode length was found to play a significant role in the outcome of single probe bipolar RFA. In particular, the length of the shorter electrode becomes the limiting factor that influences the mechanics of single probe bipolar RFA. 
Results from this study can be used to further develop and optimize bipolar RFA as an effective and reliable cancer treatment technique. (C) 2019 Elsevier B.V. All rights reserved.
A difference of convex optimization algorithm for piecewise linear regression
- Authors: Bagirov, Adil , Taheri, Sona , Asadi, Soodabeh
- Date: 2019
- Type: Text , Journal article
- Relation: Journal of Industrial and Management Optimization Vol. 15, no. 2 (2019), p. 909-932
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: The problem of finding a continuous piecewise linear function approximating a regression function is considered. This problem is formulated as a nonconvex nonsmooth optimization problem in which the objective function is represented as a difference of convex (DC) functions. Subdifferentials of the DC components are computed, and an algorithm based on these subdifferentials is designed to find piecewise linear functions. The algorithm is tested using synthetic and real-world data sets and compared with other regression algorithms.
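The DC structure behind this formulation is that every continuous piecewise linear function can be written as a difference of two convex max-of-affine functions. A concrete illustration (the hat function below is an example chosen here, not one from the paper):

```python
def max_affine(pieces, x):
    """Convex piecewise linear function: max of affine pieces (slope, intercept)."""
    return max(a * x + b for a, b in pieces)

def dc_eval(G, H, x):
    """Continuous piecewise linear f = g - h with g, h both convex (max-affine):
    the difference-of-convex (DC) representation."""
    return max_affine(G, x) - max_affine(H, x)

# Hat function f(x) = min(x + 1, 1 - x). Using min(u, v) = u + v - max(u, v):
# f = (x + 1) + (1 - x) - max(x + 1, 1 - x) = 2 - max(x + 1, 1 - x),
# i.e. g(x) = 2 (a trivial max-affine) and h(x) = max(x + 1, 1 - x).
G = [(0.0, 2.0)]
H = [(1.0, 1.0), (-1.0, 1.0)]

for x in (-1.0, 0.0, 0.5, 1.0):
    print(x, dc_eval(G, H, x))
```

The paper's algorithm fits the slopes and intercepts of both max-affine components to data using subdifferentials of the two convex parts; the sketch only shows the representation being optimized.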
An efficient RANSAC hypothesis evaluation using sufficient statistics for RGB-D pose estimation
- Authors: Senthooran, Ilankalkone , Murshed, Manzur , Barca, Jan , Kamruzzaman, Joarder , Chung, Hoam
- Date: 2019
- Type: Text , Journal article
- Relation: Autonomous Robots Vol. 43, no. 5 (2019), p. 1257-1270
- Full Text:
- Reviewed:
- Description: Achieving autonomous flight in GPS-denied environments begins with pose estimation in three-dimensional space, and this is much more challenging for an MAV in a swarm robotic system due to limited computational resources. In vision-based pose estimation, outlier detection is the most time-consuming step. It usually involves a RANSAC procedure that uses the reprojection-error method for hypothesis evaluation. The realignment-based hypothesis evaluation method is observed to be more accurate, but its considerably slower speed makes it unsuitable for robots with limited resources. We use sufficient statistics of least-squares minimisation to speed up this process. The additive nature of these sufficient statistics makes it possible to compute pose estimates in each evaluation by reusing previously computed statistics, so estimates need not be calculated from scratch each time. The proposed method is tested on standard RANSAC, Preemptive RANSAC and R-RANSAC using benchmark datasets. The results show that the use of sufficient statistics speeds up the outlier detection process with realignment hypothesis evaluation for all RANSAC variants, achieving a speedup of up to 6.72 times.
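The additive-sufficient-statistics idea can be sketched on a 1-D least-squares line fit standing in for the pose estimate: the sums (n, Σx, Σy, Σx², Σxy) fully determine the fit and merge in constant time, so an estimate over a grown inlier set reuses old sums instead of being recomputed from scratch. This is a sketch of the principle only, not the authors' implementation:

```python
class LSQStats:
    """Additive sufficient statistics for 1-D least-squares line fitting."""
    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

    def merge(self, other):
        """O(1) merge: statistics of the union of two point sets."""
        out = LSQStats()
        for f in ("n", "sx", "sy", "sxx", "sxy"):
            setattr(out, f, getattr(self, f) + getattr(other, f))
        return out

    def fit(self):
        """Slope and intercept of the least-squares line."""
        den = self.n * self.sxx - self.sx ** 2
        slope = (self.n * self.sxy - self.sx * self.sy) / den
        return slope, (self.sy - slope * self.sx) / self.n

a, b = LSQStats(), LSQStats()
for x in range(5):
    a.add(x, 2 * x + 1)        # first batch of inliers
for x in range(5, 10):
    b.add(x, 2 * x + 1)        # later batch: merged, not refitted from scratch
print(a.merge(b).fit())
```

In the paper the same additivity is exploited for the least-squares pose (realignment) estimate inside each RANSAC hypothesis evaluation.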
Deep Reinforcement Learning for Vehicular Edge Computing: An Intelligent Offloading System
- Authors: Ning, Zhaolong , Dong, Peiran , Wang, Xiaojie , Rodrigues, Joel , Xia, Feng
- Date: 2019
- Type: Text , Journal article
- Relation: ACM Transactions on Intelligent Systems and Technology Vol. 10, no. 6 (Dec 2019), p. 24
- Full Text:
- Reviewed:
- Description: The development of smart vehicles brings drivers and passengers a comfortable and safe environment. Various emerging applications are promising to enrich users' traveling experiences and daily life. However, how to execute computing-intensive applications on resource-constrained vehicles still faces huge challenges. In this article, we construct an intelligent offloading system for vehicular edge computing by leveraging deep reinforcement learning. First, both the communication and computation states are modelled by finite Markov chains. Moreover, the task scheduling and resource allocation strategy is formulated as a joint optimization problem to maximize users' Quality of Experience (QoE). Due to its complexity, the original problem is further divided into two sub-optimization problems. A two-sided matching scheme and a deep reinforcement learning approach are developed to schedule offloading requests and allocate network resources, respectively. Performance evaluations illustrate the effectiveness and superiority of our constructed system.
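The "two-sided matching scheme" in the abstract is not detailed. The standard deferred-acceptance (Gale-Shapley) procedure is a minimal sketch of the general idea; the task and edge-server names below are hypothetical:

```python
def stable_match(requests, servers):
    """Deferred acceptance: requesters propose in preference order; each
    server keeps the best proposal seen so far. Returns {requester: server}.
    Assumes equal numbers of requesters and servers, full preference lists."""
    free = list(requests)
    next_pick = {r: 0 for r in requests}
    held = {}                                    # server -> requester held
    rank = {s: {r: i for i, r in enumerate(p)} for s, p in servers.items()}
    while free:
        r = free.pop()
        s = requests[r][next_pick[r]]            # r's next-best server
        next_pick[r] += 1
        if s not in held:
            held[s] = r
        elif rank[s][r] < rank[s][held[s]]:
            free.append(held[s])                 # bump the weaker match
            held[s] = r
        else:
            free.append(r)                       # rejected; propose again later
    return {r: s for s, r in held.items()}

tasks = {"t1": ["e1", "e2", "e3"], "t2": ["e1", "e3", "e2"], "t3": ["e2", "e1", "e3"]}
edges = {"e1": ["t2", "t1", "t3"], "e2": ["t1", "t3", "t2"], "e3": ["t3", "t2", "t1"]}
print(stable_match(tasks, edges))
```

In the offloading setting, one side would rank edge servers by expected QoE and the other would rank requests; the paper's actual scheme and preference construction may differ.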
The evolution of Turing Award Collaboration Network : bibliometric-level and network-level metrics
- Authors: Kong, Xiangjie , Shi, Yajie , Wang, Wei , Ma, Kai , Wan, Liangtian , Xia, Feng
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Computational Social Systems Vol. 6, no. 6 (2019), p. 1318-1328
- Full Text:
- Reviewed:
- Description: The year 2017 marked a milestone: the 50th anniversary of the Turing Award, the top award in the computer science field. We study the long-term evolution of the Turing Award Collaboration Network, which can be considered a microcosm of the computer science field from 1974 to 2016. First, scholars tended to publish articles by themselves in the early stages and began to focus on tight collaboration from the late 1980s. Second, compared with a random network of the same scale, the Turing Award Collaboration Network has small-world properties but is not a scale-free network. The reason may be that the number of collaborators per scholar is limited; it is impossible for scholars to connect to others freely (preferential attachment) as in a scale-free network. Third, to measure how far a scholar is from the Turing Award, we propose a metric called the Turing Number (TN) and find that the TN decreases gradually over time. Meanwhile, we observe that scholars increasingly prefer to gather into groups to do research as computer science develops. This article presents a new way to explore the evolution of the academic collaboration network in the field of computer science by building and analyzing the Turing Award Collaboration Network over several decades. © 2014 IEEE.
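The Turing Number defined above behaves like an Erdos number: a laureate has TN 0, a co-author of a laureate has TN 1, and so on, i.e. a breadth-first distance in the collaboration graph. A minimal BFS sketch over a toy graph (names are illustrative, not real data):

```python
from collections import deque

def turing_numbers(coauthors, laureates):
    """BFS over the collaboration graph from all laureates at once:
    dist[a] is the shortest co-authorship distance from any laureate."""
    dist = {a: 0 for a in laureates}
    q = deque(laureates)
    while q:
        a = q.popleft()
        for b in coauthors.get(a, ()):
            if b not in dist:
                dist[b] = dist[a] + 1
                q.append(b)
    return dist

# Toy collaboration graph: adjacency lists of co-authors
graph = {
    "laureate": ["alice", "bob"],
    "alice": ["laureate", "carol"],
    "bob": ["laureate"],
    "carol": ["alice", "dave"],
    "dave": ["carol"],
}
print(turing_numbers(graph, ["laureate"]))
```

The paper's finding that TN decreases over time then corresponds to these BFS distances shrinking as the collaboration graph densifies.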
A detector of structural similarity for multi-modal microscopic image registration
- Authors: Lv, Guohua , Teng, Shyh , Lu, Guojun
- Date: 2018
- Type: Text , Journal article
- Relation: Multimedia Tools and Applications Vol. 77, no. 6 (2018), p. 7675-7701
- Full Text: false
- Reviewed:
- Description: This paper presents a Detector of Structural Similarity (DSS) to minimize the visual differences between brightfield and confocal microscopic images. The context of this work is that it is very challenging to effectively register such images due to a low structural similarity in image contents. To address this issue, DSS aims to maximize the structural similarity by utilizing the intensity relationships among red-green-blue (RGB) channels in images. Technically, DSS can be combined with any multi-modal image registration technique in registering brightfield and confocal microscopic images. Our experimental results show that DSS significantly increases the visual similarity in such images, thereby improving the registration performance of an existing state-of-the-art multi-modal image registration technique by up to approximately 27%. © 2017, Springer Science+Business Media New York.
Clustering in large data sets with the limited memory bundle method
- Authors: Karmitsa, Napsu , Bagirov, Adil , Taheri, Sona
- Date: 2018
- Type: Text , Journal article
- Relation: Pattern Recognition Vol. 83, no. (2018), p. 245-259
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: The aim of this paper is to design an algorithm based on nonsmooth optimization techniques to solve minimum sum-of-squares clustering problems in very large data sets. First, the clustering problem is formulated as a nonsmooth optimization problem. Then the limited memory bundle method [Haarala et al., 2007] is modified and combined with an incremental approach to design a new clustering algorithm. The algorithm is evaluated using real-world data sets with both large numbers of attributes and large numbers of data points, and it is compared with other optimization-based clustering algorithms. The numerical results demonstrate the efficiency of the proposed algorithm for clustering in very large data sets.
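The minimum sum-of-squares objective and the incremental idea can be sketched in one dimension. Note the real algorithm minimizes this nonsmooth objective with the limited memory bundle method; the greedy candidate step below is only a stand-in for how the incremental approach seeds each new centre:

```python
def ssq(centers, points):
    """Minimum sum-of-squares clustering objective (1-D): each point is
    charged to its nearest centre; the inner min makes f nonsmooth."""
    return sum(min((x - c) ** 2 for c in centers) for x in points)

def add_centre(centers, points):
    """Incremental step: try each data point as a candidate new centre and
    keep the one that reduces the objective the most."""
    return min(points, key=lambda p: ssq(centers + [p], points))

points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centers = [sum(points) / len(points)]      # start from the 1-centre solution
c2 = add_centre(centers, points)
print(c2, ssq(centers + [c2], points))
```

With two well-separated clumps, the new centre lands inside one clump and the objective drops sharply; the bundle method would then refine all centre positions jointly.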
COREG : A corner based registration technique for multimodal images
- Authors: Lv, Guohua , Teng, Shyh , Lu, Guojun
- Date: 2018
- Type: Text , Journal article
- Relation: Multimedia Tools and Applications Vol. 77, no. 10 (2018), p. 12607-12634
- Full Text: false
- Reviewed:
- Description: This paper presents a COrner based REGistration technique for multimodal images (referred to as COREG). The proposed technique focuses on addressing large content and scale differences in multimodal images. Unlike traditional multimodal image registration techniques that rely on intensities or gradients for feature representation, we propose to use contour-based corners. First, curvature similarity between corners is explored for the first time for the purpose of multimodal image registration. Second, a novel local descriptor called Distribution of Edge Pixels Along Contour (DEPAC) is proposed to represent the edges in the neighborhood of corners. Third, a simple yet effective way of estimating the scale difference is proposed by making use of geometric relationships between corner triplets from the reference and target images. Using a set of benchmark multimodal images and multimodal microscopic images, we demonstrate that our proposed technique outperforms a state-of-the-art multimodal image registration technique. © 2017, Springer Science+Business Media, LLC.
Enhancing image registration performance by incorporating distribution and spatial distance of local descriptors
- Authors: Lv, Guohua , Teng, Shyh , Lu, Guojun
- Date: 2018
- Type: Text , Journal article
- Relation: Pattern Recognition Letters Vol. 103, no. (2018), p. 46-52
- Full Text: false
- Reviewed:
- Description: A data-dependent dissimilarity measure called mp-dissimilarity has recently been proposed. Unlike the ℓp-norm distance, which is widely used in calculating the similarity between vectors, mp-dissimilarity takes into account the positions of the two vectors relative to the rest of the data. This paper investigates the potential of mp-dissimilarity in matching local image descriptors. Moreover, three new matching strategies are proposed that consider both ℓp-norm distance and mp-dissimilarity. Our proposed matching strategies are extensively evaluated against ℓp-norm distance and mp-dissimilarity on a few benchmark datasets. Experimental results show that mp-dissimilarity is a promising alternative to ℓp-norm distance in matching local descriptors. The proposed matching strategies outperform both ℓp-norm distance and mp-dissimilarity in matching accuracy. One of our proposed matching strategies is comparable to ℓp-norm distance in terms of recall vs. 1-precision. © 2018 Elsevier B.V.
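A simplified sketch of the mp-dissimilarity idea (not the exact published formula): per dimension, replace the geometric gap with the fraction of data falling in the range spanned by the two values, so the same gap counts as larger in a dense region than in a sparse one:

```python
def mp_dissimilarity(x, y, data, p=2):
    """Data-dependent dissimilarity sketch: in each dimension, use the
    fraction of data points lying between x_i and y_i (a 'mass') instead
    of the geometric gap |x_i - y_i|, then combine with a p-mean."""
    n, d = len(data), len(x)
    total = 0.0
    for i in range(d):
        lo, hi = min(x[i], y[i]), max(x[i], y[i])
        mass = sum(1 for z in data if lo <= z[i] <= hi)   # data in the range
        total += (mass / n) ** p
    return (total / d) ** (1 / p)

data = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.2), (0.9, 0.9), (1.0, 1.0)]
# Same geometric gap (0.2 per dimension), different mass:
print(mp_dissimilarity((0.0, 0.0), (0.2, 0.2), data),   # dense region
      mp_dissimilarity((0.8, 0.8), (1.0, 1.0), data))   # sparse region
```

The pair in the dense region scores higher than the equally spaced pair in the sparse region, even though their ℓp distance is identical; this is the property the matching strategies in the paper exploit.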