More effective web search using bigrams and trigrams
- Johnson, David, Malhotra, Vishy, Vamplew, Peter
- Authors: Johnson, David , Malhotra, Vishy , Vamplew, Peter
- Date: 2006
- Type: Text , Journal article
- Relation: Webology Vol. 3, no. 4 (2006), p.
- Full Text:
- Reviewed:
- Description: This paper investigates the effectiveness of quoted bigrams and trigrams as query terms to target web search. Prior research in this area has largely focused on static corpora each containing only a few million documents, and has reported mixed (usually negative) results. We investigate the bigram/trigram extraction problem and present an extraction algorithm that shows promising results when applied to real-time web search. We also present a prototype augmented search software package that can leverage the results provided by a web search engine to assist the web searcher in identifying important phrases and related documents quickly. This software has received favourable feedback in a recent user survey. Copyright © 2006, David Johnson, Vishv Malhotra, & Peter Vamplew.
- Description: C1
- Description: 2003001583
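The quoted-phrase idea in the abstract above can be illustrated with a minimal frequency-based sketch (the paper's actual extraction algorithm is not reproduced here; the function names and the raw-frequency ranking criterion are illustrative assumptions):

```python
from collections import Counter
import re

def extract_ngrams(text, n):
    """Return frequency counts of word n-grams in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def candidate_phrases(text, top=5):
    """Rank quoted bigram/trigram candidates by raw frequency.

    A simple frequency-based sketch; the paper's extraction algorithm
    applies further criteria that are not reproduced here.
    """
    counts = extract_ngrams(text, 2) + extract_ngrams(text, 3)
    return ['"%s"' % " ".join(g) for g, _ in counts.most_common(top)]
```

Each returned phrase is already wrapped in quotation marks so it can be pasted directly into a search engine as an exact-phrase query.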
New traceability codes and identification algorithm for tracing pirates
- Wu, Xinwen, Watters, Paul, Yearwood, John
- Authors: Wu, Xinwen , Watters, Paul , Yearwood, John
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 2008 International Symposium on Parallel and Distributed Processing with Applications, ISPA 2008, Sydney, New South Wales : 10th-12th December 2008 p. 719-724
- Full Text:
- Description: With the increasing popularity of digital products, there is a strong desire to protect the rights of owners against illegal redistribution. Traditional encryption schemes alone do not provide a comprehensive solution to digital rights management, since they do not prevent users who are authorized to use a digital product for their own use from transferring the cleartext content to unauthorized users. However, traceability schemes can be used to trace the illegitimate redistributors effectively. Two types of traceability schemes have been proposed in the literature - traceability codes (TA codes), and codes with the identifiable parent properties (IPP codes). TA codes are special IPP codes, and many TA codes implement an efficient identification algorithm which can determine at least one redistributor. However, many IPP codes are not TA codes, in which case, no efficient identification algorithms are available. In this paper, we generalize the definition of TA codes to derive a new family of traceability codes that is much larger than the family of traditional TA codes. By using existing decoding algorithms with respect to the Lee distance, an efficient identification algorithm is proposed for generalized TA codes. Furthermore, we show that the identification algorithm of generalized TA codes can find more redistributors than those of traditional TA codes.
- Description: 2003006288
Application of rank correlation, clustering and classification in information security
- Beliakov, Gleb, Yearwood, John, Kelarev, Andrei
- Authors: Beliakov, Gleb , Yearwood, John , Kelarev, Andrei
- Date: 2012
- Type: Text , Journal article
- Relation: Journal of Networks Vol. 7, no. 6 (2012), p. 935-945
- Full Text:
- Reviewed:
- Description: This article presents an experimental investigation of a novel application of a clustering technique recently introduced by the authors, which uses robust and stable consensus functions in information security, where it is often necessary to process large data sets and monitor outcomes in real time, as is required, for example, for intrusion detection. Here we concentrate on a particular application: the profiling of phishing websites. First, we apply several independent clustering algorithms to a randomized sample of data to obtain independent initial clusterings. The Silhouette index is used to determine the number of clusters. Second, rank correlation is used to select a subset of features for dimensionality reduction. We investigate the effectiveness of the Pearson Linear Correlation Coefficient, the Spearman Rank Correlation Coefficient and the Goodman-Kruskal Correlation Coefficient in this application. Third, we use a consensus function to combine the independent initial clusterings into one consensus clustering. Fourth, we train fast supervised classification algorithms on the resulting consensus clustering to enable them to process the whole large data set as well as new data. The precision and recall of the classifiers at the final stage of this scheme are critical for the effectiveness of the whole procedure. We investigated various combinations of several correlation coefficients, consensus functions, and a variety of supervised classification algorithms. © 2012 Academy Publisher.
- Description: 2003010277
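The four-stage scheme in the abstract above (independent clusterings, feature selection, consensus, supervised classification) hinges on the consensus step. A minimal co-association sketch of that step, assuming label vectors from the independent clusterings are already available (this is an illustrative consensus function, not the authors' own):

```python
from itertools import combinations

def consensus_clusters(labelings, threshold=0.5):
    """Combine independent clusterings into one consensus clustering.

    Two items land in the same consensus cluster when the fraction of
    input clusterings that co-assign them exceeds `threshold`.  A simple
    co-association sketch, not the consensus function from the paper.
    """
    n = len(labelings[0])
    # co-association: fraction of clusterings that put i and j together
    together = {
        (i, j): sum(lab[i] == lab[j] for lab in labelings) / len(labelings)
        for i, j in combinations(range(n), 2)
    }
    # single-link grouping over pairs above the threshold (union-find)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for (i, j), frac in together.items():
        if frac > threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

Items that most input clusterings agree on are merged; disagreements below the threshold keep items apart, which is what makes the combined result more stable than any single clustering.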
Energy-balanced transmission policies for wireless sensor networks
- Azad, Arman, Kamruzzaman, Joarder
- Authors: Azad, Arman , Kamruzzaman, Joarder
- Date: 2011
- Type: Text , Journal article
- Relation: IEEE Transactions on Mobile Computing Vol. 10, no. 7 (2011), p. 927-940
- Full Text:
- Reviewed:
- Description: Transmission policy, in addition to topology control, routing, and MAC protocols, can play a vital role in extending network lifetime. Existing transmission policies, however, cause an extremely unbalanced energy usage that contributes to the early demise of some sensors, reducing the overall network's lifetime drastically. Considering concentric rings around the sink, we decompose the transmission distance of the traditional multihop scheme into two parts, ring thickness and hop size, analyze the traffic and energy usage distribution among sensors, and determine how energy usage varies and the critical ring shifts with hop size. Based on the above observations, we propose a transmission scheme and determine the optimal ring thickness and hop size by formulating network lifetime as an optimization problem. Numerical results show substantial improvements in terms of network lifetime and energy usage distribution over existing policies. Two other variations of this policy are also presented by redefining the optimization problem considering: 1) concomitant hop size variation by sensors over lifetime along with optimal duty cycles, and 2) a distinct set of hop sizes for sensors in each ring. Both variations bring increasingly uniform energy usage with lower critical energy and further improve lifetime. A heuristic for distributed implementation of each policy is also presented.
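The ring-based energy imbalance discussed above can be made concrete with a toy relay-load model, assuming uniform node density and every ring forwarding all outer traffic one ring inward (an illustrative model, not the paper's formulation):

```python
def ring_relay_load(num_rings):
    """Relay load per ring when every ring forwards all outer traffic inward.

    Assumes uniform node density, so ring i (1-based, innermost first)
    holds nodes in proportion to its annulus area, (2*i - 1) units.
    The load is packets handled per unit of nodes per round: the ring's
    own traffic plus everything relayed from outside it.
    """
    areas = [2 * i - 1 for i in range(1, num_rings + 1)]
    outer = sum(areas)  # traffic originating in this ring or beyond
    loads = []
    for a in areas:
        # every packet from this ring outward passes through this ring
        loads.append(outer / a)
        outer -= a
    return loads
```

Running it for a few rings shows the innermost ring carrying by far the heaviest load, which is exactly the imbalance (the "critical ring") that the proposed policies target.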
Wake-up timer and binary exponential backoff for ZigBee-based wireless sensor network for flexible movement control system of a self-lifting scaffold
- Liang, Hua, Yang, Guangxiang, Xu, Ye, Gondal, Iqbal, Wu, Chao
- Authors: Liang, Hua , Yang, Guangxiang , Xu, Ye , Gondal, Iqbal , Wu, Chao
- Date: 2016
- Type: Text , Journal article
- Relation: International Journal of Distributed Sensor Networks Vol. 12, no. 9 (2016), p. 1-12
- Full Text:
- Reviewed:
- Description: Synchronous movement of attached self-lifting scaffolds is traditionally monitored with wired sensors in high-rise building construction, which limits their flexibility of movements. A ZigBee-based wireless sensor system has been suggested in this article to prove the effectiveness of wireless sensor networks in actual implementation. Two optoelectronic sensors are integrated into a ZigBee node for measuring the displacement of attached self-lifting scaffolds. The proposed wireless sensor network combines an end device and a coordinator to allow easy replacement of sensors as compared to a wired network. A wake-up timer algorithm is proposed to reduce the transmitting power during continuous wireless data communication in the wireless sensor network. Furthermore, a variant binary exponential backoff transmission algorithm for data loss avoidance is proposed. The variant binary exponential backoff algorithm reduces packet collisions during simultaneous access by increasing the randomizing moments at nodes attempting to access the wireless channels. The performance of three of the proposed modules - a cable sensor, a 315-MHz sensor, and a ZigBee sensor - is evaluated in terms of packet delivery ratio and the end-to-end delay of a ZigBee-based wireless sensor network. The experimental results show that the proposed variant binary exponential backoff transmission algorithm achieves a higher packet delivery ratio at the cost of higher delays. The average cost of the developed ZigBee-based wireless sensor network decreased by 24% compared with the cable sensor. The power consumption of ZigBee is approximately 53.75% of the 315-MHz sensor. The average current consumption is reduced by approximately 1.5 mA with the wake-up timer algorithm at the same sampling rate. © The Author(s) 2016.
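The mechanism that the proposed variant modifies is standard binary exponential backoff. A baseline sketch, with illustrative cw_min/cw_max values (the paper's variant, which enlarges the set of randomizing moments at contending nodes, is not reproduced here):

```python
import random

def backoff_slots(attempt, cw_min=8, cw_max=256, rng=random):
    """Draw a backoff delay (in slots) for a given retransmission attempt.

    Standard binary exponential backoff: the contention window doubles
    on each failed attempt, capped at cw_max, and the node waits a
    uniformly random number of slots in [0, cw - 1].
    """
    cw = min(cw_min * (2 ** attempt), cw_max)
    return rng.randrange(cw)
```

Doubling the window after each collision spreads retransmissions over ever-larger intervals, trading delay for a lower collision probability, which mirrors the paper's reported trade-off of higher packet delivery ratio at the cost of higher delays.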
Advances in multimedia sensor networks for health-care and related applications
- Hossain, M. Shamim, Pathan, Al-Sakib, Goebel, Stefan, Rahman, Shawon, Murshed, Manzur
- Authors: Hossain, M. Shamim , Pathan, Al-Sakib , Goebel, Stefan , Rahman, Shawon , Murshed, Manzur
- Date: 2015
- Type: Text , Journal article , Editorial
- Relation: International Journal of Distributed Sensor Networks Vol. 2015, no. (2015), p. 1-2
- Full Text:
- Reviewed:
- Description: Multimedia sensor services and technologies play an important role in seamlessly providing and managing health, sports, and other services to anyone, everywhere, and anytime. Media sensors are usually equipped with cameras, microphones, and other devices that produce media content and services. Such services and technologies enable caregivers and related professionals to have immediate access to required information for efficient decision making. Since media sensing technology development is growing, many research opportunities are emerging in a broad spectrum of application domains.
PFARS : Enhancing throughput and lifetime of heterogeneous WSNs through power-aware fusion, aggregation, and routing scheme
- Khan, Rahim, Zakarya, Muhammad, Tan, Zhiyuan, Usman, Muhammad, Jan, Mian, Khan, Mukhtaj
- Authors: Khan, Rahim , Zakarya, Muhammad , Tan, Zhiyuan , Usman, Muhammad , Jan, Mian , Khan, Mukhtaj
- Date: 2019
- Type: Text , Journal article
- Relation: International Journal of Communication Systems Vol. 32, no. 18 (Dec 2019), p. 21
- Full Text:
- Reviewed:
- Description: Heterogeneous wireless sensor networks (WSNs) consist of resource-starving nodes that face the challenging task of handling various issues such as data redundancy, data fusion, congestion control, and energy efficiency. In these networks, data fusion algorithms process the raw data generated by a sensor node in an energy-efficient manner to reduce redundancy, improve accuracy, and enhance the network lifetime. In the literature, these issues are addressed individually, and most of the proposed solutions are either application-specific or so complex that their implementation is unrealistic, specifically in a resource-constrained environment. In this paper, we propose a novel node-level data fusion algorithm for heterogeneous WSNs to detect noisy data and replace them with highly refined data. To minimize the amount of transmitted data, a hybrid data aggregation algorithm is proposed that performs in-network processing while preserving the reliability of gathered data. This combination of data fusion and data aggregation algorithms effectively handles the aforementioned issues by ensuring an efficient utilization of the available resources. Apart from fusion and aggregation, a biased traffic distribution algorithm is introduced that considerably increases the overall lifetime of heterogeneous WSNs. The proposed algorithm performs the tedious task of traffic distribution according to the network's statistics, i.e., the residual energy of neighboring nodes and their importance from a network's connectivity perspective. All our proposed algorithms were tested on a real-time dataset obtained through our deployed heterogeneous WSN in an orange orchard and also on publicly available benchmark datasets. Experimental results verify that our proposed algorithms outperform the existing approaches in terms of various performance metrics such as throughput, lifetime, data accuracy, computational time, and delay.
SmartEdge : An end-to-end encryption framework for an edge-enabled smart city application
- Jan, Mian, Zhang, Wenjing, Usman, Muhammad, Tan, Zhiyuan, Khan, Fazlullah, Luo, Entao
- Authors: Jan, Mian , Zhang, Wenjing , Usman, Muhammad , Tan, Zhiyuan , Khan, Fazlullah , Luo, Entao
- Date: 2019
- Type: Text , Journal article
- Relation: Journal of Network and Computer Applications Vol. 137, no. (2019), p. 1-10
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) has the potential to transform communities around the globe into smart cities. The massive deployment of sensor-embedded devices in smart cities generates voluminous amounts of data that need to be stored and processed in an efficient manner. Long-haul data transmission to remote cloud data centers leads to higher delay and bandwidth consumption. In smart cities, delay-sensitive applications have stringent requirements in terms of response time. To reduce latency and bandwidth consumption, edge computing plays a pivotal role. The resource-constrained smart devices at the network core need to offload computationally complex tasks to the edge devices located in their vicinity, which have relatively more resources. In this paper, we propose an end-to-end encryption framework, SmartEdge, for a smart city application by executing computationally complex tasks at the network edge and cloud data centers. Using a lightweight symmetric encryption technique, we establish a secure connection among the smart core devices for multimedia streaming towards the registered and verified edge devices. Upon receiving the data, the edge devices encrypt the multimedia streams, encode them, and broadcast them to the cloud data centers. Prior to the broadcasting, each edge device establishes a secure connection with a data center that relies on the combination of symmetric and asymmetric encryption techniques. In SmartEdge, the execution of a lightweight encryption technique at the resource-constrained smart devices, and of relatively complex encryption techniques at the network edge and cloud data centers, reduces the resource utilization of the entire network. The proposed framework reduces the response time, security overhead, and computational and communication costs, and has a lower end-to-end encryption delay for participating entities. Moreover, the proposed scheme is highly resilient against various adversarial attacks.
Contention resolution in wi-fi 6-enabled internet of things based on deep learning
- Chen, Chen, Li, Junchao, Balasubramanian, Venki, Wu, Yongqiang, Zhang, Yongqiang, Wan, Shaohua
- Authors: Chen, Chen , Li, Junchao , Balasubramanian, Venki , Wu, Yongqiang , Zhang, Yongqiang , Wan, Shaohua
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Internet of Things Journal Vol. 8, no. 7 (2021), p. 5309-5320
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) is expected to vastly increase the number of connected devices. As a result, a multitude of IoT devices transmit various information through wireless communication technologies such as Wi-Fi, cellular mobile communication, and low-power wide-area networks (LPWAN). However, even the latest Wi-Fi technology is still not ready to accommodate these large amounts of data. Accurately setting the contention window (CW) value significantly affects the efficiency of the Wi-Fi network. Unfortunately, the standard collision resolution used by IEEE 802.11ax networks is nonscalable; thus, it cannot maintain stable throughput for an increasing number of stations, even though Wi-Fi 6 has been designed to improve performance in dense scenarios. To this end, we propose a CW control strategy for Wi-Fi 6 systems. This strategy leverages deep learning to search for the optimal configuration of the CW under different network conditions. Our deep neural network is trained on data generated from a Wi-Fi 6 simulation system with several varying key parameters, e.g., the number of nodes, short interframe space (SIFS), distributed interframe space (DIFS), and data transmission rate. Numerical results demonstrated that our deep learning scheme could always find the optimal CW adjustment multiple by adaptively perceiving the channel competition status. The finalized performance of our model is significantly improved in terms of system throughput, average transmission delay, and packet retransmission rate. This makes Wi-Fi 6 better adapted to the access of a large number of IoT devices. © 2014 IEEE.
Matching algorithms : fundamentals, applications and challenges
- Ren, Jing, Xia, Feng, Chen, Xiangtai, Liu, Jiaying, Sultanova, Nargiz
- Authors: Ren, Jing , Xia, Feng , Chen, Xiangtai , Liu, Jiaying , Sultanova, Nargiz
- Date: 2021
- Type: Text , Journal article , Review
- Relation: IEEE Transactions on Emerging Topics in Computational Intelligence Vol. 5, no. 3 (2021), p. 332-350
- Full Text:
- Reviewed:
- Description: Matching plays a vital role in the rational allocation of resources in many areas, ranging from market operation to people's daily lives. In economics, the term matching theory was coined for pairing two agents in a specific market to reach a stable or optimal state. In computer science, various branches of matching problems have emerged, such as question-answer matching in information retrieval, user-item matching in recommender systems, and entity-relation matching in knowledge graphs. A preference list is the core element of a matching process; it can either be obtained directly from the agents or generated indirectly by prediction. Based on preference list access, matching problems are divided into two categories, i.e., explicit matching and implicit matching. In this paper, we first introduce matching theory's basic models and algorithms in explicit matching. We then review existing methods for coping with various matching problems in implicit matching, such as retrieval matching, user-item matching, entity-relation matching, and image matching. Furthermore, we look into representative applications in these areas, including marriage and labor markets in explicit matching and several similarity-based matching problems in implicit matching. Finally, this survey concludes with a discussion of open issues and promising future directions in the field of matching. © 2017 IEEE. **Please note that there are multiple authors for this article; only the first five, including Federation University Australia affiliates "Jing Ren, Feng Xia, Nargiz Sultanova", are named in this record**
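The marriage market cited in the abstract above is the canonical explicit-matching problem, classically solved by the Gale–Shapley deferred-acceptance algorithm. A self-contained Python sketch (our illustration, not code from the survey):

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Deferred acceptance: proposers propose in preference order;
    each reviewer tentatively holds the best offer received so far."""
    # rank[r][p] = position of proposer p in reviewer r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)           # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                            # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                  # reviewer was unmatched: accept
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])         # reviewer trades up; old partner freed
            match[r] = p
        else:
            free.append(p)                # rejected; p will try the next reviewer
    return {p: r for r, p in match.items()}

pairs = gale_shapley(
    {"a": ["x", "y"], "b": ["x", "y"]},   # proposers' preference lists
    {"x": ["b", "a"], "y": ["a", "b"]},   # reviewers' preference lists
)
# -> {"a": "y", "b": "x"} — a stable matching: no pair prefers each other
#    over their assigned partners.
```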
How much I can rely on you : measuring trustworthiness of a twitter user
- Das, Rajkumar, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Das, Rajkumar , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Dependable and Secure Computing Vol. 18, no. 2 (2021), p. 949-966
- Full Text:
- Reviewed:
- Description: Trustworthiness in an online environment is essential because individuals and organizations can easily be misled by false and malicious information received from untrustworthy users. Though existing methods assess users' trustworthiness by exploiting Twitter account properties, their efficacy is inadequate because of Twitter's restrictions on profile and tweet size, the existence of missing or insufficient profiles, and the ease of creating fake accounts or relationships to appear trustworthy. In this paper, we present a holistic approach that exploits ideas drawn from real-world organizations for trust estimation along with available Twitter information. Users' trustworthiness is determined by considering their credentials, recommendations from referees, and the quality of the information in their Twitter accounts and tweets. We establish the feasibility of our approach analytically and further devise a multi-objective cost function for the A
PAAL : a framework based on authentication, aggregation, and local differential privacy for internet of multimedia things
- Usman, Muhammad, Jan, Mian, Puthal, Deepak
- Authors: Usman, Muhammad , Jan, Mian , Puthal, Deepak
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Internet of Things Journal Vol. 7, no. 4 (2020), p. 2501-2508
- Full Text:
- Reviewed:
- Description: Internet of Multimedia Things (IoMT) applications generate huge volumes of multimedia data that are uploaded to cloud servers for storage and processing. During the uploading process, IoMT applications face three major challenges, i.e., node management, privacy preservation, and network protection. In this article, we propose a multilayer framework (PAAL) based on a multilevel edge computing architecture to manage end and edge devices, preserve the privacy of end-devices and data, and protect the underlying network from external attacks. The proposed framework has three layers. In the first layer, the underlying network is partitioned into multiple clusters to manage end-devices and level-one edge devices (LOEDs). In the second layer, the LOEDs apply an efficient aggregation technique to reduce the volumes of generated data and preserve the privacy of end-devices. The privacy of sensitive information in aggregated data is protected through a local differential privacy-based technique. In the last layer, the mobile sinks are registered with a level-two edge device via a handshaking mechanism to protect the underlying network from external threats. Experimental results show that the proposed framework performs better than existing frameworks in terms of managing the nodes, preserving the privacy of end-devices and sensitive information, and protecting the underlying network. © 2014 IEEE.
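The paper's exact privacy mechanism is not reproduced here; randomized response is the textbook way to provide local differential privacy for a single binary value, sketched below with illustrative parameters:

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise report its flip; eps-LDP for one binary value."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if rng.random() < p_true else 1 - bit

def estimate_fraction(reports, epsilon):
    """Debias the noisy reports to estimate the true fraction of 1s:
    observed = (2p - 1) * f + (1 - p), solved for f."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)

rng = random.Random(0)
truth = [1] * 300 + [0] * 700            # true fraction of 1s is 0.30
reports = [randomized_response(b, 1.0, rng) for b in truth]
estimate = estimate_fraction(reports, 1.0)
```

No individual report reveals the user's true bit with certainty, yet the aggregator can still recover the population statistic, which is the trade-off the abstract's second layer relies on.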
Random walks : a review of algorithms and applications
- Xia, Feng, Liu, Jiaying, Nie, Hansong, Fu, Yonghao, Wan, Liangtian, Kong, Xiangjie
- Authors: Xia, Feng , Liu, Jiaying , Nie, Hansong , Fu, Yonghao , Wan, Liangtian , Kong, Xiangjie
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Emerging Topics in Computational Intelligence Vol. 4, no. 2 (2020), p. 95-107
- Full Text:
- Reviewed:
- Description: A random walk is a random process describing a path consisting of a succession of random steps in a mathematical space. It has become increasingly popular in various disciplines such as mathematics and computer science. Furthermore, in quantum mechanics, quantum walks can be regarded as quantum analogues of classical random walks. Classical random walks and quantum walks can be used to calculate the proximity between nodes and extract the topology of a network. Various random walk related models can be applied in different fields, which is of great significance to downstream tasks such as link prediction, recommendation, computer vision, semi-supervised learning, and network embedding. In this article, we aim to provide a comprehensive review of classical random walks and quantum walks. We first review the fundamentals of classical random walks and quantum walks, including basic concepts and some typical algorithms. We also compare algorithms based on quantum walks and classical random walks from the perspective of time complexity. Then we introduce their applications in the field of computer science. Finally, we discuss the open issues from the perspectives of efficiency, main-memory volume, and computing time of existing algorithms. This study aims to contribute to this growing area of research by exploring random walks and quantum walks together. © 2017 IEEE.
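As a concrete illustration of the classical random walk the review above describes (our sketch, not the authors' code): the walker repeatedly jumps to a uniformly random neighbour, and visit frequencies approximate the walk's stationary distribution.

```python
import random

def random_walk(adj, start, steps, seed=0):
    """Simulate a classical random walk on an undirected graph and
    return how often each node was visited."""
    rng = random.Random(seed)
    visits = {node: 0 for node in adj}
    node = start
    for _ in range(steps):
        visits[node] += 1
        node = rng.choice(adj[node])   # step to a uniformly random neighbour
    return visits

# Triangle graph: every node has degree 2, so long-run visit counts
# converge toward the uniform stationary distribution.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
counts = random_walk(adj, start=0, steps=30000)
```

Proximity measures such as hitting times or PageRank-style scores can be read off the same simulation, which is how random walks connect to the link prediction and network embedding tasks listed above.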
A dynamic content distribution scheme for decentralized sharing in tourist hotspots
- Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour
- Date: 2019
- Type: Text , Journal article
- Relation: Journal of Network and Computer Applications Vol. 129, no. (2019), p. 9-24
- Full Text:
- Reviewed:
- Description: Decentralized content sharing (DCS) is emerging as a suitable platform for smart mobile device users to generate and share content seamlessly without requiring a centralized server. This feature is particularly important for places that lack Internet coverage, such as tourist attractions, where users can form an ad-hoc network and communicate opportunistically to share content. Existing DCS approaches, when applied to such places, suffer from a low delivery success rate and high latency. Although a handful of recent approaches have specifically targeted improving the content delivery service in tourist-spot-like scenarios, these and other DCS approaches do not focus on content demand and supply, which vary considerably due to visitor in-and-out flow and the occurrence of influencing events. This is further compounded by the lack of any content distribution (replication) scheme. The content delivery service would be improved if content could be proactively distributed to strategic positions based on dynamic demand and supply and medium access contention. In this paper, we propose a dynamic content distribution scheme (DCDS) considering these practical issues for sharing content in tourist attractions. Simulation results show that the proposed approach significantly improves (7
UniFlexView : a unified framework for consistent construction of BPMN and BPEL process views
- Yongchareon, Sira, Liu, Chengfei, Zhao, Xiaohui
- Authors: Yongchareon, Sira , Liu, Chengfei , Zhao, Xiaohui
- Date: 2020
- Type: Text , Journal article
- Relation: Concurrency Computation Vol. 32, no. 11 (2020), p.
- Full Text:
- Reviewed:
- Description: Process view technologies allow organizations to create abstractions of their business processes at different levels of granularity, thereby enabling more effective business process management, analysis, interoperation, and privacy controls. Existing research has proposed view construction and abstraction techniques for block-based (i.e., BPEL) and graph-based (i.e., BPMN) process models. However, existing techniques treat the two types of models separately. In particular, this poses challenges for achieving a consistent process view for a BPEL model derived from a BPMN model. In this paper, we propose a unified framework, namely UniFlexView, for supporting automatic and consistent process view construction. With our framework, process modelers can use our proposed View Definition Language to specify their view construction requirements regardless of the type of process model. A UniFlexView system prototype has been developed as a proof of concept and a demonstration of the usability and feasibility of our framework. © 2019 John Wiley & Sons, Ltd.
Great South Coast ICT survey, 2011
- Thompson, Helen, Fong, George
- Authors: Thompson, Helen , Fong, George
- Date: 2011
- Type: Text , Dataset
- Full Text:
- Description: A combination of qualitative and quantitative research methods was used to collect information from across the Great South Coast (GSC) region of Victoria, covering five municipalities: Warrnambool City and the Shires of Corangamite, Glenelg, Moyne and Southern Grampians. The research gathered information on telecommunications and broadband access and services, barriers, and usage at local levels. Data collection methods included key stakeholder interviews, an online survey, case studies, and spatial mapping of the responses and feedback garnered mainly from the surveys. Anticipated NBN access infrastructure has also been mapped. The adopted consultation and research methodology was designed to assess demand and support from business operators, local residents and other stakeholders for next-generation broadband in the GSC region. The online survey was the major instrument for gathering data in the period to July 2011. The largest contributions to the 598 valid responses came from Warrnambool (n=166), Hamilton (n=94), Camperdown (n=29) and Portland (n=23). A summary is available online. Qualitative data may be available by contacting CeCC.