FogAuthChain: A secure location-based authentication scheme in fog computing environments using Blockchain
- Authors: Patwary, Abdullah Al-Noman , Fu, Anmin , Battula, Sudheer , Naha, Ranesh , Garg, Saurabh , Mahanti, Aniket
- Date: 2020
- Type: Text , Journal article
- Relation: Computer communications Vol. 162, no. (2020), p. 212-224
- Full Text: false
- Reviewed:
- Description: Fog computing is an emerging computing paradigm that extends cloud-based computing services to the network edge. With this new computing paradigm, new security and privacy challenges arise, owing to the distributed ownership of fog devices. Because of the large-scale, distributed nature of devices at the fog layer, secure authentication for communication among these devices is a major challenge. Traditional authentication methods (password-based, certificate-based, and biometric-based) are not directly applicable due to the unique architecture and characteristics of the fog. Moreover, traditional authentication methods consume significantly more computation power and incur high latency, which does not meet the key requirements of the fog. To fill this gap, this article proposes a secure, decentralised, location-based device-to-device (D2D) authentication model in which fog devices can mutually authenticate each other at the fog layer using blockchain. We used an Ethereum blockchain platform for fog device registration, authentication, attestation, and data storage. We present the overall system architecture, the various participants, their transactions, and the message interactions between participants. We validated the proposed model by comparing it with an existing method; the results showed that the proposed authentication mechanism is efficient and secure. The performance evaluation found that the proposed method is computationally efficient and secure in a highly distributed fog network.
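As a rough illustration of the register-then-authenticate flow the abstract describes, the sketch below models an on-chain registry as a plain dictionary and uses a toy MAC-based challenge-response. All class, method, and parameter names are hypothetical; the paper's actual scheme runs on Ethereum and would use signature-based attestation rather than the shared verification key used here for brevity.

```python
import hashlib
import hmac
import os


class Ledger:
    """Toy stand-in for the on-chain registry (the paper uses Ethereum)."""

    def __init__(self):
        self.records = {}

    def register(self, device_id, verification_key, location):
        # registration transaction: bind identity, key material, and location
        self.records[device_id] = {"key": verification_key, "location": location}

    def lookup(self, device_id):
        return self.records.get(device_id)


class FogDevice:
    def __init__(self, device_id, location):
        self.device_id = device_id
        self.location = location       # (x, y) coordinates, illustrative only
        self._secret = os.urandom(32)  # device-held key material

    def verification_key(self):
        # toy simplification: the ledger stores the key itself; a real
        # deployment would store an ECDSA public key and verify signatures
        return self._secret

    def respond(self, challenge):
        # prove possession of the registered key via a MAC over the challenge
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()


def authenticate(verifier, prover, ledger, max_distance=2.0):
    """Verifier checks the prover's ledger record, location, and challenge response."""
    record = ledger.lookup(prover.device_id)
    if record is None:
        return False  # device was never registered on the ledger
    dx = record["location"][0] - verifier.location[0]
    dy = record["location"][1] - verifier.location[1]
    if (dx * dx + dy * dy) ** 0.5 > max_distance:
        return False  # outside the allowed vicinity of the registered location
    challenge = os.urandom(16)
    expected = hmac.new(record["key"], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(prover.respond(challenge), expected)
```

The location check ties authentication to physical proximity, which is the "location-based" element of the scheme; the challenge-response ties it to the key committed at registration.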
A blockchain-based framework for automatic SLA management in fog computing environments
- Authors: Battula, Sudheer , Garg, Saurabh , Naha, Ranesh , Amin, Muhammad , Kang, Byeong , Aghasian, Erfan
- Date: 2022
- Type: Text , Journal article
- Relation: The Journal of supercomputing Vol. 78, no. 15 (2022), p. 16647-16677
- Full Text: false
- Reviewed:
- Description: Fog computing has become a prominent paradigm for providing shared resources to serve different applications near the edge. As in other computing paradigms such as cloud and grid, service-level agreements (SLAs) between fog providers and end-users are essential for guaranteeing quality of service (QoS). However, due to the unique characteristics of fog resources, being highly distributed, heterogeneous, and dynamic, with nonrestrictive provider participation, the SLA management techniques and frameworks available for clouds and grids are not directly applicable. The availability of resources in the cloud is much more controllable and predictable than in the fog. Moreover, because of the multiple ownership of fog infrastructure and the unrestricted environment, autonomous end-devices are allowed to participate under different SLAs to serve applications near the edge; as a result, a lack of trust exists between the entities, and managing and enforcing SLAs according to application QoS in this environment is a complex task. Thus, SLA management must be undertaken in a more trustworthy manner to ensure that agreements are honoured. To fill this gap, this paper proposes an automated SLA management framework for fog computing that utilizes smart contracts and blockchain technology to monitor and enforce SLAs in a more trustworthy manner. The results of experiments conducted on a private blockchain network show that the framework can ensure precise and efficient SLA enforcement in the fog. The proposed framework outperforms existing work in terms of transaction cost and time.
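To make the enforcement idea concrete, here is a minimal sketch of smart-contract-style SLA logic in Python (all field names, the penalty scheme, and the settlement rule are illustrative assumptions, not the paper's actual Solidity contract). The point is that violation counting and settlement are deterministic code that neither party can dispute once observations are submitted.

```python
from dataclasses import dataclass


@dataclass
class SLAContract:
    """Toy stand-in for the on-chain SLA contract described in the paper."""
    provider: str
    consumer: str
    max_latency_ms: float          # agreed QoS threshold
    penalty_per_violation: float   # deducted from the provider's deposit
    deposit: float                 # provider's stake locked at agreement time
    violations: int = 0

    def report_latency(self, observed_ms):
        # a monitoring agent submits an observation; the contract logic
        # records violations deterministically, so enforcement is trustless
        if observed_ms > self.max_latency_ms:
            self.violations += 1

    def settle(self):
        # at termination, penalties go to the consumer and the remainder
        # of the deposit is returned to the provider
        penalty = min(self.deposit, self.violations * self.penalty_per_violation)
        return {"to_consumer": penalty, "to_provider": self.deposit - penalty}
```

In the actual framework this logic would live in a smart contract on the private blockchain, with monitoring transactions playing the role of `report_latency` calls.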
SMOaaS: a Scalable Matrix Operation as a Service model in Cloud
- Authors: Ujjwal, K. C. , Battula, Sudheer , Garg, Saurabh , Naha, Ranesh , Patwary, Md Anwarul , Brown, Alexander
- Date: 2021
- Type: Text , Journal article
- Relation: The Journal of supercomputing Vol. 77, no. 4 (2021), p. 3381-3401
- Full Text: false
- Reviewed:
- Description: Matrix operations are fundamental to a wide range of scientific applications such as graph theory, linear equation systems, image processing, geometric optics, and probability analysis. As the workload in these applications has increased, the sizes of the matrices involved have also grown significantly. Parallel execution of matrix operations in existing cluster-based systems performs effectively for relatively small matrices but suffers significantly as matrices become larger, due to limited resources. Cloud computing offers scalable resources to overcome this limitation; however, the benefits of access to almost-infinite scalable resources in the cloud also come with the challenge of ensuring time- and resource-efficient matrix operations. To the best of our knowledge, there is no specific cloud service that optimizes the efficiency of matrix operations on cloud infrastructure. To address this gap and offer a convenient matrix-operation service, this paper proposes a novel scalable service framework called Scalable Matrix Operation as a Service (SMOaaS). Our framework uses dynamic matrix partitioning techniques, based on the matrix operation and sizes, to achieve efficient work distribution, and scales on demand to achieve time- and resource-efficient operations. The framework also embraces the basic features of security, fault tolerance, and reliability. Experimental results show that the adopted dynamic partitioning technique ensures faster and better performance than the existing static partitioning technique.
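A minimal sketch of the partition-and-distribute idea, assuming a simple row-block split for matrix multiplication (the paper's dynamic partitioning also adapts block sizes to the operation and matrix dimensions; the function names here are hypothetical):

```python
def partition_rows(n_rows, n_workers):
    """Split row indices into near-equal contiguous blocks, one per worker."""
    base, extra = divmod(n_rows, n_workers)
    blocks, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)
        blocks.append((start, start + size))
        start += size
    return blocks


def parallel_matmul(A, B, n_workers=4):
    """Compute A @ B block by block; each block is an independent work unit."""
    n = len(A)
    result = [None] * n
    for start, end in partition_rows(n, n_workers):
        # in SMOaaS each row block would be shipped to a separate cloud
        # worker; here the blocks are simply computed in sequence
        for i in range(start, end):
            result[i] = [sum(A[i][k] * B[k][j] for k in range(len(B)))
                         for j in range(len(B[0]))]
    return result
```

Because each row block of the result depends only on its rows of A (plus all of B), the blocks can be computed in parallel with no coordination, which is what makes the workload scale with the number of provisioned workers.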
IoTSim‐Edge: A simulation framework for modeling the behavior of Internet of Things and edge computing environments
- Authors: Jha, Devki Nandan , Alwasel, Khaled , Alshoshan, Areeb , Huang, Xianghua , Naha, Ranesh , Battula, Sudheer , Garg, Saurabh , Puthal, Deepak , James, Philip , Zomaya, Albert , Dustdar, Schahram , Ranjan, Rajiv
- Date: 2020
- Type: Text , Journal article
- Relation: Software, practice & experience Vol. 50, no. 6 (2020), p. 844-867
- Full Text: false
- Reviewed:
- Description: With the proliferation of Internet of Things (IoT) and edge computing paradigms, billions of IoT devices are being networked to support data‐driven and real‐time decision making across numerous application domains, including smart homes, smart transport, and smart buildings. These ubiquitously distributed IoT devices send raw data to their respective edge device (e.g., IoT gateways) or directly to the cloud. The wide spectrum of possible application use cases makes the design and networking of the IoT and edge computing layers a very tedious process due to: (i) the complexity and heterogeneity of end‐point networks (e.g., Wi‐Fi, 4G, and Bluetooth); (ii) the heterogeneity of edge and IoT hardware resources and software stacks; (iii) the mobility of IoT devices; and (iv) the complex interplay between the IoT and edge layers. Unlike cloud computing, where researchers and developers seeking to test capacity planning, resource selection, network configuration, computation placement, and security management strategies have access to public cloud infrastructure (e.g., Amazon and Azure), establishing an IoT and edge computing testbed that offers a high degree of verisimilitude is not only complex, costly, and resource‐intensive but also time‐intensive. Moreover, testing in real IoT and edge computing environments is often infeasible due to the high cost and the diverse domain knowledge required to reason about their diversity, scalability, and usability. To support performance testing and validation of IoT and edge computing configurations and algorithms at scale, simulation frameworks should be developed. Hence, this article proposes a novel simulator, IoTSim‐Edge, which captures the behavior of heterogeneous IoT and edge computing infrastructure and allows users to test their infrastructure and framework in an easy and configurable manner. IoTSim‐Edge extends the capability of CloudSim to incorporate the different features of edge and IoT devices. The effectiveness of IoTSim‐Edge is demonstrated using three test cases. Results show the capability of IoTSim‐Edge in terms of application composition, battery‐oriented modeling, heterogeneous protocol modeling, and mobility modeling, along with resource provisioning for IoT applications.
A micro-level compensation-based cost model for resource allocation in a fog environment
- Authors: Battula, Sudheer , Garg, Saurabh , Naha, Ranesh , Thulasiraman, Parimala , Thulasiram, Ruppa
- Date: 2019
- Type: Text , Journal article
- Relation: Sensors Vol. 19, no. 13 (2019), p. 2954
- Full Text:
- Reviewed:
- Description: Fog computing aims to support applications requiring low latency and high scalability by using resources at the edge. In general, fog computing comprises several autonomous mobile or static devices that share their idle resources to run different services. The providers of these devices also need to be compensated based on their device usage. In any fog-based resource-allocation problem, both cost and performance must be considered to generate an efficient resource-allocation plan. Estimating the cost of using fog devices prior to resource allocation helps to minimize cost and maximize system performance. In the fog computing domain, recent research has proposed various resource-allocation algorithms without considering compensation to resource providers or cost estimation for fog resources. Moreover, the existing cost models of similar paradigms such as the cloud are not suitable for fog environments, because scaling many autonomous, heterogeneous resources with a variety of offerings is far more complicated. To fill this gap, this study first proposes a micro-level compensation cost model and then a new resource-allocation method based on that model, which benefits both providers and users. Experimental results show that the proposed algorithm achieves better resource-allocation performance and lower application processing costs than the existing best-fit algorithm.
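The core idea of pricing a task on each candidate device before allocating it can be sketched as follows. The per-resource compensation rates, dictionary keys, and the cheapest-feasible-device rule are illustrative assumptions for this sketch, not the paper's actual cost model or allocation algorithm.

```python
def device_cost(device, task):
    """Micro-level compensation estimate: pay the device owner per unit of
    each resource the task would use, for the task's estimated duration."""
    t = task["duration_s"]
    return t * (device["cpu_rate"] * task["cpu_cores"]
                + device["mem_rate"] * task["mem_gb"]
                + device["net_rate"] * task["bandwidth_mbps"])


def allocate(devices, task):
    """Pick the cheapest device whose free capacity satisfies the task;
    estimating cost before allocation is what lets cost and performance
    be traded off up front."""
    feasible = [d for d in devices
                if d["free_cores"] >= task["cpu_cores"]
                and d["free_gb"] >= task["mem_gb"]]
    if not feasible:
        return None
    return min(feasible, key=lambda d: device_cost(d, task))
```

Because each provider advertises its own rates, the same task can cost very differently across devices, which is why a fog-specific cost model is needed rather than the flat instance pricing typical of clouds.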