A semantic method to information extraction for decision support systems
- Authors: Ofoghi, Bahadorreza, Yearwood, John, Ghosh, Ranadhir
- Date: 2006
- Type: Text, Conference proceedings
- Full Text: false
- Description: In this paper, we describe a novel schema for a more semantic text mining process, which enables more comprehensive decision making by decision support systems through the provision of more effective and accurate textual information. The use of two semantic lexical resources, FrameNet and WordNet, for extracting required text snippets from unstructured free text yields a more accurate information extraction process that delivers more precise information either to a DSS or to a decision maker. We explain how these lexical resources can support a focused text mining process applicable to an information provider system in a decision support paradigm. Preliminary results from an initial experiment show that the hybrid information extraction schema performs well in some semantic failure situations.
- Description: 2003010644
Automatic sleep stage identification: difficulties and possible solutions
- Authors: Sukhorukova, Nadezda, Stranieri, Andrew, Ofoghi, Bahadorreza, Vamplew, Peter, Saleem, Muhammad Saad, Ma, Liping, Ugon, Adrien, Ugon, Julien, Muecke, Nial, Amiel, Hélène, Philippe, Carole, Bani-Mustafa, Ahmed, Huda, Shamsul, Bertoli, Marcello, Levy, P, Ganascia, J.G
- Date: 2010
- Type: Text, Conference proceedings
- Full Text:
- Description: The diagnosis of many sleep disorders is a labour-intensive task that involves the specialised interpretation of numerous signals, including brain wave, breath and heart rate, captured in overnight polysomnogram sessions. The automation of diagnosis is challenging for data mining algorithms because the data sets are extremely large and noisy, the signals are complex and specialists' analyses vary. This work reports on the adaptation of approaches from four fields: neural networks, mathematical optimisation, financial forecasting and frequency domain analysis, to the problem of automatically determining a patient's stage of sleep. Results, though preliminary, are promising and indicate that combined approaches may prove more fruitful than reliance on a single approach.
Performance evaluation of multi-tier ensemble classifiers for phishing websites
- Authors: Abawajy, Jemal, Beliakov, Gleb, Kelarev, Andrei, Yearwood, John
- Date: 2012
- Type: Text, Conference proceedings
- Full Text:
- Description: This article is devoted to large multi-tier ensemble classifiers generated as ensembles of ensembles and applied to phishing websites. Our new ensemble construction is a special case of the general and productive multi-tier approach well known in information security. Many efficient multi-tier classifiers have been considered in the literature. Our new contribution is in generating large systems as ensembles of ensembles by linking a top-tier ensemble to a middle-tier ensemble instead of a base classifier, so that the top-tier ensemble can generate the whole system. This automatic generation capability incorporates many large ensemble classifiers in two tiers simultaneously and combines them into one hierarchical unified system, so that one ensemble is an integral part of another. This construction makes it easy to set up and run such large systems. The present article concentrates on investigating the performance of these new multi-tier ensembles for the example of detecting phishing websites. We carried out systematic experiments evaluating several essential ensemble techniques as well as more recent approaches, studying their performance as parts of multi-level ensembles with three tiers. The results demonstrate that the new three-tier ensemble classifiers performed better than the base classifiers and standard ensembles included in the system. This application to the classification of phishing websites shows that the new method of combining diverse ensemble techniques into a unified hierarchical three-tier ensemble can increase the performance of classifiers in situations where data can be processed on a large computer.
Designing a pulsed eddy current sensing set-up for cast iron thickness assessment
- Authors: Ulapane, Nalika, Nguyen, Linh, Miro, Jaime Valls, Alempijevic, Alen, Dissanayake, Gamini
- Date: 2017
- Type: Text, Conference proceedings
- Relation: 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA); Siem Reap, Cambodia; 18-20 June 2017 p. 901-906
- Full Text:
- Reviewed:
- Description: Pulsed Eddy Current (PEC) sensors possess proven functionality in measuring ferromagnetic material thickness. However, most commercial PEC service providers, as well as researchers, have investigated and claim functionality of sensors on homogeneous structural steels (steel grade Q235, for example). In this paper, we present design steps for a PEC sensing set-up to measure the thickness of cast iron, which, unlike steel, is a highly inhomogeneous and non-linear ferromagnetic material. The set-up includes a PEC sensor, sensor excitation and reception circuits, and a unique signal processing method. The signal processing method yields a signal feature which behaves as a function of thickness and has the desirable characteristic of being only weakly influenced by lift-off. Experimental results show that the set-up is usable for Non-destructive Evaluation (NDE) applications such as cast iron water pipe assessment.
Data exchange in delay tolerant networks using joint inter- and intra-flow network coding
- Authors: Ostovari, Pouya, Wu, Jie, Jolfaei, Alireza
- Date: 2018
- Type: Text, Conference proceedings
- Relation: 37th IEEE International Performance Computing and Communications Conference, IPCCC 2018; Orlando, United States; 17th-19th November 2018 p. 1-8
- Full Text:
- Reviewed:
- Description: Data transmission in delay tolerant networks (DTNs) is a challenging problem due to the lack of continuous network connectivity and the nondeterministic mobility of the nodes. Epidemic routing and spray-and-wait are two popular mechanisms proposed for DTNs. In order to reduce the transmission delay in DTNs, some previous works combine intra-flow network coding with the routing protocols. In this paper, we propose two routing mechanisms using systematic joint inter- and intra-flow network coding for the purpose of data exchange between the nodes. We discuss the reasons why inter-flow network coding helps to reduce the delivery delay of the packets, and we analyze the delays associated with using only intra-flow coding and with the joint inter- and intra-flow coding methods. We empirically show the benefit of joint coding over intra-flow coding alone: based on our simulations, joint coding can reduce the delay by up to 40% compared to intra-flow coding only.
- Description: 2018 IEEE 37th International Performance Computing and Communications Conference, IPCCC 2018
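The core idea behind inter-flow coding can be illustrated with a minimal sketch (hypothetical helper names, not the authors' implementation): XOR-ing packets from two flows lets a node deliver both flows in a single transmission to neighbours that already hold one of them.

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length packets; the result encodes both flows at once."""
    return bytes(x ^ y for x, y in zip(a, b))

# A node holds packet_a (flow 1) and packet_b (flow 2) and broadcasts one
# coded packet instead of two native ones.
packet_a = b"flow1data"
packet_b = b"flow2data"
coded = xor_packets(packet_a, packet_b)

# A neighbour that already has packet_a recovers packet_b from the single
# coded transmission, and vice versa -- saving one transmission per exchange.
recovered_b = xor_packets(coded, packet_a)
recovered_a = xor_packets(coded, packet_b)
```

This is the simplest inter-flow case; the paper's systematic joint scheme additionally applies intra-flow coding within each flow.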
Understanding victims of identity theft: Preliminary insights
- Authors: Turville, Kylie, Yearwood, John, Miller, Charlynn
- Date: 2010
- Type: Text, Conference proceedings
- Full Text:
- Description: Identity theft is not a new crime; however, changes in society and the way business is conducted have made it an easier, more attractive and more lucrative crime. When victims discover the misuse of their identity, they must begin the process of recovery, including fixing any issues that may have been created by the misuse. For some victims this may take only a small amount of time and effort; others may continue to experience issues for many years after the initial moment of discovery. To date, little research has been conducted within Australia or internationally regarding what a victim experiences while working through the recovery process. This paper presents a summary of the identity theft domain with an emphasis on research conducted within Australia, and identifies a number of issues regarding research in this area. The paper also provides an overview of the research project currently being undertaken by the authors to understand what victims of identity theft experience during the recovery process, particularly when dealing with organizations. Finally, it reports on some of the preliminary work that has already been conducted for the research project. © 2010 IEEE.
Feature selection using misclassification counts
- Authors: Bagirov, Adil, Yatsko, Andrew, Stranieri, Andrew
- Date: 2011
- Type: Conference proceedings, Unpublished work
- Relation: Proceedings of the 9th Australasian Data Mining Conference (AusDM 2011), 51-62. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 121.
- Full Text:
- Description: Dimensionality reduction of the problem space, through the detection and removal of variables that contribute little or nothing to classification, can relieve the computational load and the instance acquisition effort incurred when all data attributes are accessed each time. The approach to feature selection in this paper is based on the concept of coherent accumulation of data about class centers with respect to the coordinates of informative features. Features are ranked by the degree to which different variables exhibit random characteristics. The results are verified using the Nearest Neighbor classifier, which also helps to address feature irrelevance and redundancy, something ranking alone does not immediately decide. Additionally, feature ranking methods from different independent sources are used for direct comparison.
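As a rough sketch of scoring features by misclassification counts (an illustration under assumed details, not the authors' exact algorithm), each feature can be ranked by the leave-one-out errors of a 1-NN classifier that uses that feature alone; noisy, class-irrelevant features accumulate many errors:

```python
import numpy as np

def rank_by_misclassification(X, y):
    """Score each feature by its leave-one-out 1-NN misclassification count
    when used alone; lower counts suggest more informative features."""
    n, d = X.shape
    errors = np.zeros(d, dtype=int)
    for j in range(d):
        col = X[:, j]
        for i in range(n):
            dist = np.abs(col - col[i])
            dist[i] = np.inf  # exclude the query point itself
            if y[np.argmin(dist)] != y[i]:
                errors[j] += 1
    return np.argsort(errors), errors

# Two features: one pure noise, one that separates the classes cleanly.
rng = np.random.default_rng(0)
y = np.array([0] * 10 + [1] * 10)
informative = np.concatenate([rng.normal(0.0, 0.1, 10), rng.normal(5.0, 0.1, 10)])
noise = rng.random(20)
X = np.column_stack([noise, informative])
order, errors = rank_by_misclassification(X, y)  # informative feature ranked first
```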
Unsupervised authorship analysis of phishing webpages
- Authors: Layton, Robert, Watters, Paul, Dazeley, Richard
- Date: 2012
- Type: Text, Conference proceedings
- Full Text:
- Description: Authorship analysis of phishing websites enables the investigation of phishing attacks beyond basic analysis. In authorship analysis, salient features of documents are used to determine properties of the author, such as which of a set of candidate authors wrote a given document. In unsupervised authorship analysis, the aim is to group documents such that all documents by one author are grouped together. Applying this to cyber-attacks reveals the size and scope of attacks from specific groups, which in turn allows investigators to focus their attention on specific attacking groups rather than trying to profile multiple independent attackers. In this paper, we analyse phishing websites using the current state-of-the-art unsupervised authorship analysis method, NUANCE. The results indicate that the application produces clusters that correlate strongly with authorship, evaluated using expert knowledge and external information, and show an improvement over a previous approach with known flaws. © 2012 IEEE.
- Description: 2003010678
Fast intermode selection for HEVC video coding using phase correlation
- Authors: Podder, Pallab, Paul, Manoranjan, Murshed, Manzur, Chakraborty, Subrata
- Date: 2015
- Type: Text, Conference proceedings, Conference paper
- Relation: 2014 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2014; Wollongong, Australia; 25th-27th November 2014 p. 1-8
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: The recent High Efficiency Video Coding (HEVC) standard demonstrates higher rate-distortion (RD) performance than its predecessor H.264/AVC by using new tools, especially larger and asymmetric inter-mode variable-size motion estimation and compensation. This requires more than four times the computational time of H.264/AVC, so reducing encoding time while maintaining the standard quality of the video has been a major concern for researchers and is our motivation here. To reduce computational time through smart selection of the appropriate modes in HEVC, we use phase correlation to approximate the motion information between current and reference blocks, compare it against a number of different binary pattern templates, and then select a subset of motion estimation modes without exhaustively exploring all possible modes. The experimental results show that the proposed HEVC-PC (HEVC with Phase Correlation) scheme outperforms the standard HEVC scheme in terms of computational time while preserving the same quality of the video sequences. More specifically, around 40% of encoding time is saved compared to exhaustive mode selection in HEVC. © 2014 IEEE.
- Description: 2014 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2014
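Phase correlation itself can be sketched in a few lines (a generic illustration, not the HEVC-PC implementation): the normalised cross-power spectrum of two blocks is, ideally, a pure phase ramp whose inverse FFT peaks at the relative translation, which approximates the motion between the current and reference blocks.

```python
import numpy as np

def phase_correlation(ref, cur):
    """Estimate the integer (row, col) shift that maps ref onto cur."""
    # Normalising the cross-power spectrum keeps only phase information,
    # so its inverse FFT is an impulse at the translation offset.
    cross = np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12
    surface = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    # Fold peaks past the block midpoint back to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, surface.shape))
```

For a block `cur = np.roll(ref, (2, 3), axis=(0, 1))` this returns `(2, 3)`; in the paper's scheme the estimated motion pattern is then matched against binary templates to prune the candidate modes.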
On unified modeling, theory, and method for solving multi-scale global optimization problems
- Authors: Gao, David
- Date: 2016
- Type: Text, Conference proceedings
- Relation: 2nd International Conference on Numerical Computations: Theory and Algorithms, NUMTA 2016; Pizzo Calabro; Italy; 19th-25th June 2016; published in AIP Conference Proceedings Vol. 1776, p. 1-8
- Full Text:
- Reviewed:
- Description: A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way that includes traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles in correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
A surrogate model for evaluation of maximum normalized dynamic load factor in moving load model for pipeline spanning due to slug flow
- Authors: Sultan, Ibrahim, Reda, Ahmed, Forbes, Gareth
- Date: 2012
- Type: Text, Conference proceedings
- Full Text:
- Description: Understanding the problem of slug-flow-induced fatigue damage is of particular importance to the reliable operation of pipelines. Slug flow across unsupported pipeline spans produces dynamic vibrations in the pipeline, resulting in cyclical fatigue stresses. These dynamic effects will cause the pipeline to fail at a point of stress concentration if proper design procedure is not followed. The response of a pipeline span under the passage of slug flow can be represented by dynamic load factors that are functions of the speed ratio and damping characteristics of the span. These functional relationships are investigated in this paper by conducting multiple simulations at different speed ratios and damping factors. The data obtained from the steady-state Fourier expansion are then used to produce a surrogate model with a level of accuracy that adequately qualifies it for use in determining the dynamic loading of pipelines. The closed-form surrogate model eliminates the need for costly mathematical procedures or finite element packages in the analysis. The model also provides a solid ground for optimization studies and helps designers gain insight into how various model parameters impact the system response. This paper demonstrates the aspects of the proposed surrogate model and obtains parameter domains within which the model's reliability is ensured. A numerical example is presented to demonstrate the concepts and confirm the validity of the proposed model. Copyright © 2012 by ASME.
- Description: C1
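The surrogate-model idea in the abstract above lends itself to a brief sketch: dynamic load factors (DLFs) sampled from simulations at a few speed ratios and damping factors can be wrapped in a cheap interpolating surrogate that designers query instead of rerunning a finite element model. This is an illustration only; the grid values, function names and the use of bilinear interpolation are assumptions for the example, not the paper's actual closed-form model.

```python
def bilinear(x, y, xs, ys, table):
    """Interpolate table[i][j] = f(xs[i], ys[j]) at a point (x, y) inside the grid."""
    # Locate the grid cell containing (x, y).
    i = max(k for k in range(len(xs) - 1) if xs[k] <= x)
    j = max(k for k in range(len(ys) - 1) if ys[k] <= y)
    # Fractional position within the cell.
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    # Weighted blend of the four surrounding samples.
    return (table[i][j] * (1 - tx) * (1 - ty) + table[i + 1][j] * tx * (1 - ty)
            + table[i][j + 1] * (1 - tx) * ty + table[i + 1][j + 1] * tx * ty)

speed_ratios = [0.5, 1.0, 1.5]   # slug speed / critical span speed (invented)
dampings = [0.01, 0.05]          # damping ratios (invented)
dlf = [[1.10, 1.05],             # hypothetical DLF samples from simulation
       [1.60, 1.40],
       [1.30, 1.20]]
estimate = bilinear(0.75, 0.03, speed_ratios, dampings, dlf)
```

A query such as the one above costs a handful of arithmetic operations, which is the point of a surrogate: the expensive simulations are run once to fill the table.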
An efficient selective miner consensus protocol in blockchain oriented IoT smart monitoring
- Uddin, Ashraf, Stranieri, Andrew, Gondal, Iqbal, Balasubramanian, Venki
- Authors: Uddin, Ashraf , Stranieri, Andrew , Gondal, Iqbal , Balasubramanian, Venki
- Date: 2019
- Type: Text , Conference proceedings , Conference paper
- Relation: 2019 IEEE International Conference on Industrial Technology, ICIT 2019; Melbourne; Australia; 13th-15th February 2019 Vol. 2019-February, p. 1135-1142
- Full Text:
- Reviewed:
- Description: Blockchains have been widely used in Internet of Things (IoT) applications, including smart cities, smart homes and smart governance, to provide high levels of security and privacy. In this article, we advance a Blockchain-based decentralized architecture for the storage of IoT data produced from smart homes/cities. The architecture includes a secure communication protocol, using a sign-encryption technique, between power-constrained IoT devices and a Gateway. The sign-encryption also preserves privacy. We propose that a Software Agent executing on the Gateway selects a Miner node using the performance parameters of the Miners. Simulations demonstrate that the recommended Miner selection outperforms the Proof of Work selection used in Bitcoin and Random Miner Selection.
- Description: Proceedings of the IEEE International Conference on Industrial Technology
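The miner-selection idea described in the abstract can be sketched in a few lines: the gateway agent ranks candidate miners by a weighted score over reported performance parameters and picks the best, rather than using proof-of-work or a random draw. The parameter names and weights below are illustrative assumptions, not the paper's actual metrics.

```python
def select_miner(miners, weights):
    """Return the candidate miner with the highest weighted performance score."""
    def score(m):
        # Weighted sum over the performance parameters the gateway tracks.
        return sum(weights[k] * m[k] for k in weights)
    return max(miners, key=score)

# Hypothetical candidate miners with normalized performance readings.
miners = [
    {"id": "m1", "cpu": 0.9, "bandwidth": 0.4, "uptime": 0.99},
    {"id": "m2", "cpu": 0.6, "bandwidth": 0.9, "uptime": 0.95},
    {"id": "m3", "cpu": 0.5, "bandwidth": 0.5, "uptime": 0.80},
]
weights = {"cpu": 0.4, "bandwidth": 0.4, "uptime": 0.2}
best = select_miner(miners, weights)
```

Compared with a proof-of-work race, a deterministic argmax like this spends no energy on puzzle solving, which is why it suits power-constrained IoT settings.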
Joint texture and depth coding using cuboid data compression
- Paul, Manoranjan, Chakraborty, Subrata, Murshed, Manzur, Podder, Pallab
- Authors: Paul, Manoranjan , Chakraborty, Subrata , Murshed, Manzur , Podder, Pallab
- Date: 2015
- Type: Text , Conference proceedings
- Relation: 2015 18th International Conference on Computer and Information Technology (ICCIT); Dhaka, Bangladesh; 21st-23rd December 2015 p. 138-143
- Full Text:
- Reviewed:
- Description: The latest multiview video coding (MVC) standards, such as 3D-HEVC and H.264/MVC, normally encode texture and depth videos separately. A significant amount of rate-distortion and computational performance is sacrificed by separate encoding because joint information is not exploited. Separate encoding also creates a synchronization issue for 3D scene formation in the decoder. Moreover, the hierarchical frame referencing architecture in MVC creates random access frame delay. In this paper we develop an encoder and decoder framework in which texture and depth video are encoded jointly by forming and encoding a 3D cuboid using high-dimensional entropy coding. The results from our experiments show that our proposed framework outperforms 3D-HEVC in rate-distortion performance and significantly reduces computational time by reducing random access frame delay.
Performance evaluation of a process bus architecture in a zone substation based on IEC 61850-9-2
- Kumar, Shantanu, Das, Narottam, Islam, Syed
- Authors: Kumar, Shantanu , Das, Narottam , Islam, Syed
- Date: 2016
- Type: Text , Conference proceedings , Conference paper
- Relation: IEEE PES Asia-Pacific Power and Energy Engineering Conference, APPEEC 2015; Brisbane, Australia; 15th-18th November 2015 Vol. 2016, p. 1-5
- Full Text:
- Reviewed:
- Description: Ethernet communication has been the backbone of high-speed communication in digital substations from a protection relaying, control and automation perspective. Major substation manufacturers have been constantly upgrading software and adding new features to their Intelligent Electronic Devices (IEDs) to carry out multiple functions in process bus devices. This paper presents simulation results on the delay in packet transfer in an Ethernet environment. Understanding the delay in packet transfer of Generic Object Oriented Substation Event (GOOSE) and Sampled Values (SV) messages assists the user in understanding the automation, control and protection of substation primary plant, such as current transformers (CTs), voltage transformers (VTs) and circuit breakers, connected in the network during a fault condition. A conventional substation uses Merging Units (MUs) to communicate with IEDs featuring the IEC 61850-9-2 standard. This standard provides transparency and standardization of data communication while addressing issues related to reliability, packet sharing and maintainability. However, the process bus architecture is yet to be widely accepted in the industry and needs further validation due to a lack of confidence. This paper evaluates the performance of a digital protection scheme in a zone substation operating at 132 kV, featuring IEC 61850-9-2 IEDs, using the optimized network engineering tool (OPNET) simulator. Understanding the delay in receiving time-critical GOOSE and SV messages is critical from a protection perspective, as loss of data could cause protection malfunction, jeopardizing vital substation plant.
Master control unit based power exchange strategy for interconnected microgrids
- Batool, Munira, Islam, Syed, Shahnia, Farhad
- Authors: Batool, Munira , Islam, Syed , Shahnia, Farhad
- Date: 2017
- Type: Text , Conference proceedings , Conference paper
- Relation: 2017 Australasian Universities Power Engineering Conference, AUPEC 2017; Melbourne, Australia; 19th-22nd November 2017 Vol. 2017, p. 1-6
- Full Text:
- Reviewed:
- Description: Large remote area networks normally have self-sufficient electricity systems. These systems also rely on non-dispatchable DGs (N-DGs) for an overall reduction in the cost of electricity production. Uncertainties in the nature of N-DGs, as well as in load demand, can impose a cost burden on islanded microgrids (MGs). This paper proposes a power exchange strategy for an interconnected MG (IMG) system, as part of a large remote area network, with optimized control of the dispatchable DGs (D-DGs) that are members of a master control unit (MCU). The MCU analysis includes the equal cost increment principle to indicate the amount of power that could be exchanged with neighboring MGs in an overloading situation. Sudden changes in N-DGs and load, defined as interruptions, are also part of the analysis. The optimization problem is formulated on the basis of MCU adjustment for overloading or underloading situations and the suitability of a support MG (S-MG) in the IMG system for power exchange, with the key features of low cost and minimum technical impact. A mixed integer linear programming (MILP) technique is applied to solve the formulated problem. The impact of the proposed strategy is assessed by numerical analysis in MATLAB under a stochastic environment.
A review on chemical diagnosis techniques for transformer paper insulation degradation
- Abu Bakar, Norazhar, Abu Siada, Ahmed, Islam, Syed
- Authors: Abu Bakar, Norazhar , Abu Siada, Ahmed , Islam, Syed
- Date: 2013
- Type: Text , Conference proceedings , Conference paper
- Relation: 2013 Australasian Universities Power Engineering Conference, AUPEC 2013; Hobart, Australia; 29th September-3rd October 2013 p. 1-6
- Full Text:
- Reviewed:
- Description: Energized parts within a power transformer are isolated using paper insulation and are immersed in insulating oil. Hence, transformer oil and paper insulation are essential sources for detecting incipient and fast-developing power transformer faults. Several chemical diagnosis techniques have been developed to examine the condition of paper insulation, such as degree of polymerization, carbon oxides, furanic compounds and methanol. The principles and limitations of these techniques are discussed and compared in this paper.
Identity crime: The challenges in the regulation of identity crime
- Authors: Holm, Eric
- Date: 2012
- Type: Text , Conference proceedings
- Full Text:
- Description: This paper discusses the unique challenges of regulating identity crime. Identity crime involves the use of personal and private identification information to perpetrate crimes of fraud. The article considers two significant issues that obstruct responses to this crime: firstly, the reporting of the crime, and secondly, the issue of jurisdiction. Finally, the paper explores some of the current responses to identity crime. © 2012 IEEE.
Industry type and business size on economic growth: Comparing Australia's Regional and Metropolitan areas
- Authors: Mardaneh, Karim
- Date: 2011
- Type: Text , Conference proceedings
- Relation: 56th Annual ICSB World Conference; Back to the Future - Changes in Perspectives of Global Entrepreneurship and Innovation,Stockholm, Sweden, 15-18 June, 2011
- Full Text:
- Reviewed:
- Description: While the main body of literature regarding small-to-medium enterprises is focused on formation and growth, there is insufficient research on the role of both (a) firm size and (b) location in economic growth. The role of firm size and industrial structure in economic growth has been examined by some researchers. Pagano (2003) and Pagano and Schivardi (2000) identified a positive association between average firm size and growth, and Carree and Thurik (1999) found evidence that a low number of large firms in an industry could lead to higher value-added growth. The current study investigates the impact of industry structure, and of the businesses operating within these industries, on economic growth. This paper uses the k-means clustering algorithm to cluster Statistical Local Areas, and regression analysis to identify drivers of economic growth. Preliminary results suggest that business size may act as a driver of economic growth, but the impact could vary with location.
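As a sketch of the clustering step described in the abstract, a minimal one-dimensional k-means (Lloyd's algorithm) grouping Statistical Local Areas by a single growth indicator might look as follows; the data values, seed centroids and the reduction to one dimension are invented for illustration and are not the paper's actual data.

```python
def kmeans_1d(values, centroids, iters=20):
    """Lloyd's algorithm on scalars; returns final centroids and assignments."""
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = {i: [] for i in range(len(centroids))}
        for v in values:
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Recompute each centroid as the mean of its cluster (keep empty ones).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in clusters.items()]
    assign = [min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
              for v in values]
    return centroids, assign

# Hypothetical growth rates (%) for six SLAs, with two seed centroids.
growth = [1.1, 1.3, 0.9, 4.8, 5.2, 5.0]
cents, labels = kmeans_1d(growth, [1.0, 5.0])
```

The resulting cluster labels would then serve as the grouping variable in the regression stage the abstract mentions.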
Management to insulate ecosystem services from the effects of catchment development
- Authors: Gell, Peter
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 2nd International Conference on Energy, Environmental and Information System, ICENIS 2017; Semarang, Indonesia; 15th-16th August 2017; published in E3S Web of Conferences Vol. 31, p. 1-6
- Full Text:
- Reviewed:
- Description: Natural ecosystems provide amenity to human populations in the form of ecosystem services. These services are grouped into four broad categories: provisioning-food and water production; regulating-control of climate and disease; supporting-crop pollination; and cultural-spiritual and recreational benefits. Aquatic systems provide considerable service through the provision of potable water, fisheries and aquaculture production, nutrient mitigation, and the psychological benefits that accrue from the aesthetic amenity provided by lakes, rivers and other wetlands. Further, littoral and riparian ecosystems, and aquifers, protect human communities from sea level encroachment and from tidal and river flooding. Catchment and water development provides critical resources for human consumption. Where these provisioning services are prioritized over others, the level and quality of production may be impacted. Further, the benefits from these provisioning services come with the opportunity cost of diminished regulating, supporting and cultural services. This imbalance flags concerns for humanity as it exceeds recognised safe operating spaces. These concepts are explored by reference to long-term records of change in some of the world's largest river catchments, and lessons are drawn that may enable other communities to consider the balance of ecosystem services in natural resource management.
A biometric based authentication and encryption Framework for Sensor Health Data in Cloud
- Sharma, Surender, Balasubramanian, Venki
- Authors: Sharma, Surender , Balasubramanian, Venki
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: Use of a remote healthcare monitoring application (HMA) can not only enable a healthcare seeker to live a normal life while receiving treatment but also prevent critical healthcare situations through early intervention. For this to happen, the HMA has to provide continuous monitoring through sensors attached to the patient's body or in close proximity to the patient. Owing to the elastic nature of the cloud, the implementation of HMAs in the cloud has recently been the subject of intense research. Although a cloud-based implementation provides scalability, patient health data is highly sensitive and requires a high level of privacy and security in cloud-based shared storage. In addition, protecting the real-time arrival of large volumes of sensor data from continuous monitoring of the patient poses a bigger challenge. In this work, we propose a self-protective security framework for our cloud-based HMA. Our framework (1) protects the sensor data in the cloud from unauthorized access and (2) self-protects the data, using biometrics, in case of breached access. The framework is detailed in the paper using mathematical formulation and algorithms. © 2014 IEEE.