Disconnection time and sequence of rooftop PVs under short-circuit faults in low voltage networks
- Yengejeh, Hadi, Shahnia, Farhad, Islam, Syed
- Authors: Yengejeh, Hadi , Shahnia, Farhad , Islam, Syed
- Date: 2015
- Type: Text , Conference proceedings , Conference paper
- Relation: North American Power Symposium, NAPS 2015; Charlotte, United States; 4th-6th October 2015 p. 1-6
- Full Text:
- Reviewed:
- Description: This paper presents an analysis of the disconnection time of single-phase rooftop PVs located in a three-phase four-wire low voltage distribution feeder after a line-to-ground short-circuit fault on the feeder. The paper evaluates and discusses the disconnection time and sequence of the PVs in a network with a 100% PV penetration level. The impact of parameters such as the location of the fault, the fault impedance and the ratio of PV generation capacity to load demand is considered. Furthermore, the effect of system earthing, in the form of multiple earthed neutral and non-effectively grounded systems, on the PV disconnection time is evaluated. The analyses aim to identify the conditions under which the PVs in the feeder may fail to disconnect after a line-to-ground fault and keep feeding the fault. The analyses are carried out in PSCAD/EMTDC software.
Fast coding strategy for HEVC by motion features and saliency applied on difference between successive image blocks
- Podder, Pallab, Paul, Manoranjan, Murshed, Manzur
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Conference proceedings
- Relation: Pacific-Rim Symposium on Image and Video Technology (PSIVT 2015); Auckland, New Zealand; 23rd-27th November 2015; published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 9431, p. 175-186
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: Introducing a number of innovative and powerful coding tools, the High Efficiency Video Coding (HEVC) standard promises double the compression efficiency of its predecessor, H.264, at similar perceptual quality. The increased computational time complexity, however, is an important issue for the video coding research community. This paper attempts to reduce that complexity by efficiently selecting appropriate block-partitioning modes based on motion features and saliency applied to the difference between successive image blocks. Since this difference captures the explicit visible motion and salient information, we develop a cost function that combines the motion features with the salient features of the image difference. The combined features are then converted into an area-of-interest (AOI) based binary pattern for the current block. This pattern is compared with a previously defined codebook of binary pattern templates to select a subset of modes. Motion estimation (ME) and motion compensation (MC) are performed only on the selected subset of modes, without exhaustive exploration of all modes available in HEVC. The experimental results reveal a 42% reduction in the encoding time of the HEVC encoder with similar subjective and objective image quality.
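- Sketch: a minimal illustration (not the authors' implementation) of the AOI-pattern step described above: the difference between co-located blocks is binarised and matched against a small codebook of templates by Hamming distance, and the matched template nominates the subset of partitioning modes to search. The block size, threshold and codebook entries are illustrative assumptions.

```python
import numpy as np

def aoi_pattern(cur_block, ref_block, threshold=12):
    """Binarise the absolute difference between co-located blocks (AOI = 1)."""
    diff = np.abs(cur_block.astype(np.int16) - ref_block.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def select_mode_subset(pattern, codebook):
    """Pick the codebook template closest in Hamming distance and return
    the subset of partitioning modes associated with it."""
    best_name, best_dist = None, None
    for name, (template, modes) in codebook.items():
        dist = np.count_nonzero(pattern != template)
        if best_dist is None or dist < best_dist:
            best_name, best_dist = name, dist
    return codebook[best_name][1]

# Hypothetical 8x8 codebook: all-static, left-half moving, full-block moving.
N = 8
codebook = {
    "static":    (np.zeros((N, N), np.uint8), ["SKIP", "2Nx2N"]),
    "left_half": (np.hstack([np.ones((N, N // 2), np.uint8),
                             np.zeros((N, N // 2), np.uint8)]), ["Nx2N", "nLx2N"]),
    "full":      (np.ones((N, N), np.uint8), ["2Nx2N", "NxN"]),
}

cur = np.random.randint(0, 256, (N, N), dtype=np.uint8)
ref = np.random.randint(0, 256, (N, N), dtype=np.uint8)
modes_to_search = select_mode_subset(aoi_pattern(cur, ref), codebook)
print(modes_to_search)  # only these modes undergo ME/MC
```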
Fast intermode selection for HEVC video coding using phase correlation
- Podder, Pallab, Paul, Manoranjan, Murshed, Manzur, Chakraborty, Subrata
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur , Chakraborty, Subrata
- Date: 2015
- Type: Text , Conference proceedings , Conference paper
- Relation: 2014 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2014; Wollongong, Australia; 25th-27th November 2014 p. 1-8
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: The recent High Efficiency Video Coding (HEVC) standard demonstrates higher rate-distortion (RD) performance than its predecessor H.264/AVC by using a number of new tools, especially larger and asymmetric inter-mode variable-size motion estimation and compensation. This requires more than four times the computational time of H.264/AVC. Reducing this time while maintaining the standard video quality has therefore been a major concern for researchers. Our motivation is to reduce the computational time through smart selection of the appropriate modes in HEVC. To accomplish this, we use phase correlation to approximate the motion information between current and reference blocks, compare it with a number of different binary pattern templates, and then select a subset of motion estimation modes without exhaustively exploring all possible modes. The experimental results show that the proposed HEVC-PC (HEVC with Phase Correlation) scheme outperforms the standard HEVC scheme in terms of computational time while preserving the same quality of the video sequences. More specifically, around 40% of the encoding time is saved compared to the exhaustive mode selection in HEVC. © 2014 IEEE.
- Description: 2014 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2014
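- Sketch: the phase correlation step described above can be illustrated as below; the normalised cross-power spectrum of the current and reference blocks yields a correlation surface whose peak approximates the dominant displacement. This is a generic FFT-based sketch, not the HEVC-PC code, and the block size is an assumption.

```python
import numpy as np

def phase_correlation(cur_block, ref_block):
    """Estimate the dominant displacement between two equal-size blocks via the
    normalised cross-power spectrum (a sketch, not the paper's implementation)."""
    F1 = np.fft.fft2(cur_block.astype(float))
    F2 = np.fft.fft2(ref_block.astype(float))
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-9          # keep phase, normalise magnitude
    surface = np.abs(np.fft.ifft2(cross))  # correlation surface
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
    h, w = surface.shape
    if dy > h // 2: dy -= h                # wrap to signed offsets
    if dx > w // 2: dx -= w
    return dy, dx, surface.max()

# Example: a circularly shifted block yields the shift as the correlation peak.
ref = np.random.rand(32, 32)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))
print(phase_correlation(cur, ref))  # approximately (3, -2, ~1.0)
```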
Fusion of LiDAR data and multispectral imagery for effective building detection based on graph and connected component analysis
- Gilani, Alinaqi, Awrangjeb, Mohammad, Lu, Guojun
- Authors: Gilani, Alinaqi , Awrangjeb, Mohammad , Lu, Guojun
- Date: 2015
- Type: Text , Conference proceedings
- Full Text:
- Description: Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph-based algorithm which combines multispectral imagery and airborne LiDAR information to completely delineate building boundaries in urban and densely vegetated areas. In the first phase, LiDAR data is divided into two groups, ground and non-ground data, using the ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees) and the white region describes the ground (earth). The second phase begins with Connected Component Analysis (CCA), where the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of unit distance is defined between a black pixel and a neighbouring black pixel, if any; no edge exists from a black pixel to a neighbouring white pixel. This produces a graph of disconnected components, where each component represents a prospective building or dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from the multispectral imagery, around the graph components where possible. In the fourth step, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and isolated occluded parts of buildings. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards building edges. The proposed technique is evaluated using two Australian data sets, Aitkenvale and Hervey Bay, for object-based and pixel-based completeness, correctness, and quality. It detects buildings larger than 50 m2 and 10 m2 in the Aitkenvale site with 100% and 91% accuracy, respectively, while in the Hervey Bay site it performs better, with 100% accuracy for buildings larger than 10 m2 in area.
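- Sketch: the connected component analysis of the second phase can be illustrated with the fragment below, which labels contiguous elevated regions of the primary building mask so that each component becomes a candidate building or tree. The 2.5 m height threshold and 8-connectivity are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def elevated_components(lidar_height, ground_height, height_threshold=2.5):
    """Build the primary mask of elevated (non-ground) cells and label its
    connected components; each component is a candidate building or tree."""
    mask = (lidar_height - ground_height) > height_threshold   # True = elevated
    structure = np.ones((3, 3), dtype=bool)                     # 8-connected neighbourhood
    labels, n_components = ndimage.label(mask, structure=structure)
    boundaries = ndimage.find_objects(labels)                   # bounding slices per component
    return labels, n_components, boundaries

# Toy 1 m grid: two separated elevated blobs -> two components.
ground = np.zeros((60, 60))
height = np.zeros((60, 60))
height[5:15, 5:20] = 6.0     # candidate building
height[30:50, 35:55] = 8.0   # candidate dense vegetation
labels, n, boxes = elevated_components(height, ground)
print(n)  # 2
```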
High performance communication redundancy in a digital substation based on IEC 62439-3 with a station bus configuration
- Kumar, Shantanu, Das, Narottam, Islam, Syed
- Authors: Kumar, Shantanu , Das, Narottam , Islam, Syed
- Date: 2015
- Type: Text , Conference proceedings , Conference paper
- Relation: 25th Australasian Universities Power Engineering Conference, AUPEC 2015; Wollongong, Australia; 27th-30th September 2015 p. 1-5
- Full Text:
- Reviewed:
- Description: High speed communication is critical in a digital substation from protection, control and automation perspectives. Although the International Electro-Technical Commission (IEC) 61850 standard has proven to be a reliable guide for substation automation and communication systems, it has a few shortcomings in offering redundancy in the protection architecture; these are better addressed in the IEC 62439-3 standard, which encompasses the Parallel Redundancy Protocol (PRP) and High-Availability Seamless Redundancy (HSR). Due to single port failure, data losses and interoperability issues related to multi-vendor equipment, the IEC working committee had to look beyond the IEC 61850 standard. The enhanced features of Doubly Attached Node components based on IEC 62439-3 provide redundancy in protection by having two active frames circulating data packets in the ring. These frames send out copies around the ring, and should one of the frames be lost, the other copy reaches the destination node via an alternate path, ensuring flawless data transfer at a significantly faster speed using multi-vendor equipment and fault-resilient circuits. The PRP and HSR topologies provide higher performance in a digitally protected substation and promise a better future than IEC 61850 alone due to faster processing capabilities, increased availability, minimal delay in data packet transfer and wireless communication in the network. This paper examines the performance of PRP and HSR topologies, focusing on the redundancy achieved within the network and at the end nodes of a station bus ring architecture based on IEC 62439-3.
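- Sketch: the essence of PRP as described above is that each frame travels as two copies over independent paths and the late duplicate is discarded at the receiver. The fragment below is a hedged sketch of such duplicate rejection keyed on (source, sequence number); the window size and field names are assumptions, not the IEC 62439-3 wire format.

```python
class DuplicateDiscard:
    """Sketch of PRP-style duplicate rejection at a receiving node: the first
    copy of (source, sequence number) is delivered, the copy arriving later on
    the redundant LAN is silently dropped. The window size is illustrative."""

    def __init__(self, window=1024):
        self.window = window
        self.seen = {}            # source -> set of recently seen sequence numbers

    def accept(self, source, seq):
        seen = self.seen.setdefault(source, set())
        if seq in seen:
            return False          # late duplicate from the other LAN
        seen.add(seq)
        if len(seen) > self.window:
            seen.discard(min(seen))   # crude bound on state kept per source
        return True


rx = DuplicateDiscard()
for lan, seq in [("A", 1), ("B", 1), ("A", 2), ("B", 2)]:
    delivered = rx.accept("IED-7", seq)
    print(lan, seq, "delivered" if delivered else "dropped as duplicate")
```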
Joint texture and depth coding using cuboid data compression
- Paul, Manoranjan, Chakraborty, Subrata, Murshed, Manzur, Podder, Pallab
- Authors: Paul, Manoranjan , Chakraborty, Subrata , Murshed, Manzur , Podder, Pallab
- Date: 2015
- Type: Text , Conference proceedings
- Relation: 2015 18th International Conference on Computer and Information Technology (ICCIT); Dhaka, Bangladesh; 21st-23rd December 2015 p. 138-143
- Full Text:
- Reviewed:
- Description: The latest multiview video coding (MVC) standards, such as 3D-HEVC and H.264/MVC, normally encode texture and depth videos separately. A significant amount of rate-distortion and computational performance is sacrificed by this separate encoding, owing to the lack of exploitation of joint information. Separate encoding also creates a synchronization issue for 3D scene formation in the decoder. Moreover, the hierarchical frame referencing architecture in MVC creates random-access frame delay. In this paper we develop an encoder and decoder framework in which texture and depth video are encoded jointly by forming and encoding a 3D cuboid using high-dimensional entropy coding. The results from our experiments show that the proposed framework outperforms 3D-HEVC in rate-distortion performance and reduces the computational time significantly by reducing the random-access frame delay.
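- Sketch: a minimal illustration of the cuboid idea mentioned above: co-located texture and depth blocks are stacked along a third axis so that one joint (high-dimensional) entropy coder could see them together. The block size and packing order are assumptions, and the entropy coder itself is not reproduced.

```python
import numpy as np

def form_cuboids(texture_frame, depth_frame, block=16):
    """Tile a texture frame and its depth map into co-located blocks and stack
    each pair along a third axis, yielding one cuboid per block position."""
    h, w = texture_frame.shape
    cuboids = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tex = texture_frame[y:y + block, x:x + block]
            dep = depth_frame[y:y + block, x:x + block]
            cuboids.append(np.stack([tex, dep], axis=-1))  # shape (block, block, 2)
    return np.array(cuboids)

texture = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
depth = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(form_cuboids(texture, depth).shape)  # (16, 16, 16, 2)
```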
Predicting and controlling the dynamics of infectious diseases
- Evans, Robin, Mammadov, Musa
- Authors: Evans, Robin , Mammadov, Musa
- Date: 2015
- Type: Text , Conference proceedings
- Relation: 54th IEEE Conference on Decision and Control, CDC 2015; Osaka, Japan; 15th-18th December 2015; Published in Proceedings of the IEEE Conference on Decision and Control; p. 5378-5383
- Full Text:
- Description: This paper introduces a new optimal control model to describe and control the dynamics of infectious diseases. In the present model, the average time to isolation (i.e. hospitalization) of the infectious population is the main time-dependent parameter that defines the spread of infection. All preventive measures aim to decrease the average time to isolation under given constraints. The suggested model allows one to generate a small number of possible future scenarios and to determine the corresponding trajectories of the infected population in different regions. This information is then used to find an optimal distribution of bed capacities across countries/regions according to each scenario. © 2015 IEEE.
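- Sketch: in the spirit of the abstract, the fragment below simulates a simple SIR-type model in which the time-dependent average time to isolation tau(t) removes infectious individuals at rate 1/tau(t). The model structure and parameter values are illustrative assumptions, not the paper's optimal control formulation.

```python
def simulate_sir_with_isolation(tau_of_t, beta=0.3, gamma=0.1,
                                s0=0.99, i0=0.01, days=120, dt=0.1):
    """Forward-Euler SIR-style simulation where, in addition to natural
    recovery (gamma), infectious individuals are isolated at rate 1/tau(t)."""
    s, i = s0, i0
    trajectory = []
    for step in range(int(days / dt)):
        t = step * dt
        isolation_rate = 1.0 / tau_of_t(t)
        ds = -beta * s * i
        di = beta * s * i - (gamma + isolation_rate) * i
        s, i = s + dt * ds, i + dt * di
        trajectory.append((t, i))
    return trajectory

# Scenario: preventive measures cut the average time to isolation from 7 to 3 days.
slow = simulate_sir_with_isolation(lambda t: 7.0)
fast = simulate_sir_with_isolation(lambda t: 7.0 if t < 20 else 3.0)
print(max(i for _, i in slow), max(i for _, i in fast))  # lower peak with faster isolation
```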
A biometric based authentication and encryption Framework for Sensor Health Data in Cloud
- Sharma, Surender, Balasubramanian, Venki
- Authors: Sharma, Surender , Balasubramanian, Venki
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: Use of a remote healthcare monitoring application (HMA) can not only enable healthcare seekers to live a normal life while receiving treatment but also prevent critical healthcare situations through early intervention. For this to happen, the HMA has to provide continuous monitoring through sensors attached to the patient's body or in close proximity to the patient. Owing to the elastic nature of the cloud, the implementation of HMAs in the cloud has recently become an area of intense research. Although a cloud-based implementation provides scalability, a patient's health data are highly sensitive and require a high level of privacy and security in cloud-based shared storage. In addition, protecting the real-time arrival of large volumes of sensor data from continuous patient monitoring poses an even bigger challenge. In this work, we propose a self-protective security framework for our cloud-based HMA. Our framework (1) protects the sensor data in the cloud from unauthorized access and (2) enables the data to self-protect, using biometrics, in the case of breached access. The framework is detailed in the paper using mathematical formulation and algorithms. © 2014 IEEE.
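- Sketch: one common way to bind sensor-data encryption to a biometric factor is to derive a symmetric key from a digest of the biometric template plus a salt. The fragment below (using the third-party cryptography package) is only a generic illustration, not the framework's actual formulation, and it ignores the noise tolerance (fuzzy extraction) a real biometric scheme requires.

```python
import base64
import hashlib
import os
from cryptography.fernet import Fernet

def key_from_biometric(template_bytes, salt, iterations=200_000):
    """Derive a symmetric key from a (hypothetical) biometric template digest.
    A fixed-digest derivation is only an illustration of the idea."""
    raw = hashlib.pbkdf2_hmac("sha256", template_bytes, salt, iterations)
    return base64.urlsafe_b64encode(raw)

salt = os.urandom(16)
template = b"hypothetical-minutiae-feature-vector"
cipher = Fernet(key_from_biometric(template, salt))

reading = b'{"patient": "P-001", "spo2": 97, "hr": 72}'
token = cipher.encrypt(reading)          # ciphertext stored in the cloud
print(cipher.decrypt(token) == reading)  # True only with the matching biometric key
```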
An adaptive approach to opportunistic data forwarding in underwater acoustic sensor networks
- Nowsheen, Nusrat, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Nowsheen, Nusrat , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: Reliable data transfer for underwater acoustic sensor networks (UASNs) is a major research challenge in applications such as pollution monitoring, oceanic data collection, and surveillance, due to the long propagation delay and high error rate of the acoustic channel. To address this issue, an opportunistic data forwarding protocol was previously proposed which achieves a high packet delivery ratio with less routing overhead and energy consumption by selecting the next-hop forwarder among a set of candidates based on its link reliability and data transfer reachability. However, that protocol relies on a fixed data hold time, i.e., each node holds data packets for a fixed amount of time before a forwarder discovery process is initiated. Depending on the value of the fixed hold time and the deployment scenario, this may incur large end-to-end delay. Moreover, the lack of consideration of network conditions in the hold time limits its performance. In this paper, we propose an adaptive technique to improve its performance. The adaptive approach calculates the data hold time at each node dynamically, considering a number of node and network metrics including current buffer occupancy, the delay experienced by stored data packets, arrival and service rates, and neighbours' data transmissions and reachability. Simulation results show that, compared with the fixed hold time approach, our adaptive technique reduces end-to-end delay significantly, achieves considerably higher data delivery and consumes less energy per successful packet delivery.
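- Sketch: a hedged illustration of how an adaptive hold time could be computed from the metrics the abstract lists (buffer occupancy, queued-packet delay, arrival and service rates, neighbour reachability). The weights and functional form are assumptions, not the paper's scheme.

```python
def adaptive_hold_time(buffer_occupancy, mean_queued_delay, arrival_rate,
                       service_rate, neighbour_reachability,
                       base_hold=30.0, min_hold=2.0):
    """Shrink the data hold time (seconds) as pressure on the node grows.
    All weights and the functional form are illustrative assumptions."""
    congestion = min(1.0, arrival_rate / max(service_rate, 1e-6))
    pressure = (0.35 * buffer_occupancy +        # fraction of buffer used, 0..1
                0.25 * min(1.0, mean_queued_delay / base_hold) +
                0.20 * congestion +
                0.20 * neighbour_reachability)   # 0..1, better reach -> forward sooner
    hold = base_hold * (1.0 - pressure)
    return max(min_hold, hold)

# A lightly loaded node waits longer; a stressed node triggers forwarder discovery early.
print(adaptive_hold_time(0.1, 2.0, 0.2, 1.0, 0.3))   # close to the base hold time
print(adaptive_hold_time(0.9, 25.0, 0.9, 1.0, 0.9))  # close to the minimum hold time
```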
An efficient video coding technique using a novel non-parametric background model
- Chakraborty, Subrata, Paul, Manoranjan, Murshed, Manzur, Ali, Mortuza
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 2014 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2014; Chengdu; China; 14th-18th July 2014 p. 1-6
- Full Text:
- Reviewed:
- Description: Video coding with a background frame extracted from mixture of Gaussians (MoG) based background modeling provides better rate-distortion performance than the latest video coding standard by exploiting coding efficiency in uncovered background areas. However, it suffers from high computation time, low coding efficiency for dynamic videos, and the requirement of prior knowledge of the video content. In this paper, we present a novel adaptive weighted non-parametric (WNP) background modeling technique and successfully embed it into the HEVC video coding standard. Being non-parametric (NP), the proposed technique naturally exhibits superior performance in dynamic background scenarios compared to the MoG-based technique, without a priori knowledge of the video data distribution. In addition, the WNP technique significantly reduces the noise-related drawbacks of existing NP techniques to provide better quality video coding with much lower computation time, as demonstrated through extensive comparative studies against NP, MoG and HEVC techniques.
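- Sketch: a per-pixel, sample-based non-parametric background test with per-sample weights, in the spirit of the abstract; the match radius, number of samples and weight update below are illustrative assumptions, not the WNP technique itself.

```python
import numpy as np

class WeightedSampleBackground:
    """Per-pixel non-parametric background model: each pixel keeps K intensity
    samples with weights; a pixel is background if enough weighted samples lie
    within a small radius of the new value. Radii, K and the update rule are
    illustrative."""

    def __init__(self, first_frame, k=20, radius=15, match_weight=0.6):
        h, w = first_frame.shape
        self.samples = np.repeat(first_frame[None, :, :].astype(float), k, axis=0)
        self.weights = np.full((k, h, w), 1.0 / k)
        self.radius, self.match_weight = radius, match_weight

    def classify_and_update(self, frame, learning_rate=0.05):
        close = np.abs(self.samples - frame[None, :, :]) < self.radius
        background = (self.weights * close).sum(axis=0) > self.match_weight
        # Reinforce matching samples, decay the rest, renormalise per pixel.
        self.weights = np.where(close, self.weights + learning_rate,
                                self.weights * (1 - learning_rate))
        self.weights /= self.weights.sum(axis=0, keepdims=True)
        return background  # True = background, False = (potential) foreground

frames = np.random.randint(100, 110, (5, 48, 64)).astype(float)  # quasi-static scene
model = WeightedSampleBackground(frames[0])
for f in frames[1:]:
    mask = model.classify_and_update(f)
print(mask.mean())  # close to 1.0: almost everything classified as background
```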
An evaluation of emergency plans and procedures in fitness facilities in Australia: Implications for policy and practice
- Sekendiz, Betul, Norton, Kevin, Keyzer, Patrick, Dietrich, Joachim, Coyle, Ian, Jones, Veronica, Finch, Caroline
- Authors: Sekendiz, Betul , Norton, Kevin , Keyzer, Patrick , Dietrich, Joachim , Coyle, Ian , Jones, Veronica , Finch, Caroline
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: In 2007-08, fitness facilities contributed $872.9 million to the Australian economy and provided savings in direct health care costs estimated up to $107.9 million through their positive impact on physical inactivity and associated diseases (1). In 2011-12, more than 4.3 million Australians participated in sport and physical recreation at indoor sports or fitness facilities (2). However, research across Queensland (3) and in Victoria (4) showed low compliance with emergency plans and safety practices in fitness facilities. The aim of this study was to analyse emergency plans and procedures in fitness facilities in Australia. A nationwide online risk management survey of fitness professionals (n=1178, mean age=39.9), and observational audits at randomly selected regional and metropolitan fitness facilities (n=11) in New South Wales, South Australia, Victoria and Queensland were conducted. The findings indicated that most of the fitness professionals (68.1%) rated the emergency evacuation plans and other emergency procedures in their facilities as extremely/very good (n=640). Yet, more than one fourth (27.4%) of fitness professionals were somewhat aware (n=152), or very unaware/not at all aware (n=49) of the emergency evacuation plans and other emergency procedures in their facilities. The observational audits showed that most of the fitness facilities did not clearly display their emergency response plans (73%, n=8), emergency evacuation procedures (55%, n=6) or emergency telephone numbers (91%, n=10). Many fitness facilities (36.4%, n=4) did not have an appropriate first aid kit accessible by all staff. Our study shows a lack of emergency preparedness in many fitness facilities in Australia. Emergency response capability is crucial for fitness facility managers to satisfy their duty of care to manage risks of medical emergencies and disasters such as fire, explosion, and floods. Our study has implications for policy development and education of fitness facility managers to improve emergency plans and procedures in fitness facilities in Australia.
Automatic building extraction from LIDAR data covering complex urban scenes
- Awrangjeb, Mohammad, Lu, Guojun, Fraser, Clive
- Authors: Awrangjeb, Mohammad , Lu, Guojun , Fraser, Clive
- Date: 2014
- Type: Text , Conference proceedings
- Relation: ISPRS Technical Commission III Symposium; Zurich, Switzerland; 5th-7th September 2014; published in The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Vol. XL-3, p. 25-32
- Relation: http://purl.org/au-research/grants/arc/DE120101778
- Full Text:
- Reviewed:
- Description: This paper presents a new method for segmentation of LIDAR point cloud data for automatic building extraction. Using the ground height from a DEM (Digital Elevation Model), the non-ground points (mainly buildings and trees) are separated from the ground points. Points on walls are removed from the set of non-ground points by applying the following two approaches. First, if a plane fitted to a point and its neighbourhood is perpendicular to a fictitious horizontal plane, the point is designated as a wall point. Second, when LIDAR points are projected onto a dense grid, points within a narrow area close to an imaginary vertical line on the wall should fall into the same grid cell; if three or more points fall into the same cell, the intermediate points are removed as wall points. The remaining non-ground points are then divided into clusters based on height and local neighbourhood. One or more clusters are initialised based on the maximum height of the points and then each cluster is extended by applying height and neighbourhood constraints. Planar roof segments are extracted from each cluster of points following a region-growing technique. Planes are initialised using coplanar points as seed points and then grown using plane compatibility tests. If the estimated height of a point is similar to its LIDAR-generated height, or if its normal distance to a plane is within a predefined limit, then the point is added to the plane. Once all the planar segments are extracted, the common points between the neighbouring planes are assigned to the appropriate planes based on the plane intersection line, locality and the angle between the normal at a common point and the corresponding plane. A rule-based procedure is applied to remove tree planes which are small in size and randomly oriented. The neighbouring planes are then merged to obtain individual building boundaries, which are regularised based on long line segments. Experimental results on ISPRS benchmark data sets show that the proposed method offers higher building detection and roof plane extraction rates than many existing methods, especially in complex urban scenes.
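- Sketch: the first wall test described above (a point is a wall point if a plane fitted to its neighbourhood is perpendicular to a horizontal plane) can be illustrated with a PCA normal, as below; the neighbourhood radius and angle tolerance are assumptions, not the paper's parameters.

```python
import numpy as np

def is_wall_point(point, cloud, radius=1.0, max_tilt_deg=10.0):
    """Fit a plane (via PCA) to the neighbourhood of `point`; if the plane is
    nearly perpendicular to a horizontal plane, i.e. its normal is nearly
    horizontal, flag the point as a wall point."""
    neighbours = cloud[np.linalg.norm(cloud - point, axis=1) < radius]
    if len(neighbours) < 3:
        return False
    centred = neighbours - neighbours.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]                                   # direction of least variance
    tilt = np.degrees(np.arcsin(abs(normal[2])))      # angle of normal above horizontal
    return tilt < max_tilt_deg                        # near-horizontal normal => vertical plane

# A synthetic vertical wall patch (x fixed) versus a flat roof patch (z fixed).
wall = np.column_stack([np.zeros(200), np.random.rand(200) * 4, np.random.rand(200) * 3])
roof = np.column_stack([np.random.rand(200) * 4, np.random.rand(200) * 4, np.full(200, 6.0)])
print(is_wall_point(wall[0], wall))  # True
print(is_wall_point(roof[0], roof))  # False
```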
Automatic Extraction of Buildings in an Urban Region
- Siddiqui, Fasahat, Teng, Shyh, Lu, Guojun, Awrangjeb, Mohammad
- Authors: Siddiqui, Fasahat , Teng, Shyh , Lu, Guojun , Awrangjeb, Mohammad
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 29th International Conference on Image and Vision Computing New Zealand, IVCNZ 2014; Hamilton; New Zealand; 19th-21st November 2014; published in ACM International Conference Proceeding Series p. 178-183
- Full Text:
- Reviewed:
- Description: There are currently several automatic building extraction methods in the literature, but none of them is capable of completely extracting portions of a building that lie below a pre-defined minimum building height threshold. This paper proposes a systematic method which analyzes the height differences between the extracted adjacent planes above and below the height threshold, as well as the planes' connectivity, thereby extracting all portions belonging to buildings more completely. In general, the height difference between the edges of adjacent planes above and below the height threshold that belong to the same building is more uniform. In addition, extracted planes below the height threshold that belong to a building and their adjacent ground planes also show a clear height difference. The proposed method incorporates such information to achieve better building extraction performance. We have compared our proposed method to a current state-of-the-art building extraction method qualitatively and quantitatively. Our experimental results show that our proposed method successfully recovers portions of a building below the height threshold, thereby achieving relatively higher average completeness (an improvement of 1.14%) and quality (an improvement of 0.93%).
Collaboration success in the dataverse : Libraries as digital humanities research partners
- Owen, Sue, Verhoeven, Deb, Horn, Anne, Robertson, Sabina
- Authors: Owen, Sue , Verhoeven, Deb , Horn, Anne , Robertson, Sabina
- Date: 2014
- Type: Text , Conference proceedings , Conference paper
- Relation: 35th International Association of Scientific and Technological University Libraries Conference (IATUL 2014); Espoo, Finland; 2nd-5th June 2014 p. 1-9
- Full Text:
- Reviewed:
- Description: At Deakin, the Humanities Networked Infrastructure project (HuNI) has paved new ground for facilitating the effective use and re-use of humanities research data. HuNI is one of the first large-scale eResearch infrastructure projects for the humanities in Australia and the first national, cross-disciplinary Virtual Laboratory (VL) worldwide. HuNI provides new information infrastructure services for both humanities researchers and members of the public. Its development has been funded by the National eResearch Collaboration Tools and Resources project (NeCTAR) and undertaken by a consortium of thirteen institutions led by Deakin University. A Deakin University Library team with skills in data description, curation, retrieval and preservation is exploring with digital humanities researchers and developers effective means to support and maintain the HuNI project. HuNI ingests and aggregates data from a total of 31 different Australian cultural datasets which cover a wide range of disciplines in the humanities and creative arts. The HuNI VL also provides a number of online research capabilities for humanities researchers to discover and work with the large-scale aggregation of data. The HuNI VL enables researchers to create, save and publish selections of data; to analyse and manipulate the data; to share findings; and to export the data for reuse in external environments. In a major innovation, HuNI also enables researchers to assert relationships between entities in the form of ‘socially linked’ data. This capability contributes to the building of a ‘vernacular’ network of associations between HuNI records that embody diverse perspectives on knowledge and expand avenues for research discovery beyond keyword and phrase searches. This paper reports on key milestones in this project, the future role of Libraries as digital humanities research partners and the challenges and sustainability issues that face national digital humanities research projects that are developed in strategic library settings.
Contributions of single–phase rooftop PVs on short circuits faults in residential feeders
- Yengejeh, Hadi, Shahnia, Farhad, Islam, Syed
- Authors: Yengejeh, Hadi , Shahnia, Farhad , Islam, Syed
- Date: 2014
- Type: Text , Conference proceedings , Conference paper
- Relation: 24th Australasian Universities Power Engineering Conference, AUPEC 2014; Perth, Australia; 28th September-1st October 2014 p. 1-6
- Full Text:
- Reviewed:
- Description: Sensitivity analysis results are presented to investigate the presence of single-phase rooftop photovoltaic (PV) systems in low voltage residential feeders during short circuits in the overhead lines. The PV rating and location in the feeder and the fault location are the variables of the sensitivity analysis. Single-phase faults are the main focus of this paper, and the effect of the PVs on the fault current, the current in the distribution transformer secondary and the voltage at each bus of the feeder during the fault is investigated. Furthermore, to analyze the bus voltages and fault current in the presence of multiple PVs, each with a different rating and location, a stochastic analysis is carried out to determine the expected probability density function of these parameters, considering the uncertainties of PV rating and location as well as fault location.
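- Sketch: the stochastic part of the study can be illustrated as a Monte Carlo loop that samples PV ratings, PV locations and the fault location and accumulates an empirical probability density of the fault current. The network solver below is a deliberately crude placeholder, not the paper's feeder model, and the sampling ranges are assumptions.

```python
import numpy as np

def fault_current_placeholder(pv_ratings_kw, pv_buses, fault_bus):
    """Stand-in for a feeder short-circuit solver: the grid contribution falls
    with electrical distance to the fault, and each upstream PV inverter adds a
    small, current-limited contribution. Purely illustrative."""
    grid_contribution = 1500.0 / (1.0 + 0.08 * fault_bus)           # amps, toy law
    pv_contribution = sum(1.2 * (r / 5.0) for r, b in zip(pv_ratings_kw, pv_buses)
                          if b <= fault_bus)                          # upstream PVs only
    return grid_contribution + pv_contribution

rng = np.random.default_rng(0)
n_buses, n_samples = 20, 10_000
samples = []
for _ in range(n_samples):
    n_pv = rng.integers(1, n_buses + 1)
    ratings = rng.uniform(1.0, 5.0, size=n_pv)        # kW, assumed rating range
    buses = rng.integers(1, n_buses + 1, size=n_pv)   # random PV locations
    fault_bus = rng.integers(1, n_buses + 1)          # random fault location
    samples.append(fault_current_placeholder(ratings, buses, fault_bus))

hist, edges = np.histogram(samples, bins=40, density=True)  # empirical PDF of fault current
print(np.mean(samples), np.percentile(samples, 95))
```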
Efficient HEVC scheme using motion type categorization
- Podder, Pallab, Paul, Manoranjan, Murshed, Manzur
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 10th International Conference on emerging Networking EXperiments and Technologies (CoNEXT); Sydney, Australia; 2nd-5th December 2014; published in Proceedings of the 2014 Workshop on Design, Quality and Deployment of Adaptive Video Streaming p. 41-42
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: The High Efficiency Video Coding (HEVC) standard introduces a number of innovative tools which reduce the bit-rate by approximately 50% compared to its predecessor H.264/AVC at the same perceptual video quality, whereas the computational time has increased several-fold. Reducing the encoding time while preserving the expected video quality has become a real challenge for video transmission and streaming, especially on low-powered devices. Motion estimation (ME) and motion compensation (MC) using variable-size blocks (i.e., intermodes) require 60-80% of the total computational time. In this paper we propose a new efficient intermode selection technique based on phase correlation, incorporate it into the HEVC framework to predict ME and MC modes, and perform faster intermode selection based on three dissimilar motion types found in different videos. Instead of exploring all the modes exhaustively, we select a subset of modes using the motion type, and the final mode is selected based on the Lagrangian cost function. The experimental results show that, compared to HEVC, the average computational time can be reduced by 34% while providing similar rate-distortion (RD) performance.
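- Sketch: the final step described above, choosing among the pre-selected modes by the Lagrangian cost J = D + lambda*R, can be written as below; the candidate distortion/rate values and lambda are placeholders, and in an encoder they would come from trial ME/MC of each mode.

```python
def best_mode_by_lagrangian_cost(candidates, lam):
    """Pick the mode minimising J = D + lambda * R over the pre-selected subset."""
    return min(candidates, key=lambda c: c["distortion"] + lam * c["rate"])

# Hypothetical subset chosen from the phase-correlation motion-type test.
subset = [
    {"mode": "SKIP",  "distortion": 820.0, "rate": 4.0},
    {"mode": "2Nx2N", "distortion": 610.0, "rate": 52.0},
    {"mode": "Nx2N",  "distortion": 565.0, "rate": 91.0},
]
print(best_mode_by_lagrangian_cost(subset, lam=2.5)["mode"])  # "2Nx2N" for this lambda
```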
Library learning spaces in the digital age
- Horn, Anne, Lingham, Bernadette, Owen, Sue
- Authors: Horn, Anne , Lingham, Bernadette , Owen, Sue
- Date: 2014
- Type: Text , Conference proceedings , Conference paper
- Relation: 35th International Association of Scientific and Technological University Libraries Conference (IATUL 2014); Espoo, Finland; 2nd-5th June 2014 p. 1-9
- Full Text:
- Reviewed:
- Description: Students describe the Library as being central to their learning, offering focus and inspiration, enabling access to information and technologies, and supporting collaboration with peers. Deakin University Library’s building redevelopment program has been integral to the Library’s re-imagined value proposition for students learning in the digital age. The introduction of new-generation library and learning spaces strengthens the University’s offer to students of a brilliant education ‘where you are and where you want to go’, through premium cloud-based and located learning experiences that are personal, engaging and relevant. The Library’s building projects are distinctive in terms of location and the built environment, as well as the characteristics of the university campus communities. Each successive project has brought new aspirations and challenges. Through joint research with Deakin University’s School of Architecture and Built Environment, the Library has developed a quality framework for planning and assessing library and learning spaces. This paper discusses the research findings to date on the quality framework and the need to continually review and assess indicators of quality in a highly dynamic digital environment. The Library’s experiences in introducing high-end multimedia provide some insights into planning for and delivering enduring value. The next steps in exploring how library spaces assist students in achieving their learning goals are introduced.
Optimal operation of a multi-quality water distribution system with changing turbidity and salinity levels in source reservoirs
- Mala-Jetmarova, Helena, Barton, Andrew, Bagirov, Adil
- Authors: Mala-Jetmarova, Helena , Barton, Andrew , Bagirov, Adil
- Date: 2014
- Type: Text , Conference proceedings
- Relation: http://purl.org/au-research/grants/arc/LP0990908
- Relation: 16th International Conference on Water Distribution System Analysis, WDSA 2014; Bari, Italy; 14th-17th July 2014
- Full Text:
- Description: The impact of source water quality conditions on the optimal operation of a regional multi-quality water distribution system is analysed. Three operational objectives are minimised concurrently: pump energy costs, and turbidity and salinity deviations at customer nodes. The optimisation problem is solved using GANetXL (NSGA-II) linked with EPANet. The example network incorporates scenarios with different water quality in the sources. It was found that two types of tradeoffs, competing and non-competing, exist between the objectives, and that the type of tradeoff between a particular pair of objectives is not unique across scenarios. The findings may be used for operational planning of the system.
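A minimal sketch of the three objective values evaluated for one candidate operating schedule, assuming placeholder lists that stand in for EPANet simulation outputs and a made-up tariff and quality targets; in the study these evaluations are driven by GANetXL’s NSGA-II rather than computed directly like this.

```python
# Illustrative evaluation of the three operational objectives for one
# pumping schedule. In the study these values come from EPANet simulations
# driven by GANetXL (NSGA-II); here the inputs are plain lists standing in
# for simulated results, and the tariff and targets are assumptions.

def pump_energy_cost(pump_kwh_per_hour, tariff_per_hour):
    """Objective 1: total pumping energy cost over the scheduling horizon."""
    return sum(e * t for e, t in zip(pump_kwh_per_hour, tariff_per_hour))

def quality_deviation(node_values, target):
    """Objectives 2 and 3: summed absolute deviation of turbidity or
    salinity at customer nodes from a target value."""
    return sum(abs(v - target) for v in node_values)

# Toy 4-hour horizon, 3 customer nodes (all values assumed).
energy = [12.0, 10.5, 9.8, 11.2]            # kWh per hour
tariff = [0.10, 0.08, 0.08, 0.15]           # $/kWh per hour
turbidity = [1.2, 0.9, 1.5]                 # NTU at customer nodes
salinity = [480.0, 510.0, 495.0]            # mg/L at customer nodes

objectives = (
    pump_energy_cost(energy, tariff),
    quality_deviation(turbidity, target=1.0),
    quality_deviation(salinity, target=500.0),
)
print(objectives)   # NSGA-II would minimise this vector concurrently
```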
Proceedings of the Australia-China Wetland Network Research Partnership Symposium
- Authors: Kattel, Giri
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: This publication is a compilation of short papers presented at the Australia-China Wetland Network Research Partnership Symposium, held at the Nanjing International Conference Hotel, China, on 24 March 2014. The symposium, jointly organised by the Collaborative Research Network (CRN) of Federation University Australia and the Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences (NIGLAS), brought together a range of scientists, including neo-ecologists, palaeoecologists and hydrologists, from both Australia and China. More than 100 students and scientists from across China attended the symposium. A majority of the papers presented at the symposium have overlapping themes in the ecology and hydrology of large river and wetland systems that are exposed to a range of impacts from humans and recent climate change. The research focus of this volume is the conservation and management of degraded wetlands in Australia and China and the maintenance of long-term ecological resilience.
Progressive data stream mining and transaction classification for workload-aware incremental database repartitioning
- Kamal, Joarder, Murshed, Manzur, Gaber, Mohamed
- Authors: Kamal, Joarder , Murshed, Manzur , Gaber, Mohamed
- Date: 2014
- Type: Text , Conference proceedings
- Relation: IEEE/ACM International Symposium on Big Data Computing, BDC 2014; London, United Kingdom; 8th-11th December 2014; p. 8-15
- Full Text:
- Reviewed:
- Description: Minimising the impact of distributed transactions (DTs) in a shared-nothing distributed database is extremely challenging for transactional workloads. With dynamic workloads and rapid growth in data volume, the underlying database requires incremental repartitioning to maintain an acceptable level of DTs and data load balance with minimal physical data migration. In a workload-aware repartitioning scheme, the transactional workload is modelled as a graph or hypergraph, and performing k-way min-cut clustering that guarantees minimum edge cuts can then significantly reduce the impact of DTs by mapping the workload clusters into logical database partitions. However, without exploiting the inherent workload characteristics, the overall processing and computing times for large-scale workload networks grow polynomially. In this paper, a workload-aware incremental database repartitioning technique is proposed which effectively exploits proactive transaction classification and workload stream mining techniques. Workload batches are modelled as graphs, hypergraphs and compressed hypergraphs, then repartitioned to produce a fresh tuple-to-partition data migration plan for every incremental cycle. Experimental studies in a simulated TPC-C environment demonstrate that the proposed model can be effectively adopted in managing rapid data growth and dynamic workloads, thus progressively reducing the overall processing time required to operate on the workload networks.
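A minimal sketch of the workload-graph step, assuming a made-up transaction batch and using networkx’s Kernighan-Lin bisection as a simple two-way stand-in for the k-way min-cut clustering described above; the hypergraph and compressed-hypergraph variants, the transaction classification and the stream-mining components are not modelled here.

```python
import itertools
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Illustrative sketch: tuples are vertices, co-access within a transaction
# adds edge weight, and a min-cut style partitioning maps clusters to
# logical partitions. kernighan_lin_bisection is a 2-way stand-in for the
# k-way min-cut clustering in the paper; the transaction batch is made up.

workload_batch = [
    ["cust:1", "order:7", "item:7a"],     # each list = tuples touched by one txn
    ["cust:1", "order:9"],
    ["cust:2", "order:8", "item:8a"],
    ["cust:2", "item:8a"],
]

G = nx.Graph()
for txn in workload_batch:
    for u, v in itertools.combinations(txn, 2):
        w = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, weight=w)

part_a, part_b = kernighan_lin_bisection(G, weight="weight")
placement = {t: 0 for t in part_a}
placement.update({t: 1 for t in part_b})
print(placement)   # tuple -> logical partition; comparing this against the
                   # current placement yields the incremental migration plan
```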