Timeless principles of taxpayer protection: how they adapt to digital disruption
- Authors: Bentley, Duncan
- Date: 2019
- Type: Text, Journal article
- Relation: eJournal of Tax Research Vol. 16, no. 3 (2019), p. 679-713
- Full Text:
- Reviewed:
- Description: Digital transformation will pose growing challenges to tax revenues and systems of taxation that were designed for another century. The tax rules may hasten slowly, but the record of response to the challenges of electronic commerce, and of base erosion and profit shifting, shows that tax administration is more adaptable. This article identifies the detailed nature of technological changes in electronics and systems; big data, automation and artificial intelligence; and security, including blockchain; as those changes affect tax administration. It highlights the critical taxpayer rights issues and applies accepted taxpayer rights frameworks. The article concludes that taxpayer rights principles are both highly adaptable to a digital world, and provide useful guidance to where urgent action and further research are required. © 2019 UNSW Business School™.
Service quality assessment of internet banking: empirical evidences from Namibia
- Authors: Mutesi, Johannes, Mutingi, Michael, Chakraborty, Ayon
- Date: 2016
- Type: Text, Journal article
- Relation: E-service journal Vol. 10, no. 1 (2016), p. 42-65
- Full Text:
- Reviewed:
- Description: The SERVQUAL model has long attracted researchers to apply it in different contexts. The objective of this research is to focus on e-service quality, in the absence of face-to-face encounters, for commercial banks in an emerging economy such as Namibia. The focus is to understand both customer perceptions of Internet banking and the usability of the banking website. Based on prior literature on service quality assessment and website usability, an a priori model was developed. The model is then tested through a questionnaire survey of customers of commercial banks in Namibia. Using factor analysis, a refined model for assessing the service quality of Internet banking was developed. The refined model includes three service quality dimensions: service performance, communication and website design. Service performance was rated highest on satisfaction, whereas in communication customers were dissatisfied with, as well as indifferent towards, 24-hour customer service. This is contrary to the existing literature. The third dimension, website design, scored high on customer satisfaction. Finally, the usability evaluation found that the acceptance level of the Internet banking websites of the commercial banks in Namibia is marginally high.
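The abstract above aggregates Likert-scale survey items into three refined dimensions. As a minimal sketch of that aggregation step, the item-to-dimension groupings and the sample responses below are hypothetical illustrations, not the authors' instrument or data:

```python
# Hypothetical mapping of questionnaire items to the paper's three
# refined service quality dimensions.
DIMENSIONS = {
    "service_performance": ["q1", "q2"],
    "communication": ["q3", "q4"],
    "website_design": ["q5", "q6"],
}

def dimension_scores(responses):
    """Mean score per dimension over all respondents (1-5 Likert scale)."""
    scores = {}
    for dim, items in DIMENSIONS.items():
        values = [r[item] for r in responses for item in items]
        scores[dim] = sum(values) / len(values)
    return scores

# Two invented respondents, each answering six Likert items.
respondents = [
    {"q1": 5, "q2": 4, "q3": 2, "q4": 3, "q5": 4, "q6": 5},
    {"q1": 4, "q2": 4, "q3": 2, "q4": 2, "q5": 5, "q6": 4},
]
```

A real study would first run factor analysis to discover which items load on which dimension; here the grouping is simply assumed.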
Building change detection from LIDAR point cloud data based on connected component analysis
- Authors: Awrangjeb, Mohammad, Fraser, Clive, Lu, Guojun
- Date: 2015
- Type: Text, Conference proceedings
- Relation: ISPRS Geospatial Week 2015; La Grande Motte, France; 28th September-3rd October 2015; published in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Vol. II-3, p. 393-400
- Full Text:
- Reviewed:
- Description: Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid under-segmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended building parts, a connected component analysis algorithm is applied, and for each connected component its area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update detected changes to the existing building map. Experimental results show that the improved building detection technique offers not only higher performance in terms of completeness and correctness, but also fewer under-segmentation errors compared to its original counterpart. The proposed change detection technique produces no omission errors, and thus can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
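The core step the abstract describes is connected component analysis on a binary change mask, followed by filtering components by size. A minimal sketch of that idea, assuming a rasterised mask and an invented minimum-area threshold (the paper additionally estimates width and height):

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected regions of 1s in a binary grid; returns a list of cell lists."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:  # breadth-first flood fill
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components.append(comp)
    return components

def building_parts(mask, min_area=3):
    """Keep only components large enough to count as changed building parts."""
    return [c for c in connected_components(mask) if len(c) >= min_area]

# Toy change mask: one 3-cell region and one 2-cell region.
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
```

On this mask, only the 3-cell component survives the area filter; the 2-cell region is treated as noise.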
Automated analysis of performance and energy consumption for cloud applications
- Authors: Chen, Feifei, Grundy, John, Schneider, Jean-Guy, Yang, Yun, He, Qiang
- Date: 2014
- Type: Text, Conference paper
- Relation: Proceedings of the 5th ACM/SPEC international conference on Performance engineering p. 39-50
- Full Text:
- Reviewed:
- Description: In cloud environments, IT solutions are delivered to users via shared infrastructure. One consequence of this model is that large cloud data centres consume large amounts of energy and produce significant carbon footprints. A key objective of cloud providers is thus to develop resource provisioning and management solutions that minimise energy consumption while still guaranteeing Service Level Agreements (SLAs). However, a thorough understanding of both system performance and energy consumption patterns in complex cloud systems is imperative to achieve a balance of energy efficiency and acceptable performance. In this paper, we present StressCloud, a performance and energy consumption analysis tool for cloud systems. StressCloud can automatically generate load tests and profile system performance and energy consumption data. Using StressCloud, we have conducted extensive experiments to profile and analyse system performance and energy consumption with different types and mixes of runtime tasks. We collected fine-grained energy consumption and performance data under different resource allocation strategies, system configurations and workloads. The experimental results show the correlation coefficients between energy consumption, system resource allocation strategies and workload, as well as the performance of the cloud applications. Our results can be used to guide the design and deployment of cloud applications to balance energy and performance requirements.
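The abstract reports correlation coefficients between workload and energy consumption. As a sketch of the kind of analysis such profiling data enables, the snippet below computes a Pearson coefficient; the workload and energy figures are invented for illustration, not StressCloud measurements:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

workload = [10, 20, 30, 40, 50]       # e.g. concurrent requests (invented)
energy_j = [120, 205, 330, 410, 505]  # measured joules per run (invented)
r = pearson(workload, energy_j)
```

A near-linear relationship like this one yields a coefficient close to 1, suggesting energy scales roughly proportionally with load for this (hypothetical) task mix.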
Automated unsupervised authorship analysis using evidence accumulation clustering
- Authors: Layton, Robert, Watters, Paul, Dazeley, Richard
- Date: 2013
- Type: Text, Journal article
- Relation: Natural Language Engineering Vol. 19, no. 1 (2013), p. 95-120
- Full Text:
- Reviewed:
- Description: Authorship Analysis aims to extract information about the authorship of documents from features within those documents. Typically, this is performed as a classification task with the aim of identifying the author of a document, given a set of documents of known authorship. Alternatively, unsupervised methods have been developed primarily as visualisation tools to assist the manual discovery of clusters of authorship within a corpus by analysts. However, there is a need in many fields for more sophisticated unsupervised methods to automate the discovery, profiling and organisation of related information through clustering of documents by authorship. An automated and unsupervised methodology for clustering documents by authorship is proposed in this paper. The methodology is named NUANCE, for n-gram Unsupervised Automated Natural Cluster Ensemble. Testing indicates that the derived clusters have a strong correlation to the true authorship of unseen documents. © 2011 Cambridge University Press.
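NUANCE builds its cluster ensemble on character n-gram features. The sketch below shows n-gram profiling with cosine similarity and a greedy threshold clustering; the clustering step is a deliberate simplification for illustration, not the paper's evidence accumulation algorithm, and the documents and threshold are invented:

```python
import math
from collections import Counter

def ngram_profile(text, n=3):
    """Frequency profile of character n-grams of the text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram frequency profiles."""
    dot = sum(p[g] * q.get(g, 0) for g in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def greedy_cluster(docs, threshold=0.5, n=3):
    """Assign each document to the first cluster it is similar enough to."""
    clusters = []  # list of (representative profile, member indices)
    for i, doc in enumerate(docs):
        profile = ngram_profile(doc, n)
        for rep, members in clusters:
            if cosine(profile, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((profile, [i]))
    return [members for _, members in clusters]

docs = [
    "the cat sat on the mat",
    "the cat sat on a hat",
    "completely different words here",
]
```

The two stylistically similar documents share many trigrams and land in one cluster, while the third forms its own; an ensemble method like NUANCE would combine many such clusterings rather than rely on a single threshold.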
Integration of LIDAR data and orthoimage for automatic 3D building roof plane extraction
- Authors: Awrangjeb, Mohammad, Fraser, Clive, Lu, Guojun
- Date: 2013
- Type: Text, Conference paper
- Relation: 2013 IEEE International Conference on Multimedia and Expo (ICME)
- Full Text:
- Reviewed:
- Description: Automatic 3D extraction of building roofs from remotely sensed data is important for many applications including city modeling. This paper proposes a new method for automatic 3D roof extraction through an effective integration of LIDAR (Light Detection And Ranging) data and multispectral orthoimagery. Using the ground height from a DEM (Digital Elevation Model), the raw LIDAR points are separated into two groups. The first group contains the ground points that are exploited to constitute a `ground mask'. The second group contains the non-ground points, which are segmented using an innovative image line guided segmentation technique to extract the roof planes. The image lines extracted from the grey-scale version of the orthoimage are classified into several classes such as `ground', `tree', `roof edge' and `roof ridge' using the ground mask and colour and texture information from the orthoimagery. During roof plane extraction, the lines from the latter two classes are used to fit roof planes to the neighbouring non-ground LIDAR points. Finally, a new rule-based procedure is applied to remove planes constructed on trees. Experimental results show that the proposed method successfully removes vegetation and offers high extraction rates.
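The method's first step splits raw LIDAR points into ground and non-ground groups using the DEM ground height. A minimal sketch of that split, assuming a locally flat DEM and an illustrative 1 m tolerance (not the paper's parameters):

```python
def separate_points(points, dem_height, threshold=1.0):
    """Split (x, y, z) LIDAR points into (ground, non_ground) by height above the DEM."""
    ground, non_ground = [], []
    for x, y, z in points:
        # Points within `threshold` metres of the DEM surface count as ground.
        (ground if z - dem_height <= threshold else non_ground).append((x, y, z))
    return ground, non_ground

# Invented points: two near the terrain surface, two on a roof.
points = [(0, 0, 100.2), (1, 0, 100.8), (2, 0, 105.5), (3, 1, 107.1)]
ground, non_ground = separate_points(points, dem_height=100.0)
```

In the paper, the ground points then form the ground mask, while the non-ground points feed the image-line-guided roof plane segmentation.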
Towards understanding malware behaviour by the extraction of API calls
- Authors: Alazab, Mamoun, Venkatraman, Sitalakshmi, Watters, Paul
- Date: 2010
- Type: Text, Conference proceedings
- Full Text:
- Description: One of the recent trends adopted by malware authors is to use packers, or software tools that instigate code obfuscation, in order to evade detection by antivirus scanners. With evasion techniques such as polymorphism and metamorphism, malware is able to fool current detection techniques. Thus, security researchers and the anti-virus industry are facing a herculean task in extracting payloads hidden within packed executables. It is a common practice to use manual unpacking, or static unpacking using software tools, and analyse the application programming interface (API) calls for malware detection. However, extracting these features from the unpacked executables for reverse obfuscation is labour intensive and requires deep knowledge of low-level programming that includes kernel and assembly language. This paper presents an automated method of extracting API call features and analysing them in order to understand their use for malicious purposes. While some research has been conducted in arriving at file birthmarks using API call features and the like, there is a scarcity of work that relates to features in malcodes. To address this gap, we attempt to automatically analyse and classify the behaviour of API function calls based on the malicious intent hidden within any packed program. This paper uses a four-step methodology for developing a fully automated system to arrive at six main categories of suspicious behaviour of API call features.
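The final classification step the abstract describes maps extracted API call names onto categories of suspicious behaviour. The sketch below illustrates that mapping with well-known Windows API names; the category names and API lists are illustrative choices, not the authors' six-category taxonomy:

```python
# Hypothetical behaviour categories keyed by the Win32 APIs that suggest them.
BEHAVIOUR_CATEGORIES = {
    "process_injection": {"CreateRemoteThread", "WriteProcessMemory", "VirtualAllocEx"},
    "keylogging": {"SetWindowsHookExA", "GetAsyncKeyState"},
    "networking": {"InternetOpenA", "connect", "send"},
    "persistence": {"RegSetValueExA", "CreateServiceA"},
}

def categorise_api_calls(api_calls):
    """Return the set of behaviour categories triggered by the extracted API calls."""
    hits = set()
    for call in api_calls:
        for category, apis in BEHAVIOUR_CATEGORIES.items():
            if call in apis:
                hits.add(category)
    return hits

# API names as they might be extracted from an unpacked executable's import table.
sample = ["VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread", "send"]
```

This classic injection sequence (allocate remote memory, write a payload, start a remote thread) plus an outbound `send` would flag both injection and networking behaviour.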