A conceptual framework for a theory of liquidity
- Authors: Culham, James
- Date: 2018
- Type: Text , Thesis , PhD
- Full Text:
- Description: This study contributes to the understanding of liquidity in two ways. First, it considers the multifaceted nature of liquidity and its relationship with money. Second, it constructs a conceptual framework for a theory of liquidity. The first contribution is achieved by clarifying and categorising the various forms of liquidity to identify those overlooked by the existing literature. The second contribution consists of a realist critique of the literature on liquidity and money to highlight the strengths and weaknesses of each theoretical approach. The study reflects on the attempts to analyse liquidity using moneyless models of perfect barter with the assumption that every commodity exhibits perfect saleability, an assumption that removes any need for a medium of exchange and, moreover, crowds out all other forms of liquidity. It is concluded that, because liquidity is a social and monetary phenomenon, it cannot be analysed with models populated by a representative agent consuming a single commodity. Furthermore, this conclusion is not altered by the introduction of ‘financial frictions’, which are fundamentally at odds with the nature of money. Instead, the clarification of the nature of liquidity forms the basis for an interpretation of Keynes’s theory of liquidity preference that emphasises its reliance on liquidity in general, not money in particular. The study introduces the terms redemption liquidity and exchange liquidity to explain the trade-off that underpins the theory of liquidity preference. Properly interpreted, the theory of liquidity preference can then address many of the deficiencies prevalent in the dominant theories of the rate of interest. The study therefore has implications for monetary policy and asset pricing.
- Description: Doctor of Philosophy
A conceptual model of physical performance in Australian Football
- Authors: Mooney, Mitchell
- Date: 2013
- Type: Text , Thesis , PhD
- Full Text:
- Description: Objective: The objective of this project was to identify the relative influence of valid physical parameters on elite Australian Football performance. Methods: Data were collected on match performance variables (i.e. coaches’ votes, number of ball disposals, Champion Data rank), match exercise intensity measures (m∙min⁻¹, m∙min⁻¹ above and below 15 km∙h⁻¹ and Load™∙min⁻¹) and physical capacities (yo-yo intermittent recovery test level 2, maximum oxygen uptake, running economy, relative aerobic intensity, maximal aerobic speed and maximal accumulated oxygen deficit) in elite and recreational Australian footballers. These variables were modelled to determine the logical sequence and relative importance to match performance. Results: The results indicate a sequential physical path to Australian Football performance. Yo-yo intermittent recovery test (level 2) performance influenced match exercise intensity (m∙min⁻¹ >15 km∙h⁻¹ and Load™∙min⁻¹), which in turn affected Australian Football performance (number of ball disposals and coaches’ votes). This sequence was altered by experience, playing position and neuromuscular fatigue. The number of interchange rotations also influenced match exercise intensity throughout the match. Furthermore, yo-yo intermittent recovery test (level 2) performance was found to be determined by a complex interaction of physical capacities; it was most influenced by maximum oxygen uptake, relative aerobic intensity and maximal aerobic speed. Conclusion: This dissertation showed that Australian Football performance is a complex and dynamic system influenced by many variables interacting with each other in a sequential path. Sports scientists and coaches may utilise this information as a framework to evaluate Australian Football match performance.
- Description: Doctor of Philosophy
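The sequential path reported in the abstract above (fitness-test score feeding match exercise intensity, which feeds match performance) can be sketched as a chain of simple linear relationships. All coefficients and figures below are illustrative assumptions, not estimates from the thesis:

```python
# Sketch of a sequential path: a fitness-test score feeds match exercise
# intensity, which in turn feeds match performance. Coefficients are
# invented for illustration only.

def match_intensity(yoyo_ir2_m, b0=50.0, b1=0.02):
    """Hypothetical high-speed running (m/min) from yo-yo IR2 distance (m)."""
    return b0 + b1 * yoyo_ir2_m

def match_performance(intensity, c0=5.0, c1=0.25):
    """Hypothetical ball disposals from match exercise intensity (m/min)."""
    return c0 + c1 * intensity

# A 1400 m yo-yo IR2 score propagated down the chain:
intensity = match_intensity(1400)         # 50 + 0.02 * 1400 = 78.0 m/min
disposals = match_performance(intensity)  # 5 + 0.25 * 78 = 24.5 disposals
print(disposals)
```

In the thesis the relationships are estimated statistically and moderated by experience, playing position and fatigue; the fixed coefficients here merely show the shape of the chain.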
A conceptual re-alignment of methodology underpinning tax effect accounting : An Australian exploration of the contemporary normalising effect
- Authors: Morton, Elizabeth
- Date: 2016
- Type: Text , Thesis , PhD
- Full Text:
- Description: This research examines the presence and effectiveness of the ‘normalising effect’, traditionally offered as the main justification for the adoption of tax effect accounting (TEA). TEA can be seen as a technical facet of accounting practice, ‘normalising’ the timing differences between the accounting and taxation systems. That is, income tax is recognised according to when transactions are recognised for accounting purposes in order to ‘normalise’ reported profits, thereby reflecting an income statement focus. It has been contended that this improves the usefulness of financial reports by ‘correcting’ misleading and ‘unreal’ fluctuations in income tax. Australia’s adoption of AIFRS in 2005 entailed a major conceptual re-alignment of the methodology underpinning TEA, moving away from the income statement focus in favour of a balance sheet focus, which implied a different normalisation emphasis. It is within this contemporary setting, based on a study of 90 companies over the two regulatory periods between 2002 and 2011 (AGAAP and AIFRS), that a quantitative measure of the presence and effectiveness of the normalising effect was undertaken, additionally considering the subsequent balance sheet impact. Effective normalisation was revealed during the AGAAP period, but only after the removal of loss makers during the AIFRS period. These findings suggest that the relaxation of recognition criteria under AIFRS may have had a meaningful impact on the effectiveness of the new standard. However, when normalisation was given a narrower definition in light of prima facie tax, deferred taxes had a more substantial impact, particularly during the AIFRS period. Such findings are consistent with the notion that TEA enables reported tax to be ‘as if’ it were a function of accounting, without a substantial build-up on the balance sheet as a consequence. These findings have implications for evaluating the efficacy of TEA and comprehending the nature of contemporary financial statements.
- Description: Doctor of Philosophy
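The timing-difference mechanism the abstract describes can be sketched as a toy calculation. The flat tax rate and profit figures are illustrative assumptions, not figures from the thesis:

```python
# Toy illustration of TEA's normalising mechanism: reported tax expense
# tracks accounting profit, and the timing difference between accounting
# profit and taxable income is deferred to the balance sheet.
TAX_RATE = 0.30  # assumed flat rate, for illustration only

def tea_split(accounting_profit, taxable_income, rate=TAX_RATE):
    """Return (current tax payable, deferred tax, reported tax expense)."""
    current = taxable_income * rate      # tax actually levied this period
    expense = accounting_profit * rate   # 'normalised' expense reported
    deferred = expense - current         # timing difference, carried forward
    return current, deferred, expense

# Accounting profit 100, but accelerated tax depreciation defers 20 of income:
current, deferred, expense = tea_split(100.0, 80.0)
print(current, deferred, expense)  # 24.0 6.0 30.0
```

Under the balance sheet focus adopted with AIFRS, the deferred amount is instead derived from temporary differences in asset and liability carrying values; the income statement version above is the traditional emphasis the thesis contrasts it with.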
A continuous flow elevator to lift ore vertically for deep mine haulage using a cable disc elevator
- Authors: Webb, Colin
- Date: 2020
- Type: Text , Thesis , PhD
- Full Text:
- Description: Vertical continuous ore haulage with elevators for deep mine haulage is virtually non-existent. This research concentrated on a cable disc elevator. The problem with using a cable disc elevator is the friction between the fixed elevator tube and the moving ore on the disc. This research establishes the friction forces that exist as the elevator cable and discs are raised up a stationary tube, and then focuses on finding a way to eliminate that friction. The method involved developing three test rigs. Test Rig 1 measures static friction with the ore placed on a disc in a tube mounted on load cells, measuring the resistance as the ore on the disc is lifted by a counterweight; this is relevant for an elevator that has stopped under load. Test Rig 2 measures dynamic friction in an operational 5-inch elevator, with the tube on the lifting side held stationary by load cells while the cable discs lift the ore. Test Rig 3 eliminates friction in the lifting tube by using a pipe conveyor that travels vertically at the same speed as the cable disc elevator to contain the ore on it, so that the cable disc elevator does all the lifting. The research generated results for static and dynamic friction for gravel, granite and coal. The cable tension required for an ore lift of 1000 metres and the maximum hoisting distance for some existing cables are calculated. The implications of this research are that the cable disc elevator has the potential to haul from depths greater than existing elevators, has a small footprint in a mine, and with some further development could eliminate the need for truck haulage in open-cut and underground mining.
- Description: Doctor of Philosophy
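The cable-tension calculation mentioned in the abstract reduces, in its simplest static form, to supporting the combined weight of cable, discs and ore over the full lift. The linear masses below are illustrative assumptions, not figures from the thesis:

```python
# Sketch: peak static tension at the head of a vertical cable lift must
# support the weight of the cable itself plus the distributed payload
# (ore and discs) along the full hoisting distance.
G = 9.81  # gravitational acceleration, m/s^2

def top_cable_tension(lift_m, cable_kg_per_m, payload_kg_per_m):
    """Peak static tension (N) for a vertical lift, friction ignored."""
    total_kg_per_m = cable_kg_per_m + payload_kg_per_m
    return total_kg_per_m * lift_m * G

# Illustrative: 1000 m lift, 5 kg/m of cable, 10 kg/m of ore and discs.
print(top_cable_tension(1000, 5.0, 10.0))  # ≈ 147150 N
```

Friction between the ore and the stationary tube, which the thesis measures and then eliminates, would add to this tension; the estimate here is the friction-free floor.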
A critical ethnographic study of older people participating in their health care in acute hospital environments
- Authors: Penney, Wendy
- Date: 2005
- Type: Text , Thesis , PhD
- Full Text:
- Description: "While consumer participation is the focus of 21st century health policy, little is known about this concept from the perspectives of people who require acute hospital services. [...] This project set out to explore older people's perspective of participating in their care. Adopting critical ethnographic method, field work included observation of the inpatient experience. Following discharge home people were interviewed about their experiences including what helped and what hindered participation in their care. Similarly nurses involved in [...] a hospital experience were invited to be involved in individual and focus group discussions aimed at defining how they believed they facilitated people to participate as well as barriers that prevent this style of care."
- Description: Doctor of Philosophy
A critical study of the production of nampla (Thai fish sauce)
- Authors: Laixuthai, Parichart
- Date: 1997
- Type: Text , Thesis , Masters
- Full Text:
- Description: Master of Applied Science
A financial stress index to model and forecast financial stress in Australia
- Authors: Mukulu, Sandra
- Date: 2017
- Type: Text , Thesis , PhD
- Full Text:
- Description: The series of financial crises that cascaded through and rocked much of the world over the past decade created opportunities to draw meaning from the pattern of countries succumbing to crisis and those that appear to be wholly or partially immune. This thesis examines the case of Australia, a developed country that has seldom experienced an endogenous crisis in recent decades but has experienced crisis by contagion. This study designs a financial stress index to measure and forecast the health of the Australian economy and proposes a custom-made stress index to: gauge the potential for a crisis; and signal when a timely intervention may minimise fear and contagion losses in the Australian financial market. Financial and economic data are used to design indicators of stress in the banking sector and the equity, currency and bond markets. Further, this study explores how movements in the equity markets of Australia's key trading partners can be used to predict movements in the Australian equity market. The variance-equal weights (VEW) and principal components analysis (PCA) approaches are used to subsume 22 stress indicators into a composite stress index. The VEW and PCA stress indexes were examined to determine their monitoring and forecasting capabilities. It was found that the VEW stress index performed better than the PCA stress index because it provided more consistent estimates of the level of Australian financial stress. Although both models show some promise, each fell short of giving adequate forecasts of financial stress, especially at the peak of the 2007-2009 GFC. Thus, more research is needed to understand the complex nature of financial crises, how they develop and the techniques that can be used to predict their onset.
- Description: Doctor of Philosophy
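The variance-equal weights aggregation described in the abstract can be sketched as follows. The three miniature indicator series are illustrative stand-ins for the 22 indicators used in the thesis:

```python
# Sketch of a variance-equal weights (VEW) composite stress index: each
# indicator series is standardised (zero mean, unit variance) so that no
# single indicator dominates, then the standardised series are averaged.
import statistics

def vew_index(indicators):
    """Combine equal-length indicator series into one composite index."""
    standardised = []
    for series in indicators:
        mu = statistics.mean(series)
        sigma = statistics.pstdev(series)
        standardised.append([(x - mu) / sigma for x in series])
    n_series = len(standardised)
    n_periods = len(standardised[0])
    return [sum(s[t] for s in standardised) / n_series
            for t in range(n_periods)]

# Three toy indicator series standing in for the thesis's 22 indicators:
banking = [0.1, 0.2, 0.9, 0.3]
equity = [1.0, 1.1, 2.5, 1.2]
bond = [0.0, 0.1, 0.8, 0.2]
index = vew_index([banking, equity, bond])
peak = max(range(len(index)), key=index.__getitem__)
print(peak)  # period 2, where every indicator spikes
```

A PCA-based index would instead weight the standardised indicators by the loadings of the first principal component rather than equally.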
A framework for adoption decision process for blockchain technology - an institutional and actor-network theory perspective
- Authors: Kaushik, Shipra
- Date: 2023
- Type: Text , Thesis , PhD
- Full Text:
- Description: Blockchain has been one of the most promising technologies of recent times. Originating from bitcoin, blockchain use cases have now been explored across almost every industry. The technology provides several novel features, such as transparency, disintermediation, immutability, trust among stakeholders and decentralisation. Despite these advantages, a review of the challenges around blockchain adoption reveals a scarcity of understanding about the process of blockchain adoption decisions. Several organisations have failed to take advantage of blockchain's potential due to uneven adoption across industries and regions, and whether to use blockchain in their business is a difficult decision for many organisations. To fill this gap, this study examined the blockchain adoption decision process in organisations. Firstly, there is a need for a framework that details the steps in the blockchain adoption decision process, including the tasks involved and the rationales for the actions taken; this understanding will help potential adopters make a successful decision to adopt blockchain technology for their organisation. Secondly, very few studies have examined the factors that influence stakeholders’ interactions and dynamics while making technology adoption decisions, especially in blockchain-based applications. When systems are designed to protect privacy or obscure actors intentionally, as blockchain platforms are, it can be challenging to identify the actors and understand their roles. Blockchain, being an inter-organisational technology, depends primarily on the involvement of internal and external stakeholders. Thus, this study explored the actors involved in the adoption decision process and their roles in aligning other actors towards blockchain adoption. Thirdly, as these actors act as stakeholders while making decisions, they act as rational individuals.
Therefore, this study also explored their rationales while involved in technology adoption decisions, with a view to an effective outcome of the decision-making process. To achieve these objectives, this study utilises the Innovation Translation approach derived from Actor-Network Theory, together with Institutional Theory, for technology adoption. The study utilised a three-round qualitative Delphi method, conducted through semi-structured interviews, to gather views from a panel of experts from organisations that have experienced the blockchain adoption decision process for their business. The targeted experts were categorised as Adopters, Non-Adopters (those who dropped the idea) and Consultants using selective purposive sampling. The first two rounds were exploratory in nature and, to extend the validity of the responses gathered, the final round was a confirmatory round of interviews. Saturation was reached with ten experts in the panel for rounds 2 and 3; for the pilot study, eight participants agreed to be part of the panel. The interviews were recorded, transcribed and analysed using thematic analysis in the NVivo tool. The analysis confirmed the use of the Innovation Translation approach found in the literature for understanding the actors and their roles, giving a rich interpretation of the results, illuminating the crucial interactions among the actors and drawing useful findings. The interpretation also provided an insight into the institutionalisation of blockchain by exploring the institutional pressures. The study confirmed that many pressures that existed for other technologies, such as hype, curiosity, competitiveness, business value, cost and time, remain for blockchain adoption, but it also uncovered new institutional pressures in the blockchain adoption decision process, such as understanding between consultants and adopting organisations, and process participation needs.
Utilising Institutional Theory for blockchain technology also revealed a fourth pressure exerted by the technology itself, covering maturity, consensus, network dominance and technological features; these arise primarily because blockchain is an inter-organisational and relatively new technology that has not yet been widely accepted in organisations. In achieving its objectives, the study proposes a consolidated framework for the blockchain adoption decision process from an exploratory view. The framework, the first of its kind in the literature, elaborates on the stages involved in the blockchain adoption decision process, identifies the actors, explains their role at each stage and how those roles evolve, and provides an insight into the institutionalisation of blockchain by exploring these pressures. These gaps, objectives, methods, analysis and contributions are discussed comprehensively in this thesis.
- Description: Doctor of Philosophy
A framework for sustainability performance assessment for manufacturing processes
- Authors: Singh, Karmjit
- Date: 2019
- Type: Text , Thesis , PhD
- Full Text:
- Description: Sustainable manufacturing methods make it possible to develop products in ways which minimize negative environmental impacts, conserve energy and save natural resources whilst being economically sound. The concepts of sustainability in manufacturing are still fairly broad in scope and need to be more focused and firmly established at the process, machine or factory level. This project proposes a structure for manufacturing with the main objective of developing a sustainability framework which encompasses various production processes. Structured information models for the seamless flow of information across the design and manufacturing domains, for selected manufacturing processes, are defined. The thesis identifies key performance indicators (KPIs) for the assessment of manufacturing sustainability and analyses selected unit manufacturing processes and their sub-processes, with the aim of proposing a methodology for determining science-based measurements of the manufacturing processes affecting these KPIs. The theoretical foundations established are then used to develop a model that can evaluate the sustainability of selected manufacturing processes and their respective process plans, providing a basis for inter-process comparison and selection of the most sustainable process plan. The proposed framework is presented in the form of a computer-based manufacturing planning package designed to consider different influencing factors such as product information, part geometry, material-related physical and processing properties, and the manufacturing equipment operating data. The thesis presents a number of case studies which have been published in international journals. The case studies present estimates of the manufacturing sustainability KPIs for a number of production methods; these estimates have been verified with available shop floor data.
The work in the thesis makes it possible to establish a manufacturing industry equipped to deal with the challenges of the future, when sustainability will be the major factor upon which success is determined.
- Description: Doctor of Philosophy
A fuzzy derivative and dynamical systems
- Authors: Mammadov, Musa
- Date: 2002
- Type: Text , Thesis , PhD
- Full Text:
- Description: "The purpose of this thesis is to develop and study new techniques for the mathematical modeling of dynamical systems and to apply these techniques to data classification problems. This approach is based on the notion of a fuzzy derivative. The main aim of the thesis is to examine this notion in data classification."
- Description: Doctor of Philosophy
A good sheep run. Letters from New South Wales in Scottish newspapers between 1820 and 1850 with potential to influence decisions on emigration
- Authors: Hannaford, Graham
- Date: 2020
- Type: Text , Thesis , PhD
- Full Text:
- Description: The primary aim of this thesis is to contribute to ongoing historical research into migration to and settlement in Australia by Scots. It achieves this by identifying and examining letters sent from the colonies in New South Wales which were printed in historic Scottish newspapers between 1820 and 1850. In examining the material, this thesis argues that the letters had potential to influence emigration decisions by Scots. The study shows some of the ways in which New South Wales was reported in the Scottish press and compares those reports with conditions in Scotland at the time. The comparisons and analyses of the letters, with consideration of their authors and likely readers as well as the newspapers in which they were printed, demonstrate that the letters did have potential to influence emigration decisions. Its particular contribution to knowledge arises from demonstrating how mostly private letters which became publicly available through publication in newspapers had potential to influence emigrants' decisions about moving to Australia. Rather than claiming direct evidence that the publication of particular letters influenced emigration, it shows how reporting of conditions in Australia, when set into a context of contemporary events and conditions in Scotland, had potential to influence decisions. It is grounded in the body of historical research about colonial Australia and sits within this Australian historiographical context. Given the motivations and attractions of Scots to colonial Australia, this thesis also engages with techniques and theoretical approaches associated with Scottish diaspora studies, an area of research that often emphasises other Scottish migration patterns to Canada, New Zealand and the USA.
When considered together, both of these historiographical approaches lend themselves to primary source analysis and to the methodological approach that this doctoral study uses to examine the motivations of Scots who migrated to colonial Australia.
- Description: Doctor of Philosophy
A great leap forward : EFL curriculum, globalization and reconstructionism - a case study in North East China
- Authors: Zhang, Xiaohong
- Date: 2009
- Type: Text , Thesis , PhD
- Full Text:
- Description: I have used the name 'The Great Leap Forward' in relation to my study of English as a Foreign Language (EFL) curriculum reform, as I have linked the economic, political and social developments of the late 20th and early 21st centuries in China with the educational developments that have occurred while the reform has been implemented.
- Description: Doctor of Philosophy
A hand made wood object : Studio investigation into transformed nature
- Authors: Rein, Jeannette
- Date: 2016
- Type: Text , Thesis , Masters
- Full Text:
- Description: This research explores the importance of retaining traditional hand skills in terms of their relevance to contemporary and future art practice. I examine the hand made and the process of transforming timber into a wooden sculptural form. I investigate how the artist thinks with the material, and how this process gives the artist the ‘sight’ to identify new and original possibilities. Furthermore, I explore how the transformative approach perpetuates new knowledge, and how skills are modified and adapted to suit the changes; I describe this as a dialogic process. The research examines the correlation between the transformative process and the hand made object: the imprint of the maker, and how their memories of the processes used remain embedded in the object. In addition, this research investigates the transformation processes used in creating an object to provide individualisation within our highly mechanised world, while providing a bridge connecting the past and the future. Through the examination of traditional hand skills, I demonstrate how such skills provide an anchor, a standard of quality and artisanship that connects artists from traditional wood practice, through contemporary praxis, to hand made digital art. My research focuses on the transformation process and traditional hand skills, and the vital role they play in the creation of digital hand made objects, as digital processes utilise new materials, processes and machinery that interface with traditional analogue tools.
- Description: Masters by Research
A locale of the cosmos : an epic of the Wimmera : exegesis and text
- Authors: Rieth, Homer Manfred
- Date: 2006
- Type: Text , Thesis , PhD
- Full Text:
- Description: "This project has, for its central component, an epic poem, 'A locale of the cosmos'. The accompanying exegesis examines epic as an ancient, but continually evolving form. It argues that, as a contemporary example of the genre and, as a sustained poetic rumination on landscape and memory, 'A locale of the cosmos' represents a significant development within the modern tradition of autobiographical epic. In broader terms, 'A locale of the cosmos' privileges the landscape and history of a region of Australia, the Wimmera region of north-western Victoria and, in doing so, explores the cumulative effects of the physical environment as a site for sustained poetic treatment. The poem is, therefore, an epic of both historical narrative and philosophical reflection, giving meaning to and interpreting ideas of space, place and locale. Furthermore, it explores, in particular, the psychological and spiritual effects of vast horizontal distances, created by a landscape in which endless plains and immense horizons form an analogue of the wider cosmos. The poem's themes, therefore, bear not only on the prominences of the visible locale, but also explore the salients of an interior world, a landscape of the mind to which the poetry gives shape and meaning."
- Description: Doctor of Philosophy
A longitudinal study of trauma, social and personality factors as predictors of post-traumatic stress symptom severity in student paramedics
- Authors: Armstrong, Kim Maree
- Date: 2008
- Type: Text , Thesis , PhD
- Full Text:
- Description: Previous research suggests student paramedics are among the professionals at highest risk of post-traumatic stress disorder (PTSD). However, little research has been conducted examining duty-related post-traumatic stress symptoms (PTSS) and clinical levels of PTSD in this population. The current study of 36 student paramedics undertaking university or job-based training is the first longitudinal investigation of PTSS and PTSD in this group.
- Description: Professional Doctorate of Psychology (Clinical)
A methodology for the analysis of interactive narrative environments : a four-factor framework
- Authors: Macfadyen, Alyx
- Date: 2009
- Type: Thesis , PhD
- Full Text:
- Description: Stories have been engaging humans for thousands of years, but in interactive narrative environments the narrative is perceived to diminish as the source of engagement. One reason for this apparent diminution is that in interactive environments there has been difficulty in understanding the relationship between the design of the unfolding story and the ability of a user within the story to alter the course of events. As yet there are no standard or accepted evaluative methods to understand interaction at a granular level, or to understand how stories and narratives flow across the expanse of technologies and mixed realities that characterise the way people communicate, share knowledge and are entertained. This thesis presents a novel methodology, the Four-Factor Framework, which takes as its premise that there are four fundamental, observable elements in interactive stories and narratives.
- Description: Doctor of Philosophy
A multi-proxy approach to track ecological change in Gunbower Wetlands, Victoria, Australia
- Authors: Mall, Neeraj
- Date: 2021
- Type: Text , Thesis , PhD
- Full Text:
- Description: The wetlands of the Murray-Darling Basin have come under threat from a drying climate, the over-allocation of water for irrigation agriculture and widespread catchment disturbance. A synthesis of many paleolimnological assessments undertaken in the upper and lower sections of the Murray floodplain, and the Murrumbidgee, reveals considerable ecological change in wetlands from early in European settlement. The wetlands of the Gunbower Forest lie in the middle reaches of the Murray River, on Gunbower Island, which is deemed a wetland of international significance under the Ramsar Convention and an icon site under the Living Murray Initiative. Many Gunbower Island wetlands are located in protected forests, while others are within a zone developed for irrigation, mostly dairy, agriculture. This study analysed the sedimentary records of two wetlands within the forest estate and two within irrigation lands, intending to compare long-term change in the Gunbower wetlands with studies on floodplains both up- and downstream, and to assess the relative impact of regional causes of change and of local land use. Sediments constitute natural archives of past environmental changes. Sediment records were recovered from four wetlands, and radiometric dating and multi-proxy paleoecological techniques were applied to assess how these wetlands have responded to changes in human occupation and other factors, such as climate. Cores were extracted from Black (core length: 84 cm) and Green (86 cm) Swamps, located in the forest, and from Taylors (94 cm) and Cockatoo (74 cm) Lagoons, situated amongst dairy farms. To reconstruct ecological and water quality changes at the study sites, the cores were analysed using four techniques: Itrax-XRF (X-ray fluorescence) scanning, lead-210 (210Pb) dating, stable isotope analysis and diatom analysis. XRF scanning provided evidence of the elemental composition of the cores.
Detrital enrichment was observed in the lower parts of all cores, indicating elevated erosion rates or low water levels. In addition, some recent metal pollution was evident, with high Cu, Ni and Pb inputs. Stable isotopes provided limited information on the carbon and nitrogen sources. The
- Description: Doctor of Philosophy
A neural network approach for predicting the direction of the Australian stock market index
- Authors: Tilakaratne, Chandima
- Date: 2004
- Type: Text , Thesis , Masters
- Full Text:
- Description: This research investigated the feasibility and capability of neural network-based approaches for predicting the direction of the Australian stock market index (the target market). It includes several aspects: univariate feature selection from the historical time series of the target market, inter-market analysis to find the most relevant influential markets, investigation of the effect of time cycles on the target market, and discovery of the optimal neural network architectures. Previous research on US stock markets and other international markets has shown that the neural network approach is one of the most powerful techniques for predicting stock market behaviour. Neural networks are capable of capturing the non-linear, stochastic and chaotic patterns in stock market time series data. This study discovered that the relative return series of the Open, High, Low and Close prices of the target market show 6-day cycles during the studied period of about 14 years. Multi-layer feedforward neural networks trained with a backpropagation algorithm were used for the experiments. Two major testing methods, testing with randomly selected test data and forward testing, were examined and compared. The best neural network developed in this study achieved 87%, 81%, 83% and 81% accuracy respectively in predicting the next-day direction of the relative return of the Open, High, Low and Close prices of the target market. The architecture of this network consists of 33 input features, one hidden layer with 3 neurons and 4 output neurons. The best input feature set includes the relative returns from 1 to 6 days in the past of the Open, High, Low and Close prices of the target market, the day of the week, and the previous day’s relative return of the Close prices of the US S&P 500 Index, US Dow Jones Industrial Average Index, US Gold/Silver Index, and the US Oil Index.
- Description: Master of Information Technology by Research
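The winning architecture reported in the abstract (33 inputs, one hidden layer of 3 neurons, 4 outputs, trained by backpropagation) can be sketched as a minimal feedforward network. The sigmoid activations, squared-error loss, learning rate and synthetic data below are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Architecture reported in the abstract: 33 input features, one hidden
# layer of 3 neurons, 4 output neurons (next-day direction of the
# relative returns of Open, High, Low and Close).
n_in, n_hid, n_out = 33, 3, 4

W1 = rng.normal(0.0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.1, (n_hid, n_out)); b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)   # hidden-layer activations
    y = sigmoid(h @ W2 + b2)   # predicted probability of an "up" move
    return h, y

def backprop_step(X, t, lr=0.5):
    """One batch gradient-descent step on mean squared error."""
    global W1, b1, W2, b2
    h, y = forward(X)
    n = len(X)
    d2 = (y - t) * y * (1.0 - y)          # output-layer delta
    d1 = (d2 @ W2.T) * h * (1.0 - h)      # hidden-layer delta
    W2 -= lr * (h.T @ d2) / n; b2 -= lr * d2.mean(axis=0)
    W1 -= lr * (X.T @ d1) / n; b1 -= lr * d1.mean(axis=0)
    return float(((y - t) ** 2).mean())

# Synthetic stand-in data: 200 "days" of 33 features; the 4 direction
# labels here are derived from the first 4 features for demonstration.
X = rng.normal(size=(200, n_in))
t = (X[:, :4] > 0).astype(float)
losses = [backprop_step(X, t) for _ in range(50)]
```

On real data the inputs would be the lagged relative returns and calendar features listed in the abstract, with the four outputs thresholded at 0.5 to give next-day up/down predictions.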
A new perceptual dissimilarity measure for image retrieval and clustering
- Authors: Shojanazeri, Hamid
- Date: 2018
- Type: Text , Thesis , PhD
- Full Text:
- Description: Image retrieval and clustering are two important tools for analysing and organising images. Dissimilarity measure is central to both image retrieval and clustering. The performance of image retrieval and clustering algorithms depends on the effectiveness of the dissimilarity measure. ‘Minkowski’ distance, or more specifically, ‘Euclidean’ distance, is the most widely used dissimilarity measure in image retrieval and clustering. Euclidean distance depends only on the geometric position of two data instances in the feature space and completely ignores the data distribution. However, data distribution has an effect on human perception. The argument that two data instances in a dense area are more perceptually dissimilar than the same two instances in a sparser area, is proposed by psychologists. Based on this idea, a dissimilarity measure called, ‘mp’, has been proposed to address Euclidean distance’s limitation of ignoring the data distribution. Here, mp relies on data distribution to calculate the dissimilarity between two instances. As prescribed in mp, higher data mass between two data instances implies higher dissimilarity, and vice versa. mp relies only on data distribution and completely ignores the geometric distance in its calculations. In the aggregation of dissimilarities between two instances over all the dimensions in feature space, both Euclidean distance and mp give same priority to all the dimensions. This may result in a situation that the final dissimilarity between two data instances is determined by a few dimensions of feature vectors with relatively much higher values. As a result, the dissimilarity derived may not align well with human perception. The need to address the limitations of Minkowski distance measures, along with the importance of a dissimilarity measure that considers both geometric distance and the perceptual effect of data distribution in measuring dissimilarity between images motivated this thesis. 
This thesis studies the performance of mp for image retrieval, investigates a new dissimilarity measure that combines Euclidean distance and the data distribution, and evaluates that measure for both image retrieval and clustering. Our performance study of mp for image retrieval shows that relying only on the data distribution to measure dissimilarity produces situations in which mp’s measurement is contrary to human perception. This thesis introduces a new dissimilarity measure, the perceptual dissimilarity measure (PDM), which combines the perceptual effect of the data distribution with Euclidean distance. PDM has two variants, PDM1 and PDM2. PDM1 improves mp by weighting it with Euclidean distance in situations where mp may not retrieve accurate results. PDM2 considers the effect of the data distribution on the perceived dissimilarity measured by Euclidean distance, weighting Euclidean distance using a logarithmic transform of the data mass. The proposed PDM variants have been used as alternatives to Euclidean distance and mp to improve accuracy in image retrieval. Our results show that PDM2 consistently performed the best compared with Euclidean distance, mp and PDM1. PDM1’s performance was less consistent: it outperformed mp in all experiments but could not outperform Euclidean distance in some cases. Following the promising results of PDM2 in image retrieval, we studied its performance for image clustering. k-means is the most widely used clustering algorithm in scientific and industrial applications, and k-medoids is the clustering algorithm closest to it. Unlike k-means, which works only with Euclidean distance, k-medoids allows an arbitrary dissimilarity measure. We used Euclidean distance, mp and PDM2 as the dissimilarity measure in k-medoids and compared the results with k-means.
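A PDM2-style combination can be sketched as below. The abstract states only that PDM2 weights Euclidean distance with a logarithmic transform of data mass; the specific weighting `log1p(mass)` and the interval-based mass estimate here are illustrative assumptions, not the thesis’s exact definition.

```python
import numpy as np

def pdm2_sketch(x, y, data):
    """Sketch of a PDM2-style measure: per-dimension Euclidean terms
    weighted by a logarithmic transform of the data mass lying between
    the two instances, so pairs in dense regions score as more dissimilar.
    """
    x, y, data = np.asarray(x, float), np.asarray(y, float), np.asarray(data, float)
    lo, hi = np.minimum(x, y), np.maximum(x, y)
    mass = ((data >= lo) & (data <= hi)).mean(axis=0)  # per-dimension data mass
    w = np.log1p(mass)  # log transform: heavier weight in denser regions
    return float(np.sqrt(np.sum(w * (x - y) ** 2)))
```

Unlike mp alone, the geometric gap still enters directly, so the measure degrades gracefully in regions where the mass term is uninformative.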
Our clustering results show that PDM2 performed the best overall. This confirms our retrieval results and identifies PDM2 as a suitable dissimilarity measure for both image retrieval and clustering.
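The property that makes k-medoids suitable here is that it needs only pairwise dissimilarities, so any measure (Euclidean, mp, a PDM variant) can be plugged in. A minimal alternating-update k-medoids over a precomputed dissimilarity matrix, as a sketch rather than the thesis’s implementation:

```python
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    """Basic k-medoids on a precomputed n x n dissimilarity matrix D.

    Alternates between assigning each point to its nearest medoid and
    replacing each medoid with the cluster member that minimises the
    total dissimilarity to its cluster.
    """
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)  # nearest-medoid assignment
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:
                # medoid = member minimising total within-cluster dissimilarity
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break  # converged
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)
```

Swapping the dissimilarity measure only changes how the matrix `D` is filled in; the clustering loop itself is untouched, which is what allowed Euclidean distance, mp and PDM2 to be compared on equal footing.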
- Description: Doctor of Philosophy
- Authors: Shojanazeri, Hamid
- Date: 2018
- Type: Text , Thesis , PhD
A new ramp metering control algorithm for optimizing freeway travel times
- Authors: Lierkamp, Darren
- Date: 2006
- Type: Text , Thesis , PhD
- Full Text:
- Description: "In many cities around the world traffic congestion has been increasing faster than can be dealt with by new road construction. To resolve this problem traffic management devices and technology such as ramp meters are increasingly being utilized."--leaf 1.
- Description: Masters of Information Technology