Each year, institutions eagerly await reports from Shanghai Jiao Tong University, Times Higher Education, QS, and other organisations that create and publish international rankings of university performance. The metrics included in league tables and rankings—research income, research staff, number of doctoral candidates, number of publications—are common to other measures of research performance. Invariably, these ‘four pillars’ of research performance measurement are used as proxy measures of quality, but they are in fact quantity measures, reflecting that size does matter. For smaller and regional institutions that are not listed in the Top 100, or are not even players in the Top 500, it is difficult to demonstrate and measure quality when quantity is such a factor. This article examines the history of research performance measurement within Australian higher education, and questions the current validity and focus of these metrics. It further explores the context of these metrics, and considers what is required for the ‘little fish’ in the higher education ‘pond’ to demonstrate excellence in research.

The last decade has seen a proliferation of the measurement of research productivity within the higher education sector, not just within Australia but worldwide. While teaching and learning may be the public interface of the university, servicing hundreds of thousands of Australian and international students, it is performance in research that drives much of the funding. Research performance measures are used as a proxy for the reputation and performance of the institution within local and international contexts. Indeed, the second half of each calendar year is now dubbed ‘rankings season’, as it sees the release of a range of international higher education performance assessments in the form of league tables or rankings.
For the smaller higher education institutions within Australia, typically located in regional areas, the rankings season rarely features their work. Smaller and/or regional institutions find it difficult to compete with the research size and capacity that so often determine position in these rankings. Given the current invisibility of smaller Australian institutions within the world rankings, how this subset of the higher education sector demonstrates its value deserves consideration. Rather than opting out of or disparaging the rankings, are there ways in which smaller institutions can demonstrate their worth that do not rely merely on the size of the institution? If so, what are the theoretical constructs that underlie the development of such metrics, and how can the little fish in the big higher education pond capture this in new or revitalised indicators?
Academic mobility is considered a standard requirement for the development and progression of an academic research career. However, this career mobility is at odds with the drive to recruit and retain professionally qualified workers in regional Australia, to ensure future generations of regional Australians have the capacity to access higher education in their home region. To date, little work has been completed regarding the retention of active research staff in regional Australia. The purpose of this paper is twofold: to determine the viability of using author affiliation data as listed on publications to track an institutional cohort of authors by their affiliation, and to determine whether data analysed using this method reveal any insights regarding the retention of academic staff. Whilst using author affiliation data was found to be viable, it required extensive data manipulation and cleansing. Once analysed, the data revealed intriguing insights into the retention and movement of active academic researchers. Implications for regional higher education will be discussed.
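The affiliation-tracking approach described above can be sketched in code. The snippet below is a minimal, hypothetical illustration only: the publication records, institution names, and alias table are invented, and real bibliographic data (e.g. exported from a citation database) would require far more extensive cleansing than the simple alias lookup shown here. It demonstrates the core idea of normalising variant affiliation strings to a canonical institution name and then building each author's affiliation history over time, from which retention or movement can be read off.

```python
from collections import defaultdict

# Hypothetical publication records: (author_id, publication_year, raw affiliation
# string as printed on the paper). Real data would come from a bibliographic export.
records = [
    ("a1", 2012, "Dept of Education, Regional Univ., Australia"),
    ("a1", 2014, "Regional University, Australia"),
    ("a1", 2016, "Metro University, Australia"),
    ("a2", 2013, "Regional University"),
    ("a2", 2015, "REGIONAL UNIVERSITY, AUSTRALIA"),
]

# Minimal cleansing: map known variant spellings to a canonical institution name.
ALIASES = {
    "regional univ.": "Regional University",
    "regional university": "Regional University",
    "metro university": "Metro University",
}

def canonical(affiliation: str) -> str:
    """Return a canonical institution name for a raw affiliation string."""
    text = affiliation.lower()
    for variant, name in ALIASES.items():
        if variant in text:
            return name
    return affiliation.strip()

# Build each author's affiliation history in year order, recording only changes.
history = defaultdict(list)
for author, year, affiliation in sorted(records, key=lambda r: r[1]):
    institution = canonical(affiliation)
    if not history[author] or history[author][-1][1] != institution:
        history[author].append((year, institution))

# An author with a single entry stayed put; multiple entries indicate movement.
for author, moves in sorted(history.items()):
    status = "retained" if len(moves) == 1 else "moved"
    print(author, moves, status)
```

In this toy example, author `a2` publishes under variant spellings of one institution and is classed as retained, while `a1`'s later affiliation change is detected as a move, mirroring the kind of insight the paper derives from cleansed affiliation data.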