Good University Guide 2010: Do the tables work?


Ahem, not really. This post sets out an assessment of the methodology of the Times Good University Guide 2010. The aim is to show how judgements of quality based on these tables are untenable. More seriously, it highlights the muddle that the current all-encompassing scientific paradigm so beloved of Government has landed us in.

“Universities were ranked according to measures in eight key performance areas: Student Satisfaction, Research Quality, Entry Standards, Student-Staff Ratios, Services & Facilities Spend, Completion, Good Honours and Graduate Prospects.” Okay so far.

“All sources of the raw data used in the table are in the public domain.” Okay, no problem here either, although it would be interesting to check that this is true.

“The National Student Survey (NSS) was the source of the Student Satisfaction data. The survey encompasses the views of final year students on the quality of their courses.” Right. This qualitative survey is no more valid than any other survey of its type: it is not conducted under the supervision of staff, so it can be filled in by anyone, and filled in mistakenly. Moreover, like other qualitative studies, it relies on respondents telling the truth, being able to describe their own inner feelings accurately, and not changing their minds in the future. There is an overwhelming body of evidence that none of this can be taken for granted: humans change their minds, are emotional, are sometimes wrong about how they feel and often lie. Daniel Dennett, for example, recognises this inherent fallibility when arguing for heterophenomenology.

“The information regarding Research Quality was sourced from the 2008 Research Assessment Exercise, a peer review exercise to evaluate the quality of research in UK higher education institutions undertaken by the UK higher education funding bodies.” Peer review? What’s that then? It is a process in which a group of experts scrutinise the work of others, and one that is subject to much criticism. As for the RAE, it is a classic example of bureaucratic performance masquerading as an objective process of assessment, and is, like peer review, under regular attack, not least, ironically, for emphasising quantity over quality.

“Entry Standards, Student-Staff Ratios, Services & Facilities Spend, Completion, Good Honours and Graduate Prospects data were supplied by the Higher Education Statistics Agency (HESA) which provides a system of data collection, analysis, and dissemination in relation to higher education in the whole of the United Kingdom. The original sources of data for these measures are data returns made by the universities themselves to HESA.” It is always laughable when those with a vested interest in as positive a portrayal as possible are asked to return their own data – Bernie Madoff, anyone?

“In building the table, scores for Student Satisfaction and Research Quality were weighted by 1.5; all other indicators were weighted by 1. The indicators were combined using a z-score transformation and the totals were transformed to a scale with 1000 for the top score. For Entry Standards, Good Honours and Graduate Prospects the score was adjusted for subject mix.” It is a common ploy, when defending the validity of what one is doing, to spout statistics in the hope that the scent is lost. The paragraph above tells us nothing about why these particular weights were chosen, or what difference a different weighting would make to the rankings.
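For what it is worth, the procedure the compilers describe is standard enough to sketch. The following is a minimal illustration with invented figures and only three of the eight indicators; the Guide does not say exactly how totals are rescaled so that the top score is 1000, so a simple shift-and-scale is assumed here.

```python
# Sketch of the scoring procedure the Guide describes: standardise each
# indicator as a z-score, apply the stated weights (1.5 for Student
# Satisfaction and Research Quality, 1 for the rest), sum per university,
# then rescale so the top university scores 1000. All figures are invented.
import statistics

universities = ["Uni A", "Uni B", "Uni C"]
indicators = {
    # indicator name: (weight, raw scores, one per university)
    "Student Satisfaction": (1.5, [82.0, 79.0, 85.0]),
    "Research Quality":     (1.5, [2.9, 2.4, 3.1]),
    "Entry Standards":      (1.0, [410.0, 350.0, 460.0]),
}

def z_scores(values):
    """Standardise a list of raw scores to mean 0, standard deviation 1."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# Weighted sum of z-scores for each university.
totals = [0.0] * len(universities)
for weight, raw in indicators.values():
    for i, z in enumerate(z_scores(raw)):
        totals[i] += weight * z

# Rescaling is unspecified in the Guide; here the totals are shifted and
# scaled so the bottom university gets 0 and the top gets 1000.
lo, hi = min(totals), max(totals)
scaled = [round(1000 * (t - lo) / (hi - lo)) for t in totals]
for name, score in zip(universities, scaled):
    print(name, score)
```

Even this toy version makes the point: the published score depends on choices – the weights, the rescaling – that the methodology statement does not justify.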

“Student Satisfaction

The percentage of positive responses (Agree & Definitely Agree) in each of the six question areas (Teaching, Assessment & Feedback, Academic Support, Organisation & Management, Learning Resources and Personal Development) plus the Overall Satisfaction question were combined to provide a composite score and averaged over two years.” Attitudinal surveys are subject to the same criticisms as ANY survey (see above). With attitudes, moreover, a bare agree/disagree response can oversimplify matters. Surely agreement is contingent on other factors?!

“Entry Standards

Mean tariff point scores on entry for first year, first degree students under 21 years of age based on A and AS Levels and Highers and Advanced Highers only. Entrants with zero tariffs were excluded from the calculation. Source: HESA 2007/8.” The assumption here, of course, is that students who scored higher entrance grades are better students; but as the vast majority of students in the UK went to state schools, and can therefore be expected to have scored lower grades than private school pupils, this doesn’t tell us very much more.
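The zero-tariff exclusion is worth noticing, too. A sketch with invented tariff values shows how much difference it makes:

```python
# Mean entry tariff as described: average over first-year, first-degree
# entrants under 21, excluding anyone with a zero tariff.
# The tariff values below are invented for illustration.
tariffs = [420, 380, 0, 360, 440, 0, 400]

counted = [t for t in tariffs if t > 0]
mean_tariff = sum(counted) / len(counted)
print(mean_tariff)
```

In this toy case, excluding the two zero-tariff entrants lifts the reported mean from roughly 286 to 400 – the same intake, presented rather more flatteringly.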

“Student-staff ratio

The number of students at each institution as defined in the HESA Session HE and FE populations as an FTE (full-time equivalent) divided by the number of staff FTE based on academic staff including Teaching Only and Teaching & Research staff but excluding Research Only staff.” Again, this is based on another common assumption, i.e. that a lower student-staff ratio – fewer students per member of staff – is better for education. However, the evidence from schools, at least, is that this is not the case. Indeed, as many Arts subjects are founded upon self-study rather than lectures, this ratio is of little importance.

“Services and Facilities Spend

A two year average of expenditure on Academic Services and Staff & Student Facilities, divided by the total number of full time equivalent students. Source: HESA 2005/6 and 2006/7.” Money is important, but money spent doesn’t necessarily translate into success. The varying fortunes of football teams in the English Premier League are testament to this.


“Completion

Percentage of students projected to leave with a degree including students who transfer to other institutions. The HESA Performance Indicator uses current movements of students to project the eventual outcome. The measure used in the table projects what proportion of students will eventually gain a degree, what proportion will leave their current university or college but transfer into higher education and is presented as a proportion of students with known data.” This appears a sensible measure but actually only acts to punish universities for taking on students from disadvantaged backgrounds, as there is a demonstrated link between disadvantage and drop-out rates.

“Good Honours

The number of students who graduated with a first or upper second class degree as a proportion of the total number of graduates with classified degrees.” In the scientific management paradigm, grades allow for neat and tidy judgements of ability. However, there is a growing controversy regarding the awarding of grades and degrees. It is asserted that the current system needs reforming, not least because of grade inflation, the difficulty of comparing grades between institutions, the lack of clearly spelt-out standards, and a weak external examiner and QA system.

“Graduate Prospects

Destinations of full-time first degree UK domiciled leavers. The destination categories were based upon a split of SOC 2000 codes for graduates and leavers entering employment, together with type of qualification codes for graduates and leavers entering further study.” No controversy here, you would think. Work or course progression is what it’s all about. Whether graduates can attribute this entirely to going to one uni or another is another debate altogether.

So, in short, the Good University Guide promises very much and fits very nicely into our intuitive and culturally-based notions of how to define and measure success and quality. However, when its methodology is looked at in detail, it begins to appear far less authoritative. Indeed, it is reasonable to say that it is not so Good after all.

© 2017 EducationState: the education news blog. All Rights Reserved.