In the article on the proposed new "performance" indicators (THES, February 12), a table compared the progression of students through an imaginary "University X" with average figures for similar universities ("adjusted sector percentage"). This is an illustration of the format the Higher Education Funding Council for England plans to use. Unfortunately, the information in the table is practically useless, as there is no indication of the precision associated with the figures. This is the sort of mistake that causes PhD students to fail their degrees.
With respect to the first category, 3.6 per cent of X's entry (that is, six students) do not progress to a degree, compared with an adjusted sector figure of 4.1 per cent. The note attached clearly interprets this as showing that X's non-progression rate is lower than the adjusted sector figure. These figures cannot be interpreted in the absence of further information, but, based on plausible guesses of the year-to-year variation of that figure, I calculate that the 95 per cent confidence limits of the 3.6 per cent in the table lie between 2.7 per cent and 5.1 per cent. Since this interval includes the adjusted sector figure of 4.1 per cent, it would not be sound to argue that X's figure was lower than the adjusted sector value. The same weakness applies throughout and, by implication, to all proposed tables of statistical performance indicators.
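[Editorial illustration: the letter does not state how the confidence limits were obtained, only that they rest on "plausible guesses of the year-to-year variation". As a minimal sketch of one standard way to attach precision to a single cohort proportion, the Python snippet below computes a Wilson score interval. The cohort size of roughly 167 is inferred from the figures quoted (six students being 3.6 per cent of entry); because this method uses only the single cohort and ignores year-to-year variation, it need not reproduce the 2.7 to 5.1 per cent limits quoted above.]

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# Figures cited in the letter: 6 non-progressing students, taken to be
# 3.6 per cent of entry, giving an inferred cohort of about 167 students.
low, high = wilson_interval(6, 167)
print(f"95% confidence interval: {low:.1%} to {high:.1%}")  # roughly 1.7% to 7.6%
```

Whatever interval one prefers, the point of the letter stands: a published figure of this kind is uninterpretable unless some such measure of precision accompanies it.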
I hope it is not too late for the presentation of such figures to be changed so that some confidence can be placed in their ability to indicate whether "performance" is above or below that of the "adjusted sector".
Once this basic requirement is met, the real difficulties begin. If X's non-progression rate is below that of the adjusted sector, should X be commended for excellent teaching or condemned for lax assessment? One thing is certain: the Quality Assurance Agency's procedures will not be able to tell us whether either has occurred.
David Packham, Department of Materials Science and Engineering, University of Bath