League tables and performance indicators are much beloved by funders and by newspapers, but less loved by those they grade, who understand the distortions they can produce.
This week it is the turn of the further education colleges to be measured against an arbitrary set of criteria. The criteria chosen by the Further Education Funding Council (FEFC) reveal the council's (and, by extension, the Government's) preoccupation with encouraging retention, the achievement of qualifications set out in national attainment targets, and growth - the last now severely dented by the Treasury's determination to end its uncapped commitment to pay for extra students.
These are the first indicators to be published by the FEFC. They set a benchmark but as yet tell us nothing about changes over time. As a contribution to transparency and accountability, and as a spur to improved performance, they are welcome. Knowing that these criteria will count towards funding provides powerful incentives, arranged in a reasonably sophisticated way so that colleges can concentrate their effort where it suits them best.
That said, there are difficulties with indicators of this kind. They tend to be crude and ambiguous. What, for example, do drop-out rates show? They may be the result of rigorous assessment, the sudden availability of jobs locally, inadequate teaching or any number of other factors. Are they a good sign or, as they are usually taken to be, a bad one? Do they represent a generous opportunity for people who want to have a go, or a waste of public money? Does a higher than average unit cost mean better provision or profligacy? And does making the achievement of qualifications a criterion of success encourage colleges to go easy on assessment?
A second difficulty is that establishing criteria against which colleges know they will be measured invites games-playing. The Victorians discovered, when they introduced payment by results, that teachers quickly began coaching children to perform narrowly to the test. Similarly, the present arrangements could tempt colleges to bolster numbers and to bend over backwards to keep on students who have made a wrong choice.
This is not to argue against such indicators. It is to suggest that they should be adjusted fairly frequently to keep ahead of the games-players, and that the range should be wide enough to allow colleges to build on their strengths. Better still, the colleges themselves, through their own association, might like to develop their own indicators linked to educational objectives rather than relying on measures designed primarily to drive Government policy.