It is precisely because there is no absolute measure of the value of education that comparative indicators between institutions or countries provide such a crucial, and frequently contested, insight into performance.
The Organisation for Economic Co-operation and Development has acknowledged that it must improve the range and quality of its higher education indicators as a priority. That raises the question of what types of new indicators are needed and which aspects of higher education need to be measured and compared.
In its introduction to Education at a Glance, the OECD warns against "channelling lower-achieving students into educational ghettos" instead of providing "adequate resources to create high-quality and diversified educational options". That warning should ring true for a number of countries where universities serve not only as centres of advanced learning but also as vast holding centres for school-leavers who kick their heels while waiting to be absorbed somehow or other into the labour market.
Continued opposition to the politically embarrassing "student survival" indicator shows just how hard it is to come up with technically perfect comparative data able to withstand national sensitivity about drop-out rates.
While waiting for the return of an irreproachable survival indicator, OECD member countries could agree to gather more varied data on the quality of student life. The number of seminars in a degree course, the size of groups, the number of library seats, the number of assistants, or the availability of ancillary services would all provide revealing comparisons.
At present, there is an almost total lack of international data on students' lives. Many OECD nations keep no figures at all on how many supposedly full-time students also hold down jobs. If mass higher education, second-chance study and lifelong learning are to be the keys to information societies, countries need to know not only how to create flexible entry systems but also what kind of organisation and what conditions for part-time and full-time study actually ensure results.
Closer scrutiny of university outcomes - employment and earnings - tied more specifically to type of qualification, cost, gender, type of institution and region, would indicate which systems are most efficient. But would such data tell us whether a German student, with typically eight years of higher education, really knows more and applies more knowledge when starting work than a British student with three?
There may never be a valid international comparative knowledge test for students, along the lines of the mathematics tests for school children, to settle such a question, but it demands an answer in terms of cost-effectiveness, if only from empirical evidence. The indicators available give some inkling of the levels of efficiency of different countries' higher education systems, but they tell us nothing about those systems themselves: about what structures are in place and how they are functioning. Indicators that pinpoint the level of decision-making in higher education would be an important place to start.
Measurements of the degree of institutional autonomy enjoyed in different countries by different types of institution could include how vice chancellors and staff are appointed, how public and private funding is allocated and how new courses and qualifications are validated. The research component of higher education is another area that is as tricky to measure and compare internationally as it is necessary to do so.
National approaches to research as a separable component of universities diverge even more widely than approaches to courses and qualifications. But if genuine indicators of the quality of higher education are to be established, that nettle has to be grasped too.