Assessment damages everyone involved. I’m giving all my students As

Grading fills students with anxiety and academics with guilt. It is the enemy of real education. Time for a rebellion, says Andy Farnell

March 16, 2023

This morning I gave all 50 of my students A grades. Then I took a shower, danced, ran to the beach, swam and cried tears of joy. For the first time in years I feel like a real professor again, my work vital, alive and human.

After 30 years of self-assured professionalism, I have experienced a crisis of faith. As a young professor, I believed in systems and fairness. Now I don't. The grave moral responsibility of grading students, without any real authority to use my own judgement, has weighed on my mind, robbing me of sleep and health. The time pressure of preparing three other modules this semester was the last straw.

But do my students all “deserve” straight As, you ask. My answer is that I don’t know any longer. Much ado is made about the “student experience”, but in reality it is one of technological dependence and dystopian performance anxiety. We meticulously craft clear assignment briefs, clear rubrics and model answers. And we allow students to test submissions against Turnitin, tweaking them until they “pass”. The emergence of ChatGPT, summarising tools, advanced Grammarly features and Copilot-style computer code “support” will only further undermine the human values we purport to hold.

The question I might ask back is why it matters whether my students deserve straight As. How did we get so hung up on judgement, as if universities were courts of law and education primarily a process of justice? The result is a fear of straying from the specified formula for academic beauty that resembles the accounts by Naomi Wolf or Susie Orbach of female body anguish. Like a row of Barbie dolls, each student submission mirrors the model answer in insipidly perfect cookie-cutter prose and code snippets. How am I supposed to differentiate them? Keyword count? Referencing style? Perhaps by simply printing them out and weighing the reports, as my own professors (presumably) joked that they did?

The more we try to make grading “fair”, the less it serves anything even resembling a purpose beyond make-work activity. Yet that doesn’t stop us trying. Why? Because knowledge is no longer the product of the “education industry”. Data is. Specifically, individualised psychometric and performance data for use in professional gatekeeping. Students know this, so they’ve become obsessed with their extrinsic “permanent record”. They are no longer the least bit interested in what I have to say or give as a teacher. They are not interested in my formative feedback. They hang only on the grades that I give them.

Some professors have lashed out, making their students scapegoats for “the system” and failing the entire class for “academic misconduct”. But students cannot be blamed for acting rationally. It is we, the faculty, whose misconduct must be put right. We must rebel against the idea that years of academic effort and, indeed, the very worth of a person are reducible to their final grade.

My rebellion is not entirely novel. For millennia, education functioned without individual measurement. And over the past 20 years “ungrading” has become something of a movement, especially in US humanities. But although common motives relate to gender, race and class concerns, my own stance is grounded in science.

My fields include cryptography and signal processing. For us, a hard problem is that it is nearly impossible to grade computer code or judge its originality. There really is a right answer, and if students copy the same model answer and change a few variables and values, all I can do is pedantically waste time looking for imaginary faults.
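As a rough, hypothetical illustration (an invented exercise, not one of my actual assignments), consider two "different" answers to a simple signal-processing task. They differ only in identifier names and loop style, yet they are functionally identical, and any distinction a marker draws between them is cosmetic:

    # Hypothetical submissions to the same exercise: a moving average.
    # "Submission A": list comprehension, one set of names.
    def moving_average(samples, window):
        return [sum(samples[i:i + window]) / window
                for i in range(len(samples) - window + 1)]

    # "Submission B": explicit loop, different names, same behaviour.
    def smooth_signal(data, n):
        out = []
        for k in range(len(data) - n + 1):
            out.append(sum(data[k:k + n]) / n)
        return out

    signal = [1, 2, 3, 4, 5, 6]
    # Both produce [2.0, 3.0, 4.0, 5.0]; there is nothing of substance to rank.
    assert moving_average(signal, 3) == smooth_signal(signal, 3)

Pretending to find a meaningful difference between the two is exactly the make-work I describe above.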

More generally, though, there is overwhelming evidence of psychological damage caused by the constant anxiety associated with competitive peer-comparison and obsessing over inconsequential rules. This is wholly in conflict with the creative, equitable relations we so desperately need to allow students, once again, to focus on learning.

It is also in conflict with employers’ interests. Firms that outsource their recruitment to universities are by no means necessarily recruiting the most innovative minds. More likely, they are recruiting the most obedient and/or the most cynical, the most willing to second-guess algorithms and demand better grades.

The emotional burdens imposed by grading are similar to those borne by social media content moderators. Grading fosters hostile relations with students and racks teachers – who all feel grading’s unfairness – with guilt and self-conflict. Nor can grading be fairly automated. Although more consistent than humans, algorithms are no more reliable or accurate. And while humans can acknowledge or challenge hidden biases, digital systems invisibly encode and entrench them.

The consequences for professors who “ungrade” vary. Some are heralded as progressives and promoted to pedagogical leadership. Others are fired. But it is important to observe closely who pushes back against ungrading and why. That’s why I’ve undertaken this experiment: not so much to measure what my students learn from getting A grades (the point of the experiment is not to) but to observe the response of the institution.

I may well get fired, too. But if I do, I’ll be happy that my low-cost, high-impact research might be enlightening for school-leavers thinking of entrusting their ongoing “education” to institutions that have long since ceased to take that mission remotely seriously.

Andy Farnell is a visiting and associate professor in signals, systems and cybersecurity at a range of European universities.



Readers' comments (9)

In brief, we are doomed. I get all the points made here, which lead to the question: what are we educating people for? The whole enterprise has become unfit for purpose, as we are not adequately discriminating between the thick as mince (though often personable) and the intellectually capable and curious student. The education process has become an earlier entry point into accumulating debt and gaining some shiny sheriff badge at the end that promises enhanced consumer opportunity. Unfortunately, academics seem unable to halt the slide towards the sleep of reason. As Goya predicted, the monsters and spectres of unreason now hold sway.
Why do we want to discriminate between the "thick as mince" and "intellectually capable"? Do the thick as mince not also benefit from having their mind opened and their horizons broadened?
I have no idea why THE gives space to such nonsense. Some sort of assessment is needed and, in my (30-year-plus) experience, this has been fair and appropriate. Then again, I do work in a Russell Group engineering department that has not suffered much grade inflation. There is no emotional burden, since assessment criteria are defined and students can easily be informed what was good and what was not. I guess I am just lucky to work in a STEM subject!
So good to read this. My doctoral thesis looked at workplace-based assessment in three institutions in different countries. My analysis showed that the different stakeholders manifested quite different stories of assessment. In the accreditation and institutional documents, the discourse of measurement predominated (learning outcomes); in contrast, while the managers were leaning towards standardization and objectivity, they were also aware of a more complex assessment culture. For the clinical supervisors/assessors, the psychometric grades given could not be seen as a legitimate measure of objectivity, as the authentic and holistic constructs of competency-based medical education (CBME) dominated. WBA is a socio-cultural-material phenomenon, and I would add that so is all assessment. It was the inter-relationships between institutionalized and disciplinary discourses, between standardized and personalized competencies, between educational and practitioner identities, and the entanglement of artefacts and spatio-temporal arrangements, that enabled or constrained how assessments were being carried out. Learning outcomes are a fallacy, a pretence of objectivity. In the USA, everyone got an A. In the UK, it was teams of assessors getting together and talking about students and assessments that was the best way to determine grades. Time-consuming but fair.
Despite the challenges assessment produces, in medical education, assessments (frequently as MCQs) have demonstrated predictive validity for performance in the workplace, including patient survival, time spent in hospital care, and even likelihood of Fitness to Practise sanctions. 'Giving all students an A' is not a resolution of the challenges; it is an avoidance of them.
So if (let us imagine) half of the students on my course attend one or two (or zero) sessions out of ten, don't use the reading lists, fail to respond to messages offering help, and hand in essays that ignore clear and reiterated criteria, do they get an A too? Put it another way: do the students who DO work deserve the same grade? If we wouldn't qualify sportspeople or musicians or doctors like this, why is it OK for the humanities?
Response to Ian Sudbery. Hi Ian, my point was not that the 'thick as mince' should not participate in HE or other kinds of continuing education genuinely geared to interest and ability. It was, as per the tenor of the article, that student ability needed to be discriminated properly. Having taught across science-related disciplines and been coerced into awarding marks to poor-quality work, the truly capable are then discriminated against. It is a nonsense to pretend that a student with a poor grasp of statistical principles can be objectively compared with someone with moderate competence. In turn, classifying a moderately competent student with one who has real mathematical talent simply undermines the whole academic enterprise. I am not making a judgement on any student's personal worth but on their academic abilities. If we used these same wishy-washy standards in competitive sport, you would be pretending the standard of play in the Championship is the same as in the Premiership. Other sporting performance metaphors are available.
re: #7 (I don't know why sometimes there is a reply button, and sometimes not). Hi Mike, I agree that it is nonsensical to classify students with a poor grasp of statistical principles the same as a competent or talented student. But the solution being proposed here, I think, is not to classify them similarly (or at least, doing so is only a protest) but rather not to classify them at all. The real solution is not to give everyone the same grade but to not have grades. The difference between competitive sport and education is that competitive sport is, well, competitive. And education shouldn't be. To take a different sporting metaphor, I go to the gym and lift weights to increase strength, tone and muscle mass. This makes me more functional in my life - I can lift heavier things, I have a higher metabolism, I'm more stable in my core. But I don't really keep track of how much I can deadlift, nor do I compare myself to others - that's not the purpose of the exercise. Apart from anything else, treating education like this might discourage those with no real interest in learning, but who feel they need to be there to get a piece of paper society has decided is required for a good life.
Gert Biesta suggests there are three functional purposes to education: qualification, socialisation and subjectification. If this can be agreed upon, the question then becomes which of these is in the driving seat in the current era. I'd suggest the binary flavour of this thread indicates that the first is defining the values mindset in both compulsory and tertiary education right now. Added to this, certain disciplinary areas inevitably see more relevance in measuring content accumulation than in adaptive skills growth, which can be more difficult to determine. Arguably, flipping the emphasis in Biesta's three definitions of education's purpose could go some way to rebalancing our value mindset and addressing the unfortunate and real crises being experienced by students. In other words, prioritising the learner as a being-in-knowledge, then the learner as social agent, and finally the learner as credentialed. Not, as currently is the case, the other way around.
