Academics despair as ChatGPT-written essays swamp marking season

‘It’s not a machine for cheating; it’s a machine for producing crap,’ says one professor infuriated by rise of bland scripts

June 17, 2024

The increased prevalence of students using ChatGPT to write essays should prompt a rethink about whether current policies encouraging “ethical” use of artificial intelligence are working, scholars have argued.

With marking season in full flow, lecturers have taken to social media in large numbers to complain about AI-generated content found in submitted work.

Telltale signs of ChatGPT use, according to academics, include little-used words such as “delve” and “multifaceted”, key themes summarised in bullet points, and a jarring conversational style featuring phrases such as “let’s explore this theme”.

In a more obvious giveaway, one professor said an advert for an AI essay company was buried in a paper’s introduction; another academic noted how a student had forgotten to remove a chatbot statement that the content was AI-generated.

“I had no idea how many would resort to it,” admitted one UK law professor.

Des Fitzgerald, professor of medical humanities and social sciences at University College Cork, told Times Higher Education that student use of AI had “gone totally mainstream” this year.

“Across a batch of essays, you do start to notice the tics of ChatGPT essays, which is partly about repetition of certain words or phrases, but is also just a kind of aura of machinic blandness that’s hard to describe to someone who hasn’t encountered it – an essay with no edges, that does nothing technically wrong or bad, but not much right or good, either,” said Professor Fitzgerald.

Since ChatGPT’s emergence in late 2022, some universities have adopted policies to allow the use of AI as long as it is acknowledged, while others have begun using AI content detectors, although opinion is divided on their effectiveness.

According to the latest Student Academic Experience Survey, for which Advance HE and the Higher Education Policy Institute polled around 10,000 UK undergraduates, 61 per cent use AI at least a little each month, “in a way allowed by their institution”, while 31 per cent do so every week.

Professor Fitzgerald said that although some colleagues “think we just need to live with this, even that we have a duty to teach students to use it well”, he was “totally against” the use of AI tools for essays.

“ChatGPT is completely antithetical to everything I think I’m doing as a teacher – working with students to engage with texts, thinking through ideas, learning to clarify and express complex thoughts, taking some risks with those thoughts, locating some kind of distinctive inner voice. ChatGPT is total poison for all of this, and we need to simply ban it,” he said.

Steve Fuller, professor of sociology at the University of Warwick, agreed that AI use had “become more noticeable” this year despite his students signing contracts saying they would not use it to write essays.

He said he was not opposed to students using it “as long as what they produce sounds smart and on point, and the marker can’t recognise it as simply having been lifted from another source wholesale”.

Those who leaned heavily on the technology should expect a relatively low mark, even though they might pass, said Professor Fuller.

“Students routinely commit errors of fact, reasoning and grammar [without ChatGPT], yet if their text touches enough bases with the assignment they’re likely to get somewhere in the low- to mid-60s. ChatGPT does a credible job at simulating such mediocrity, and that’s good enough for many of its student users,” he said.

Having to mark such mediocre essays partly generated by AI is, however, a growing complaint among academics. Posting on X, Lancaster University economist Renaud Foucart said marking AI-generated essays “takes much more time to assess [because] I need to concentrate much more to cut through the amount of seemingly logical statements that are actually full of emptiness”.

“My biggest issue [with AI] is less the moral issue about cheating but more what ChatGPT offers students,” Professor Fitzgerald added. “All it is capable of is [writing] bad essays made up of non-ideas and empty sentences. It’s not a machine for cheating; it’s a machine for producing crap.”

jack.grove@timeshighereducation.com


Reader's comments (10)

" 'It’s not a machine for cheating; it’s a machine for producing crap,’ says one professor infuriated by rise of bland scripts." Too much focus on students? Why am I reminded of management-related academic journal articles and conference papers?
Good or bad, AI is the future; students have embraced it, and academics need to accept that they are going to have to change their teaching and assessment methods to keep pace.
The genie is very much out of the bottle, so we need to deal with it somehow. But currently, it is pretty damaging. I totally agree that the way it is currently appearing (and certain aspects are very recognisable) leads to poor writing, poor standards and poor scholarship. The point should be to learn, to be enthused and to explore in depth, not to churn out crap for marks. And yes, perhaps we need to change the way we teach and assess, but the pace is too fast for that to come from individual dedicated academics alone.
All of this sounds very familiar. Essay mills were already a big problem and AI just makes it worse. I guess one blunt approach would be to assess entire classes exclusively by examination. One colleague said, “That will never happen, as overseas students won’t choose to come to Univ of XXX if they’re actually meaningfully assessed.” Honestly, I have nothing but contempt for colleagues who say “it’s the future, we must embrace it” even when embracing AI sounds the death knell for all we do as a profession. The consequence of student use of AI is to deny students the chance to develop their ideas in a more thoughtful, considered way through an essay. It’s a real pity that students are outsourcing their thinking to technology, and soul-destroying for those of us who have to mark the resulting bilge. I fear for the future if we are producing a generation of people unable to synthesise data and ideas into a coherent argument.
In France, substantive assessments at my secondary school are all written during class time, usually with no notes. In-class discussion, classwork and quizzes further confirm how well a student is learning the skills and content. Even pre-AI, anything done at home could easily be written by someone else, or a savvy parent/tutor could feed the student the ideas and polish the final product. So there was never any reason to think something done out of class is automatically going to be the student’s own work; ChatGPT just makes it more obvious. If we want university students to learn, we have to invest in actually teaching them.
At my institution you cannot just mark down for AI use; you have to report it. But here’s the thing: the university’s policy is basically the usual feel-good waffle and the investigating officer is seriously workshy. So you just mark down and, in exam boards, say the work was of a lower standard than usual. It’s getting to the point where you may as well either use AI to mark or just award grades at entry (you couldn’t do it for attendance, because even that is too hard).
The only solution is in class exams, no books, and in class presentations. If students don’t want that they do not have to do a degree.
Our degrees are about 60-70% invigilated exam and about 30-40% coursework. Currently, ChatGPT is capable of producing an essay that will score in the mid 50s on my programmes. Given that mid 50s is not enough for further study, nor most graduate jobs, and any student who has relied on ChatGPT will get found out in the exam, I don’t see a grading problem. The real problem is convincing students of this: that the only person they are cheating by using ChatGPT is themselves.
Try to focus the essay subject locally, in the university city. That works for social science-type essays, geography and economics anyway. Let’s see how well the chatbot knows local history.
I remember when calculators became mainstream and everyone was up in arms about that. People use online resources now for source material and then embed it in essays or journals; AI is just a shortcut. It’s a research tool with the added benefit of compiling and punctuating content. So unless the course is English, it’s not really that different from other forms of digital information harvesting. Unless you’re only actually testing how good a memory someone has? The academic world just needs to adapt and analyse submissions more thoroughly for comprehension, a good argument or case being made, and an indication that the person submitting has some grasp of the task in hand. More frequent contact with students will help set personal benchmarks and allow a better assessment of capability.