Reading Time: 3 minutes

It’s been a year since ChatGPT exploded onto the digital landscape, attracting 100 million users in just two months. We’d been aware of generative AI (GenAI) and its potential to support learning, but also that it could be used to generate answers to assessments. ChatGPT proved a game changer, and GPT-4 was launched quickly on its heels, causing worldwide panic in academia. There was talk of the death of the essay and of the need to ban GenAI tools. Very quickly, academics’ thoughts turned to how we could detect AI-generated writing and whether Turnitin could help with this.

AI detection

It transpired that Turnitin had been developing an AI detector, and they announced their intention to launch and switch on AI detection for all customers in April 2023. The University, along with most UK universities, decided to opt out of this launch. A key factor in this decision was that we were unable to test and evaluate the reliability of the detection tool and were not provided with independently verified data to give us sufficient confidence to implement it at that time.

Almost six months on, it is still not clear how Turnitin detects AI writing: no information has been forthcoming detailing how its detection tool identifies AI-generated writing. Meanwhile, international research is highlighting a range of issues with AI detection. We previously quoted the following from Jisc:

Jisc notes: “AI detectors cannot prove conclusively that text was written by AI.”

Michael Webb (17/03/2023), AI writing detectors – concepts and considerations, Jisc National Centre for AI 

In a recent update, Jisc have reiterated their previous position and recommendations, highlighting that: 

  • No AI detection software can conclusively prove text was written by AI
  • It is easy to defeat AI detection software
  • All AI detection software will give false positives
  • We need to consider what we are actually trying to detect as AI-assisted writing is becoming the norm. 

Michael Webb (18/09/2023), AI Detection – Latest Recommendations, Jisc National Centre for AI 

OpenAI, the company that developed ChatGPT, built its own AI detector but has since withdrawn it due to its low rate of accuracy. Meanwhile, a white paper from Anthology, the parent company of Blackboard, reports that their research on AI detection tools has led them to conclude that the tools are not currently fit for purpose. You can also find interesting examples of content that has been uploaded to AI detection tools: some of them think the US Constitution and excerpts from the Bible were generated by AI.

Now that we’re able to create a test environment for the Turnitin AI detection tool, we are evaluating it with both human-written and AI-generated work. In the meantime, it is interesting to note that universities that did not initially opt out of AI detection have now disabled its use. Vanderbilt University posted an announcement to this effect in August 2023, explaining why they had taken this decision.

If you suspect an assessment includes AI-generated writing …

Here at Dundee, our Academic Misconduct by Students Code of Practice was already clear that any unauthorised use of AI would be viewed as academic misconduct.

If you have an assessment that you think is not the student’s own work, you should refer this to your School’s Associate Dean for Quality and Academic Standards (AD QAS) for review, and where appropriate it can be referred to our academic integrity panel for further investigation. Other universities are likely to have their own investigation processes.

It is important that lecturers are aware that they should *NOT* use unauthorised AI detection tools. We have not approved the use of any of these tools at the University of Dundee, and we do not have student consent to upload their work to third-party sites. It is likely that other universities have similar guidance, but you should check locally.

Cues to AI-generated writing

If you are reading student work carefully, you may be able to identify signs that GenAI has been used to help with the writing, or possibly even an essay mill. For example:

  • Check references: GenAI is fallible and may generate inaccurate statements and make up, or hallucinate, references. Similarly, there may be factual inaccuracies in the writing, again reflecting the hallucinations of AI.
  • Consistency between pieces of work: Is there a change in tone or style of writing from previous pieces of work that the student has submitted? Have a look back at the student’s previous submissions.
  • Consistency within a piece of work: Are there variations of style within the piece of work? Is there consistency in spelling, e.g. all UK spellings or a mixture of UK and US spellings in the same sentences or paragraphs? (A rough illustration of this kind of check is sketched below.)

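To make that last cue a little more concrete, here is a minimal, hypothetical sketch (in Python) of what an automated check for mixed UK/US spelling within a paragraph might look like. The word list and the flag_mixed_spelling helper are illustrative assumptions only, not a tool we use or endorse, and a flagged paragraph is simply a prompt to read more carefully, never evidence of misconduct on its own.

```python
# Minimal illustrative sketch only: flag paragraphs that mix UK and US spellings.
# The spelling list is a tiny hypothetical sample, not an exhaustive dictionary.
import re

# Hypothetical sample of (UK, US) spelling pairs.
SPELLING_PAIRS = [
    ("organise", "organize"),
    ("analyse", "analyze"),
    ("colour", "color"),
    ("behaviour", "behavior"),
    ("centre", "center"),
]

def spelling_variants(paragraph):
    """Return the sets of UK and US spellings found in a paragraph."""
    words = set(re.findall(r"[a-z]+", paragraph.lower()))
    uk = {u for u, _ in SPELLING_PAIRS if any(u in w for w in words)}
    us = {a for _, a in SPELLING_PAIRS if any(a in w for w in words)}
    return uk, us

def flag_mixed_spelling(text):
    """Return indices of paragraphs that contain both UK and US spellings."""
    flagged = []
    for i, paragraph in enumerate(text.split("\n\n")):
        uk, us = spelling_variants(paragraph)
        if uk and us:
            flagged.append(i)
    return flagged

if __name__ == "__main__":
    sample = ("The colour of the argument shifts early on.\n\n"
              "We analyze the colour of the data and organise the results.")
    print(flag_mixed_spelling(sample))  # [1]: second paragraph mixes 'analyze' with 'colour'/'organise'
```

In practice, of course, human judgement about tone, style and referencing matters far more than any simple word check like this.
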
If you’d like to understand more about how AI detection tools work, and to hear some thoughts on the implications of AI for academic integrity, watch this recording of our webinar with Robin Crockett, Head of Academic Integrity at the University of Northampton.

A time for reflection

We asked at the start whether we “should” detect student use of AI. What are your thoughts? What are the potential implications? We’d welcome you sharing your ideas here.

All posts in this LearningX series

1: Academic Integrity – can, and should, we detect student use of AI?
