
Fraud / Generative AI (ChatGPT)

Students, society, and the labour market need to be able to trust the value of a diploma. Safeguarding that value is a joint effort of programme management and the examination board.

What about generative AI & Assessment?

For some useful sources about AI & assessment, see  [here].

The Student Charter governs the rights of students and the way we treat each other at the UT. The Student Charter consists of an institutional section, which applies to all students, irrespective of the programme, and the programme-specific section as described in the EER for the programme. The Student Charter describes what is considered academic misconduct and fraud.  
The Examination Board (ExBo) of each programme specifies in its Rules & Regulations (R&R; also called Rules & Guidelines) what it considers to be fraud, which may include additional provisions. In the R&R, it also sets out what action will be taken in cases of (suspected) academic misconduct. Any suspicion of fraud must be reported to the ExBo, which will then investigate, determine whether fraud occurred, and decide what measures need to be taken.
It is important to make sure that students are well informed about what is considered fraud (including plagiarism and free riding) and what the consequences may be. As an example, the faculty of BMS has set up a website on scientific misconduct to inform students, as well as a site to inform examiners.

Fraud may occur when written tests are taken. The Assembly of Examination Board Chairs has developed Rules of Order for written tests. This document describes the rules and procedures to be followed for written tests (including those taken digitally). It applies to tests in study programmes whose Examination Board has adopted these rules as part of its Rules & Guidelines.
To make sure students are aware of what kind of behaviour is expected and allowed during test-taking, a cover sheet is recommended. An example of a cover sheet as used by BMS can be found [here]. 
For teachers, BMS also has a more elaborate Rules during Exams Guide, see: rules-during-exams-guide-for-examiners-2025-2026.pdf

Fraud case collection

This document contains a collection of concrete (anonymised) fraud cases. This collection is meant to increase transparency for: 

  • Students, to get a better understanding of what is considered to be fraud and what the consequences may be
  • Examination Boards, to share insight into the practice across study programmes and enable a more uniform treatment of fraud cases

Disclaimer: It is important to use this overview only for the intended purpose. In particular, no prediction may be derived regarding individual (new) fraud cases: every case is different and will be judged on its characteristics. The final decision always rests with the Examination Board of the study programme in question.

Handwriting correspondence

Case description

In a written test, students A and B submitted solutions containing parts apparently not written by themselves, judging by the handwriting. Some answers on A’s sheet were written in B’s hand, and vice versa for other questions of the same exam. Moreover, the answers, though not identical, showed strong correspondences, including atypical errors made in both cases. Apparently the students had prepared this fraud together: each of them completed part of the exercises twice during the test (once for themselves, once for their partner), and they swapped answer sheets when handing in their results.

Proceedings

When confronted, the students denied their actions. The written tests were then submitted to an external handwriting expert, who concurred with the finding of correspondence between the two sets of handwriting. When presented with this independent opinion, the students admitted their fault, confirming the procedure they had followed (described above).

Outcome

This was regarded as a severe fraud case, because the students had planned it carefully and agreed to act fraudulently (rather than acting on the spur of the moment); and also because they at first denied their actions when confronted with the evidence. 
Both students were barred from taking any examination in the next quarter, and warned that they would be evicted from the programme in case of a repeat infraction.

Open repository plagiarism

Case description

Student A, upon failing to meet a deadline for an individual take-home test, copied the solution of student B without B’s knowledge; B had been careless enough to put it in a publicly accessible repository. This was detected; A admitted his fault immediately, upon which the case was referred to the Examination Board.
Before the Examination Board had come to a verdict, the same student A, now working together with another student C on another project for the same study unit, repeated his plagiarism by again copying solutions from an open repository of student D. This was done without knowledge of either C or D. When confronted with the evidence, A once more immediately admitted his actions, claiming full responsibility.

Proceedings

Because A admitted his fraud immediately, the facts of the matter were clear. The course unit reader contains a description of what constitutes fraud that explicitly covers copying from an open repository.
Students are warned not to put solutions to tests and projects on an open repository, with the explicit statement that doing so may make them accessories to this type of plagiarism. Awareness of this is, however, still low.

Outcome

Although A admitted his fault the first time around, he clearly still did not understand or appreciate the severity of his actions. For the first instance of fraud, the sanction was to declare the test results invalid, meaning that he could not pass the study unit any more. For the second instance of fraud, the sanction was to bar the student from taking any examination during the next quarter.

First AI case concerning a paper before the Raad van State (Feb. 2026)

‘De eerste AI-paper stond voor de rechter, en verloor’ (‘The first AI paper went before the court, and lost’) - ScienceGuide
ScienceGuide published an article about the first AI case before (the Administrative Jurisdiction Division of) the Raad van State (though this was some time ago, so there may have been more cases since then). The article is only available in Dutch. It is interesting because of the criteria used and Van Berkel’s challenges to them. This leads to an interesting question: can we detect and prove the use of AI without errors in the work (e.g., incorrect references)?

The gist of the story: The Raad van State concluded that it has been established beyond a reasonable doubt that a student improperly used AI, and the appeal of the student (against a previous ruling by the CBE) was declared unfounded. It rejected the student’s claim that the issues were merely careless mistakes, considering the number and nature of the inaccuracies and the relatively small scope of the assignment. A key aspect of the decision was the criteria used to determine fraud. The evidence was indirect: no one directly observed the student using AI. Instead, the judgment relied on circumstantial evidence — facts that do not directly prove misconduct but allow it to be inferred. 

According to established case law, two conditions must be met when relying on indirect evidence:  
1) A single indirect fact is usually insufficient.
2) A conviction must be based on a combination of circumstances that together leave no reasonable alternative conclusion than that the person committed the violation.
Together, these requirements are referred to as a “closed chain of evidence.”

Van Berkel (former associate professor (UHD) at Universiteit Maastricht and assessment expert) challenges the ruling. While the first criterion has been met, it is not clear that the second has been satisfied. The judgment mainly relies on the errors attributed to AI (e.g., incorrect references). However, why are these considered typical AI errors? And if a paper is flawless, does that mean no AI was used? According to Van Berkel, the conclusion is therefore somewhat problematic.

Interesting resources

What can be done to secure the testing process? What can be done to prevent or detect fraud? Here are some resources (NB. Not all are available in English):