Curtin Student Guild has made a submission to the House Standing Committee on Employment, Education and Training's inquiry into issues presented by generative artificial intelligence (AI), recommending that the committee incorporate data privacy protections so students can use AI safely.
Lodged earlier this month, the submission was written by Guild President Dylan Botica, Guild Vice President of Education Veronika Gobba and Guild Postgraduate Student Committee President Mitch Craig.
The Australian Government’s Tertiary Education Quality and Standards Agency (TEQSA) explains that generative AI is progressing at a ‘rapid rate’, and whilst use of the technology may be permitted in some instances, using AI in a manner inconsistent with an institution’s rules can result in academic misconduct.
According to Curtin University, the ‘unapproved, inappropriate or undisclosed use’ of AI in assessments may constitute contract cheating, a form of outsourcing content.

Using AI in a way that is inconsistent with a student’s institution’s rules can result in academic penalties. Photo: Unsplash.
The submission explains that whilst tools can be used to detect the misuse of AI by students, these tools ‘can be unreliable and might lead to students being falsely accused of cheating – especially for the international student cohort’.
According to the Curtin Student Guild’s submission, the ‘most important consideration for higher education’ is that AI contributes to a student’s learning environment in a manner that ‘enhances the quality of our education … and adequately assesses and mitigates potential risks.’
Whilst the submission highlighted multiple benefits of generative AI in an educational context, including personalised learning, making complex concepts easier to understand and reducing accessibility barriers for students, the Curtin Student Guild also identified a number of ‘safety and ethical risks’ posed by the technology.
“Teachers may not know how to implement the use of AI in a safe and ethical way given the relative infancy of the technology in education,” the submission reads.
“Students may become reliant on the AI tools to the exclusion of other learning opportunities … students may not understand how to appropriately use AI tools and be subject to academic penalties.”
The submission also identified the potential for AI-generated content to contain ‘factual errors’ and ‘bias’.
“[AI-generated information] may not cover all areas of educational content which would jeopardise learning outcomes if students and academics are not trained to utilise generative AI effectively,” the submission reads.
“Bias in AI data sets could go unchecked and become amplified … the quality of education and research may suffer if bias or incorrect information is incorporated in data sets.”
Curtin University student Amy Figueiredo says that whilst AI can be a useful tool, regulations must be implemented so students can continue to ‘think for themselves’.
“AI helps me with uni because it helps generate ideas when I’m struggling with something in class. For example, when coming up with a marketing strategy I can just use AI to help come up with ideas that I can use going forward,” Ms Figueiredo says.
“However, I do think AI needs to be regulated because it can prevent us thinking for ourselves … If something gets too hard we can just use AI to get through it which you can’t learn much from.”