Unis are using AI to keep students from cheating, and it comes at a cost


Universities are increasingly using software to monitor students while they sit exams. Is this the future of testing?

As a result of the pandemic, institutions around the world have rapidly adopted proctoring software such as Examplify, ExamSoft and ProctorU.

Proctoring technology allows exam-takers to be supervised off campus. They can sit exams at home instead of in a traditional exam room. Some programs simply let a human invigilator watch students remotely.

More sophisticated, automated proctoring software takes control of the student's computer to block and monitor suspicious activity. These programs often use artificial intelligence (AI) to analyze exam behavior.

Our recent research paper examined the ethics of automated proctoring. We found that the software's appeal is understandable, but it carries significant risks.

Some educational institutions say proctoring technologies are needed to deter cheating. Other institutions, and many students, are worried about the hidden dangers.

Students have responded with complaints, petitions and lawsuits. They criticize online proctoring as discriminatory and invasive, with overtones of Big Brother. Some proctoring firms have in turn tried to silence criticism, including by suing critics.

One student's complaint that an AI proctoring tool wrongly flagged her as a cheater drew millions of views on TikTok.

What does the software do?

Automated proctoring programs give examiners a range of tools to deter cheating. The programs can capture system information, block web access and monitor keystrokes. They can also commandeer computer cameras and microphones to record exam-takers and their surroundings.

Some programs use AI to "detect" suspicious behavior. Face recognition algorithms check that the student remains seated and that no one else has entered the room. The programs can also flag whispering, atypical typing, unusual movements and other behavior that may suggest cheating.

When the program raises "flags", examiners can investigate further by reviewing the video and audio and questioning the student.
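
To make this concrete, here is a minimal, hypothetical sketch of the kind of face-presence check such programs might run on each webcam frame. It is purely illustrative and not any vendor's actual implementation; it assumes the open-source opencv-python library, its bundled Haar-cascade face detector and a local webcam.

    # Illustrative only: a toy face-presence check, not a real proctoring product.
    import cv2

    # Load OpenCV's bundled frontal-face detector.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def flag_frame(frame):
        """Return a reason to flag this frame, or None if it looks normal."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return "no face detected (student may have left their seat)"
        if len(faces) > 1:
            return "multiple faces detected (someone else may be in the room)"
        return None

    cap = cv2.VideoCapture(0)  # default webcam
    ok, frame = cap.read()
    if ok:
        reason = flag_frame(frame)
        if reason:
            print("FLAG:", reason)  # a human examiner would then review the footage
    cap.release()

In a real system a check like this would run continuously and log a timestamp with each flag, so examiners could jump straight to the relevant footage rather than watching the whole recording.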

Why use proctoring software?

Automated proctoring software aims to reduce cheating in remotely administered exams - something that became essential during the pandemic. Fair exams protect the value of qualifications and reward academic honesty. They are also a key part of accreditation requirements for professional fields such as medicine and law.

Cheating is unfair to honest students. If left unchecked, it may also pressure those students into cheating themselves.

The companies that sell proctoring software say their tools prevent cheating and make testing fairer for everyone - but our work calls this into question.

So what are the problems?

Security

We evaluated the software and found that simple technical tricks can bypass many of its anti-cheating protections. This suggests the tools may provide only limited benefit.

Requiring students to install software that takes such extensive control of their computers is itself a security risk. In some cases the software surreptitiously remains even after students have uninstalled it.


Accessibility

Some students may not have access to the devices and high-speed internet connections the software requires. This leads to technical problems that cause stress and disadvantage. In one incident, 41% of students experienced technical difficulties.

Privacy

Online proctoring raises privacy issues. Video capture means examiners can see into students' homes and scrutinize their faces without being noticed. Such close scrutiny, recorded and available for later review, sets it apart from in-person exam supervision.

Fairness and bias

Proctoring software raises serious concerns about fairness. The face recognition algorithms in the software we evaluated are not always accurate.

A forthcoming paper by one of us found that the algorithms used by major US-based vendors recognize darker-skinned faces less accurately than lighter-skinned ones. The resulting covert discrimination could compound existing social biases. Others have reported similar problems with proctoring software and with face recognition technology in general.

Also of concern, the proctoring algorithms may falsely flag unusual eye or head movements by exam-takers. This could cast unwarranted suspicion on students who are neurodivergent or who simply have idiosyncratic test-taking styles. Even without automated proctoring, exams are stressful events that affect our behavior.

Investigating baseless suspicions

Educational institutions can often choose which automated features to enable or disable. Proctoring firms may stress that AI-generated "flags" are not proof of academic dishonesty, but merely grounds for a possible misconduct investigation at the institution's discretion.

However, simply being investigated and questioned can be distressing and difficult for a student when it rests on a spurious, machine-generated suspicion.

Normalizing surveillance

Finally, the use of automated proctoring may set a broader precedent. Public concern about automated surveillance and decision-making is growing. We should be cautious about introducing potentially harmful technologies, especially when they are imposed without our genuine consent.

Where to from here?

It is important to find ways to administer exams remotely. It will not always be possible to replace exams with other forms of assessment.

However, institutions that use automated proctoring software must be held accountable. That means being transparent with students about how the technology works and what can happen to their data.

Institutions could also offer meaningful alternatives, such as the option of sitting exams in person. Offering alternatives is fundamental to informed consent.

While proctoring tools may seem to offer a panacea, institutions need to carefully weigh the risks the technology involves.

Article by Simon Coghlan, Principal Research Fellow in Digital Ethics, Center for AI and Digital Ethics, School of Computing and Information Systems, University of Melbourne; Jeannie Marie Paterson, Professor of Law, University of Melbourne; Shaanan Cohney, Lecturer in Cybersecurity, University of Melbourne, and Tim Miller, Associate Professor of Computer Science (Artificial Intelligence), University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.
