Generative AI content can be unreliable and may not provide a correct response to a prompt. It is important that we all apply a critical thinking lens when using generative AI technologies to ensure the information provided is accurate and unbiased. Being able to evaluate the credibility of sources is an important aspect of academic writing at university, and these same principles apply to generative AI.
There have been many instances where generative AI has provided responses that sound credible but are inaccurate, false, flawed, simplistic, out of date, or even biased. Responses generated by AI also often do not link to the original sources of content. When references are provided, investigation often reveals that these sources do not exist or are not scholarly, peer-reviewed publications.
As with all sources of information you use for your study, you should critically evaluate these resources. One tool we use for this is the ROBOT test (Wheatley & Hervieux, 2022). This simple acronym prompts you to question certain aspects of generative AI and assists in evaluating the information. Please see below for a breakdown of how the ROBOT test works.
Image adapted from "Separating artificial intelligence from science fiction: Creating an academic library workshop series on AI literacy" by A. Wheatley and S. Hervieux, in S. Hervieux & A. Wheatley (Eds.), The Rise of AI: Implications and Applications of Artificial Intelligence in Academic Libraries (pp. 65-66), 2022 (https://escholarship.mcgill.ca/concern/books/0r9678471). Copyright 2022 by Amanda Wheatley and Sandy Hervieux under CC BY-NC-SA.