The use of GenAI should complement, not replace, scholarly methods and expert knowledge when undertaking academic research. As with any AI-generated content, it is crucial to critically assess outputs and to consider potential ethical, copyright and academic integrity issues.
To ensure alignment with university and national policies, take a look at Southern Cross University's Guidelines for HDR candidates – use of Artificial Intelligence tools and the Top 10 tips for using GenAI in research created by the Tertiary Education Quality and Standards Agency (TEQSA).
Most of the research activities outlined below can be supported by generalist Large Language Models (LLMs) such as Microsoft Copilot and ChatGPT. These tools can assist with tasks like brainstorming, refining questions, drafting, translating, and summarising.
Designing a good prompt will help you maximise the quality of this support and any outputs generated by GenAI. Click on the stages of the research workflow below for prompt ideas based on the Context → Task → Output structure. The more context you provide, the more useful the output.
✅ Tip: Always check, refine, and fact-check GenAI outputs before using them in your research.
Context: I’m starting a project in environmental engineering on renewable energy.
Task: Brainstorm research topics addressing technical + social aspects.
Output: 10 ideas with 1-sentence explanations.
Context: My topic is AI in academic libraries.
Task: Refine my draft question: “How are AI tools transforming library services?” Identify gaps in the literature.
Output: 3–5 refined questions + list of gaps.
Context: I want to study mindfulness practices and student stress.
Task: Suggest suitable qualitative, quantitative, or mixed-methods designs with pros/cons.
Output: Comparison of 2–3 designs with rationale.
Context: I need to search for studies on urban green spaces and mental health.
Task: Suggest keywords, synonyms, and Boolean strings for databases like Scopus.
Output: A table of terms + 3 example searches.
Context: I’m reviewing blockchain in supply chain management.
Task: Map themes, key authors, and seminal papers.
Output: A concept map (or outline) of connections.
Context: I have a Spanish journal abstract on climate policy.
Task: Translate it into English, keeping terms accurate.
Output: A fluent, technical English version.
Context: I have 5 articles on social media in disaster communication.
Task: Summarise findings, similarities, and differences.
Output: A table + 150–200 word synthesis.
Context: I have survey data from 300 people on public transport use.
Task: Suggest statistical analyses and visualisations.
Output: A step-by-step analysis plan + 3–4 chart types.
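To make this prompt concrete, here is a minimal sketch of the first steps such an analysis plan might include, written with only the Python standard library. The records, field names and values are synthetic and purely illustrative, not real survey data:

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical survey records: transport mode and trips per week.
# (Synthetic data for illustration only -- not real survey results.)
responses = [
    {"mode": "bus", "trips_per_week": 8},
    {"mode": "train", "trips_per_week": 10},
    {"mode": "bus", "trips_per_week": 4},
    {"mode": "car", "trips_per_week": 12},
    {"mode": "train", "trips_per_week": 6},
]

# Step 1: descriptive statistics for a numeric variable.
trips = [r["trips_per_week"] for r in responses]
print(f"mean={mean(trips):.1f}, sd={stdev(trips):.1f}")

# Step 2: a frequency table for a categorical variable --
# the basis for a bar chart of transport mode.
mode_counts = Counter(r["mode"] for r in responses)
print(mode_counts.most_common())
```

A real 300-response dataset would more likely be handled with pandas and plotted with matplotlib or seaborn, but the logic (summarise numeric variables, tabulate categorical ones, then choose charts to match) is the same.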
Context: I want to run sentiment analysis on survey responses using Python.
Task: Generate code to clean text and apply sentiment analysis. Flag common pitfalls.
Output: Well-commented Python script with explanations.
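As an illustration of the kind of script this prompt might produce, here is a minimal, self-contained Python sketch. It uses a tiny hand-made lexicon purely for demonstration; the word lists and function names are assumptions, not a standard API, and a real analysis would use an established library such as NLTK's VADER or TextBlob:

```python
import re

# Tiny illustrative sentiment lexicon (assumed for this sketch only).
POSITIVE = {"good", "great", "reliable", "clean", "helpful"}
NEGATIVE = {"bad", "late", "dirty", "crowded", "unreliable"}

def clean_text(text: str) -> list[str]:
    """Lowercase the text, strip punctuation, and split into tokens."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment_score(text: str) -> int:
    """Positive minus negative word count: >0 positive, <0 negative."""
    tokens = clean_text(text)
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

responses = [
    "The service is reliable and the buses are clean.",
    "Trains are always late and crowded!",
]
scores = [sentiment_score(r) for r in responses]

# Common pitfalls to flag, as the prompt requests: lexicon methods miss
# negation ("not good"), sarcasm, and domain-specific vocabulary --
# always manually validate a sample of scored responses.
```

This is exactly the kind of output that should be checked and refined before use: verify the cleaning step suits your data, and confirm the sentiment method is appropriate for your discipline.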
Context: My article is on sustainable housing in Australia.
Task: Suggest journals suited to this topic.
Output: Ranked list of 5 journals + justification.
Context: I drafted the introduction for my marine biodiversity article.
Task: Edit for clarity, conciseness, and academic tone.
Output: A polished version with suggested changes.
Context: I’m applying for a grant on Indigenous knowledge in land restoration.
Task: Draft a compelling 300-word project summary highlighting aims, significance, and outcomes.
Output: A persuasive grant-style summary.
Context: I’m in digital humanities and want to present on AI + text mining in 2025.
Task: Identify relevant international conferences.
Output: A list of 5 conferences with dates, locations, and deadlines.
Specialised GenAI academic research tools can assist with various stages of the research lifecycle and offer capabilities beyond generalist LLMs such as Copilot, especially when conducting systematic-style reviews. However, discipline-specific database searching, along with human validation and screening of results, remains essential to meet the gold-standard PRISMA guidelines required for systematic reviews.
Tools such as Elicit, Undermind, and Consensus are among the growing number of platforms that can fast-track parts of the research process.
You can also view the AI Search Tools table from Monash Health Library, which evaluates a range of tools.
Click on the tools below to learn more about their features, strengths, and limitations. Keep in mind that this is a rapidly evolving field, and access may change over time.
Cost: Free version with limited features; Plus and Pro plans add higher extraction limits and advanced tools.
Coverage: Searches across millions of papers in the Semantic Scholar corpus and PubMed (strongest in STEM and medicine). May not include books, grey literature, or content outside Semantic Scholar.
Benefits
Limitations
Cost: Free version available; premium plans add advanced analytics and export options.
Coverage: Indexes research papers from major open-access and publisher sources (details less transparent than Elicit/ResearchRabbit).
Benefits
Limitations
Cost: Free version available with core features; paid plans unlock advanced tools. Pricing varies, with institutional licences available.
Coverage: Built on a corpus of over 200 million academic papers and book chapters, primarily sourced from Semantic Scholar. Strongest in STEM and health sciences; coverage in humanities and social sciences is more variable.
Benefits
Limitations
Southern Cross University acknowledges and pays respect to the ancestors, Elders and descendants of the Lands upon which we meet and study.
We are mindful that within and without the buildings, these Lands always were and always will be Aboriginal Land.