Dr Tamar Sztal

Should I use ChatGPT to write my grant application?

The rise of ChatGPT has set off an arms race in the artificial intelligence (AI) world.
An estimated 180 million people currently use ChatGPT, one million of whom signed up within the first five days of its release by OpenAI in late 2022, and it has become a widely recognised interactive platform.

[Image: robot typing, surrounded by question marks]

What is ChatGPT?


ChatGPT is a publicly accessible AI tool that uses machine-learning algorithms to analyse large amounts of information and generate written responses to user enquiries on almost any topic. It is built on a large language model: the more data the model is trained on, the better it becomes at detecting patterns and anticipating what comes next, allowing it to generate plausible text.
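For readers curious about the mechanics, the sketch below is a minimal illustration of that "anticipating what comes next" idea. It uses the small, openly available GPT-2 model through the Hugging Face transformers library purely as a stand-in; ChatGPT itself is far larger and is accessed through OpenAI's hosted service, and the prompt shown is an invented example, not from our EOI experiment.

# A minimal sketch of next-token prediction, the mechanism behind large
# language models such as the one underpinning ChatGPT. GPT-2 is used here
# only because it is small and openly available.
from transformers import pipeline

# Load a public text-generation model (weights download on first run).
generator = pipeline("text-generation", model="gpt2")

# Given a prompt, the model repeatedly predicts a likely next token,
# extending the text with patterns learned from its training data.
prompt = "This research project is significant because"
output = generator(prompt, max_new_tokens=25, do_sample=False)
print(output[0]["generated_text"])

The continuation reads fluently because it follows statistical patterns in the training data, not because the model understands the research it is describing.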

ChatGPT in research and education


ChatGPT has infiltrated multiple sectors, and research and education are no exception. A 2023 Nature survey of 1,600 researchers reported that more than 25% use AI to help them write manuscripts, and more than 15% use it to help them write grant proposals (Nature 621, 672–675; 2023)!


What’s the problem with that, you might ask?


One of the primary concerns with using ChatGPT is privacy. By using ChatGPT to write their grant proposals, researchers are sharing their precious research ideas and intellectual property (IP) on a public platform. ChatGPT draws on information from across the internet, including books, articles, blogs and posts – material often filled with personal information obtained without overt consent. (It also means ChatGPT may draw on outdated literature or guidelines when drafting research applications.) Whatever is fed into ChatGPT is retained by the platform and can inform the responses it generates for other users. That includes YOUR sensitive IP if you choose to give up control of it by entering it into ChatGPT.

Another key concern relates to the wide range of public sources ChatGPT draws on to produce a targeted piece of writing. Its inability to distinguish credible from non-credible sources undermines the quality, authenticity and reliability of the information it uses and, by extension, of the written piece it produces.

What do the NHMRC, MRFF and ARC say about ChatGPT?


In recent grant rounds, the widespread use of ChatGPT to generate applications has prompted major funding agencies to develop policies on its use. The NHMRC, MRFF and ARC caution applicants against using generative AI to prepare applications, given the substantial implications for data sharing and applicants' responsibility to provide accurate information in their grant applications.

Using ChatGPT for grant writing


Writing grant applications is a resource-intensive, demanding and relentless process, which raises the question: ‘Why should researchers write applications when a chatbot could do the work for them?’

Embracing researchers’ curiosity and quest for knowledge (and our own!), we decided to see whether ChatGPT could write a robust 2-page ARC Discovery Project (DP) Expression of Interest (EOI).

Our purpose was to understand both the benefits and limitations of ChatGPT. Our hypothesis was that it would be unable to generate a competitive EOI.

We identified a broad focus for our application, centred on enhancing classroom engagement using video games as educational aids. We then reviewed i) the ChatGPT-generated document against the ARC guidelines and assessment criteria from a reviewer’s perspective and ii) the process from an applicant’s perspective.

Here’s what we found.

What ChatGPT did well


From a reviewer’s perspective

The EOI narrative was generally well written, with good grammar and sound sentence and paragraph structure. It identified current and relevant literature from a wide range of sources to explain the significant problem and the knowledge gap the project would address.

From an applicant’s perspective

ChatGPT generated text and structured sections that addressed the recommendations in the ARC’s DP EOI Instructions to Applicants. It suggested potential areas of focus, generated a methodological pipeline spanning the requested funding period, and produced expected project outcomes.

 

What ChatGPT did poorly


From a reviewer’s perspective

The information did not flow logically, making it difficult to find where the assessment criteria and other important elements were addressed. Rather than clearly describing the significant problem and innovative ways to solve it (e.g. new concepts, methodologies, technologies), the ChatGPT-generated text provided a wide array of information, much like a broad literature review, that had to be carefully sifted through for clarity.

Old information was cited as supporting evidence to justify significance and innovation (the most recent citation was from 2016), and the discussion of why the project was innovative was not strongly aligned with the assessment criteria.

The methods were poorly developed and consisted of a list of activities whose purpose was unclear. Important experimental details that demonstrate methodological rigour and feasibility (e.g. participants, samples, datasets) were not provided. The link between the benefits, the intended beneficiaries and the previously identified knowledge gap was unclear. There was no discussion of how closing the knowledge gap would enable other researchers to advance the field, and the benefits described were informed by the significant problem rather than by the research outcomes. Although the DP EOI Instructions to Applicants request alignment between the project’s benefits and the ARC’s Research Impact Principles and Framework, it was unclear what the potential environmental, economic and social impact on Australian and international communities would be.

The dissemination strategies described in the communication of results section were very generic and did not include the relevant stakeholders and other beneficiaries of the research.

From an applicant’s perspective

ChatGPT displays a disclaimer urging users to validate the information it provides, which meant the information cited in the EOI may have been incorrect.

While ChatGPT generated text for all relevant sections, most of it took the form of seed sentences whose relevance was unclear. Considerable effort was needed to flesh these out, which in turn demanded a thorough understanding of concepts such as significance and innovation. These two terms are commonly misunderstood by researchers, and ChatGPT did not always provide detailed sentences that effectively addressed the assessment criteria for significance and innovation.

It was time-consuming not only to prompt the software to produce the relevant pieces of information over several laboriously developed iterations, but also to check the cited literature to validate its use, and then to revise and flesh out detail across all sections. It is debatable whether relying on the generative software was any quicker than writing the proposal from scratch.

Our cautionary tale


Our experiment highlighted some benefits, but also substantial flaws, in using ChatGPT to write a competitive DP EOI. What makes an application worth funding is the research team's combined expertise and knowledge, leveraged to construct an original, clear and high-quality proposal. While the software can provide quick and easy answers to simple questions, it cannot capture the language and tone needed to convey the excitement and innovation of a researcher’s transformative and creative approach. ChatGPT’s automation cannot replicate the complex human qualities of critical thinking, problem-solving and emotional intelligence, which are essential to a well-developed argument. And the process of developing and refining our ARC DP EOI over multiple iterations with ChatGPT was time-consuming!

Our final message: think twice before using ChatGPT.

If you wish to view our ChatGPT-generated DP EOI, you can download it here:



