Beyond Disclosure: Using AI as a Collaborative Tool in Academic Writing
By Tamas Makany
Tamas Makany is an associate professor at Singapore Management University, where he teaches communication design courses and researches how people interact with AI chatbots. You can find him on LinkedIn.
Disclaimer: This essay is a personal reflection and does not reflect the official position of my employer, Singapore Management University. Readers should refer to their own institutions' policies and guidelines regarding AI use in academic contexts.
I have a confession to make: AI helps me rewrite parts of my research papers. A recent survey revealed that about half of my academic colleagues believe this practice is unethical and that authors should come clean about it when submitting a manuscript. (Hence my confession!) Some publishers and universities already require authors to disclose the use of generative AI, arguing that it promotes accountability and transparency. But could these policies miss the point about what AI actually means for academic writing?
Perhaps all that AI disclosures have achieved is a boost in Montblanc fountain pen sales. After all, nearly every digital authoring tool, from Apple Notes to Google Docs and Microsoft Word, now incorporates an AI writing assistant. Should these tools be off-limits due to perceived threats to academic integrity?
The same survey about academics’ attitudes found that while 52% believed substantial ChatGPT use for rewriting should be declared, only 14% thought the same about Grammarly. The reason seems clear: spell-checkers have been in word processors since the 1970s. What was once a novelty is now an unremarkable aspect of writing. ChatGPT and other tools will follow a similar path, becoming part of our academic workflows. We will soon grow accustomed to AI predicting, correcting, and guiding our keystrokes as we fuss over our next peer-reviewed publication.
I believe it is time to move beyond the "what if" and focus on the "how to" of augmenting scholarly writing with AI. Embracing collaborative human-AI workflows could enhance the quality, originality, and impact of academic work while maintaining integrity. In this essay, I will examine my own AI-assisted journey and share how I use these tools to write research papers, hoping to dispel misconceptions held by both enthusiasts and skeptics.
My first AI writing faux pas
I was first confronted with questions of academic integrity and AI writing in the spring of 2023, after an interview for an opinion piece on ChatGPT's impact on the future of business. The editor was finalizing the writeup and asked me to provide some practical—and more upbeat—advice for businesses on navigating AI ethics. I had originally suggested a cautionary tale, which essentially boiled down to: "These machines are only as good as the data they're trained on, so don't be surprised if you don't like what they spit out."
Reveling in the irony, I turned to ChatGPT for inspiration. The suggestions were surprisingly cogent: they aligned with previous research on digital transformation, emphasizing a focus on people over technology while maintaining a critical perspective. The AI elegantly articulated how LLMs, trained on human-generated content, reflect our societal imperfections. It emphasized that business leaders should prioritize empathy and understanding their users' needs over chasing the perfect tool, given AI's potential to hallucinate and to perpetuate our very human biases. Satisfied, I incorporated the suggestions into the final draft.
The response arrived the next day. While the editor liked the overall piece, they also shared that an AI-detection tool had flagged the closing paragraph as likely AI-generated—which, of course, it was. The highlighted text in the document morphed into a crimson blush of embarrassment on my face. I hastily rewrote the section using words that felt more human, though still basing the message on ChatGPT's recommendations. Before sending the final version back, I ran the humanized text through the AI checker to make sure I was out of the yellow zone. Fortunately, the editor didn't press the issue, and the piece was published. (You can read it here.) Reflecting on the experience, I kept returning to the same questions: Why did I feel embarrassed? Had I compromised my academic integrity? Did I commit AI plagiarism, or was I just bad at using a new writing tool?
Why academic AI?
The rapid rise of ChatGPT after its public launch in late 2022 triggered many discussions across academia, resulting in a slew of generative AI policies. My institution, SMU—like Harvard and Stanford—developed frameworks emphasizing "responsible and ethical use" of AI tools.
These policies, while well-intentioned, often boil down to vague directives like "accurately attribute" and "transparently cite" AI contributions. Their ultimate goal is to encourage researchers to adhere to the guidelines of publishers (such as Elsevier) that instruct authors that AI "should only be used to improve readability and language of the work and not to replace key authoring tasks." Such a limitation strikes me as a naive underestimation of what AI can offer: the potential to act as a collaborative tool throughout the writing process.
This human-AI collaboration demands a new understanding of academic integrity. Instead of fixating on whether AI was used, we should focus on how well it was used. Did it enhance the researcher's thinking? Did it help communicate complex ideas more clearly? Will the findings be more impactful for a wider audience? These are the impact measures policymakers should incorporate into future publishing guidelines.
Ultimately, academic writing with AI is about leveraging technology to produce more far-reaching and accessible research. It's not about replacing human intellect, but augmenting it. As researchers, we must embrace these tools while maintaining our critical thinking, ethical standards, and unique scholarly voices.
How to write with AI
AI in academic writing isn't magic—it's like managing research assistants to whom you must constantly explain the purpose of the project. The notion that ChatGPT could produce a publishable paper with a few prompts is pure fantasy. AI lacks the creativity, intentionality, and logical rigor essential to scholarly work. On its own, it generates statistically probable word salads, not carefully constructed arguments.
The real power of AI in academic writing is its ability to enhance human thought. I've found it invaluable for transforming rough drafts into refined manuscripts. It is devilishly good at suggesting alternative phrasings, highlighting potential counterarguments, and even pointing out logical inconsistencies. But at every turn, human oversight is crucial.
I recently upgraded my AI-assisted writing process with a course taught by Every's lead writer, Evan Armstrong. Here are the five steps I follow in my academic writing, combining what I learned there with my own experiments:
1. Prework: Defining my voice
I begin by asking the AI to analyze my best writing and blend it with my favorite authors' styles. The result is an aspirational "taste profile" that guides the AI's suggestions. It's not about mimicry, but about honing my unique voice and defining it as custom instructions for ChatGPT before a new writing project.
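For readers who prefer to script this prework, here is a minimal sketch of the idea using the OpenAI Python client. The model name, file paths, and prompt wording are all illustrative assumptions on my part; I do this step in the ChatGPT interface itself, where the output becomes my custom instructions.

```python
# Illustrative sketch: distill a "taste profile" from writing samples.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; file paths are placeholders.
from openai import OpenAI

client = OpenAI()

# A few pieces I consider my best work, plus authors whose style I admire.
sample_paths = ["best_paper.txt", "favorite_essay.txt"]
samples = [open(path, encoding="utf-8").read() for path in sample_paths]
influences = "authors I admire, e.g., Steven Pinker"

prompt = (
    "Analyze the writing samples below and describe my voice: tone, "
    "sentence rhythm, and vocabulary. Blend in stylistic traits of "
    f"{influences}. Return a concise 'taste profile' I can paste into "
    "custom instructions.\n\n" + "\n---\n".join(samples)
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```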
2. Outlining the spark
Here, I define the research context and central argument. The AI acts as an early peer reviewer, challenging my assumptions and highlighting overlooked angles. I use tools like NotebookLM, Perplexity, Undermind, or Scite to augment—not replace—my literature review. At this stage, I only have sentences, snippets, and selected quotes—the raw materials for the foundation of the manuscript. These chaotic fragments serve as guiding beacons around which empirical data and logical interpretations will gradually form an initial draft.
3. Generating the first draft
It’s time to turn divergent pieces into a convergent thread. I ask the AI—Claude.ai seems to do this best at the moment—to generate a draft for each section based on my outline, using the target journal’s structural requirements. The result is often generic and lacks proper argumentation, but that doesn’t matter. The first draft is meant to help me overcome the initial hesitation of starting, not to produce publishable content.
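As a rough illustration of this step, here is a sketch using the Anthropic Python client, since Claude is my current pick for drafting. The model name, outline, and journal instructions are placeholder assumptions, not a prescription.

```python
# Illustrative sketch: draft one section from an outline with Claude.
# Assumes the Anthropic Python client (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment; the details below are placeholders.
import anthropic

client = anthropic.Anthropic()

outline = """Section: Discussion
- Key finding: ...
- Counterargument to address: ...
- Link back to research question 2"""

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    max_tokens=1500,
    system=(
        "You draft sections of an empirical paper following the target "
        "journal's structure (IMRaD). Mark any unsupported claim as [TODO]."
    ),
    messages=[{"role": "user", "content": f"Draft this section:\n\n{outline}"}],
)
print(message.content[0].text)
```

The [TODO] markers are a small design choice worth keeping: they make the draft's gaps explicit, which is exactly where the human work of the next step begins.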
4. Revision-polish-repeat
The real intellectual heavy lifting happens now. I go through multiple rounds of revisions, often completely rewriting sections to sharpen my academic voice and argument. I ask the AI to suggest better-sounding phrases or highlight logical inconsistencies, but the revision remains a laborious human task. Later, I perform stylistic and grammar checks on the text. Lex has a fantastic set of features called Checks for this, but it’s mostly manual work.
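One such revision pass could be scripted as in the sketch below, again with illustrative model and prompt choices; in practice I run these checks interactively rather than in code.

```python
# Illustrative sketch: ask the model to flag (not rewrite) weak spots
# in each section. Assumes the OpenAI Python client; the section texts
# and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

draft_sections = {"Introduction": "...", "Method": "...", "Discussion": "..."}

for name, text in draft_sections.items():
    review = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{
            "role": "user",
            "content": (
                "Act as a skeptical peer reviewer. List up to five logical "
                "inconsistencies or awkward phrasings in this section, but "
                "do not rewrite it.\n\n"
                f"## {name}\n{text}"
            ),
        }],
    )
    print(f"--- {name} ---")
    print(review.choices[0].message.content)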
5. Distribution and impact
After publication, AI can help translate complex academic ideas into accessible formats. For example, I can ask Spiral to draft LinkedIn posts summarizing key findings, but I must review and adjust each output to ensure it accurately represents my research and fine-tune the language for the general public.
Through these five steps, AI may amplify my capabilities, allowing me to produce more refined and impactful work, but it doesn't replace the core intellectual labor associated with academic writing. When used in collaboration with human cognition, these tools can help open-minded academics produce more accessible research without compromising rigor or originality.
From disclosure to discovery
To better integrate AI into academic writing, we need to rethink our ethics. We must move beyond simplistic notions of declaring content "AI-generated" versus "human-generated." Instead, we should focus on how AI use enhances (or diminishes) the quality, originality, and impact of academic work. Here are some conversation starters for this shift:
Developing guidelines that acknowledge the spectrum of AI assistance, from grammar checks to substantive contributions. We can start by differentiating these levels of involvement. Disclosing AI's role in writing a research paper doesn't have to be a limitation; it could become an extension of the methodology section, supporting true transparency and replicability.
Fostering critical AI literacy among academics so they can leverage these tools responsibly. Publishers, academics, and the public need to dismiss the idea of single-shot AI miracles and embrace iterative, chain-of-thought human-AI collaboration.
Reimagining peer review processes to account for AI's role in manuscript preparation. This could involve developing new criteria for evaluating AI-assisted work and training reviewers (and detection software) to distinguish poor usage from excellent usage.
Emphasizing the uniquely human aspects of research, such as framing questions, interpreting results, and drawing meaningful conclusions. As AI models hold a mirror up to the data our societies have produced, we could start deeper conversations about what makes us truly human. For academic writing, this could mean rewarding an elegant question, an insightful analysis, or innovative problem-solving over raw publication volume.
I've come to view my AI experimentation as necessary. I'm not cheating the system; I'm adapting to a new reality of human-AI collaboration in academia. The approach described in this essay allows anyone with an open mindset to augment their writing process, maintaining rigorous standards while expanding the reach and impact of their research.