When you ask a generative AI tool like ChatGPT, Gemini, or Claude for a fact, it doesn’t pull from a database. It guesses. And it guesses wrong a lot more than you think. A 2023 study from the University of Washington found that ChatGPT invented fake scholarly citations in 65% of cases when asked to reference academic sources. That’s not a glitch. It’s how these systems work. They’re not search engines. They’re pattern generators. And if you cite them as if they’re reliable sources, you’re not just making a mistake-you’re spreading misinformation.
Why AI Can’t Be a Source
You can’t cite AI the way you cite a book, journal, or website. Why? Because AI doesn’t store or retrieve information. It recombines what it’s seen before-sometimes accurately, often not. When it gives you a quote from a 1987 paper, that quote might never have existed. Or it might be from the right paper but twisted to fit the prompt. Professor Edward Ayers from the University of Richmond documented cases where AI cited real sources but assigned them false content. Imagine writing a paper and citing a study that says, "Climate change is a hoax," when the actual study says the opposite. That’s not a typo. That’s AI hallucination.

The Association of College and Research Libraries put it bluntly in 2023: "AI should never be cited as a source of factual information." The only time you should mention AI in your bibliography is if you used it as a tool-like asking it to brainstorm interview questions, rephrase a paragraph, or summarize a dense article. Even then, you’re citing the tool’s role in your process, not its output as truth.
How Major Style Guides Handle AI Citations
Three major citation styles-MLA, APA, and Chicago-have all released guidance since 2023. But they don’t agree. And that’s the problem.

MLA requires you to include the exact prompt you used. For example:
- "Explain the causes of the French Revolution in three sentences." prompt. ChatGPT, GPT-4 version, OpenAI, 15 Mar. 2023, chat.openai.com/chat.
This makes your process transparent. But it also bloats your Works Cited. UC Berkeley writing instructors reported student papers grew 17% longer just from including prompts. And if someone else tries to replicate your result? They’ll get a different answer. AI outputs vary from run to run, even when the prompt is word-for-word identical.
APA treats AI like a software program. You cite the company, the tool, and the version:
- OpenAI. (2023). ChatGPT (Feb 13 version) [Large language model]. https://chat.openai.com
In-text: (OpenAI, 2023). It’s cleaner than MLA, but it still implies the AI output is a stable, citable product. It’s not. The same prompt on March 1 will give you a different response than on March 15.
Chicago took the most radical stance: don’t include AI in your bibliography at all. Treat it like a conversation you had with a colleague. Just note it in a footnote:
- Text generated by ChatGPT, OpenAI, March 7, 2023, https://chat.openai.com/chat.
Chicago’s reasoning? AI outputs aren’t retrievable. You can’t go back and check them. But in December 2023, OpenAI launched shareable chat links for enterprise users-something that might change Chicago’s mind in 2024.
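If you document more than a couple of AI interactions per project, it can help to record the raw details once and render each style from them. Here is a minimal sketch in Python; the AIInteraction record, its field names, and the helper functions are our own invention, and the format strings simply mirror the three examples above rather than any official template:

```python
from dataclasses import dataclass

@dataclass
class AIInteraction:
    """One AI conversation you want to document. Field names are illustrative."""
    prompt: str
    tool: str      # e.g. "ChatGPT"
    version: str   # e.g. "GPT-4"
    company: str   # e.g. "OpenAI"
    date: str      # pre-formatted per style guide ("15 Mar. 2023" vs "March 7, 2023")
    url: str       # e.g. "chat.openai.com/chat"

def mla(c: AIInteraction) -> str:
    # MLA: quoted prompt first, then tool, version, company, date, bare URL.
    return (f'"{c.prompt}" prompt. {c.tool}, {c.version} version, '
            f'{c.company}, {c.date}, {c.url}.')

def apa(c: AIInteraction) -> str:
    # APA: the company is the author; the tool is cited like software.
    year = c.date.split()[-1]  # assumes the date string ends with the year
    return (f'{c.company}. ({year}). {c.tool} ({c.version} version) '
            f'[Large language model]. https://{c.url}')

def chicago_footnote(c: AIInteraction) -> str:
    # Chicago: a footnote only; the AI never appears in the bibliography.
    return f'Text generated by {c.tool}, {c.company}, {c.date}, https://{c.url}.'

example = AIInteraction(
    prompt="Explain the causes of the French Revolution in three sentences.",
    tool="ChatGPT", version="GPT-4", company="OpenAI",
    date="15 Mar. 2023", url="chat.openai.com/chat")
print(mla(example))
```

None of this makes the output citable as fact, of course; it only keeps the record-keeping consistent if your instructor or journal asks for one of these formats.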
The Only Safe Way to Use AI in Research
There’s one rule that all experts agree on: Never cite AI as the source of your facts. Always trace the claim back to the original source.

Here’s how to do it, step by step; a minimal code sketch of the same workflow appears after the list:
- Treat AI output as a draft. If ChatGPT says, "The Treaty of Versailles caused World War II," don’t accept it. Look up the treaty yourself. It didn’t cause WWII-it sowed resentment that contributed to it.
- Find the real source. Use Google Scholar, JSTOR, or your university library. Find the book, article, or primary document that actually supports the claim.
- Cite the real source. Use MLA, APA, or Chicago to cite the original author, not the AI.
- If you used AI as a tool, note it. Example: "Interview questions were generated using ChatGPT (GPT-4, OpenAI) with the prompt: 'Generate 10 open-ended questions about healthcare access in rural communities.'"
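Here is the promised sketch, in Python. It is illustrative only: the Claim record and the citable() check are hypothetical names, and the point is simply that no claim without a verified original source should survive into your draft:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A factual claim harvested from an AI draft. All names are hypothetical."""
    text: str
    ai_origin: str                       # which tool and prompt produced it
    verified_source: str | None = None   # the real book/article/DOI, once located

def citable(claims: list[Claim]) -> list[Claim]:
    """Return only claims traced to a real source; flag the rest for follow-up."""
    for c in claims:
        if c.verified_source is None:
            print(f"VERIFY BEFORE CITING: {c.text!r} (from {c.ai_origin})")
    return [c for c in claims if c.verified_source is not None]

claims = [
    Claim("The Treaty of Versailles sowed resentment that contributed to WWII.",
          ai_origin="ChatGPT, prompt about WWII causes",
          verified_source="<the actual history monograph or article you found>"),
    Claim("Some unchecked statistic.", ai_origin="ChatGPT, same session"),
]
draft_ready = citable(claims)  # only the first claim survives
```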
Stanford’s Institute for Human-Centered AI (HAI) tested this method in pilot courses. Students who followed this verify-then-cite process cut citation errors by 73%. That’s not magic. That’s basic research hygiene.
What Happens When You Don’t Verify
The consequences aren’t theoretical. In 2023, a law student submitted a brief that included four fake court cases invented by ChatGPT. The judge noticed. The student was suspended. The case made national news.

Another example: a biomedical researcher cited AI-generated methodology in a paper. When a colleague tried to replicate the experiment using the same prompt, the AI gave a completely different answer. The paper was retracted. The researcher lost funding.
A survey by the Council of Writing Program Administrators found that 47% of instructors had caught students including AI-generated fake citations in their reference lists. That’s not laziness. That’s ignorance. And it’s getting worse. In December 2023, 61% of undergraduates admitted to using AI-generated content without proper citation-mostly because their schools hadn’t given clear rules.
What’s Changing in 2024 and Beyond
The academic world is scrambling to keep up. In November 2023, the International Committee of Medical Journal Editors (ICMJE) declared that AI tools can’t be listed as authors. All AI-generated text must be disclosed with full prompt details.

OpenAI’s shareable chat links are a game-changer-if they become standard. Right now, only enterprise users can access them. But if every AI tool starts offering permanent, shareable conversation links, citation styles might finally have something stable to point to.
Meanwhile, Crossref, the organization that issues DOIs for academic papers, is building a system to assign unique identifiers to AI-generated content. Pilot testing begins in Q2 2024. If it works, we might see something like a DOI for AI outputs: 10.1234/ai-chatgpt-2024-03-15-abc123.
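Crossref hasn’t published the scheme, so the following Python sketch is pure speculation: it only shows how an identifier shaped like the example above could be derived from the tool name, the date, and a short hash of the output text, which would at least let readers check that a saved output matches its identifier:

```python
import hashlib
from datetime import date

def ai_output_id(tool: str, day: date, output_text: str) -> str:
    """Speculative: a DOI-like ID built from tool, date, and a content hash.
    Crossref's real pilot scheme may look nothing like this."""
    digest = hashlib.sha256(output_text.encode("utf-8")).hexdigest()[:6]
    return f"10.1234/ai-{tool.lower()}-{day.isoformat()}-{digest}"

print(ai_output_id("ChatGPT", date(2024, 3, 15), "the exact AI output text"))
# -> something like 10.1234/ai-chatgpt-2024-03-15-4a1f09
```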
But here’s the hard truth: no citation format can fix the core problem. AI doesn’t know what it’s saying. It doesn’t understand truth. It doesn’t have sources. It’s a mirror that reflects the internet’s noise.
What You Should Do Right Now
If you’re using AI in your work, here’s your action plan:

- Never cite AI as a source. Only cite the original documents it references-or should have referenced.
- Document your prompts. Save screenshots or copies of your conversations. You might need them later (see the logging sketch after this list).
- Verify every fact. If it came from AI, find the book, article, or official report that backs it up.
- Ask your institution. Do they have an AI policy? If not, push for one. Most universities don’t yet.
- Teach others. If you’re a student, show your peers. If you’re a professor, build verification into your assignments.
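For the "document your prompts" step, a plain text log is often easier to search later than a folder of screenshots. A minimal sketch, assuming you keep a local JSONL file; the file name and record fields are illustrative:

```python
import json
from datetime import datetime, timezone

def log_prompt(prompt: str, response: str, tool: str,
               path: str = "ai_prompt_log.jsonl") -> None:
    """Append one timestamped prompt/response pair to a local JSONL file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,          # e.g. "ChatGPT (GPT-4, OpenAI)"
        "prompt": prompt,
        "response": response,  # save verbatim; you can't regenerate it later
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```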
The goal isn’t to ban AI. It’s to use it responsibly. AI can help you write faster, think broader, and find gaps in your research. But it can’t replace your judgment. And it can’t be trusted as a source. The only way to protect your work-and the integrity of scholarship-is to trace every claim back to something real.
Can I cite ChatGPT as a source in my academic paper?
No. Major style guides (MLA, APA, Chicago) and academic institutions agree that generative AI tools cannot be cited as sources of factual information. AI systems like ChatGPT generate plausible-sounding but often false content, including invented citations. You should only cite AI if you used it as a tool-for example, to brainstorm ideas or rephrase text-and even then, you must cite the original verified sources for any claims you make.
What’s the difference between citing AI as a tool versus a source?
Citing AI as a tool means you’re documenting how you used it in your research process-like asking it to generate interview questions or summarize a paper. You’re not claiming its output is true. Citing AI as a source means you’re treating its output as factual evidence, which is never acceptable. For example: "Interview questions were generated using ChatGPT (GPT-4, OpenAI)" is correct. "ChatGPT states that climate change is caused by solar activity" is incorrect and misleading.
Why do some style guides say to include the prompt in the citation?
MLA requires including the prompt to document exactly what you asked. Since AI outputs change with each prompt, even slight wording differences can lead to different answers. Including the prompt helps others understand what you asked and why you got the result you did. But this doesn’t make the output reliable or reproducible-it just makes your process transparent. You still need to verify every claim with real sources.
Can I use AI to help write my paper if I cite it properly?
Yes, but only if you follow strict verification rules. You can use AI to draft sections, suggest structure, or clarify complex ideas. But every factual claim, statistic, quote, or reference must be checked against original, credible sources. Then, you cite those sources-not the AI. Institutions like Stanford and MIT have shown that this dual approach reduces errors by over 70%. The AI is your assistant, not your authority.
What happens if I cite AI-generated content without verifying it?
You risk academic misconduct. In 2023, a law student was suspended after citing four fake court cases invented by ChatGPT. Journals are retracting papers where AI-generated content was presented as fact. Many universities now use AI detection tools and require disclosure statements. Even if you’re not caught, you’re spreading misinformation. Your credibility-and your institution’s-depends on accurate sourcing.
Will AI citation rules change soon?
Yes. OpenAI’s new shareable chat links for enterprise users are a major step toward verifiable AI outputs. Crossref is developing a DOI-like system for AI-generated content, with pilot testing starting in Q2 2024. If these systems become standard, citation styles may evolve to treat AI outputs more like traditional sources. But until then, the safest rule remains: never cite AI as a source. Always go back to the original document.