AI Research Tools: Learning from Wikipedia's Legacy
What Wikipedia Can Teach Us About Guiding Students to Use AI Responsibly
Exciting News! Zainetek Educational Advisors has launched The AI Chalkboard, a newsletter designed to bring you actionable tips, practical examples, and the latest updates from the world of AI in Education. Whether you’re an educator, administrator, or just curious about meaningful AI literacy, this is your go-to resource. Sign up here and join the conversation—we can’t wait to connect with you!
This week, Google announced the launch of “Deep Research,” a new addition to the evolving landscape of AI research tools. Though still in beta, Deep Research appears poised to compete with existing platforms like Perplexity and ChatGPT’s Web Search: it aims to deliver more comprehensive reports by synthesizing information from across the web and providing detailed summaries with linked references.
AI's role in research has become one of the most pressing questions in educational settings. Teachers and administrators are grappling with how to integrate these tools responsibly while maintaining academic integrity. From concerns about misinformation to opportunities for fostering critical thinking, the debate reflects the broader challenges of adapting to transformative technologies.
Deep Research’s launch provides a timely opportunity to discuss how educators might react to the growing use of AI in research. With tools like this emerging, the question is not just about their functionality but also about how we can guide students to use them responsibly. This is not a review of the tool, but rather an exploration of how to respond to students’ increasing use of AI chatbots for research purposes.
The Evolution of AI Research Tools
At first, large language models (LLMs) like ChatGPT weren’t connected to the Internet, so their information wasn’t up to date. Educators in higher education pointed this out to students, consistently flagging these systems as “stuck in time.”
Then, in 2023, OpenAI added plug-ins that allowed its LLM to connect to the Internet. Perplexity, which launched its ‘Ask’ feature late in 2022, expanded its capabilities through the following year and, as of March of this year, had over 15 million monthly active users. Now, the race to replace traditional search engines is a key aspect of commercialization for the companies behind leading frontier models, including Google.
As a result, platforms like Perplexity are regularly used as research tools and alternatives to search engines by students at all levels. A recent survey by the Digital Education Council analyzed responses from 3,839 students across bachelor’s, master’s, and doctoral programs in 16 countries. It found that 86% of students use AI in their studies, with 24% using it daily and 54% at least weekly. The most common activity among these users was searching for information, reported by 69% of respondents.
The Flaws in AI as a Research Tool
Many are aware that AI often produces inaccuracies. These inaccuracies go beyond simple misrepresentations; they include entirely fabricated explanations, or “confabulations,” as Geoffrey Hinton calls them.
Researchers and developers are working to reduce these flaws, but there’s no guarantee of success. For instance, initiatives like OpenAI’s alignment research and Google’s fact-checking algorithms aim to improve AI’s ability to verify sources and minimize confabulations. Despite these efforts, the challenge of eliminating inaccuracies entirely remains daunting. This leaves users with a predicament.
On the surface, LLMs seem to offer incredible tools for summarizing academic documents, news articles, or data—potentially saving significant research time. However, the effort required to verify each summary can negate or even exceed those time savings, especially when AI outputs include dead links or irrelevant articles.
More concerning is the likelihood that users might implicitly trust AI-generated explanations and unknowingly incorporate inaccurate information into their understanding of a concept. This issue matters because it risks undermining the development of critical thinking skills, a cornerstone of education, and could lead to a generation of students ill-equipped to discern fact from fiction in an increasingly digital world.
Wikipedia’s Journey in Education
Wikipedia launched in 2001 and quickly became a popular reference tool. By the mid-2000s, however, its open-editing model drew significant criticism from educators who questioned its reliability. High-profile incidents of misinformation—such as the infamous false accusations in John Seigenthaler’s biography—highlighted its vulnerabilities.
In response, many schools and universities prohibited students from citing Wikipedia in academic work. Middlebury College’s history department, for example, formally banned Wikipedia citations in 2007, emphasizing the importance of consulting more authoritative sources. However, this was not a blanket ban on Wikipedia; educators recognized its value as a starting point for research, provided students critically evaluated its citations and cross-referenced them with credible materials.
Over time, attitudes shifted. By 2010, Wikipedia collaborated with 10 public universities, including the University of California, Berkeley, in a 17-month pilot program called the WikiProject Public Policy Initiative. The program aimed to enhance the quality of Wikipedia pages related to public policy by involving students and professors in creating and updating content as part of their coursework.
By 2011, a Harvard Graduate School of Education report noted that educators had started to “make peace” with Wikipedia, accepting it as a reality of the modern research landscape. That acceptance, the final stage of the grieving process, is the stage I see as crucial for forward progress.
By 2015, educators were even integrating Wikipedia into curricula as a tool for fostering critical analysis. A growing body of academic research shows that creatively weaving the platform into learning activities can enhance critical thinking.
How Wikipedia Mirrors AI Tools
The arc of educators’ reactions to Wikipedia is a useful prism for considering our approach to AI, but the analogy extends further: Wikipedia and AI are strikingly similar in the kind of information they produce. Humans confabulate too, in other words (and sometimes on purpose).
This duality makes Wikipedia a useful lens for understanding how we might craft a more nuanced reaction to AI in research. Here are several key shared characteristics to consider across both platforms:
Information is often flawed or unreliable.
Information is sometimes accurate and, when it is, quite useful.
Information is presented as summaries or explanations of existing Internet content.
Summaries are accompanied by links that supposedly verify the information.
The links are often dead, unreliable, or disconnected from the summaries.
These similarities provide an opportunity to create a more nuanced approach to AI in research than the early days of Wikipedia allowed. By learning from past missteps, educators can guide students to leverage these tools effectively while addressing their limitations.
A Classroom Example
Last year, when I asked my high school English students to research apartheid in South Africa ahead of reading Born a Crime by Trevor Noah, I didn’t need to explain the safe and effective use of Wikipedia. Before starting, I asked them how we should “treat” Wikipedia in our research process.
They groaned, as if they’d heard it a thousand times.
“You can’t cite it, it’s unreliable, but you can use it as a starting point,” one student mumbled. Another added: “We’re supposed to go to the bottom and click the links to check them. Use it to start, but find credible sources to cite.”
That’s it. They got it. I sat back and let them work.
Fast-Forward to AI
Now let’s apply the same logic to AI tools in research. Instead of going through years of bans and re-bans, why not skip to the part where we teach students to use AI responsibly, as we do with Wikipedia?
Here’s how it might sound in a history classroom:
“Yes, you can use AI tools to help you with research, but only as a starting point. It’s not credible, and it makes things up all the time. But it can be useful as a summary tool to help you understand concepts and find links. Once you have the links, evaluate their credibility. If they’re credible, you can cite those sources in your research—but you cannot cite an AI summary. If I see uncited facts in your paper, you’ll lose points for the research portion of this exercise. That’s not an accusation that you copied from AI; it’s a tenet of good research. Facts need citations.”
“Oh, and pay attention to how often those links match or don’t match the summaries. That’s for your own sake, so you can navigate the world outside of school without me.”
There are also opportunities to foster critical thinking: analyzing AI outputs in an adversarial fashion, reviewing transcripts of AI research sessions as a class, and more.
Same as the Old Boss
It took over a decade for educators to see Wikipedia for what it was: a sometimes-useful starting point for research, never a source to be cited. This experience teaches us the importance of adaptability in the face of new technologies. Instead of resisting change, educators can focus on guiding students in critically evaluating sources, fostering digital literacy, and leveraging tools like AI and Wikipedia to enhance learning outcomes while mitigating their flaws. I believe AI will follow the same trajectory—not just in research, but across all areas of education.
So what should we do about AI in research?
Use it like Wikipedia.
Let me know if you disagree.
We’re giving away a free signed copy of AI in Education: A Roadmap for Teacher-Led Transformation for the Christmas holidays! If you would like to nominate an educator to be entered to win, tag them here on BlueSky or Instagram. Entries close December 20th!
Don’t forget to sign up for our weekly email newsletter here.
I don’t disagree, Mike. This post is very useful as a primer on Wikipedia, for one thing. The history you provide, the links to sources for follow-up reading, and the shift from static to dynamic tools to start an inquiry project ought to be built upon in teacher prep programs. This is a great example of the kind of information classroom teachers at the middle and secondary level can put to immediate use. What links Wikipedia and Perplexity, and makes the latter not so perplexing? Old-fashioned critical thinking.
I do have a quibble with your imaginary history teacher. While I appreciate their warning to students about the difference between simply citing a source—any source—for a public fact and citing a credible and verifiable source, I wonder why they threaten to deduct “points” from the research section of the grading rubric?
You recall Ethan Mollick’s list of things students should consider when deciding whether or not to turn to AI. The relevant “point” Mollick highlighted is “Do not use AI if your goal is to earn points.” The upshot seems to be “Have a genuine desire to learn something for yourself, not for your teacher.”
I also worry about the tension between teaching toward self-regulation/intrinsic motivation and traditional uses of points and percentages to motivate compliance behavior. What advice do you have for teachers who are following Mollick’s advice?