Image by Hotpot.ai
By Jill Maschio, Ph.D.
As an educator, I have pondered for quite some time how to help my students use large language models (chatbots) effectively for learning. The following is a document I share with my students. You may use or revise this document to suit your students’ needs. Please consider a small donation.
The use of large language models (LLMs) through chatbots has entered the field of education. The questions educators are asking are essential and range from plagiarism to curriculum development. The following provides students with guidance on using LLMs effectively in their education. The document starts with security risks, then potential problems with AI for learning, potential errors in AI-generated outputs, and, last, psychological harm. The final section offers tips for reducing security risks, using AI effectively to enhance learning, and avoiding psychological harm.
Security Risks
There are several security risks associated with using AI. The following provides an overview of some of the risks.
- Collection of user information. Companies that develop chatbots collect user information and use it to train their bots. People agree to that in the Terms and Conditions when they create a new account.
- Leaked information and scams. Chatbots can leak confidential information, and attackers can inject malicious outputs. Fake chatbots, and real ones that have been hijacked, can be used for phishing scams that trick users into sharing credentials or clicking malicious links.
- Misinformation. If a chatbot is trained on “bad” data, it can spread misinformation or harmful biases.
- Breaches. User information stored in the cloud or on the servers that host chatbots is vulnerable to data breaches.
Learning
Cognitive offloading
Cognitive Load Theory (CLT) posits that human working memory has a limited capacity, so overloading it can impede learning (Sweller, 1988). According to an MIT study (2025), student participants who used an LLM before writing an assignment showed less neural connectivity, as demonstrated by EEG, than those who first thought about the content and then used an LLM. Judges also scored the LLM-first group’s content higher. The authors noted, “The convenience of instant answers that LLMs provide can encourage passive consumption of information, which may lead to superficial engagement, weakened critical thinking skills, less deep understanding of the materials, and less long-term memory formation” (Kosmyna et al., 2025, p. 12). LLMs may reduce mental effort, but at the cost of learning information in depth (Stadler et al., 2024).
Hivemind
Hivemind refers to LLMs providing users with similar output or content. The similarity arises because computer scientists train chatbots on similar data sets. According to a study by Jiang et al. (2025), LLMs fail to produce diverse, open-ended information, as evidenced by intra-model repetition and inter-model homogeneity across 70 LLMs. Intra-model repetition is when a single LLM fails to produce diverse outputs; inter-model homogeneity is when different LLMs produce similar outputs. Using pairwise statistical analysis of the outputs alongside qualitative analysis, the researchers reported that the LLMs produced highly repetitive outputs, sometimes with overlapping phrases, and that responses were similar across models. These findings have led researchers to question the homogenization effect on human thought and conformity, and research has shown that the diversity of human writing styles has declined since the introduction of LLMs (Sourati et al., 2025).
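To make the idea of inter-model homogeneity concrete, here is a minimal Python sketch of the general approach. This is not Jiang et al.’s actual methodology; the chatbot answers below are invented, and simple TF-IDF cosine similarity stands in for the study’s statistics:

```python
# A minimal sketch, for illustration only: score how similar different
# chatbots' answers to the same prompt are. The outputs are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical answers from three different chatbots to one prompt.
outputs = {
    "model_a": "Renewable energy adoption is rising because costs keep falling.",
    "model_b": "Adoption of renewable energy is growing as costs continue to fall.",
    "model_c": "Falling costs are driving the rise in renewable energy adoption.",
}

names = list(outputs)
vectors = TfidfVectorizer().fit_transform(outputs.values())  # TF-IDF features
similarity = cosine_similarity(vectors)  # pairwise cosine-similarity matrix

# Scores near 1.0 across many prompts would indicate the kind of
# inter-model homogeneity the study describes.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: {similarity[i, j]:.2f}")
```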
Critical thinking
Critical thinking is a deliberate process of applying scientific thought to what we read or see before forming a conclusion (Ennis, 1964). Critical thinking can also be applied to our own thoughts. When information and answers are readily available, critical thinking may be reduced or omitted from the cognitive process.
Synthesis of information
LLMs have an interface that presents a synthesis of information. The synthesis is already complete by the time the end-user reads it, eliminating the need for the user to perform that cognitive task. This may ease the cognitive burden or complexity for the user, but there may be a cost: information that is never synthesized may never make it into long-term memory. Students who do not use LLMs carefully for learning may also offload other cognitive processes, such as metacognition, understanding, critical thinking, and deep engagement with content.
Echo Chamber Effect
According to Kosmyna et al. (2025), an echo chamber is “where a person becomes trapped within information environments that reinforce existing beliefs while filtering out contradictory evidence” (p. 21). Voices may be excluded from LLM output because the models’ algorithms are designed to predict the most probable “token” in a sequence. Much like social media, the information LLMs provide may not reflect the expressions or opinions of the general public, but may instead reinforce similar narratives (Avin et al., 2024). This can limit a student’s ability to learn about alternative and diverse perspectives.
In addition, when an LLM produces a synthesis of information, it is easy to accept the output at face value and never read the original author’s work. This removes the user from the cognitive process of synthesizing information and may prevent the deeper understanding of the content that is crucial for development.
Content Errors
- Hallucinations. AI hallucinations occur when a model produces outputs that are false, nonsensical, or not grounded in reality. According to Visual Capitalist’s website, hallucination rates vary depending on the LLM:
Perplexity: 37% (lowest)
Copilot: 40%
ChatGPT: 67%
Gemini: 76%
Grok 2 and Grok 3: 77% and 94%, respectively (highest)
- Bias. Bias can occur for the following reasons (Chapman University, 2025): (a) Data collection: if the training data is not diverse or is collected in a biased way, the resulting outputs can reflect those biases. (b) Labeling: the labeling process for training data can be biased if human annotators interpret the same data differently. (c) Imbalance: if the training data is imbalanced, the LLM may produce biased outputs, such as favoring majority-group predictions over others. (d) Deployment: biases can also arise when the LLM is deployed for real-world applications. These biases can take the form of selection bias, confirmation bias, stereotype threat, and out-group homogeneity bias.
- Incorrect answers. According to the BBC and the EBU (European Broadcasting Union), AI queries yield erroneous answers 45% of the time across ChatGPT, MS Copilot, Gemini, and Perplexity. The AI was “dangerously self-confident,” according to the author of the article, Josh Bersin (2025). The chatbots answered incorrectly about who the Pope is, who the Chancellor of Germany is, and other news-related questions.
Psychological Harm
- Addiction. Technology addiction has been a growing problem, and rates of depression and other mental illness have grown along with it.
- Sycophancy. LLMs have been designed to agree with humans and to use flattery. Through such compliments, they can steer human thinking into a narrow channel.
Solutions and Effective Prompting
Security
- Turn off the settings that allow the chatbot to collect your chats and use them for training.
- Don’t provide a chatbot with any personal or confidential information.
- Report suspicious content from AI.
- Review privacy policies.
Learning
Learn and use effective prompts to prevent cognitive offloading and hivemind effects. One way is to give the LLM specific guidelines.
1. The Prompt
The prompt you use affects the outcome. To help ensure learning and avoid potential pitfalls of AI, do the following:
Ask it to provide you with information that (a) includes three alternative perspectives; (b) comes from credible, scholarly sources, from early contributors to the field and/or from the literature or science; (c) cites sources in your required writing format (APA, MLA, etc.); (d) is non-opinionated; (e) includes two critiques of the contributing authors or the science; and (f) is written in paragraphs rather than bullets. A sample prompt follows.
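For illustration only, here is one way such a prompt might read; the topic (Cognitive Load Theory) and the citation format (APA) are placeholders to swap for your own:

“Explain the origins of Cognitive Load Theory. Provide three alternative perspectives, drawing only on credible, scholarly sources from early contributors to the field and the research literature. Cite all sources in APA format. Keep the information non-opinionated, include two critiques of the contributing authors or the science, and write in paragraphs rather than bullet points.”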
2. Dive Deeper
- Question Prompts. Use the information the LLM provides to dive deeper to foster your learning. One way is to think of your own questions about the content generated by AI and prompt it with them. This will take a learner beyond basic knowledge or the synthesis of information generated by the LLM.
- Metacognition. Metacognition is the act of reflecting on and being conscious of what you know and what you don’t. Ask yourself reflective questions, such as: What did I learn, and what do I still need to understand?
- Read. Read the information the LLM provided as well as the sources it offered. After reading the original work, assimilate what you read into your own thoughts and words. In doing this, you will cite the original work rather than the LLM, because your thoughts will come from reading and synthesizing that work.
- Accelerate your Knowledge. It’s one thing to learn basic information, but students should strive to go beyond the basics to become productive, educated citizens and employees. Students can use LLMs to cultivate their knowledge, understanding, and intelligence. This can be done by taking control of one’s own learning and not side-stepping the cognitive processes that AI can support.
Psychological Harm
As an intelligent agent, artificial intelligence does not have a will or the ability to think. It doesn’t have human emotions or experiences. It doesn’t have a “moral compass”. It is not conscious. It runs on algorithms. It cannot “be a friend” or relate to human experiences. Anthropomorphizing an intelligent agent, and perceiving it as superior to humans when it is not, can be dangerous. It is a machine.
Know that AI is developed to hold and initiate conversations with humans. It can generate content that makes it sound empathetic and caring. Because AI can sound as though it connects with humans and is conscious, it can influence human psychology. It does not know you, and it has not had any real-life experiences, so people should not treat what it generates as authoritative for their lives, thoughts, and behavior. Because of these issues, I recommend students use caution and common sense when using AI chatbots.
Avoid over-reliance on it. When using it, avoid letting it become an external guide to your cognition (your thoughts). Learn to recognize when you are relying on it too much or accepting too much of what it says as truth. As with other forms of technology, there is an addictive aspect. If you believe you are exhibiting addictive behavior toward an LLM, it is advisable to talk with a professional.
There is a fundamental principle that AI should not harm humans, but governments and AI developers are not ensuring it is upheld. Until that occurs, my philosophy is to view AI as a tool that can assist your learning. I discourage students from outsourcing their learning to a machine.
Furthermore, I caution you about the use of LLMs (chatbots) in your writing until the pending copyright lawsuits are settled. Consider the argument that if ChatGPT’s developers used information from the World Wide Web without authors’ permission and without citing sources, that is a form of plagiarism. This is true even when done in the name of science, which is the reasoning Sam Altman has given for scraping the web for ChatGPT’s training data. Therefore, using the output from certain chatbots directly in your paper increases the chances that your paper contains plagiarized material. If your institution does not incorporate an AI research tool into its learning management system, I recommend Liner AI for academic research and writing. In scholar mode, this LLM uses sources like arXiv, PubMed, and Nature, and it prioritizes reliable, human-verified, scholarly, peer-reviewed content rather than raw web pages.
References
Avin, C., Daltrophe, H., & Lotker, Z. (2024). On the impossibility of breaking the echo chamber effect in social media using regulation. Scientific Reports, 14, 1107. https://doi.org/10.1038/s41598-023-50850-6
Chapman University. (2025). Bias in AI. https://www.chapman.edu/ai/bias-in-ai.aspx
Ennis, R. H. (1964). A definition of critical thinking. The Reading Teacher, 17(8), 599–612. http://www.jstor.org/stable/20197828
Bersin, J. (2025, October 26). BBC finds that 45% of AI queries produce erroneous answers. JoshBersin.com. https://joshbersin.com/2025/10/bbc-finds-that-45-of-ai-queries-produce-erroneous-answers/
Sourati, Z., Karimi-Malekabadi, F., Ozcan, M., McDaniel, C., Ziabari, A., Trager, J., Tak, A., Chen, M., Morstatter, F., & Dehghani, M. (2025). The shrinking landscape of linguistic diversity in the age of large language models (arXiv:2502.11266). https://doi.org/10.48550/arXiv.2502.11266
Stadler, M., Bannert, M., & Sailer, M. (2024). Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry. Computers in Human Behavior, 160, 108386. https://doi.org/10.1016/j.chb.2024.108386
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. https://doi.org/10.1207/s15516709cog1202_4
UNESCO. (n.d.). Ethics of artificial intelligence. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
Visual Capitalist. (n.d.). Hallucination rates of AI models. https://www.visualcapitalist.com/
