by Geoffrey Hart
Previously published as: Hart, G. 2026. The dangers of artificial intelligence. https://www.worldts.com/english-writing/506/index.html
Artificial intelligence (AI) holds the promise of helping us deal with a variety of tedious and repetitive tasks, such as filling out forms, checking literature citations in a manuscript, or analyzing data. Unfortunately, many scientists, engineers, and other professionals are using it to replace humans in work that is intellectually, emotionally, and economically satisfying. This is clearly unethical: computers and their software should be our assistants, not our replacements. But careless use of AI also leads to many problems, some of which may be quite subtle and difficult to detect. Here, I’ll summarize the major problems with the “large language models” that are increasingly used for writing and editing.
AI software does not think and therefore cannot understand our writing; it only finds correlations based on how often words appear close together. Even when words are correlated (i.e., they frequently appear together), the correlation may not reflect a causal relationship. Never forget: “correlation does not imply causation”.
If you ask AI software the same question repeatedly, or ask using different words with the same meaning, you often get different answers. Thus, it’s necessary to “triangulate” by comparing two or more answers to see whether they agree. More importantly, always do a reality check by employing your skepticism and your analytical skills to ensure that you agree with the software’s conclusions.
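The triangulation idea above can be sketched in code. This is a minimal illustration, not a standard method: the normalization, the use of Python's built-in `difflib.SequenceMatcher`, and the 0.8 similarity threshold are all assumptions chosen for the example, and real answers would of course come from querying the software, not from hard-coded strings.

```python
# Sketch of "triangulation": compare several answers to the same question
# and flag disagreement. Threshold and normalization are illustrative choices.
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    """Lower-case and collapse whitespace so trivial differences don't count."""
    return " ".join(text.lower().split())

def answers_agree(answers: list[str], threshold: float = 0.8) -> bool:
    """Return True only if every pair of answers is at least `threshold` similar."""
    norm = [normalize(a) for a in answers]
    for i in range(len(norm)):
        for j in range(i + 1, len(norm)):
            if SequenceMatcher(None, norm[i], norm[j]).ratio() < threshold:
                return False
    return True

# Two rephrasings of the same claim agree; a contradictory answer does not.
a1 = "The boiling point of water at sea level is 100 degrees Celsius."
a2 = "The boiling point of water at sea level is 100 degrees celsius."
a3 = "Water boils at 50 degrees Celsius everywhere."
print(answers_agree([a1, a2]))
print(answers_agree([a1, a2, a3]))
```

A low similarity score only tells you that the answers differ; deciding which answer (if any) is correct still requires the human reality check described above.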
There are many tools for detecting plagiarism (use of another person’s writing without attribution); you’ve probably already encountered this software if you submit manuscripts to a journal publisher. To minimize the risk of plagiarism, run any AI-generated text through one of these tools, or search for passages from it using Google or another search engine.
One problem you’ll encounter is that there are relatively few ways to accurately and efficiently describe specific research methods, which means that everyone ends up using the same words to describe those methods. Plagiarism software doesn’t understand this, and often flags these words as plagiarized. You may need to work around this problem by speaking with a journal’s editor.
When AI software cannot find what you’re looking for, it often invents “facts”. This is particularly common for literature citations, which are easy but time-consuming to check. It’s harder and takes much more work to find instances of hallucination in the Results, Discussion, and Conclusions of a journal article. Always rigorously review any results, interpretations, and conclusions provided by AI software.
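Citation checking, mentioned above as “easy but time-consuming”, is one step that lends itself to partial automation. The sketch below pulls DOI-like strings out of a reference list so that each one can be looked up by hand (for example via https://doi.org/). The regular expression is a common approximation of DOI syntax rather than the full grammar, and the sample references are invented purely for illustration.

```python
# Sketch: extract DOI-like strings from a reference list for manual checking.
# Finding a DOI does not prove the citation is real or correctly attributed;
# each one must still be resolved and compared against the cited paper.
import re

DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+")

def extract_dois(text: str) -> list[str]:
    """Return DOI-like strings found in the text, with trailing punctuation trimmed."""
    return [m.rstrip(".,;)") for m in DOI_PATTERN.findall(text)]

references = """
Smith, J. 2024. An example paper. Journal of Examples. doi:10.1234/example.2024.001.
Doe, A. 2023. Another example. https://doi.org/10.5678/abcd.efgh,
"""
for doi in extract_dois(references):
    print(doi)
```

A reference that yields no DOI, or a DOI that fails to resolve, is a strong hint of a hallucinated citation, but absence of a DOI alone proves nothing: many legitimate older works lack one.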
It’s difficult and sometimes impossible to learn the logic AI used to reach a conclusion. This remains true even with software that attempts to show the chain of logic that it used to reach a conclusion. Without knowing the logic, it’s difficult to know when an AI solution can and cannot be trusted.
Even today, the input data used to train large language models is usually not screened for quality, so unreliable information gets the same weight as reliable information. This undermines the software’s ability to find correct information and to interpret and use that information correctly. As the saying goes, “garbage in, garbage out”.
AI can only repeat information that has already appeared in its database. It cannot detect possible new interpretations that would lead to breakthroughs. Future AI software may learn how to examine the current state of knowledge, identify gaps, and provide suggestions on how to fill those gaps. But current software cannot do any of these things.
For now, and for a long time to come, AI software will not be trustworthy. If you use it, remember that it’s not a substitute for human insight and that it’s your responsibility to rigorously scrutinize its findings. But also remember the ethical implications of its use: the more often you use the software, the more free (unpaid) training you give it, and the sooner it will acquire the “skills” required to replace your colleagues—or you.
©2004–2026 Geoffrey Hart. All rights reserved.