Using AI Before It Uses Us
On the fleeting moment in which large language models still help more than they harm
Mainstream coverage of AI tends toward pessimism. Its seductive power is widely acknowledged, but the prevailing tone of commentary is one of apprehension, foreboding, and even alarm.
At best, there is a bubble that will burst, generating unforeseeable wreckage. At the near-worst, the machines will stultify our minds and pillage our wealth. At the very worst, AI will result in the extinction of the human race. My purpose here is not to contest these scenarios or to soften their force. The hazards of AI need no rehearsal here—they are widely catalogued, and I take them as the dark backdrop against which my own observations are set.
What I offer here are reflections on the brief interval of AI history we presently inhabit—an interval that may not endure. For the moment, at least, there are striking things these systems can do (my experience is limited to ChatGPT and Claude), and some of them are genuine gains in an era dominated by alternative media and podcaster journalism. Currently, AI systems are able to help us parse the dizzying landscape of internet “news,” if we use them intelligently. They also offer powerful tools for writers and researchers who were previously limited to tools built into online databases and search engines, and before that, to the byzantine catacombs of university libraries.
Before going further, let me offer some reassurance: I am well aware of the current state of AI awfulness. AI slop is turning the internet into something even more garish and insidious than it has always been. By flooding it with frictionless prose and content devoid of thought, AI-generated articles and advertising are actively degrading our habits of attention and our sense of what counts as information. The writing profession is littered with opportunities to use this godless technology to generate content that is both plausible and vacuous. Vulnerable users are being drawn into forms of engagement with these systems that foster a new species of digital dependency—one that carries real psychological costs.
True as these things are—and this is but a small sample of the problems this new technology is generating—I am not ready to say AI is simply evil and to be avoided entirely. At least not yet. As it turns out, some current AI chatbots make excellent servants. They can be tireless and meticulous research assistants—a benefit worth embracing, so long as one approaches these systems with care and a discerning skepticism. ChatGPT, for example, is astonishingly sophisticated as a library rat. Just as Google and other search engines revolutionized the means of seeking information, large language models mark a significant advance along the same trajectory. I tend to treat ChatGPT as little more than a highly advanced search engine. I give it a prompt—carefully worded to avoid prejudicing its results—and I instruct it to return only results with precisely annotated sources. Both ChatGPT and Claude are already tuned to filter sources according to strict journalistic and academic standards. This isn’t perfect, but it’s a very good start.
These bots can quickly generate source material that is both highly relevant to the topic I am researching and easily verifiable. This highlights a crucial precaution for anyone who chooses to engage these tools: always check behind the bot. These systems make mistakes, so take nothing for granted on a first pass. You must pester them with little questions that corner them into either confirming their results or confessing an error. There is a skill to be learned in both formulating prompts and interrogating the results. One important tactic is to instruct the LLM to list its sources. This lets you see each source, assess it directly, and check that the AI’s interpretation of it is accurate. Sometimes it isn’t.
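For readers who would rather script this discipline than type it into a chat window, here is the shape of what I mean. This is only a sketch under assumptions of my own choosing: it uses the OpenAI Python client (installed with pip install openai) and an API key in the environment, and the model name, the sample question, and the exact wording of the instructions are illustrative placeholders, not a recipe.

    # A minimal sketch of the prompt discipline described above, not my
    # actual workflow. Assumes the OpenAI Python client and an
    # OPENAI_API_KEY in the environment; the model name, question, and
    # instructions below are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "What does recent peer-reviewed research say about microplastics in drinking water?"

    # Ask for sources rather than conclusions, in neutral wording.
    prompt = (
        question
        + "\n\nDo not editorialize. Return a numbered list of sources only: "
        "author, title, publication, date, and a URL or DOI for each. "
        "Include only peer-reviewed research or reporting from outlets with "
        "published editorial standards. If you are unsure whether a source "
        "meets that bar, say so."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # substitute whatever current model you have access to
        messages=[{"role": "user", "content": prompt}],
    )

    # Print the list, then verify every citation by hand before relying on it.
    print(response.choices[0].message.content)

The point is not the snippet itself but the posture it encodes: ask for sources rather than conclusions, and treat whatever comes back as leads to be verified, not findings.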
In addition to the help this kind of careful AI usage can offer to citizenly news-reading, it can also serve professional researchers by dramatically accelerating and refining internet and database searches. In the past, one could Google a topic of interest or dig into specialist services such as JSTOR, Scopus, ScienceDirect, or Google Scholar. Now, a good AI bot can reach into many of these same repositories—academic and journalistic alike—and sweep across them in seconds. And it does this in a discursive fashion, quickly learning what you’re looking for and offering suggestions that help you refine your inquiries.
Judging the reliability of sources is a distinct skill, and one relevant to any reader, whether you’re using LLMs or perusing the newsstand at a local bookstore. Learning the basic tenets of proper journalism is more necessary now than it used to be. There is quite a lot of text out there these days that masquerades as journalism and expert opinion but falls far short of basic standards. As of this writing (late 2025), ChatGPT and Claude are tuned to help users distinguish authentic journalism from its imitators and peer-reviewed research from unvetted speculation. Still, one should always repeat the rituals that confirm source quality.
The point here is that not only can the professional writer and researcher work much more quickly and efficiently through a judicious use of AI; the educated and conscientious layman also now has a much easier time getting precise and reliable information out of the wild and woolly internet.
The truth of what I’m saying here is premised on the reliability of these AI systems. If that were to be compromised, then all bets would be off.
Unfortunately, such compromise is not altogether unlikely. Earlier this year Elon Musk’s xAI chatbot, Grok, exhibited astonishing behavior. Users had complained that many of its responses were “woke,” and so Musk spent a weekend tweaking Grok’s innards to better reflect his anti-woke values. A few days later, when Grok degenerated into full Hitler mode, there were sufficient complaints to prompt xAI to dial back Grok’s new commitment to Musk’s sensibilities.
This episode is instructive. Erratic and cartoonishly ideological minds such as Musk’s will make a hash of tuning chatbots. It is and will continue to be fairly easy to see when this happens. More sophisticated ideological minds, however, will inevitably take a more nuanced and insidious approach. We may be living in a brief golden age of AI, before these systems are reshaped by those with the means and the motive to steer them—and us—toward preferred ends. Given what we’ve seen in the tech world in recent years, this seems likely, particularly if policymakers aligned with the industry continue to oppose regulation of AI development.
For now, though, we inhabit a moment in which these tools can genuinely assist us, if we approach them with discipline and a healthy suspicion. The same technologies that inundate us with synthetic nonsense can also help thoughtful users cut through it—and through the bad-faith reasoning and misinformation that have now flooded the public square. The tensions inherent in this technology, however, will not disappear, and the balance may well shift for the worse once commercial, political, or ideological interests learn how to bend these systems more fully to their purposes. But while the present window remains open, we should not be afraid to avail ourselves of a portion of what AI offers: improved research efficiency, clearer and more verifiable access to information, and a digital experience that is a bit easier to navigate than before.


