Years ago, when I was just out of college, someone close to me recommended David D. Burns’s Feeling Good: The New Mood Therapy. The book, first published in 1980, effectively popularized cognitive behavioral therapy (CBT), and it remains widely recommended and used to this day.
The element of Burns’s book that I have carried with me all this time is a table listing ten types of common cognitive distortions. These include items like “all-or-nothing thinking,” overgeneralization, catastrophizing, and disqualifying the positive.
Over the years, I’ve used these categories in all aspects of my life, and often enough to help students build stronger, clearer argumentation skills in the classroom. Increasingly, I’ve seen these ten flawed ways of thinking seep into all kinds of discussions, but especially those that unfold online.
All-or-nothing thinking—also known as “dichotomous thinking”—is the one that seems to keep coming up around discussions having to do with technology, or questions around the hows and whys of emergent technologies, like the cluster of automation technologies often referred to as “AI.”
There’s a chapter in CODE DEPENDENT, for example, in which journalist Madhumita Murgia describes software that has been trained to scan X-rays for visual patterns indicating a risk of TB in patients. These tools—“diagnostic algorithms,” which essentially scan pixels for patterns—are meant to be used alongside medical specialists (which of course is where things, as always, seem to get murky and, at worst, quite dangerous).* Used that way, they show great potential.
There are also positive examples of how large language models can help in humanities research: in picking up on clusters of words over time, for example. If the input that goes into the LLM is accurate and useful—and that, of course, is a big “if,” as it requires transparency and expertise—then large language models can be helpful for picking up patterns across different kinds of texts or images (via pixels). They can scan databases too vast for individuals to manage without loads and loads of time and resources.
What they cannot do is think.
The all-or-nothing rhetoric around emergent technology essentially says: If you’re not with us, you’re against us. If you’re not using different forms of automation and the synthetic generation of images and words in [insert your line of work here], then you’re going to fall behind the times. Then you’re clearly technophobic. Then you’re doing a disservice to your students, your colleagues, your public, etc.
The problem with this black-and-white narrative is twofold: (1) it assumes an inevitability about how, and how much, certain tools will be used, and about how much good they do or will eventually do, where “good” means productivity, helping others, or some other kind of value; and (2) it leaves no room for nuanced conversations about what a specific automated tool is doing, how it is doing it, what is being gained and what is being lost in its use, and so on. In other words, it does not distinguish between different kinds of tools and their uses. It closes conversations.
Last week, I tried an experiment: I took “autocorrect” off my phone. Autocorrect, of course, has been a default on iPhones for over a decade. It’s built on one of the oldest forms of language model, what’s called an “n-gram” model. In a nutshell, you take a bunch of texts (the input) and the model counts how often each word appears, ranking words by frequency. As the model gets more complex, it looks at pairs of adjacent words and eventually predicts what word will come next based on which word most often does come next.** (This is basically what deep-learning LLMs do, but with a lot more data, much better hardware, and a lot more training to shape their predictions.)
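To make the counting concrete, here is a minimal sketch of a bigram model (the simplest useful n-gram) in Python. The tiny corpus is made up for illustration, and real autocorrect layers on much more (spelling correction, far larger training data), but the prediction step is essentially this kind of counting and lookup.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "bunch of texts" the model learns from.
# (Entirely made up for illustration.)
corpus = "see you soon . see you later . talk to you soon . talk soon"
tokens = corpus.split()

# Count how often each word is followed by each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(tokens, tokens[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Suggest the word that most often followed `word` in the corpus."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("you"))  # prints "soon": it follows "you" twice, "later" only once
```

A suggestion bar built on this idea would simply show the few most common followers of the last word typed.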
What I wanted to see was whether getting rid of my daily frustration of seeing the wrong words typed out by an automated system—and then needing to retype them, or apologize if I had already hit enter—would be more or less annoying than having to type out whole words on my own. And correct them, when I mistyped them.
I was also curious about cognitive load: that is, whether having to reread what I typed while using autocorrect—however quickly and unconsciously—was having an impact on the amount of effort I was putting into everyday, inane messaging. Whether, or what, it was costing me in exchange for efficiency. This would be a hidden cost that I had accepted long ago, without actually pausing to think about it, in exchange for typing text messages with greater speed (at least from what I can tell). It’s also something I had no idea how to gauge.
My experiment is obviously unscientific. These were the questions I had, and I simply focused my attention on a tool I had been using for a long time, in order to see what would happen, and how I would feel, if it were taken away.
I can’t tell yet what my takeaway is with autocorrect, now that I have gone a week without it. I am definitely out of practice, and I often find myself frustrated at having to retype basic words that somehow, over the years, I have lost the facility to type. One thing I have certainly become more aware of is how much I was relying on the automation.
But I like pausing to think about something, even in this small way, that I have been taking for granted for so long and using so regularly. I like taking the time to consider whether a specific tool is serving me, or not.
It’s difficult to do this if you make broad assumptions, one way or another, about the tools that you’re using: that they are inevitable, that they are all useful or that they are useful across all kinds of contexts, that they are good for the world and a sign of progress.
Consider slowing down, parsing out what is being lost alongside what is being gained. Consider asking questions about each and every one of the tools you use in your everyday life. It may not change your desire to use them, but it will remind you that the technologies we are presented with are not foregone conclusions.
*Madhumita Murgia, Code Dependent: Living in the Shadow of AI, Chapter 4, “Your Health.” Henry Holt, 2024.
**Emily M. Bender and Alex Hanna explain these early language models in their book, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, Chapter 2, “It’s alive! The hype of thinking machines.” Harper, 2025.