Digital literacy--new study out on lateral reading practices among students

This is a fascinating new study (in part from my alma mater!) pointing to students' awareness of lateral reading as the best strategy for spotting misinformation, and to their failure to actually engage in those strategies as a matter of course. The article suggests a few things related to encouraging and practicing lateral reading strategies (I'll excerpt what is, in my opinion, most relevant and useful from the discussion):

"Indeed, our students often indicated preference for lateral reading strategies as the best way to determine trustworthiness without acting on their preferences. This pattern is in keeping with prior work identifying dissociations between knowledge and/or intentions and observed behaviors (Brodsky et al., 2022, 2023). The presence of these dissociations suggests that more than awareness is needed to cultivate effective fact-checking habits. Active and habitual practice is a necessary component in developing most skills (Healy et al., 2014), and this may hold true for lateral reading as well. Greater experience with a behavior may also help narrow the “intention-behavior” gap in novices (Sheeran et al., 2017; Sheeran & Webb, 2016). Therefore, instructors may need to engage students in lateral reading as a routine part of their coursework to strengthen connections between knowledge of appropriate strategies to evaluate information, intentions to apply them, and use of the strategies when engaging with content online."

Suggestions, in addition to "active and habitual practice" of lateral reading strategies, include increasing reading literacy in general (the conjecture being that students are not going to Wikipedia and other lateral sources to double-check information because of limited reading comprehension skills) and helping students recognize and use Wikipedia as a valuable resource for lateral reading.

The full article is “Preference for and Use of Lateral Reading Strategies” by Lodhi, Brooks, Gravelle, Brodsky, Syed, and Scimeca, out in the Journal of Media Literacy Education.

One important issue seems to be that students distrust Wikipedia (a distrust often passed down from their professors and teachers), despite how much it has improved over the years. Here's an article I love to share to help understand how and why Wikipedia works: “Wikipedia Is the Last Best Place on the Internet,” by Richard Cooke, Wired, Feb. 17, 2020.

If you look up the most visited websites (obviously difficult to gauge exactly, but this seems like a fair estimate), note the one that is run by a nonprofit foundation. (Spoiler: it's Wikipedia!)

Resisting the inevitability narrative; or, it's time for nuance

Years ago, when I was just out of college, someone close to me recommended David D. Burns’s Feeling Good: The New Mood Therapy. The book, first published in 1980, effectively popularized CBT, and it remains widely recommended and used to this day.

The element of Burns’s book that I have carried with me all this time is a table listing ten types of common cognitive distortions. These include items like “all-or-nothing thinking,” overgeneralization, catastrophizing, and disqualifying the positive.

Over the years, I’ve used these categories in all aspects of my life, and often enough to help students build stronger, clearer argumentation skills in the classroom. Increasingly, I’ve seen these ten flawed ways of thinking seep their way into all kinds of discussions, but especially those that unfold online.

All-or-nothing thinking—also known as “dichotomous thinking”—is the one that seems to keep coming up around discussions having to do with technology, or questions around the hows and whys of emergent technologies, like the cluster of automation technologies often referred to as “AI.”

There’s a chapter in CODE DEPENDENT, for example, when journalist Madhumita Murgia describes software that has been trained to scan X-rays for visual patterns indicating a risk of TB in patients. These tools—“diagnostic algorithms,” which essentially scan pixels for patterns—are meant to be used alongside medical specialists (which of course is where things, as always, seem to get murky and, at worst, quite dangerous).* As such, they show great potential.

There are also positive examples of how large language models can help, in humanities-based research, for example, in picking up on clusters of words over time. If the input that goes into the LLM is accurate and useful—and that, of course, is a big “if,” as it requires transparency and expertise—then large language models can be helpful for picking up patterns across different kinds of texts or images (via pixels). They can scan databases too vast for individuals to manage without loads and loads of time and resources.

What they cannot do is think.

The all-or-nothing rhetoric around emergent technology essentially says: If you’re not with us, you’re against us. If you’re not using different forms of automation and the synthetic generation of images and words in [insert your line of work here], then you’re going to fall behind the times. Then you’re clearly technophobic. Then you’re doing a disservice to your students, your colleagues, your public, etc.

The problem with this black-and-white narrative is: (1) it assumes an inevitability to how and how much certain tools will be used and how much good they do or will eventually do, where good is productivity or helping others or some other kind of value; and (2) it leaves no room for nuanced conversations about what a specific automated tool is doing, how it is doing it, what is being gained and what is being lost in the use of the tool, etc. In other words, it does not distinguish between different kinds of tools and their uses. It closes conversations.

Last week, I tried an experiment: I took “autocorrect” off my phone. Autocorrect, of course, has been a default on iPhones for over a decade. It’s one of the oldest forms of a language model, what’s called an “n-gram” model. In a nutshell, you take a bunch of texts (the input) and the model counts how often different words appear, ranking them by frequency. As the model gets more complex, it can look at pairs of adjacent words and eventually predict what word will come next based on what most often does come next.** (This is basically what deep learning LLMs do, but with a lot more data, much better hardware, and a lot more training to shape their predictions.)
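To make that counting idea concrete, here is a minimal sketch of a bigram (two-word) predictor in Python. The toy corpus and function names are mine, for illustration only; this is not what any actual keyboard software ships, just the counting logic the n-gram description above refers to.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "bunch of texts" an n-gram model is built from.
corpus = [
    "see you at the store",
    "see you at the game",
    "meet me at the store",
]

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        next_word_counts[current][nxt] += 1

def predict_next(word):
    """Suggest the word that most often followed `word` in the corpus."""
    counts = next_word_counts.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("at"))   # -> "the"
print(predict_next("the"))  # -> "store" (seen twice, vs. "game" once)
```

Real keyboard models look at longer word histories and vastly more text, which is part of what the jump to deep learning adds.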

What I wanted to see was whether getting rid of my daily frustration of seeing the wrong words typed out by an automated system—and then needing to retype them, or apologize if I had already hit enter—would be more or less annoying than having to type out whole words on my own. And correct them, when I mistyped them.

I was also curious about cognitive load: that is, whether having to reread what I typed—however quickly and unconsciously—while using autocorrect was having an impact on the amount of effort I was putting into everyday inane messaging. Whether, or what, it was costing me in exchange for efficiency. This would be a hidden cost that I had accepted long ago, without actually pausing to think about it, in exchange for typing text messages with greater speed (at least from what I can tell). It’s also something I had no idea how to gauge.

My experiment is obviously unscientific. These were the questions I had, and I simply brought my attention to focus on a tool I had been using for a long time in order to see what would happen, how I would feel, if it were taken away.

I can’t tell yet what my take-away is with autocorrect, now that I have not been using it for a week. I am definitely out of practice and finding myself often frustrated in having to retype basic words that somehow, over the years, I have lost the facility to type. One thing I have certainly become more aware of is how much I was relying on the automation.

But I like pausing to think about something, even in this small way, that I have been taking for granted for so long and using so regularly. I like taking the time to consider whether a specific tool is serving me, or not.

It’s difficult to do this if you make broad assumptions, one way or another, about the tools that you’re using: that they are inevitable, that they are all useful or that they are useful across all kinds of contexts, that they are good for the world and a sign of progress.

Consider slowing down, parsing out what is being lost alongside what is being gained. Consider asking questions about each and every one of the tools you use in your everyday life. It may not change your desire to use them, but it will remind you that the technologies we are presented with are not foregone conclusions.

 

*Madhumita Murgia, Code Dependent: Living in the Shadow of AI, Chapter 4, “Your Health.” Henry Holt, 2024.

**Emily M. Bender and Alex Hanna explain these early LLMs in their book, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, Chapter 2, “It’s alive! The hype of thinking machines.” Harper, 2025.

Can self-driving cars "see"? And other behind-the-curtain questions...

How do self-driving cars work? Do they actually "see" the world around them?

The short answer: No.

I'm just a layperson trying to understand processes that I don't hear many experts trying to explain to other laypeople. But as far as I can understand, self-driving cars work through a combination of software and sensors.

What part of it involves AI?

Well, these systems are trained to "see" using, in part, deep neural networks. That training involves a host of low-wage workers around the world, many working for third-party companies and signing NDAs so word doesn't get out about what it is that they're doing. Most of them don't even know. And, as I learned from a chapter in Madhumita Murgia's brilliant CODE DEPENDENT, these workers are sent clips from a camera trained on the road. They then spend hours and hours (days and days, months and months, you get the point) marking objects. Identifying them on screen. Cataloging them. Labelling them (a process that often also involves online translation systems). That labelled data is then used to train systems to identify objects (an oversimplification, but at bottom the systems are finding patterns).
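To give a rough sense of what that labelling work produces, here is a small Python sketch of a hypothetical annotation record and the (region, label) pairs a training pipeline would consume. The field names, label set, and numbers are all invented for illustration; they don't come from Murgia's reporting or from any real annotation platform.

```python
from dataclasses import dataclass

# Hypothetical annotation record: one labelled object in one frame of road footage.
# Field names and labels are illustrative only.
@dataclass
class Annotation:
    frame_id: str   # which frame of the dashcam clip
    label: str      # what the worker identified ("pedestrian", "cyclist", ...)
    box: tuple      # bounding box (x, y, width, height) drawn around the object

# A few hand-labelled examples, the kind produced frame by frame, hour after hour.
annotations = [
    Annotation("frame_0001", "pedestrian", (412, 220, 40, 110)),
    Annotation("frame_0001", "stop_sign", (610, 90, 35, 35)),
    Annotation("frame_0002", "cyclist", (300, 240, 60, 120)),
]

# Training a detector means showing a network millions of (image region, label)
# pairs like these and adjusting its weights until its guesses match the workers'
# labels. Here we just assemble the pairs the way a training loop would consume them.
training_pairs = [(a.frame_id, a.box, a.label) for a in annotations]
for frame_id, box, label in training_pairs:
    print(f"{frame_id}: region {box} -> {label}")
```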

This is all to say, there is no "seeing" going on, not in the human sense. And it would be very helpful if more people who understand these processes broke them down and explained them to the public. It might help dispel some of these "artificial intelligence" myths. But, of course, if AI loses its magic, maybe we would be a bit more hesitant to rely on it across the board without intensive testing and investigation first. Without, well, critical thought.

Ask the neuroscientists in the room: how much do we even know about how the human brain works? Do you think anyone has figured out how to replicate something that is still, even with all of the progress made, pretty much unknown to us?

One final point: now is a crucial time to support the journalists who are working to understand these systems and to help the rest of us understand and talk about them more clearly. To help us dispel myths, as Murgia, for example, so keenly does.

Some excerpts from her book (published on LitHub):

"Today, real-world AI is less autonomous and more an assistive technology. Since about 2009, a boom in technical advancements has been fueled by the voluminous data generated from our intensive use of connected devices and the internet, as well as the growing power of silicon chips. In particular, this has led to the rise of a subtype of AI known as machine learning, and its descendent deep learning, methods of teaching computer software to spot statistical correlations in enormous pools of data—be they words, images, code or numbers.

One way to spot patterns is to show AI models millions of labelled examples. This method requires humans to painstakingly label all this data so they can be analyzed by computers. Without them, the algorithms that underpin self-driving cars or facial recognition remain blind. They cannot learn patterns."

What kind of writing will survive A.I.?

"Will Writing Survive A.I?" is one of the more terrible headlines I've read recently (NYT, "Will Writing Survive A.I.? The Media Startup Every Is Betting on It," May 21, 2025).

A.I. is built on people's writing. You can keep generating new content using deep learning, but if the models behind it aren't continually refreshed with high-quality, up-to-date writing, then you're going to end up stuck in a kind of stultifying time warp.

I think a more compelling question we might ask in the current moment: What kind of writing will survive A.I.? Because if there is one thing that A.I. generated content shows us, it is that not all writing is created equal. If a bot can create something presumably equal to the human version of it, is that maybe because the human version of it is vapid or stale?

Now seems like a great time to build on the work of those--educators and others--who have been asking these questions for a long time. One example: is the traditional seminar paper useful for student learning? The traditional paper emphasizes product over process, and I think many of us--especially those with backgrounds in writing pedagogy--have long resisted it for this reason. Many people have been experimenting, for years, with new ways of thinking through how to teach critical thinking in the classroom--through stages, through reading, writing, and speaking, etc.--and how to find ways for students to develop and show what they learn without overemphasizing that moment of "handing in" a final product.

The same question--What kind of writing will survive A.I.?--might be applied to other areas as well. If, as I have now heard anecdotally many times over, A.I. can generate a recommendation letter that rivals what professors typically write, then maybe the recommendation letter isn't doing the work that it once did. Maybe many recommendation letters are stale (are they closely read? are they doing what we think they ought to be doing? have they become somewhat pointless?), and maybe there are new ways to think through both the reasoning behind them, the why, and alternatives to traditional methods.

I am uncertain about many things, but whether or not writing will survive A.I. is something I have no doubt about. Because thinking will survive A.I. and writing is, among other things, a way of thinking. It is also, often, a pleasure to write, in and of itself, and then to feel like you have made into some shape the thing that was, before that, inside of you, without form.

Hannah Arendt said something related to this final point, in a 1964 interview with Günter Gaus. It's a quotation that has been taped to my wall, just beside my desk, for a few years now. It's part of the loss behind the question posed in the article headline--a loss nobody seems to want to acknowledge. That is, the sense of satisfaction that comes from generating certain kinds of writing:

"I want to understand. And when others understand in the same sense as I understood, then it gives me satisfaction, like a sense of being at home [heimat]."