In Which I Play the Prediction Game
Not being a person tightly tied to the Gregorian calendar, I have little use for New Year’s resolutions, predictions, and the like. Yet here I am, in the uncomfortable position of offering a prediction about a trend that could reach a tipping point as early as 2030.
First, some relevant background and bona fides.
- For as long as I can remember, I’ve been an eager and voracious reader.
- Writing was harder for me initially, but once I had a couple of epiphanies, I started learning more efficiently and enjoyed the process much more. This began in high school. I’ve considered myself a writer and editor (in varying capacities) ever since. I enjoy both roles immensely and am still learning, which is its own joy.
- My graduate training focused on perceiving, learning, and remembering—which are now considered part of cognitive psychology. At that time, brain-scanning technologies increased the popularity of neural models that sought to identify where and how information is “processed” in the brain (e.g., models of parallel versus serial processing, distributed processing, etc.). Although I took no neuroscience courses in graduate school, I was and remain deeply interested in it.
- I taught developmental psychology at my most recent teaching job. Because I hadn’t taken any grad-level courses in this subfield, as part of my course prep I regularly checked for new findings in the field, especially in developmental neuroscience.
Also relevant: I am not a techno-optimist, but neither am I a pessimist. I consider myself justifiably cynical about the statements and claims made by many “experts” and others with skin in this game.
My prediction is this: if large language models (LLMs) continue to be used at current rates in the US, the country will become a “post-thinking” society.
Credit for the phrase goes to Peter Saint-Andre, who made the observation in a recent conversation. It should be clear why I can’t offer a more precise timeframe: use of LLMs appears to be increasing even as criticism of their output mounts, and I know of no neuroscience research on how their use may be affecting the cognitive development of the young people who rely on them.
New technologies invariably change people’s behaviors. In a fairly short period of time, radio, television, and the internet have transformed American entertainment from a mostly active, participatory pursuit into a passive, consumerist one.
- We went from making our own music (which could include making our own instruments) to hearing others perform it, to listening to recordings of their performances (which are increasingly digitally created or altered).
- We went from reading books and periodicals (including magazines and newspapers) to listening to them, whether performed on the radio or presented in a television program, newscast, or podcast.
- Similarly, most of us neither participate in nor attend live theater performances or music concerts; we watch them on television or via YouTube and other streaming sources.
- We’ve gone from writing long letters by hand to typing them; then to emailing shorter communications; and then to even briefer texts, posts, tweets, and skeets. These are often composed on one’s cellphone, and when they aren’t private, they’re published on a social-media site where “engagement” rather than quality is the most prized metric.
The human brain is remarkably malleable over one’s lifetime, and one consistent pattern is this: a person tends to get better at the activities they focus on (attending to and thinking about) and engage in (acting on). To improve a specific skill, one must repeat it, attending to both the activity and its results, then making refinements. This applies equally to fundamental abilities, such as learning to walk and talk, and to more complex ones, such as critical thinking and writing. All the technologies I mentioned above have shifted many people from participant to consumer roles. If we no longer participate in something’s creation, our skills atrophy. And if we never learn a skill, we can never improve at it.
LLMs have been trained on large volumes of writing; what constitutes a training dataset varies among LLMs and with their intended purpose. LLMs do not generate writing in the human sense of the process—they are predictive tools that assemble strings of words, one piece at a time, in response to a prompt. Often the output is adequate at best. Because the LLM is composing predictively, its prose can be redundant; there’s no recognition that a point or instruction has already been given. And once the predictive tool has slipped off course, things can go wildly awry, leading to the misnamed “hallucinations” and worse. The general term for LLMs’ output is “slop.” From what I’ve seen—and I’m assuming it had been reviewed by a human editor prior to publishing—the term is apt.
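To make the “predictive” point concrete, here is a minimal, purely illustrative sketch: a toy word-level model built from bigram counts. This is not how production LLMs work (they use neural networks over token sequences), but it shows the basic move of assembling text by continuation rather than by reasoning. The tiny corpus and the `generate` function are invented for the example.

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny "training corpus" (purely illustrative).
corpus = (
    "the model predicts the next word the model repeats the point "
    "the point is made the point is made again and the model drifts"
).split()

# Count which word tends to follow which one (a bigram table).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=12, seed=0):
    """Compose text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # no continuation was ever observed; the toy model stalls
        choices, weights = zip(*candidates.items())
        words.append(rng.choices(choices, weights=weights, k=1)[0])
    return " ".join(words)

print(generate("the"))
# The continuation tends to circle back through the same few phrases,
# because each word is chosen only from what followed the previous word
# in the corpus; nothing tracks whether a point has already been made.
```

Production models operate at vastly larger scale and with far richer statistics, so the drift is better disguised; the underlying operation, though, is still prediction.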
I saw a decline in college students’ writing and analytical abilities over the years I taught. At my last workplace, in some cases it was difficult to distinguish between a student with a possible undiagnosed learning disability and one who couldn’t be bothered—or didn’t know how—to write at any level above texting. Because low marks and copious feedback didn’t seem to provide sufficient motivation to improve, I reluctantly concluded that these students were not functionally literate and are unlikely ever to become so.
“Use it or lose it” is broadly true of our cognitive abilities. It takes time, effort, and immersion in good writing to become a competent writer and editor. Becoming a great one requires what the Japanese call “kaizen”: a commitment to continuous improvement. With fewer people reading and learning from powerful writing—whether that’s great literature, an insightful analysis, or a concise scientific paper—the quality of new content available for training humans and LLMs alike will decline.
We’re probably already past the point where LLMs are trained only on human writing. As bad as Facebook posts and the like would be for training an LLM, including the slop previously churned out by another LLM is worse. So I’m not too sanguine about where things will stand 10–20 years from now. If most of the written content available is vapid, how can young people learn to think critically and write cogently?
We have a lot of history—as well as empirical evidence—that demonstrates the cost of losing cognitive skills when technology takes them over “for” us. I want to be wrong about where I think we’re headed, but I see little evidence that enough people have even recognized the precipice before us.