The Psychological Pull of Technology
In the race for bigger and better tech, what are we prepared to lose?
While writing the applied project for my master’s degree, I noticed an interesting phenomenon. When I felt unconfident in my writing, I would ask ChatGPT for help. For example, I was so overwhelmed by the revisions from the first round of editing that I didn’t know what to do. In hindsight, it was pretty simple: it was a matter of directly addressing what the reviewers said. And the reviewers of the project provided good enough feedback that I should have known what to do. Instead, I leaned on ChatGPT like a crutch, and I really hate that I did that.
This isn’t a ChatGPT-is-bad post, though. Instead, I’m curious about the ways I’ve surrendered my own confidence and competence to digital technology. I realize now that it doesn’t matter how knowledgeable I might be, or how capable I am of thinking critically about technology: many of the digital tools I use undermine my confidence. What is it about technology that enables this to happen?
Technological undermining happens in micro-moments. To use another example, I think that I know we went to the moon in 1969, but with a quick Google search I can know definitively. This certainty means that I will go with the Googled answer rather than my own knowledge. At this point, it doesn’t even matter if I was right; what matters is that my smartphone’s access to the internet cultivated enough doubt that I felt compelled to indulge in the sure answer. This is what I mean by a micro-moment. Another micro-moment of technological undermining happened in a recent conversation. I’ve been reading about Gnosticism, and I told one of my friends so. When he asked me what Gnosticism was, instead of relying on the knowledge I’d been gaining on the topic, I went to Wikipedia. All of this happened in just a few seconds. This isn’t some grand conspiracy to undermine human intellect, at least I like to be hopeful in that regard, but there is something in the mechanism of digital technologies that enables this to happen.
Since people like to talk about it so much, let’s talk about the impact of the invention of the calculator. It’s popular now, with the advent of generative AI, to compare it to the calculator. The argument goes something like this: the invention of the calculator proved that basic arithmetic wasn’t a skill we all needed, and so the calculator was a net positive. This is a fine argument when it’s clear what skill is being replaced. With the calculator, we knew exactly which skill people would lose: basic arithmetic. We understood what we were trading away. This, though, is categorically different with generative AI; it’s not clear right now what we stand to lose. If we do a one-for-one swap of the calculator argument, then we lose our ability to write, to reason, and, overall, to communicate. But it feels like it could be more than that too.
While writing this essay, I encountered a scientific paper pursuing exactly this point about cognitive offloading. Michael Gerlich, head of the Center for Strategic Corporate Foresight and Sustainability, published a paper just a few weeks ago investigating our propensity for cognitive offloading onto the current generation of AI tools. Using a mixed-methods approach that combined interviews with 665 participants and quantitative correlation analysis, Gerlich found that cognitive offloading to AI negatively impacts our critical thinking skills:
Our research demonstrates a significant negative correlation between the frequent use of AI tools and critical thinking abilities, mediated by the phenomenon of cognitive offloading. This suggests that while AI tools offer undeniable benefits in terms of efficiency and accessibility, they may inadvertently diminish users’ engagement in deep, reflective thinking processes.
The study relies on self-reported data, and the use of interviews means Gerlich could have a sampling bias (both limitations he admits himself), but at minimum the research starts a conversation about how we must critically engage with these technologies.
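To make the quantitative side concrete, here is a minimal sketch of the kind of correlation analysis such a study leans on. The data and variable names below are entirely my own invention for illustration; this is not Gerlich’s dataset or instrument.

```python
# Toy Pearson correlation between self-reported AI-tool use and a
# critical-thinking score. All numbers are fabricated for illustration.
from statistics import mean, stdev

ai_tool_use = [1, 2, 2, 3, 4, 4, 5, 5, 6, 7]        # hypothetical survey scale
critical_thinking = [8, 9, 7, 7, 6, 5, 5, 4, 4, 3]  # hypothetical test scores

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(f"r = {pearson_r(ai_tool_use, critical_thinking):.2f}")  # negative by construction
```

A negative r here only means that, in this made-up sample, heavier tool use travels with lower scores; it says nothing about causation on its own, which is why the paper treats cognitive offloading as a mediating factor rather than a simple cause.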
I’d like to think of this phenomenon less as cognitive offloading and more as a function swap. Cognitive offloading is a pretty natural process of doing something to minimize cognitive processing, such as writing something down. What I’m calling a “function swap” is the replacement of a cognitive function with a technological one because the technology does it better. If there is a function I have cognitively rehearsed many times in my life, but there is a technology that does it better, faster, and more reliably, then what’s going to happen over time (and this is just conjecture) is that my trust in myself will erode as I interact with the technology, until I fully substitute the technology for the function.
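To show what I mean, here is a toy simulation of that conjecture, not a model from any paper. Every parameter is an arbitrary assumption, chosen only to display the shape of the dynamic: trust decays a little with each interaction where the tool wins, and past some threshold the function is fully swapped.

```python
# Toy simulation of the "function swap" conjecture.
# All parameters are arbitrary assumptions, not measured values.
self_trust = 1.0           # confidence in doing the task myself
decay_per_use = 0.05       # trust lost each time the tool visibly does it better
substitution_point = 0.3   # below this, I stop doing the task myself at all

for interaction in range(1, 31):
    self_trust *= (1 - decay_per_use)
    if self_trust < substitution_point:
        print(f"Interaction {interaction}: trust {self_trust:.2f} -> full swap")
        break
else:
    print(f"Still doing it myself: trust {self_trust:.2f}")
```

The point of the sketch is the one-way slope: nothing in the loop ever rebuilds trust, because every interaction is with the tool rather than with the skill.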
Going back to the master’s applied project example, it’s clear that the function being swapped was my own ability to creatively and critically think through the implementation of feedback. Now, my question becomes: do I want to lose this function? As in, do I want to continue relying on ChatGPT to think through implementing feedback? My answer is no. So I have to do better.
To be better, I have to stop relying on the tool. I have to get better at doing this kind of work through the process of struggling, through the process of learning (learning requires struggle). And that requires that I no longer rely on ChatGPT for this function.
Ultimately, I’m worried about my propensity to fall into harmful cognitive shortcuts. Given the path of least resistance, I will take it. And taking the shortest route to solve my problem impedes my betterment as a person. So I’m not saying that people who want to use generative AI as a tool are necessarily wrong. However, if you relegate a function to generative AI, you had better prepare yourself to be without that cognitive function. To outsource a skill (and at this point I’m not even talking about digital technology) is to risk its internal atrophy. In The Fifth Discipline, Peter Senge describes a feedback loop between HR consultants, personnel problems in an organization, and internal managers’ capabilities. In it, the company becomes increasingly reliant on HR experts rather than its internal managers. Senge calls this a “shifting the burden” structure, where “well-intended solutions make matters worse in the long term.” The company employed HR experts to address a personnel problem, but in doing so it gradually shifted the burden away from expecting internal management to handle those problems.
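Senge draws this as a system dynamics diagram; what follows is my own minimal simulation of that archetype’s shape, not his model, and every parameter is invented. The quick fix (outside experts) suppresses the symptom each quarter while the fundamental capability (internal managers) atrophies from disuse, so the problem eventually comes back worse than it started.

```python
# Toy "shifting the burden" loop, loosely after Senge's archetype.
# Parameters and dynamics are my own invention for illustration.
manager_capability = 0.8   # internal managers' skill at personnel problems
problem_level = 0.5        # severity of the current personnel problem

for quarter in range(1, 9):
    problem_level *= 0.5                 # consultants suppress the symptom
    manager_capability *= 0.9            # unused capability atrophies
    problem_level += 0.5 * (1 - manager_capability)  # weaker managers absorb fewer new problems
    print(f"Q{quarter}: problem={problem_level:.2f}, capability={manager_capability:.2f}")
```

Run it and the problem level dips at first, then climbs back above where it started as capability falls: the well-intended solution making matters worse in the long term.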
In this same regard, only use these tools for what you are prepared to lose; if it matters to you, stay as true as you can to it.