Exploring my Relationship With Generative AI Has Been More Difficult Than I Thought
Applying Sociotechnological Vision
If you have been tracking the development of the concept, you'll notice that I have changed the name from sociotechnological imagination to sociotechnological vision (STV). This was to avoid an awkward acronym and to give greater insight into how I want people to use the concept after learning it.
If you aren’t familiar with the concept of sociotechnological imagination (called sociotechnological vision from now on), please check out my previous article before reading this.
My previous article establishing sociotechnological vision (STV) needs a follow-up. While that article constructed the theory of STV, I wasn't clear enough about how to actually use it as a tool. This follow-up provides a personal case study to give you a better sense of how to apply it. To do that, I'll analyze my relationship with generative AI. By the end, I hope to have a cogent logic that guides how I interact with this technology in the future.
The Biographies, Personal and Technological
A little bit about me
I don't know how to engage with generative AI. On the one hand, I see a clear threat to the artistic class, of which I'm a bastard member. I've seen the Reddit thread of the artist who wanted to kill themself because generative AI threatened their aspiration: a life of, and off of, art. At the same time, I was once a heavy user of ChatGPT. I used it to write short-form thoughts on LinkedIn, short stories, and I was even using it to write a book. I was, and to an extent still am, interested in the way that generative AI will shape our relationship with writing. But it turned out I didn't want my relationship with writing to change. It already changed once, and I'm still coping with the loss.
I've wanted to write my whole life. I wanted to make writing my primary means of employment while still finding joy in it. I remember large parts of my childhood through filled notebooks of stories and novel starts. Since then, I've published short stories, poems, and articles on the web. Over time, I've noticed how technology created significant changes in how I write. I was once a deliberate writer. I pondered sentences endlessly to achieve just the effect I wanted. I treated writing like art. As I integrated computing into my writing life as a late teen, I became quick, sloppy, and anchored to my first thoughts.
To be clear, this isn't because of computing per se, but because of the relationship I unwittingly developed with computing. I didn't realize this then; it took writing with ChatGPT to discover it. With ChatGPT, I wrote faster than I ever had, orders of magnitude faster even. I felt powerful at first, but when the novelty wore off, what remained was mediocre text I was incapable of feeling proud of, even when I adopted the person-in-the-loop style of writing and iterating with AI. This brings me to the history of ChatGPT and its siblings.
A little bit about ChatGPT
Algorithms for generating text have been around since at least the 1960s. These algorithms, grouped under a field called natural language generation (NLG), were originally developed to "explore human-machine communication." That theme persists today: the chatbot interface of ChatGPT begs the continued exploration of how we communicate with machines. Even in early applications there was a desire to automate tedium: reports, customer service, and image captioning. It is also important to know that from the inception of NLG, researchers hypothesized it could produce creative work; the way this ties into the other tasks categorized as tedium should not be overlooked. When NLG systems went commercial in the 90s, they had yet to pivot to artificial intelligence as a backbone. That shift happened around the early 2000s, and from then on significant gains in natural language processing (NLP) can be mapped to gains in artificial intelligence.
ChatGPT has only existed as an accessible prototype since November 2022. It is built on the concepts of large language models and generative pre-trained transformers. The specifics of these are important, but for the sake of length I can't describe them in sufficient detail here. In the simplest terms I can construct, ChatGPT is "pre-trained" (fed a large corpus of text and taught to predict what comes next). In ChatGPT's case, the corpus includes a slice of the text available on the internet, plus a set of conversations that give it its conversant style. This is by no means a technical treatment of ChatGPT, however, and I recommend reading the papers that develop those concepts.
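To make the "predict what comes next" idea a little more concrete, here is a deliberately tiny sketch in Python. This is not how ChatGPT actually works (real models are transformers with billions of learned parameters), but it shows the same core idea at miniature scale: gather statistics from a corpus about which word tends to follow which, then use those statistics to continue a prompt. The corpus and function names here are my own invention for illustration.

```python
# Toy illustration of next-word prediction, the idea behind language-model
# pre-training. A real LLM learns vastly richer patterns; this just counts
# word pairs (bigrams) and continues a prompt with the most frequent follower.
from collections import Counter, defaultdict


def train_bigram(corpus: str) -> dict:
    """Build next-word frequency tables from a small text corpus."""
    words = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1  # count how often `nxt` follows `prev`
    return model


def generate(model: dict, start: str, length: int = 5) -> str:
    """Greedily continue a prompt with the most frequent next word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # we've never seen this word followed by anything
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)


corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram(corpus)
print(generate(model, "the", length=3))  # continues the prompt "the"
```

Scale that counting idea up to trillions of words and replace the frequency table with a neural network, and you have the rough shape of pre-training.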
Tying to Social Roots
I can’t claim to know the intent (positive or negative) of any company involved with this tech. Even when intentions are clearly expressed by their CEO or CTO, I can’t take them at face value, as much as I wish I could. Instead, I look at the impacts of the technology and determine what those impacts might mean.
I write this over a week into the Writers Guild of America (WGA) strike. Writers are asking for more pay. Considering these are people writing for Hollywood, Disney, Netflix, and other movie and TV giants, it is easy to assume they are already well-paid and greedy for asking for more. Many writers for these companies, however, struggle even to keep stable housing. If that is the case, it is clear they are not compensated sufficiently for the effort and impact they provide their employers. Aside from compensation, there is also a career-longevity subtext to the strike: ChatGPT is being positioned as "cheap labor." The fear, already being realized in various ways and at various stages, is that it is possible to automate away the creative class. But does it make a difference who (or what) creates a written work? (The same question can be asked about visual work.)
This is another point I struggle with. Here's the extremely rudimentary thought experiment I ran: imagine two texts that are nearly identical in content; there are some stylistic differences, but they largely convey the same information and the same reading experience. In this situation, would it matter if an AI generated one and a human wrote the other?
I can imagine the multitude of arguments pro and con. Those who say the source doesn't matter will point to our consumption of processed and fast foods and our collective blindness to where they come from, and they will call their opposition hypocrites. Those who claim the source does matter will point to the DIY/maker movement, or the handmade movement, and call their opponents technological shills. Naturally, the real conversation will be a spectrum, of which the two sides I mention are the moderate positions at each pole; there are more extreme arguments, and there will be at least some agreement.
As I think at the scale of individuals, I realize that we have to make decisions that align with our own values. Through striving for and satisfying those values, a person may achieve some degree of flourishing. This seems to be the crux of the argument for me: maximizing human flourishing. But what do you do when one person's flourishing intrudes on another's? Currently, it is capital that decides whose flourishing matters. This satisfies the flourishing of the tech CEO, CTO, and computer scientist at the expense of the artist, who has long been seen by many as a pest of an expense.
Final Words for a Thing that Makes Words
To me, ChatGPT is a decent idea-generation tool, an okay sounding board when I feel like I've bothered my typical collaborators too much. But to the extent that it threatens people's livelihoods and means of flourishing, we should be much more deliberate. Without this deliberative spirit, we will unnecessarily cause harm in the name of progress, a backward pursuit. We need to understand the view, whether or not we want to believe it, that ChatGPT and its kin are organizers of political power more than "simply a tool": political power that flows from concentrating the power to automate, at the expense of the many kinds of labor (creative and otherwise) that we take for granted.
I want to be part of a society that values creative labor (and all other kinds of "invisible" labor). This current iteration of generative AI, irrespective of intent, seems to have the opposite effect.
Sociotechnological Vision
Sociotechnological Vision is a foal, awkward in its steps and needing still to learn how to run. This article and the last are the first step toward maturing a concept that I hope can stay with you and become a part of how you think of technological consequences in society. In that capacity, I hope it has been helpful to you. If you try to use it, please tell me about the experience, the good and the bad.