In his second collection of short stories, Ted Chiang included a novella called "The Lifecycle of Software Objects". Despite my initial reaction,[1] the core premise of the story, that anything remotely like artificial intelligence would require patience, time, and (perhaps most insightfully) an endless battle against the deprecation of suitable digital hosting environments, has stayed with me.
There is something in this framing that offers insight into the current state of "artificial intelligence" and helps me articulate my own thinking on the subject. Human intelligence is not inevitable or inherent; it is cultivated over time. If we're lucky, that cultivation is done with patience, love, and affection. The intelligence[2] of machines, as it is currently modeled and implemented, is produced quickly, using mind-bogglingly vast amounts of data and energy, and founded on the false premise that intelligence is mostly a problem of pattern recognition. The result is a fully synthetic product, quickly manufactured and brought to market despite shortcomings that are apparent to anyone who takes a closer look.
Of course, Chiang has more recently expressed his thoughts about the current state and trajectory of "artificial intelligence" in the pages of the New Yorker.[3] In this piece, while he doesn't go so far as to say that software will never reach a point where it is comparable to human intelligence (he is, I suspect, reluctant to make any sweeping condemnations of the progress of technology, and remains an optimist on this point), he does say this:
"Whether you are creating a novel or a painting or a film, you are engaged in an act of communication between you and your audience. What you create doesnβt have to be utterly unlike every prior piece of art in human history to be valuable; the fact that youβre the one who is saying it, the fact that it derives from your unique life experience and arrives at a particular moment in the life of whoever is seeing your work, is what makes it new."
Jorge Luis Borges's elegantly subversive short story "Pierre Menard, Author of the Quixote" is built entirely on this premise. If the story were instead written about a computer that, when prompted, returned the work of Cervantes, while remaining a computer with only the experiences of a computer, it could not be taken seriously. A computer producing the work of a 17th-century novelist would be seen as an act of retrieval or reproduction, and the point could no longer stand.
I think this is really the crux of it for me. No matter how adept these models become at mimicking prose styles or art, and even if we invent models that become better at "innovating" by identifying historical patterns in art and combining them in novel ways, there will always be a yawning gap for me. Can a programmed machine have personal history? Experience? Can it use these to shape its intent or inform its work?
The same holds even in the realm of total abstraction. If a model existed like a black hole, capable of pulling all of human art, science, and history into itself and producing some hitherto unknown result, it would still ultimately be a program that cannot cultivate its own intent; intent must be given to it by an external mind.[4]
I cannot anthropomorphize the process or output of these models to the point of meaningfulness. My own scope of knowledge and experience may be narrow and finite, but that narrowness is part of what makes my work meaningful. It shapes both my intent and the execution of the creative act.
This entire, long-winded preamble has all been in service of this: I write because there are emotions, thoughts, and experiences I have had that I want to articulate and share. I see no value in the proposition that a model might help me better express these things. In fact, to suggest that it might threatens the entire premise of the creative act I engage in; it robs me of my own, imperfect means of expression.
This post serves as my public commitment to never use any generative models of any sort to do any of the following:
This applies to all my writing, not solely the writing that appears on this website. I can't promise that all of it will be quality prose or free of errors, but I can promise that it will exist because of my own intention, time, and effort. For my first post of 2025, it felt worthwhile to put this down in writing.
There are so many other thoughts I have on this subject, such as how cryptocurrency and A.I. almost feel like the market's response to attempts to reduce emissions from energy use. Or how the ultimate goal of these technologies is to allow capital to enjoy the benefits of creative work without having to handle the messy and expensive reality of the humans who perform it.[5] But these topics are all ancillary to my main point here, so I've set them aside.
It is also worth acknowledging that there are some artists and thinkers out there, most notably James Bridle, who reframe the argument in an interesting way. Intelligence, they say, is not some innate thing that can only reside in human individuals, but rather a collaboration among humans, nature, and machines. Though I don't find this argument especially compelling given the current state of the technology, I don't want to discount it entirely. Sometimes we give ourselves too much credit, after all.
[1] Cards on the table, I found it to be quite a slog. The plotting dragged, and while the characters were interesting, everything they said just beamed "I'm Ted Chiang and I have ideas to share via these fictional avatars" at me.
[2] I don't want to overdo the scare quotes, but I do want to make it very clear that I don't give the term "artificial intelligence" much credence. Its inescapable popularity, in my opinion, stems from its rhetorical plasticity, which serves to obfuscate an otherwise deeply flawed product.
[4] Look, I understand I'm starting to dance on the edge of an infinite regress here, but I'm not going to wade into those waters. This is but a humble blog post.
[5] If you're curious and in a position to pay me to write out these thoughts for your publication, just let me know.