When I was at Cornell in 2013 (ten years ago now!), I wrote a story about a writer whose job it was to look over AI-written fiction and make small corrections to it. The narrator then developed an infatuation with a woman who worked in the poetry department.
The story was part of a reading event that professor and author J. Robert Lennon conceived, which involved each of us MFA students writing something that dealt with AI. At the time, the idea of AI creating fiction and poetry felt real and possible enough to capture my attention, but also speculative. To write the same story now would instead be like writing, in 2013, a story about a guy who owned a smartphone.
I am trying hard to avoid certain platitudes. For instance, the idea that we inherently value human writing, that it is an individual’s experience that we want to know about, not a computer’s idea of a person’s experience. I think this is true in theory, but not in fact.
In theory, you could offer me the exact same book in two scenarios: 1) it’s written by AI, and 2) it’s written by a person, and even if the content were identical, I would only be interested in the second one. In theory, I care about the origin and inspiration, not just the final result. But what if I read something and it touched me deeply, and only in retrospect did you reveal that it was created by AI? What if that book was The Bell Jar? What about a future in which all AI writing is indistinguishable from our own? What about the assertion that all great writers steal?
I believe in art as catharsis, as exploration, as things you scribble in your high-school notebook when you feel alone, and I don’t think we will give up on making it. But I am cynical about our ability, as “consumers” of art, to recognize the human-generated stories that we claim to want. And given the power of commerce to create shortcuts, to cut costs, to optimize wherever possible, I believe that this is a failing the market will gladly exploit.
One of the first experiments I did with ChatGPT (version 3.5) was to ask it to write in the style of people I knew at Cornell. When asked to “write me a paragraph-long story in the style of the author J. Robert Lennon,” ChatGPT responded:
The house had always been there, on the edge of town. It was a small, unremarkable building, with a peaked roof and a front porch that sagged in the middle. It was a place that people walked by without noticing, and that was just fine with me. I had moved in a few months ago, after my wife left me. I needed a place to be alone, to think. I didn’t expect to find anything interesting in this house, but I did. I found a door in the basement, hidden behind a stack of old boxes. I opened it, and found a stairway leading down, into darkness. I took a flashlight and started to descend. The stairs were narrow and steep, and the air grew colder as I went down. At the bottom, there was a room, filled with strange machines and contraptions. I had no idea what they were for, but I couldn’t help but feel drawn to them. I spent hours down there, tinkering and experimenting. I don’t know what I was hoping to find, but I knew I was getting close. And then one day, the machines came to life, and I realized that I had unlocked something incredible.
ChatGPT told me that Lennon’s short fiction “often features characters that are struggling with personal issues and have a sense of isolation.” It told me, “The story also reflects the idea of the protagonist being drawn to something and being unable to resist the urge to explore, a theme that is often present in the author’s work.”
On the surface, at least, this story seems like a pretty accurate imitation of J. Robert Lennon’s style. The idea of exploring the old house rings true, and the sentence structure isn’t bad either. But I doubt that anyone would approach the idea of sentient machines by saying “And then one day, the machines came to life,” unless they were writing a children’s book. Like so much of what ChatGPT writes, all the subtlety has been lost.
ChatGPT has a tendency, when impersonating someone, to rehash their past work. For instance, the story above is quite similar to the opening of J. Robert Lennon’s book Castle, which is about a man fixing up an old house. While human writers tend to be drawn to certain themes, they are likely to make a conscious effort not to simply repeat a previous plot. (Highly commercial writing, such as franchise films or certain comics, is one exception.) Also, a writer’s style may evolve, sometimes as a direct attempt to diverge from their past works. Whereas ChatGPT exists in an eternal present, one in which an author is an amalgamation of all their previous works, and there is no possibility of progression or of writing “against.”
I asked ChatGPT to imitate the style of Ling Ma, who wrote the bestselling book Severance:
The city was a ghost of itself. The buildings stood tall, but the streets were empty. The billboards continued to flash, but the audiences were gone. I walked down the avenues, my footsteps echoing in the silence. The world had ended, and I was one of the last ones left. I had survived the fever, but at what cost? The memories of my past life haunted me, and I couldn’t shake the feeling of isolation and emptiness. […]
By way of explanation, ChatGPT writes, “The paragraph reflects the sense of emptiness and isolation that the protagonist is feeling after the world has ended. The imagery of the empty city and the echoing footsteps are used to reflect the emptiness of the world and feeling of being alone.”
Again, the first paragraph isn’t terrible. It does sound a bit like Severance. But Ma—and I hope she would take this as a compliment—is a lot weirder than this. See, for example, this real quote from her novel:
If Fuzhou Nighttime Feeling were a sound, it would be early/mid-nineties R&B. If it were a flavor, it would be the ice-cold Pepsi we drink as we turn down tiny alleyways where little kids defecate wildly. It is the feeling of drowning in a big hot open gutter, of crawling inside an undressed, unstanched wound that has never been cauterized.
“Defecate wildly” is a pretty big leap from “The memories of my past life haunted me.” It seems the AI has not yet mastered showing vs. telling. At any rate, it’s not imitating Ma; it’s imitating one specific book, and in the most surface-level way.
I don’t necessarily think there’s some human je ne sais quoi that will remain inimitable for all of eternity. But I do think absurdity is an interesting place to start, when considering the difference between human and machine.
AI is good at creating the absurd by accident—see any supercut of its uncanny valley creations—but when it comes to the intentionally absurd, it tends to falter. When asked to “write me an absurd story,” for example, it began:
In the peculiar land of Whimsitopia, where gravity had a habit of taking days off and rainbow-colored llamas roamed the streets wearing top hats, a curious incident unfolded. Professor Mumblefluff, a renowned expert in aquatic ballet for flamingos, woke up one morning to discover that his bathtub had transformed into a portal to the Cheeseverse—a dimension entirely made of cheese.
This paragraph is “absurd” in an extremely online, early 2000s way, like that meme about “Katy t3h PeNgU1N oF d00m” or, maybe more accurately, any gift you’d find at a quirky retail store. The randomness is the humor: “Wouldn’t it be silly to wear a pair of socks with narwhals on them?”
I grew up loving The Hitchhiker’s Guide to the Galaxy. And while it’s true that those books feature lines like “The ships hung in the sky in much the same way that bricks don’t,” and two-headed aliens, and magical fish that go in your ears, they also have characters that you care about, and social critique, and long stretches of relative normality. Whereas the ChatGPT paragraph is relentless in its absurdity.
When I tell ChatGPT to be “absurd,” it seems to understand the word as largely equivalent to “random.” This leaves out a number of other absurdities: the absurdities of capitalism, of depression, of hopelessness, of existentialism, of Kafka, of the movie Eraserhead.
I asked ChatGPT to write me a paragraph-long story in the style of Kafka. It began:
In a nameless city shrouded in perpetual mist, Karel awoke to find himself transformed into an enigmatic and labyrinthine bureaucracy. His limbs, once supple and agile, now coiled into intricate networks of forms and documents.
This example I would deem an accidental success. It is so on-the-nose that it becomes absurd in a whole new way. ChatGPT knows what words and phrases are likely to be described as absurd, and it knows that Kafka is concerned with bureaucracy, but it doesn’t know what it feels like to read a Kafka story, or that you could write a story about a woman who chooses to become vegetarian, as in Han Kang’s novel, that is far more “Kafkaesque” than if she had turned into a bug.
When real people write absurdly, they are often doing so not just for its own sake, but to express some emotion they feel, or to get at some truth, some bigger topic without writing about it directly—maybe out of fear, or maybe because they don’t even know what they are writing about until after it is done. ChatGPT doesn’t feel pain, or anger, or a sense of desperation. It has no motivation to approach things obliquely, or indeed, any motivation at all. And what about anxiety? Could an AI ever imitate me, authentically, without the feeling of pulling each word from under its skin?
Again, I will answer my own question with a cynical yes. The technology will advance. It could be that in a year, or two years, generative AI will write stories indistinguishable from Kafka’s, or my own, or anyone’s.
There is something oddly beautiful, albeit depressing, about the way that generative AI cuts through the facade, revealing the empty, compulsory nature of many of our activities. It reveals that Hollywood is not looking for human stories so much as vehicles for consumption. That students are not interested in learning so much as receiving an A, and instructors are not so much into crafting thoughtful assignments as fulfilling a curricular requirement. The economy could keep chugging along with bots selling things and buying things and writing copy to sell the things, and this reveals something about what we mean by the phrase “creating value” and who that value is for.
Because of my own pessimism, I’m drawn to the doomsayer view, the inevitability of my own obsolescence. But on some level, I’d like to believe otherwise. Recently, the idea that has been giving me some small measure of hope is that of generative AI as ouroboros: the internet consuming itself. The idea is that as AI-generated text proliferates, it will be used as training data for future AI, until everything is recycled, everything becomes more of itself, the homogeneous pink slime of human endeavor.
Here is what I find comforting about this idea: Maybe things will get so bad that we’ll just give up. We’ll stop being so online. We’ll go back to making zines. We’ll keep being weird, in our own little corners of the world.
I know that’s not very likely. I know this is all very “Millennial shakes fist at cloud.” There are those who believe AI will inspire rather than diminish us, that it will lead to whole new schools of weirdness, of creativity. I wish I could be one of them, but I can’t. The only thing I can really do, like everyone else, is wait and see.



























