Book review: “Literary Theory for Robots” by Dennis Yi Tenen
Literary Theory for Robots: How Computers Learned to Write, by Dennis Yi Tenen
In Literary Theory for Robots, a playful new book by Dennis Yi Tenen about artificial intelligence and how computers learned to write, one of the most powerful examples comes in the form of a small error.
Tenen draws connections between modern chatbots, science-fiction plot generators, ancient dictionaries, and medieval prophecy wheels. Against both the utopians (Robots will save us!) and the pessimists (Robots will destroy us!), he argues that there will always be an irreducible human aspect to language and learning, a crucial core of meaning that emerges not just from syntax but from experience. Without it, you get nothing but the chatter of parrots, which, “according to Descartes in his Mediations, is merely repetition without understanding,” writes Tenen.
But Descartes wrote no “Mediations”; Tenen must have meant the “Meditations” – the missing “t” would slip past any spell-checking software because both words are perfectly legitimate. (The book’s index lists the title correctly.) This simple typo has no bearing on Tenen’s argument; if anything, it strengthens the case he wants to make. Machines are becoming stronger and smarter, but we are still the ones deciding what is meaningful. A human wrote this book. Despite the robots in the title, it is meant for other humans to read.
Tenen, now a professor of English and comparative literature at Columbia University, was once a software engineer at Microsoft. He draws on both skill sets in a book that is surprising, funny, and not at all scary, even as it takes up big questions about art, intelligence, technology, and the future of work. The book’s small size – less than 160 pages – seems to be part of the point. People are not indefatigable machines, relentlessly absorbing vast amounts of information. Tenen has figured out how to present a network of complex ideas on a human scale.
To this end, he tells stories, beginning with the 14th-century Arab scholar Ibn Khaldun, who recorded the use of a prophecy wheel, and ending with a chapter on the 20th-century Russian mathematician Andrei Markov, who analyzed the contingencies of letter sequences in Pushkin’s “Eugene Onegin.” Markov’s method forms a fundamental building block of generative AI (regular Wordle players sense such contingencies all the time). The means of processing everything published in English were not yet available, Tenen writes of the technological barriers that stymied earlier models of computer learning; “brute force was required.” He urges us to be vigilant. He also urges us not to panic.
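Markov’s insight – that the next letter in a text can be predicted from the frequencies of the letters that came before it – can be sketched in a few lines of Python. The toy below (the function names and sample text are illustrative, not drawn from Tenen’s book) counts letter-to-letter transitions in a text and then samples new text from those counts, the same statistical building block that, scaled up enormously, underlies today’s text generators.

```python
# A minimal sketch of a letter-level Markov chain, in the spirit of
# Markov's analysis of "Eugene Onegin" (not Tenen's or Markov's actual code).
import random
from collections import defaultdict, Counter

def build_model(text):
    """Count transitions: how often each letter follows each other letter."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def generate(model, start, length, seed=0):
    """Walk the chain, sampling each next letter by observed frequency."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        counts = model.get(out[-1])
        if not counts:  # dead end: no letter ever followed this one
            break
        letters, weights = zip(*counts.items())
        out.append(rng.choices(letters, weights=weights)[0])
    return "".join(out)

# A stand-in snippet; Markov worked from the full text of "Eugene Onegin".
sample = "my uncle, a man of the most honest principles"
model = build_model(sample)
print(generate(model, "m", 20))
```

The output is gibberish with the statistical flavor of the input – grammatical sense, as Tenen notes, is not the same as sense.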
Intelligence, Tenen writes, exists on a spectrum from “partial assistance” to “full automation,” and he gives the example of a car’s automatic transmission. Driving an automatic in the 1960s must have seemed amazing to people used to manual transmissions. The automatic works by automating key decisions: downshifting on hills, sending less power to the wheels in bad weather. It removed the option to stall or grind the gears. It was “artificially intelligent,” even if no one used those words to describe it. Now American drivers take its charm for granted. It has been demystified.
As for the current debates about artificial intelligence, the book tries to demystify them as well. Instead of talking about AI as if it had a mind of its own, Tenen talks about the collaborative work that went into building it. We rely on a cognitive and linguistic shorthand, he writes, when we condense and personify the technology itself: it is easier to say “the phone completes my messages” than “the engineering team behind the autocomplete software, drawing on dozens of research papers, completes my messages.”
Our common metaphors for artificial intelligence are therefore misleading. Tenen says we should be skeptical of any metaphor that attributes familiar human cognition to AI. A machine thinks, speaks, explains, understands, writes, or feels by analogy only. This is why much of his book revolves around questions of language. Language allows us to communicate and understand one another; it also allows for deception and misunderstanding. Tenen wants us to “do away with the metaphor” of artificial intelligence – a suggestion that might at first seem like an English professor’s hobbyhorse but turns out to be entirely apt. A metaphor that is too pat can make us complacent. Our sense of possibility is shaped by the metaphors we choose.
Text generators, whether 21st-century chatbots or 14th-century “letter magic,” have always faced the problem of “external verification,” Tenen writes. “Procedurally generated text can make grammatical sense, but it may not always make sense.” Take Noam Chomsky’s famous example: “Colorless green ideas sleep furiously.” Anyone who has lived in the material world knows that this grammatically impeccable sentence is nonsense. Tenen goes on to stress the importance of “lived experience,” because that is what describes our condition.
Tenen does not deny that artificial intelligence threatens much of what we call “cognitive work.” Nor does he deny that automating something tends to reduce its value. But he also puts it another way: “Automation lowers barriers to entry and increases the supply of goods for everyone.” Learning is now cheaper, so a large vocabulary or a repertoire of memorized facts is no longer the competitive advantage it once was. “Today’s scribes and scholars can challenge themselves with more creative tasks,” he suggests. “The tedious tasks are outsourced to machines.”
I take his point, even if the prospect still seems dire to me: an ever-shrinking segment of the population doing the challenging creative work while a once-thriving ecosystem collapses. But Tenen also argues that we, as social beings, have the power to act – if we allow ourselves to accept the responsibility that comes with it. He acknowledges that “individual AI poses a real danger, given its ability to amass power in pursuit of a goal.” But the real danger comes “from our inability to hold technology makers accountable for their actions.” What if someone wanted to strap a jet engine to a car and see how well it worked on the streets of a crowded city? The answer is clear: “Don’t do it,” says Tenen.
Saying “don’t do it” may seem easy in one field, but in another it requires more thought, more precision, more scrutiny – all qualities that fall by the wayside when we bow down to AI, treating the technology like a singular deity instead of what it is: many machines built by many people. Tenen leads by example, using his human intelligence to think about artificial intelligence. By reflecting on our collective habits of thought, he offers a reflection of his own.
Literary Theory for Robots: How Computers Learned to Write | By Dennis Yi Tenen | Norton | 158 pp. | $22