Technology, Trends
Dennis Yi Tenen
Associate Professor at Columbia University
Dennis Yi Tenen is an associate professor of English and comparative literature at Columbia University, where he also co-directs the Center for Comparative Media. His recent publications include Literary Theory for Robots (Norton, 2024). He lives, writes, and codes in New York City.
Imminent: How do you define intelligence, knowledge, and learning in the age of GenAI? Is this era challenging traditional humanistic ideas of knowledge?
Dennis: The question assumes that intelligence has somehow changed because of GenAI. I’m not sure it has. Intelligence has always been a combination of your natural ability to learn, adapt, and do smart things, supplemented by collective and instrumental capacity. I know certain things personally, but most of what I “know” actually belongs to what we know as humanity: how far away the moon is, how to make cheese, how to construct a building… Knowledge has always been shared, distributed, and cumulative. We know these things together.
What GenAI changes is not the nature of intelligence, but the distance between individual ignorance and collective expertise. That distance has become dramatically shorter. I can now access what exists in a diffuse, collective sense almost instantly. In that respect, AI makes visible something that was already true: intelligence has always been collaborative.
If we accept that, then education has always been about teaching students how to cross that gap—how to connect themselves to this vast reservoir of expertise. The difference today is that the first step into the pool is easier. And because it is easier, we can—and should—demand more.
Imminent: If access is easier, how does that change the way knowledge is constructed? And what does it mean for the role of teachers, especially in writing and literature?
Dennis: If we accept that the gap between collective expertise and personal ignorance is narrowing, there is a democratizing effect. Think of a physician: traditionally, you go to school for many years to acquire highly specialized knowledge. Now, a regular person—your patient—can access that very specialized knowledge much more quickly and in a less restricted way. General collective knowledge is much more accessible and this changes the role of the expert.
Classrooms have traditionally relied on simplified exercises: toy versions of real intellectual work. Scientific discovery takes place at the frontier of what we collectively do not know, while undergraduates practice controlled simulations. It can take years before a student reaches the actual edge of our shared ignorance.
Now I think we can bring students to that edge much earlier. As a researcher, I no longer feel compelled to conceal the uncertainty in my own work: the questions that genuinely puzzle my colleagues and me. I can expose students to those live problems. That changes the classroom dynamic. They’re not working on a rehearsal problem; they’re encountering questions that are unresolved in the field itself.

This shift alters the teacher’s role. Instead of guarding expertise, we model intellectual vulnerability. We show that knowledge is provisional, contested, and engineered. Because access to the archive is easier, we can move more quickly toward its limits. That is both democratizing and demanding.
Imminent: You’ve written about textual “templates.” In a moment when content is increasingly produced with AI and digital tools, are literary schemas themselves changing?
Dennis: There are two intertwined issues here.
First, AI forces us to confront the collective nature of cultural production. Collective here means that what we produce—whether art, journalism, or science—is not the work of solitary geniuses. It is something we make together. This should change our attitude toward those intellectual products: they’re collectively constructed structures, like a bridge or a large engineering project. When we produce intellectual goods, we are working within inherited architectures. A novel, like a bridge, relies on prior design principles, shared conventions, and tested forms.
It was always a mistake to imagine that culture was exempt from patterning. Once we acknowledge that knowledge is engineered—a massively built structure—you cannot take a purely individual attitude toward it.
The second issue is scale. As with buildings, you don’t want your architect to start from scratch every time: there are structures we have figured out, things that are stable and safe. Culture is templated in a similar way, and it was a mistake to imagine that intellectual infrastructure was somehow exempt from this. When production becomes massively automated, like with AI, those patterns can be replicated at industrial speed. And that introduces the risk of flattening. Just as mass-produced buildings begin to resemble one another, or fast-fashion clothing converges toward uniformity, intellectual goods can drift toward statistical averages: safer, cheaper, more standardized.

For instance, distinct traditions, like Italian and American music, evolved through particular historical trajectories. When pooled into a single generative system, those differences risk softening into a monoculture. There is something powerful about shared space, but there is also something fragile about local distinction. We have to think carefully about both.
Imminent: Given that risk, are there pedagogical strategies to resist cultural flattening?
Dennis: History is one of the strongest defenses against flattening. When I teach literature, I resist presenting it as a single undifferentiated field called “world literature.” Instead, I emphasize genealogies—the history of French letters, the evolution of Anglophone prose, the development of Italian narrative forms—and the diversity within each tradition.
Also, contextual awareness is crucial. Let’s take literary styles: if you ask for “a beautiful paragraph,” you’re invoking a stylistic lineage whether you realize it or not. American English prose, for example, often favors short, declarative sentences—a taste shaped by journalism, modernism, and particular institutional histories. Other languages cultivate different cadences, different tolerances for ornament or abstraction.
It’s similar to cooking. If you say “make me something delicious,” delicious doesn’t mean the same thing in the United States and in Italy. That’s what I mean by contextual awareness: acknowledging cultural, geographical, and demographic differences, and having a more nuanced sense that there’s no universal metric of beauty or quality. Acknowledging that plurality is essential. And that awareness helps preserve diversity within the archive.
However, I’m not entirely pessimistic. Thankfully, the world remains diverse enough. Preferences vary. That variation creates friction. And friction can nourish creativity.
Imminent: How is creativity conceived today in education?
Dennis: I don’t have a formula for creativity. But one way creativity can be nourished is through exposure.
In my course Writing AI, I tell students they have to develop a sense of style in their writing. And style—whether in writing, clothing, music, or cooking—emerges from encountering differences. To put it directly: if you don’t read stylish authors from different traditions, if you only consume bureaucratic ChatGPT bullet points, you’ll never develop a sense of style. You need to be exposed to a diversity of art, food, and writing at an early age. From that diversity, you can create your own special formula by combining the influences you love—even the strange and marginal ones, which are often what creates your unique blend. Without exposure, you produce very bland things.
That’s part of what education is for: creating that exposure. I want to expose my students to beautiful writing: that’s why we teach Dante, Joyce, and Dostoyevsky. My job isn’t to say you must like this, but to say you must like something. Students need to experience intensity: admiration, resistance, fascination, even discomfort. Because then I know my students will go on and build their own unique sense of identity.
Imminent: Do you think students are still willing to put in this work of searching for their own style and creativity?
Dennis: As always, the exceptional students will do exceptional things. There will always be a bell curve, and we cannot make everybody amazing.
But I want to be candid: the primary obstacle isn’t AI. It’s an addiction to other forms of media. Many students spend four or five hours a day on content that functions as intellectual junk food. That leaves very little time for difficult tasks. Reading Dostoyevsky requires uninterrupted concentration. It cannot easily compete with five-second videos engineered to stimulate dopamine.
Many people worry about AI, but it’s a misplaced worry. The much more obvious problem is the cheap, low-quality content we’re constantly consuming, resulting in short attention spans and addictive behavior. It comes down to brain chemistry: a low-level dopamine addiction that I see in many of my students.
And this is where the two phenomena are related: AI can amplify the production of junk content at scale, but the underlying vulnerability—our attraction to cheap stimulation—predates it. You can watch funny cat videos; AI can make a million of them in a couple of minutes. The exciting part—the way all of us can become better thinkers and glimpse the limits of human knowledge—coexists with a massive production of cheap content that stands in the way. You might just get stuck there, watching five-second clips that produce no real movement intellectually, socially, or politically.
Imminent: You argued many times that intelligence and creativity are collective. Do you think AI in education can be used to enhance collaboration rather than only individual performance?
Dennis: If we agree that AI is just another form of collaboration, then the approach changes. Traditionally, I would assign an essay to an individual student, but now I assign essays to teams. Students often resist group work because it reveals many social dynamics: somebody may be lazy while the other person does all the work; one student might have a full-time job, while another lives with their parents and has more time. And that’s exactly the kind of inequality that is baked into AI.
We need to confront the fact that creativity, intelligence, and knowledge are not perfectly equitable, transferable, or equally distributable qualities. In my class, I have a student who uses the free version of ChatGPT, a student who uses the $20-per-month version, and a student who uses the $200-per-month version. They will produce different work, and it’s not fair.
But those inequalities were always there. Before AI, some students had private tutors or financial advantages that supported their academic work. We simply preferred to imagine that homework was an individual endeavor.
Knowledge production has never been perfectly equitable. Collective intelligence is social and therefore political. Thus, it reflects the distribution of resources in society.
Imminent: Does it ever affect your work?
Dennis: Certainly there are some practical challenges, but in a way, it makes my work more exciting. My research is about the production of knowledge and text—how things are written. If you believe things are written by individual geniuses, it’s actually quite boring: just one author sitting there. But when production is social, you’re essentially studying the manufacture of intellectual goods.
Take translation: looking at how its production involves teams and tools is very interesting. Ten years ago, I really had to work to convince people they were not working alone. Now it is inescapable: a professional translator works in a team, uses technology. That makes my work more interesting: I get to study the environment itself, the workshop, the factory that produces intellectual goods at massive scale.
In the first chapter of The Wealth of Nations, Adam Smith asks: how is a coat made? And he starts tracing it: where the material was grown, how many people collected it, the shipping, the manufacturing, the pattern design. Contemporary cultural goods are just as complicated to make: international, assembled in pieces across time. It is complicated, but it is also genuinely fascinating.
Imminent: Your perspective is unique, having worked both in literary contexts and as an engineer. Do you think this moment could spark a fresh dialogue between disciplines?
Dennis: I don’t like the divide between humanities and engineering. It doesn’t reflect the modern world—and it wasn’t even accurate in the 18th century. The people who create tools for producing intellectual goods—Microsoft Word, Google Docs, ChatGPT—are often thought of as engineers, while we are seen as users. But it shouldn’t be that way. To write a poem, compose a song, create a recipe, or design clothing, you have to think like an engineer. Your passion is produced within complex systems, and you need to understand the machinery behind it. Once industrial production is involved, everyone becomes, in some way, an engineer.
Even professionals at the top of their fields eventually confront automation, electrification, and collectivization. These processes are partly engineering, partly human sciences. In today’s world, we all must become engineers, just as engineers must understand history, literature, and art. Hybrid education is the path forward, because neither discipline alone can navigate these challenges.
This is already happening. At Columbia, we are launching new hybrid programs. I spend my summers teaching at SKU University in Korea, where remarkable initiatives are underway. Carnegie Mellon has just launched a Computational Humanities PhD program, and Denmark hosts a Computational Humanities center. Universities are centuries-old and carry rich traditions—which are valuable—but when they combine that legacy with innovation, they often adopt hybrid approaches. This is the right direction.
Hybrid thinking also shapes professional teams: a writing team for a TV show might include a writer, an engineer, and a project manager—people with complementary skills and educational backgrounds. I consider myself hybrid, but I cannot master everything, so I seek out the right partners for each project. That’s how true innovation happens.
Imminent: Finally, in the U.S. context, does the educational model and infrastructure influence the adoption of AI? How does it compare to other countries?
Dennis: The strength of the U.S. is the depth of its education. It’s a large country with many excellent universities. But the beauty of the university is that it is a global project, going back to the Ancient Greeks. I’m deeply committed to my university and to nurturing my community, but my research is conducted within a field—it’s not contained within Columbia, or New York, or the United States. I collaborate with colleagues from Korea, Denmark, and Italy. There’s no “special sauce” you can keep to yourself: knowledge and ideas are shared.
Right now, no one has all the answers about integrating AI into education. What matters most is keeping the conversation open, allowing ideas to flow across cultures. I like to think of the university as a centuries-long global project. Of course, it’s political—we’re not immune to that. But even when countries are in conflict, researchers can collaborate because we care about the idea itself. This collaboration ensures that new developments in AI don’t benefit only one culture or country, but are shared equitably. That’s why universities everywhere must be protected.
In the U.S., universities face immense pressure—the pressure to become more politicized. We must resist it and remain somewhat idealistic. We work as a field to generate new knowledge and ideas together, keeping lines of communication open and experimenting collaboratively.
I always end my lectures and books on the same note: knowledge is collective and global. What we know about the Earth, the planets, art, and culture is never contained; it belongs to everyone. AI literally builds on this collective foundation, drawing from archives, libraries, and texts. It is an interface atop public, shared knowledge. Recognizing this, we must ensure that this intelligence remains in the public domain.
Still, I’m optimistic. Knowledge and AI cannot be fully contained. As we see with Wikipedia or Linux, there is room for free, open, and public resources alongside corporate structures. Education, by its very nature, is inherently democratic—and AI must follow that path. It is, and must remain, a global public good.