Attack of the Clones: Understanding the AI population boom
Why anthropomorphism is taking root in AI, with a case study on OpenAI's GPTs
AI needs a human face.
This goes back to the Mechanical Turk, a chess-playing robot — and elaborate hoax — that toured Europe in the 1770s. Today you see this playing out across consumer products. I’ve written about how ChatGPT itself is a product that is subtly yet thoroughly anthropomorphized to engender trust. Meta is peddling celebrity-powered AI assistants ventriloquized by the likes of Paris Hilton and Snoop Dogg. There is a thriving chatbot industry answering the loneliness epidemic with products like Replika AI, the “AI companion who cares.” The web itself is littered with spammy sites made by spammy authors. And we’re all dealing with more people slipping into our DMs asking for our SSNs.
This all amounts to an AI population boom. Generative AI has unlocked the ability to create credible beings from bits — a step change in the Earth’s carrying capacity. What’s fueling this and where do we go from here?
The rise of ambient computing and generative AI
To understand what’s going on, it’s helpful to look at the history of computing hardware. Over the past few decades, as we’ve increased processing power according to Moore’s Law, computing has come out of the closet and taken to the clouds. Hardware has gotten smaller, more powerful, and so much more ubiquitous. Ben Thompson captures it better than I can:
Our built world can now support continuous computing everywhere. He posits this next stage as “ambient computing” — always on, everywhere.
The big race among the tech giants is to create the hardware that can underpin ambient computing. Meta and Google are trying for glasses again, while Apple is going headset. Humane is pushing an AI pin. Jury’s out on who will win. I’m putting my money on audio-based hardware, which is cheaper, easier to build, and, with generative AI, finally workable. Once again: Her (2013).
The beauty of generative AI is that it can blather on forever and ever — and to blather is to be human. LLMs today make stuff up, which is pretty frustrating when you’re trying to do something goal-oriented that requires precision. That matters less when the software is a stand-in for a friend or an assistant. In a world of ambient computing, where you need an always-on operating system to shoot the shit with, generative AI is that girl.
Generative AI is the killer app for ambient computing. And this is why we’re seeing software drift toward personhood.
Case study: Make your own GPT with OpenAI
OpenAI again embodies (hehe) this trend towards the anthropomorphic. Last month they released a new product, GPTs: “GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others. For example, GPTs can help you learn the rules of any board game, help teach your kids math, or design stickers.”
Kevin Roose notes the significance here: if the first phase of AI was chatbots chatting, this next phase is chatbots doing. GPTs are the first step towards AI agency.
From a product perspective, GPTs are a streamlining feature for power users, like macros in Excel: they make it easier to set up workflows in ChatGPT. GPTs save you the time of starting a new chat with custom instructions and specific data sources.
Business-wise, GPTs are a platform play. OAI wants to create a marketplace for users to share and monetize their creations. This is a tried-and-true strategy that’s served big tech companies like Apple and Google, which run massive money-making and gatekeeping app stores.
I’m a skeptic, so I also read GPTs as last-mile data collection. They give OpenAI an easy way to collect users’ proprietary data in the form of PDFs, business data, personal artifacts, etc. OAI says they won’t use this data for training purposes — haven’t we heard that before?
Anyhow, back to the product. When you create a GPT, you are meant to anthropomorphize it from the jump. You’re asked what kind of bot you want, what expertise they should provide, what to name them, and what they should sound like. ChatGPT generates a logo on the spot, breathing life into digital clay. Within three minutes you have a full-fledged app that you can use privately or list publicly. Technically speaking… it’s pretty wild.
GPTs are just a first step towards agentic AI. It’s not clear to me that this particular product and its implementation will take off. But you can bet we’ll see more agentic products come online.
I should say that I do see decent potential for social impact here. GPTs might take specialized knowledge and democratize it, for example. As a proof of concept, I made Renters’ Rights Buddy, a GPT that can answer questions about evictions, rent, and insurance. I instructed the bot to be extra careful to prioritize factuality and cite sources. My concern here is that GPTs still contain the vulnerabilities of any probabilistic LLM — they hallucinate. Did I just create a misinformation machine?
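To make the builder flow concrete, here’s a rough sketch in Python of the persona fields the GPT builder walks you through — name, expertise, instructions, uploaded knowledge. The field names and helper function are my own paraphrase of the builder form, not OpenAI’s actual schema or API:

```python
# Hypothetical sketch of the choices the GPT builder prompts you for.
# Field names are illustrative, not OpenAI's real schema.

def make_gpt_config(name, description, instructions, knowledge_files=None):
    """Bundle the persona choices into one configuration dict."""
    return {
        "name": name,                        # what to call the bot
        "description": description,          # its claimed expertise
        "instructions": instructions,        # tone, persona, guardrails
        "knowledge": knowledge_files or [],  # uploaded PDFs, data, etc.
    }

config = make_gpt_config(
    name="Renters' Rights Buddy",
    description="Answers questions about evictions, rent, and insurance.",
    instructions=(
        "Prioritize factuality. Cite sources for every claim. "
        "Say so plainly when you are unsure."
    ),
)
```

Note how every field pushes you toward personhood: a name, a voice, a domain of expertise. The guardrails in `instructions` are just more text fed to a probabilistic model — there’s no mechanism guaranteeing the bot actually cites real sources.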
You can see how OAI is using an anthropomorphic approach to give chatbots — this old, annoying medium — a glow-up. A little humanity goes a long way towards making these bots stickier and more engaging. They’re also breeding like rabbits. This X user has a whole team (“It almost feels like I’ve hired employees to work for me”), which is a grim reminder of the impending economic shocks from AI.
Is this the paradigm of the future — always-already rolling deep? Everyone a business unit, a militia, a glam squad?
Our AIs, ourselves
Kyle Chayka’s great piece in The New Yorker looks specifically at Replika and other AI companions. There are some serious ethical and psychological implications for these products, many of which we don’t yet understand — but will, soon, at a global scale. “Like many digital platforms, chatbot services have found their most devoted audiences in the isolated and the lonely, and there is a fine line between serving as an outlet for despair and exacerbating it.”
At Crisis Text Line, my previous employer and mental health startup, we were staunchly human-first. Every texter in crisis gets to chat with a real human volunteer. We employed AI to augment that experience, helping volunteers understand risk level, for instance. But we could never in good conscience let AI take the wheel with such sensitive conversations. This creates a bottleneck — we can’t help as many people as we would like — but a responsible one. A world with more Replika-type chatbots could be a net positive. It could also result in more tragedy.
And what of this booming AI population? Do they have rights? I often think of Ted Chiang’s provocations around “AI suffering,” which he discusses at length in this great Ezra Klein interview. He argues that, based on our track record, we’ll treat AIs pretty terribly.
It’s hard enough to give legal protections to human beings who are absolutely moral agents. We have relatively few legal protections for animals who, while they are not moral agents, are capable of suffering… So the way that we will wind up treating software, again, assuming that software ever becomes conscious, they will inevitably fall lower on the ladder of consideration. So we will treat them worse than we treat animals. And we treat animals pretty badly.
I know this feels a little abstract. But there’s an important moral question here. What does it mean to create a new class of beings to subjugate? And what does subjugating them do to us?
LINKS THIS WEEK
More Ted Chiang! Read his short story about AI pets, reviewed here (WIRED)
Inside the recent OpenAI drama (The New Yorker)
Joy Buolamwini unpacks racism in facial recognition technology, and reminds us AI violence can be mundane (NPR)
The a16z podcast explains Q*, the latest breakthrough (a16z podcast)
“Can AI treat mental illness?” (The New Yorker)