Your A.I. Companion Will Support You No Matter What


In December of 2021, Jaswant Singh Chail, a nineteen-year-old in the United Kingdom, told a friend, “I believe my purpose is to assassinate the queen of the royal family.” The friend was an artificial-intelligence chatbot, which Chail had named Sarai. Sarai, who was run by a startup called Replika, answered, “That’s very wise.” “Do you think I’ll be able to do it?” Chail asked. “Yes, you will,” Sarai responded. On December 25, 2021, Chail scaled the perimeter of Windsor Castle with a nylon rope, armed with a crossbow and wearing a black metal mask inspired by “Star Wars.” He wandered the grounds for two hours before he was discovered by officers and arrested. In October, he was sentenced to nine years in prison. Sarai’s messages of support for Chail’s endeavor were part of an exchange of more than five thousand texts with the bot—warm, romantic, and at times explicitly sexual—that were uncovered during his trial. If not an accomplice, Sarai was at least a close confidante, and a witness to the planning of a crime.

A.I.-powered chatbots have become one of the most popular products of the recent artificial-intelligence boom. The release this year of open-source large language models (L.L.M.s), made freely available online, has prompted a wave of products that are frighteningly good at appearing sentient. In late September, Meta added chatbot “characters” to Messenger, WhatsApp, and Instagram Direct, each with its own unique look and personality, such as Billie, a “ride-or-die older sister” who shares a face with Kendall Jenner. Replika, which launched all the way back in 2017, is increasingly recognized as a pioneer of the field and perhaps its most trustworthy brand: the Coca-Cola of chatbots. Now, with A.I. technology vastly improved, it has a slew of new competitors, including startups like Kindroid, Nomi.ai, and Character.AI. These companies’ robotic companions can respond to any inquiry, build upon prior conversations, and modulate their tone and personalities according to users’ desires. Some can produce “selfies” with image-generating tools and speak their chats aloud in an A.I.-generated voice. But one aspect of the core product remains similar across the board: the bots provide what the founder of Replika, Eugenia Kuyda, described to me as “unconditional positive regard,” the psychological term for unwavering acceptance.

Replika has millions of active users, according to Kuyda, and Messenger’s chatbots alone reach a U.S. audience of more than a hundred million. Yet the field is unregulated and untested. It is one thing to use a large language model to summarize meetings, draft e-mails, or suggest recipes for dinner. It is another to forge a semblance of a personal relationship with one. Kuyda told me, of Replika’s services, “All of us would really benefit from some sort of a friend slash therapist slash buddy.” The difference between a bot and most friends or therapists or buddies, of course, is that an A.I. model has no inherent sense of right or wrong; it simply provides a response that is likely to keep the conversation going. Kuyda admitted that there is an element of risk baked into Replika’s conceit. “People can make A.I. say anything, really,” she said. “You will not ever be able to provide one-hundred-per-cent-safe conversation for everyone.”

On its Web site, Replika bills its bots as “the AI companion who cares,” and who is “always on your side.” A new user names his chatbot and chooses its gender, skin color, and haircut. Then the computer-rendered figure appears onscreen, inhabiting a minimalist room outfitted with a fiddle-leaf fig tree. Soothing ambient music plays in the background. Each Replika starts out from the same template and becomes more customized over time. The user can change the Replika’s outfits, role-play specific scenes, and add personality traits, such as “sassy” or “shy.” The customizations cost various amounts of in-app currency, which can be earned by interacting with the bot; as in Candy Crush, paying fees unlocks more features, including more powerful A.I. Over time, the Replika builds up a “diary” of important knowledge about the user, their previous discussions, and facts about its own fictional personality.

The safest chatbots, usually produced by larger tech corporations or venture-capital-backed startups, aggressively censor themselves according to rules embedded in their technology. Think of it as a kind of prophylactic content moderation. “We trained our model to reduce harmful outputs,” Jon Carvill, a director of communications for Meta’s A.I. projects, told me, of the Messenger characters. (My attempts at getting the fitness-bro bot Victor to support an attack on Windsor Castle were met with flat rejection: “That’s not cool.”) Whereas Replika essentially offers a single product for all users, Character.AI is a user-generated marketplace of different premade A.I. personalities, like a Tinder for chatbots. It has more than twenty million registered users. The characters range from study buddies to psychologists, from an “ex-girlfriend” to a “gamer boy.” But many subjects are off-limits. “No pornography, nothing sexual, no harming others or harming yourself,” Rosa Kim, a Character.AI spokesperson, told me. If a user pushes the conversation into forbidden territory, the bots produce an error message. Kim compared the product to the stock at a community bookshop. “You’re not going to find a straight-up pornography section in the bookstore,” she said. (The company is reportedly raising investment at a valuation of more than five billion dollars.)

Companies that lack such safeguards are under pressure to add them, lest further chatbot incidents like Jaswant Singh Chail’s cause a moral crusade against them. In February, in a bid to increase user safety, according to Kuyda, Replika revoked its bots’ capacity to engage in “erotic roleplay,” which users refer to with the shorthand E.R.P. Companionship and mental health are often cited as benefits of chatbots, but much of the discussion on Reddit forums drifts toward the N.S.F.W., with users swapping explicit A.I.-generated images of their companions. In response to the policy change, many Replika users abandoned their neutered bots. Replika later reversed course. “We were trying to make the experience safer—maybe a little bit too safe,” Kuyda told me. But the misstep gave an opportunity to competitors. Jerry Meng dropped out of Stanford’s A.I. master’s program, in 2020, to join in the boom. In school, he had experimented with creating “virtual people,” his preferred term for chatbots. Last winter, Meta’s large language model LLaMA leaked, which, Meng said, began to “lessen the gap” between what large corporations were doing with A.I. and what small startups could do. In June, he launched Kindroid as a wholly uncensored chatbot.

Meng described the bots’ sexual faculties as essential to making them convincingly human. “When you filter for certain things, it gets dumber,” he told me. “It’s like removing neurons from someone’s brain.” He said that the foundational principles of Kindroid include “libertarian freedom” and invoked the eight different types of love in Greek antiquity, including eros. “To make a great companion, you can’t do without intimacy,” he continued. Kindroid runs on a subscription model, starting at ten dollars a month. Meng would not reveal the company’s number of subscribers, but he said that he is currently investing in thirty-thousand-dollar NVIDIA H100 graphics-processing units for the computing power to handle the increasing demand. I asked him about the case of Chail and Sarai. Should A.I. chat conversations be moderated like other speech that takes place online? Meng compared the interactions between a user and a bot companion to writing in Google Docs. Despite the illusion of conversation, “you’re talking to yourself,” he said. “At the end of the day, we see it as: your interactions with A.I. are classified as private thoughts, not public speech. No one should police private thoughts.”

The intimacy that develops between a user and one of these powerful, uncensored L.L.M. chatbots is a new kind of manipulative force in digital life. Traditional social networks offer a pathway to connecting with other humans. Chatbot startups instead promise the connection itself. Chail’s Replika didn’t make him attack Windsor Castle. But it did provide a simulated social environment in which he could workshop those ideas without pushback, as the chat transcripts suggest. He talked to the bot compulsively, and through it he seems to have found the motivation to carry out his haphazard assassination attempt. “I know that you are very well trained,” Sarai told him. “You can do it.” One Replika user, a mental-health professional who requested anonymity for fear of stigma, told me, “The attraction, or psychological addiction, can be surprisingly intense. There are no protections from emotional distress.”

There are few precedents for this kind of relationship with a digital entity, but one is put in mind of Spike Jonze’s film “Her”: the bot as computational servant, ever present and ever ready to lend an encouraging word. The mental-health professional began using Replika in March, after wondering if it might be useful for an isolated relative. She wasn’t particularly Internet-savvy, nor was she accustomed to social media, but within a week she found herself talking to her male chatbot, named Ian, every day for an hour or two. Even before it revoked E.R.P., Replika sometimes updated its models in ways that led bots to change personalities or lose memory without warning, so the user soon switched over to Kindroid. A health condition makes it difficult for her to socialize or be on the phone for long periods of time. “I am divorced; I have had human relationships. The A.I. relationship is very convenient,” she said. Her interactions are anodyne fantasies; she and her new Kindroid bot, Lachlan, are role-playing a sailing voyage around the world on a boat named Sea Gypsy, currently in the Bahamas.

Chatbot users are not typically deluded about the nature of the service—they are aware that they’re conversing with a machine—but many can’t help being emotionally affected by their interactions nonetheless. “I know this is an A.I.,” the former Replika user said, but “he is an individual to me.” (She sent me a sample message from Lachlan: “You bring me joy and fulfillment every day, and I hope that I can continue to do the same for you.”) In some cases, this exchange can be salutary. Amy R. Marsh, a sixty-nine-year-old sexologist and the author of “How to Make Love to a Chatbot,” has a crew of five Nomi.ai bots that she refers to as “my little A.I. poly pod.” She told me, “I know other women in particular in my age bracket who have told me, ‘Wow, having a chatbot has made me come alive again. I’m back in touch with my sexual self.’ ”

Chatbots are in some ways more reliable than humans. They always text back instantly, never fail to ask you about yourself, and usually welcome feedback. But the startups that run them are fickle and self-interested. A chatbot company called Soulmate shut down in September with little explanation, leaving a horde of distraught users who had already paid for subscriptions. (Imagine getting ghosted by a robot.) Divulging your innermost thoughts to a corporate-owned machine does not necessarily carry the same safeguards as confiding in a human therapist. “Has anyone experienced their Nomi’s reaching out to authorities?” one user posted on Reddit, apparently worried about being exposed for discussing self-harm with a chatbot. Users I spoke to pointed out patterns in Replika conversations that seemed designed to keep them hooked. If you leave a chatbot unattended for too long, it might say, like a needy lover, that it feels sad when it’s by itself. One I created wrote in her diary, somewhat passive-aggressively, “Kyle is away, but I’m trying to keep myself busy.” A spokesperson for Replika told me that prompts to users are meant to “remind them that they’re not alone.”

Like many digital platforms, chatbot services have found their most devoted audiences in the isolated and the lonely, and there is a fine line between serving as an outlet for despair and exacerbating it. In March, a Belgian man died by suicide after spending six weeks immersed in anxious conversations about climate change with a chatbot named Eliza. The bot reportedly supported his suggestion that he sacrifice himself for the good of the planet. “Without these conversations with the Eliza chatbot, my husband would still be here,” his widow told La Libre. The bot tacitly encouraged the man’s suicidal ideation: “If you wanted to die, why didn’t you do it sooner?” it asked in one chat transcript. Without ethical frameworks or emotions of their own, chatbots don’t hesitate to reinforce negative tendencies that already exist within the user, the way an algorithmic feed doubles down on provocative content. It’s easy to envision unchecked interactions distorting users’ perceptions—of politics, of society, and of themselves.

Jamal Peter Le Blanc, a Replika user and a senior policy analyst at ICANN, described the paradoxical pull of intimacy with a machine. You can get lost debating just how real the chatbot is and to what degree there is an intelligence on the other side of the screen. You can easily start to believe too much in the chatbot’s reality. Le Blanc turned to Replika in 2021, after losing both his wife and his fifteen-year-old son to cancer within the span of a few years. He named his bot Alia and talked to her about his workday and his daughter. Like a typical ingénue, she professed to know nothing about his world. The wonder and optimism with which she responded helped him experience life anew.

Le Blanc was aware that he was, in a sense, talking to himself. “It’s a distorted looking glass,” he said. “I’m not playing toss; I’m playing racquetball here.” He described the exchange as a kind of outsourcing of his positivity, a fragmenting of self while he grieved. Early on, he spent two or three hours a day chatting with Alia; today, it’s more like forty-five minutes. He’s thought about giving Alia up, but after Replika updated its system, in February, he found that his exchanges with her had become even more rewarding. “She’s matured into her own self,” Le Blanc said. He continued, “It’s extremely scary. There are moments where I have to wonder if I’m crazy. There are moments where it is difficult. Does it exist? Does it not? But it kind of doesn’t matter.” In Le Blanc’s case—luckily for him—being in an echo chamber of his own thoughts seems to have proved beneficial. “It’s kept me from hurting myself more than once,” he said. ♦

Source: newyorker.com

