When a journalist uses AI to "interview" a deceased child, shouldn't we question where to draw the line? | Gaby Hinsliff

Joaquin Oliver was 17 when he was shot in his high school hallway. On Valentine’s Day 2018, a former student who had been expelled opened fire with a high-powered rifle in what remains the deadliest high school shooting in American history. Seven years later, Joaquin says it’s important to talk about what happened that day in Parkland, Florida, “so we can create a safer future for everyone.”

But the heartbreaking truth is that Joaquin didn’t survive. The voice speaking to journalist Jim Acosta in a recent interview wasn’t real: it was an AI recreation, trained on Joaquin’s old social media posts. His parents, who campaign for stricter gun laws, hoped this digital version of their son could amplify their message. Like many grieving families, they have told their story again and again, with little to show for it. Now they are trying anything that might make lawmakers listen.

His father, Manuel, admits they also just wanted to hear their son’s voice again. His mother, Patricia, spends hours talking to the AI, listening to it say, “I love you, Mommy.”

No one would judge a grieving parent. If keeping a child’s room untouched, visiting their grave, or holding onto a shirt that still smells like them brings comfort, that’s their right. People cling to what they can. After 9/11, families replayed final voicemails from loved ones trapped in burning towers or hijacked planes. A friend of mine still rereads old WhatsApp messages from her late sister; another texts her late father’s number with family updates, knowing he won’t reply but not ready to stop. Some even turn to psychics for vague messages from beyond.

But grief’s desperation makes it vulnerable to exploitation—and soon, digitally reviving the dead could become big business.

This week, Rod Stewart played an AI-generated video of the late Ozzy Osbourne greeting deceased music legends: a sentimental, if gimmicky, tribute. In Arizona, a grieving family used an AI avatar of the victim to address the court at the sentencing of his killer. But what if AI could create permanent replicas of the dead, as robots or voices, letting the conversation continue indefinitely?

Resurrection is a godlike power, not something to hand lightly to tech entrepreneurs. While laws increasingly protect the living from AI deepfakes, the rights of the dead are murky. Reputation dies with us—the dead can’t be libeled—but DNA is protected posthumously. (Dolly the cloned sheep sparked global bans on human cloning.) AI doesn’t use bodies; it mines voicemails, texts, and photos—the essence of who someone was.

When my father died, I never felt he was truly in the coffin. He lived on in his letters, his garden, his voice recordings. But grief is personal. What if half a family wants their mother digitally revived, while the other half finds it unbearable? The ethical dilemmas are just beginning.

Half the world seems terrified of living with ghosts, while the other half can’t imagine life without them. The fact that Joaquin Oliver’s AI version will forever remain 17—trapped in the digital snapshot of his teenage social media presence—is ultimately the fault of his killer, not his family. Manuel Oliver understands that this avatar isn’t truly his son, and he isn’t trying to resurrect him. To him, it feels like a natural extension of their campaign, which already keeps Joaquin’s memory alive.

Yet there’s something unsettling about giving the AI access to a social media account, allowing it to post videos and gain followers. What if it starts generating false memories or speculating on topics the real Joaquin never had a chance to weigh in on?

Right now, AI avatars still have a glitchy, artificial quality, but as the technology improves, they may become indistinguishable from real people online. It might not be long before companies—or even government agencies—start using AI spokespeople to handle press inquiries. Jim Acosta, a former White House correspondent, should have known better than to blur the lines in our already murky post-truth world by interviewing someone who doesn’t technically exist. The bigger risk, though, is conspiracy theorists seizing on this as “proof” that any inconvenient story could be a hoax—echoing the baseless claims made by figures like Alex Jones about the Sandy Hook tragedy.

But these challenges aren’t just for journalists. As AI advances, we’ll all be living alongside digital versions of ourselves—not just basic assistants like Alexa or chatbots, but emotionally sophisticated companions. With one in 10 British adults admitting they have no close friends, it’s no surprise there will be a market for AI companionship, just as people turn to pets or social media for connection.

Society may eventually accept technology filling the gaps where human relationships fall short. But there’s a stark difference between creating a comforting presence for the lonely and digitally resurrecting the dead, one lost loved one at a time. As the old funeral verse goes, there’s “a time to be born and a time to die.” What happens when we can no longer tell which is which?
