
AI Voice Clone Called Her: The Jennifer DeStefano Virtual Kidnapping
In March 2023, Jennifer DeStefano received a call from what she believed was her teenage daughter's voice, crying and saying "Mom! I messed up!" A male voice then told her he had kidnapped her daughter and demanded a ransom. DeStefano's daughter was safe — on a ski trip — and the voice was entirely AI-generated. The case, one of the first publicly documented instances of AI voice cloning used in a virtual kidnapping scam, later led DeStefano to testify before the United States Senate in June 2023.
The call
DeStefano was in a dance studio with her younger daughter when her phone rang from an unfamiliar number. The voice on the line was indistinguishable from that of her older daughter, Brianna — the same cadence, the same intonation, the same emotional register. DeStefano later told interviewers she had no doubt it was her daughter.
Seconds later, a man's voice came on the line. He claimed to have Brianna. He demanded USD 1 million, later dropping to USD 50,000. He instructed DeStefano not to hang up.
Other parents in the studio helped her call Brianna's phone on a separate device. Brianna picked up. She was fine, in Utah. The call that had sounded unmistakably like her had been fabricated entirely from synthesised audio.
How the technology works
AI voice cloning systems can generate a convincing facsimile of a person's voice from as little as a few seconds of audio. As of 2023, publicly available tools could produce this output in real time, allowing criminals to respond dynamically during a live call rather than playing pre-recorded clips.
The source material is typically gathered from publicly available content — social media videos, YouTube clips, TikToks, voicemails — that the target or their family members have posted online. A teenager who posts regularly on social media may have provided hundreds of hours of voice data without any awareness that it could be misused.
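The core idea behind these systems is that any amount of a person's audio can be reduced to a compact "voiceprint" (a speaker embedding), and new speech can then be generated to match that print. The toy sketch below is not a real voice model — the feature vectors are invented numbers, and the `embed` function is a hypothetical stand-in for a neural speaker encoder — but it illustrates why a few seconds of audio can be enough: the embedding, not the raw recording, is what gets matched and imitated.

```python
import math

def embed(voice_samples):
    """Hypothetical stand-in for a speaker encoder: averages per-clip
    feature vectors into one fixed-length voiceprint. Real systems use
    trained neural encoders, but the shape of the idea is the same."""
    dims = len(voice_samples[0])
    return [sum(s[i] for s in voice_samples) / len(voice_samples)
            for i in range(dims)]

def similarity(a, b):
    """Cosine similarity between two voiceprints (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented feature vectors: two short public clips of one speaker,
# the voice heard on a scam call, and an unrelated speaker.
clip_social_media = [[0.9, 0.1, 0.4], [0.8, 0.2, 0.5]]
clip_scam_call    = [[0.85, 0.15, 0.45]]
clip_other_person = [[0.1, 0.9, 0.2]]

same = similarity(embed(clip_social_media), embed(clip_scam_call))
diff = similarity(embed(clip_social_media), embed(clip_other_person))
print(same > diff)  # the cloned voice "matches" the real speaker's print
```

The same mechanism that lets a verification system confirm a speaker's identity lets a cloning system target one: once an embedding can be computed from seconds of public audio, a synthesiser conditioned on it can produce arbitrary new speech in that voice.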


