SEMINOLE COUNTY, Fla. – Imagine standing in a store when a stranger runs up to you, phone in hand, showing you a video of someone stealing your car from the parking lot. The video looks completely real — but it was made with artificial intelligence.
Melanie Valentine says that’s exactly what happened to her at a Home Depot in West Palm Beach late last year.
“I had a young guy run up to me, shoving his phone in my face, saying, screaming, ‘Someone is stealing your truck! Someone is stealing your truck!’” Valentine said.
She says the video was convincing.
“He was showing me a video of my husband’s truck where I parked it, with a guy opening the door to the truck, getting in the truck and driving away,” Valentine said. “It was very real.”
She said the man then urged her to follow him outside.
“He was saying, ‘follow me, follow me, we can catch them.’ And that’s when I was like, OK, what was happening?” Valentine said. “He was very insistent. Come outside. We can catch them, you know, which of course made no sense.”
That’s when Valentine says she realized something wasn’t right. Later that month, she learned the same content creator had targeted several other customers at the same store.
“We had a couple incidents at a local Home Depot where he was approaching people in the aisles,” said Capt. Roy Bevell of the West Palm Beach Police Department. “He approached the man and said, ‘Hey, here’s a video of your wife outside with another person.’”
A police report obtained by News 6 shows Alexis Martinez-Arizala also showed an AI-generated video of a man being dragged away at a gas station in West Palm Beach.
This week, the Seminole County Sheriff’s Office announced Martinez-Arizala is facing charges after one of his alleged pranks targeted one of their deputies at a store in Lake Mary.
“That’s wild,” Valentine said after News 6 told her about the arrest. “Just the gall to do that to a police officer.”
Valentine says the public needs to know this kind of deception is happening.
“I think that people need to be made aware that these things are going to happen, that this is going on,” Valentine said. “Just the way that it is being used for nefarious things, that’s pretty disturbing and upsetting.”
How AI makes fakes look real
Greg Gogolin, a professor and director of the Center for Cybersecurity and Data Science at Ferris State University, demonstrated how AI can be used to manipulate video and audio — using himself as the subject.
“This is created from a single photograph and a voice sample,” Gogolin said, showing the News 6 team what looked like himself talking. “Everything is fake. This isn’t a video.”
All it takes, he says, is about 10 seconds of audio to create a voice clone — though Gogolin noted his own sample took about an hour to build.
The barrier to entry is even lower when it comes to video.
“At the minimum, you need a snapshot of someone,” Gogolin said. “Now, you could do a screenshot. And that’s why social media is such a challenge, because people post so many things out there. And if you have a single snapshot that’s a reasonable quality, you can go with that.”
Gogolin said AI has existed since the 1950s but hit a plateau until breakthroughs from researchers at Google and Meta, along with the launch of ChatGPT about two and a half years ago, pushed the field forward.
“The productivity gains are so fast,” Gogolin said. “Based on these new models that are coming out, it’s actually kind of scary where we’re going to be in a very, very short period of time.”
The rapid pace of change has even shifted the way Gogolin teaches.
“If I were teaching a class in machine learning last year, I would have taught it much different than the class that I’m teaching right now,” he said. “Now, I’m teaching people how to use the tools to generate the code, rather than to write the code from scratch.”
How to spot a fake
Even a year ago, Gogolin says, spotting a fake was relatively straightforward. That’s no longer the case.
“You look for: Is the person blinking? Little mannerisms,” Gogolin said. “The models have kind of overcome that. So, it can be really hard.”
There are still some clues to watch for, though.
“If you have a video that was created from a snapshot picture, it will look like that person. But the way the facial expressions and things are that are generated from AI, they’re going to look different than what the person actually did,” Gogolin said. “Now, if you are generating something from a video to make another video, that’s a little bit different. But you can also look at things, maybe like backgrounds. Voice. And this is getting harder, but listen to the voice and see if the lips actually match what’s being said.”
When a prank becomes a crime
The Seminole County Sheriff’s Office says the Martinez-Arizala case highlights “growing concerns” surrounding the misuse of technology. But defining what crosses a legal line remains complicated.
West Palm Beach police say that when someone is in a public space, the First Amendment right to record limits what can be considered a criminal act.
“They’re not demanding any type of money or causing a police response, which is what would make it criminal,” Bevell said. “But just the fact that they’re approaching people and creating a moment of chaos for them is not good.”
Gogolin says the legal system is struggling to keep up.
“Legislation always lags technology,” he said. “Even something like privacy — the laws on privacy are much stronger in Europe than they are in the United States.”
Valentine believes there should be consequences.
“It was really frustrating to me that he continued to do this after he did it to me, and he really had no consequences,” Valentine said. “But until the laws change, you know, here we are.”