
Blackmailers-for-hire are weaponizing ‘deepfake’ revenge porn

Imagine you receive a video of yourself engaged in an explicit sexual act, with a demand to hand over money or else the clip will be sent to your family, friends and co-workers.

Only, it’s not really you, but a shockingly convincing fake that looks and sounds just like you.

Your options are to pay up and hope the blackmailer makes good on their word, or to run the gauntlet of convincing everyone, including strangers who find the clip online, that you're innocent.

It’s a worrying reality that’s not only possible thanks to rapidly evolving and widely available technology, but already playing out in a new trend called “malicious deepfakes.”

A victim told the Washington Post that she discovered a video of her face digitally stitched onto the body of an adult film actress circulating online.

“I feel violated — this icky kind of violation,” the woman, who is in her 40s, told the newspaper. “It’s this weird feeling, like you want to tear everything off the internet. But you know you can’t.”

The report said similar fakes, made using open source machine learning technology developed by Google, had been used to threaten, intimidate and extort women.

Hollywood megastar Scarlett Johansson has fallen victim to the sickening trend, with dozens of hard-to-spot fake sex tapes circulating online.

In just one instance, a video described as a “leaked sex tape” — but which is actually a deepfake — has been viewed almost two million times.

“Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired,” Johansson said.

Other stars, from Taylor Swift to “Wonder Woman” actress Gal Gadot, have been inserted into similarly vile videos.

But the technology has given those with malicious motives the means to ruin someone's life at the click of a mouse.

Held to ransom

In 2016, a man in California was charged with targeting his ex-wife, and earlier this year Indian investigative journalist Rana Ayyub found herself the victim of a deepfake video.

It spread quickly via social media in apparent retaliation for a piece she had written exposing government corruption.

“The slut-shaming and hatred felt like being punished by a mob for my work as a journalist, an attempt to silence me,” Ayyub wrote in an op-ed for The New York Times.

“It was aimed at humiliating me, breaking me by trying to define me as a ‘promiscuous’, ‘immoral woman.’”

There are forums online devoted to deepfakes, where users can make requests for women they want inserted into unflattering and usually pornographic scenarios.

And creators charge for the work, usually about $20 per video, for those wanting fabricated clips of exes, co-workers, friends, enemies and classmates.

Researchers testing the technology created a video of Barack Obama delivering a speech, mapping his facial movements from thousands of available images and hours of footage.

With the use of artificial intelligence, the end product looked and sounded just like the former president, except he had never actually uttered those words.

And while the outcome was an innocent, even playful, proof of concept, it showed how easily someone could use the technology for evil.
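The report does not detail how such videos are built, but the approach commonly described in open source deepfake projects is a pair of autoencoders that share one encoder: the network learns to compress any face into a compact code, and a decoder trained on a single person reconstructs that person's face from the code. Below is a minimal PyTorch sketch of that idea; the layer sizes, 64-pixel crops and variable names are illustrative assumptions, not details from the reporting.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = SharedEncoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training reconstructs each person through their own decoder.
# At inference, face A is pushed through decoder B to "swap" identities.
face_a = torch.rand(1, 3, 64, 64)  # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))
```

The swap is the whole trick: frames of person A are encoded, then decoded through person B's decoder, yielding B's face wearing A's expressions.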

For the victim interviewed by the Washington Post, it emerged that the creator needed just 450 images of her, all sourced from search engines and social media.

Siwei Lyu, associate professor of computer science at the University at Albany, said there were some subtle clues that a video could be fake.

“When a deepfake algorithm is trained on face images of a person, it’s dependent on the photos that are available on the internet that can be used as training data,” Lyu said.

“Even for people who are photographed often, few images are available online showing their eyes closed.

“Without training images of people blinking, deepfake algorithms are less likely to create faces that blink normally.”
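Lyu's published detection work is considerably more sophisticated, but the intuition fits in a few lines: score how open the eyes are in each frame and flag clips whose blink rate falls far below the human norm of roughly 15 to 20 blinks per minute. The sketch below is hypothetical; it assumes a per-frame eye-openness score (such as an eye aspect ratio from a face-landmark library) has already been extracted, and the thresholds are illustrative assumptions rather than figures from Lyu.

```python
def count_blinks(eye_openness, closed_threshold=0.2):
    """Count blinks in a sequence of per-frame eye-openness scores.

    eye_openness: one float per video frame (e.g. an eye aspect ratio
    from a face-landmark library); lower values mean more closed eyes.
    A blink is counted on each open-to-closed transition.
    """
    blinks = 0
    was_closed = False
    for value in eye_openness:
        is_closed = value < closed_threshold
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag clips whose blink rate is far below the human norm
    (roughly 15 to 20 blinks per minute for adults)."""
    minutes = len(eye_openness) / (fps * 60)
    if minutes == 0:
        return False
    rate = count_blinks(eye_openness) / minutes
    return rate < min_blinks_per_minute

# Example: a 60-second clip in which the eyes never close is flagged.
frames = [0.35] * (30 * 60)
print(looks_suspicious(frames))  # True
```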

However, Lyu admits software designed to scan for fakes is struggling to keep up with advancements in the technology that creates them.

“People who want to confuse the public will get better at making false videos — and we and others in the technology community will need to continue to find ways to detect them.”

Enormous risks

In a research paper published this year, Robert Chesney from the University of Texas and Danielle Citron from the University of Maryland said the damage could be “profound.”

“Victims may feel humiliated and scared,” they wrote.

“When victims discover that they have been used in fake sex videos, the psychological damage may be profound — whether or not this was the aim of the creator of the video.”

They could be used to extort and threaten victims, to spread misinformation in the increasingly worrying era of fake news, or to blackmail elected officials, the report warned.

Or these faked clips, which are difficult to detect, could be used to terrify the public — such as “emergency officials ‘announcing’ an impending missile strike on Los Angeles or an emergent pandemic in New York City, provoking panic and worse.”

But policing the internet is a notoriously difficult task.

“Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage,” Chesney and Citron wrote. “The risks to our democracy and to national security are profound as well.

“Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection.”