You thought fake news was bad? Deep fakes are where truth goes to die


joegrizzy

I've been telling people this for the past couple of years now. With today's technology you almost can't believe any video, website, or publication anymore. They can create just about anything they want, and even backfill things like web pages and other records so convincingly that the vast majority of people wouldn't be able to tell the difference.
We live in the Ghost in the Shell reality more and more every day.

In the 2nd Gig finale, at one point the Tachikomas are discussing what to do about the American nuclear sub they just spotted off the coast of Japan. They say, "We can't just upload a photo to the net, *no one just believes photos anymore!*"
 

Seadog

You thought fake news was bad? Deep fakes are where truth goes to die

Beyond the political and social concerns mentioned, I'd also note an unmentioned legal concern: video evidence now appears trivial to manufacture, and the low quality of surveillance video systems (and therefore of the footage they produce) would make it easy to cover up the "mistakes" and "inaccuracies" in a fabricated clip. Not that the authorities would *ever* manufacture evidence <cough>Joyce Gilchrist.

In May, a video appeared on the internet of Donald Trump offering advice to the people of Belgium on the issue of climate change. “As you know, I had the balls to withdraw from the Paris climate agreement,” he said, looking directly into the camera, “and so should you.”

The video was created by a Belgian political party, Socialistische Partij Anders, or sp.a, and posted on sp.a’s Twitter and Facebook. It provoked hundreds of comments, many expressing outrage that the American president would dare weigh in on Belgium’s climate policy.

One woman wrote: “Humpy Trump needs to look at his own country with his deranged child killers who just end up with the heaviest weapons in schools.”

Another added: “Trump shouldn’t blow so high from the tower because the Americans are themselves as dumb.”

But this anger was misdirected. The speech, it was later revealed, was nothing more than a hi-tech forgery.

Sp.a claimed that they had commissioned a production studio to use machine learning to produce what is known as a “deep fake” – a computer-generated replication of a person, in this case Trump, saying or doing things they have never said or done.

Sp.a’s intention was to use the fake video to grab people’s attention, then redirect them to an online petition calling on the Belgian government to take more urgent climate action. The video’s creators later said they assumed that the poor quality of the fake would be enough to alert their followers to its inauthenticity. “It is clear from the lip movements that this is not a genuine speech by Trump,” a spokesperson for sp.a told Politico.

As it became clear that their practical joke had gone awry, sp.a’s social media team went into damage control. “Hi Theo, this is a playful video. Trump didn’t really make these statements.” “Hey, Dirk, this video is supposed to be a joke. Trump didn’t really say this.”

The party’s communications team had clearly underestimated the power of their forgery, or perhaps overestimated the judiciousness of their audience. Either way, this small, left-leaning political party had, perhaps unwittingly, provided a deeply troubling example of the use of manipulated video online in an explicitly political context.

It was a small-scale demonstration of how this technology might be used to threaten our already vulnerable information ecosystem – and perhaps undermine the possibility of a reliable, shared reality.

Fake videos can now be created using a machine learning technique called a “generative adversarial network”, or a GAN. A graduate student, Ian Goodfellow, invented GANs in 2014 as a way to algorithmically generate new types of data out of existing data sets. For instance, a GAN can look at thousands of photos of Barack Obama, and then produce a new photo that approximates those photos without being an exact copy of any one of them, as if it has come up with an entirely new portrait of the former president not yet taken. GANs might also be used to generate new audio from existing audio, or new text from existing text – it is a multi-use technology.
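
To make that concrete, here is a minimal sketch of a GAN, assuming PyTorch and a toy one-dimensional data set rather than photos of anyone: a generator network learns to turn random noise into samples, a discriminator network learns to tell real samples from generated ones, and the two are trained against each other.

```python
# Minimal GAN sketch (assumes PyTorch; toy 1-D data, not images).
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 1.5 + 4.0      # "real" samples drawn from N(4, 1.5)

def noise(n):
    return torch.randn(n, 8)                   # latent noise fed to the generator

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # discriminator

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Train the generator: try to make the discriminator call its fakes real.
    loss_G = bce(D(G(noise(64))), torch.ones(64, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

# After training, generated samples should cluster around 4, like the real data.
print(G(noise(5)).detach().squeeze())
```

Real image-generating systems use far larger networks and far more data, but the adversarial training loop is the same.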

The use of this machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user who went by the moniker “Deepfakes” – a portmanteau of “deep learning” and “fake” – started posting digitally altered pornographic videos. He was building GANs using TensorFlow, Google’s free open source machine learning software, to superimpose celebrities’ faces on the bodies of women in pornographic movies.

A number of media outlets reported on the porn videos, which became known as “deep fakes”. In response, Reddit banned them for violating the site’s content policy against involuntary pornography. By this stage, however, the creator of the videos had released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratized the power of GANs. Suddenly, anyone with access to the internet and pictures of a person’s face could generate their own deep fake.

When Danielle Citron, a professor of law at the University of Maryland, first became aware of the fake porn movies, she was initially struck by how viscerally they violated these women’s right to privacy. But once she started thinking about deep fakes, she realized that if they spread beyond the trolls on Reddit they could be even more dangerous. They could be weaponized in ways that weaken the fabric of democratic society itself.

“I started thinking about my city, Baltimore,” she told me. “In 2015, the place was a tinderbox after the killing of Freddie Gray. So, I started to imagine what would’ve happened if a deep fake emerged of the chief of police saying something deeply racist at that moment. The place would’ve exploded.”

Citron, along with her colleague Bobby Chesney, began working on a report outlining the extent of the potential danger. As well as considering the threat to privacy and national security, both scholars became increasingly concerned that the proliferation of deep fakes could catastrophically erode trust between different factions of society in an already polarized political climate.

In particular, they could foresee deep fakes being exploited by purveyors of “fake news”. Anyone with access to this technology – from state-sanctioned propagandists to trolls – would be able to skew information, manipulate beliefs, and in so doing, push ideologically opposed online communities deeper into their own subjective realities.

“The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases,” the report reads. “Deep fakes will exacerbate this problem significantly.”

Citron and Chesney are not alone in these fears. In April, the film director Jordan Peele and BuzzFeed released a deep fake of Barack Obama calling Trump a “total and complete dipshit” to raise awareness about how AI-generated synthetic media might be used to distort and manipulate reality. In September, three members of Congress sent a letter to the director of national intelligence, raising the alarm about how deep fakes could be harnessed by “disinformation campaigns in our elections”.

The specter of politically motivated deep fakes disrupting elections is at the top of Citron’s concerns. “What keeps me awake at night is a hypothetical scenario where, before the vote in Texas, someone releases a deep fake of Beto O’Rourke having sex with a prostitute, or something,” Citron told me. “Now, I know that this would be easily refutable, but if this drops the night before, you can’t debunk it before serious damage has spread.”

She added: “I’m starting to see how a well-timed deep fake could very well disrupt the democratic process.”

While these disturbing hypotheticals might be easy to conjure, Tim Hwang, director of the Harvard-MIT Ethics and Governance of Artificial Intelligence Initiative, is not willing to bet on deep fakes having a high impact on elections in the near future. Hwang has been studying the spread of misinformation on online networks for a number of years, and, with the exception of the small-stakes Belgian incident, he has yet to see any truly corrosive incidents of deep fakes “in the wild”.

Hwang believes that this is partly because using machine learning to generate convincing fake videos still requires a degree of expertise and lots of data. “If you are a propagandist, you want to spread your work as far as possible with the least amount of effort,” he said. “Right now, a crude Photoshop job could be just as effective as something created with machine learning.”

At the same time, Hwang acknowledges that as deep fakes become more realistic and easier to produce in the coming years, they could usher in an era of forgery qualitatively different from what we have seen before.

“We have long been able to doctor images and movies,” he said. “But in the past, if you wanted to make a video of the president saying something he didn’t say, you needed a team of experts. Machine learning will not only automate this process, it will also probably make better forgeries.”

Couple this with the fact that access to this technology will spread over the internet, and suddenly you have, as Hwang put it, “a perfect storm of misinformation”.

Nonetheless, research into machine learning-powered synthetic media forges ahead.

In August, an international team of researchers affiliated with Germany’s Max Planck Institute for Informatics unveiled a technique for producing what they called “deep video portraits”, a sort of facial ventriloquism, where one person can take control of another person’s face and make it say or do things at will. A video accompanying the research paper depicted a researcher opening his mouth and a corresponding moving image of Barack Obama opening his mouth; the researcher then moves his head to the side, and so does synthetic Obama.

Christian Theobalt, a researcher involved in the study, told me via email that he imagines deep video portraits will be used most effectively for accurate dubbing in foreign films, advanced face editing techniques for post-production in film, and special effects. In a press release that accompanied the original paper, the researchers acknowledged potential misuse of their technology, but emphasized how their approach – capable of synthesizing faces that look “nearly indistinguishable from ground truth” – could make “a real difference to the visual entertainment industry”.

Hany Farid, professor of computer science at the University of California, Berkeley, believes that although the machine learning-powered breakthroughs in computer graphics are impressive, researchers should be more cognizant of the broader social and political ramifications of what they’re creating. “The special effects community will love these new technologies,” Farid told me. “But outside of this world, outside of Hollywood, it is not clear to me that the positive implications outweigh the negative.”

Farid, who has spent the past 20 years developing forensic technology to identify digital forgeries, is currently working on new detection methods to counteract the spread of deep fakes. One of Farid’s recent breakthroughs has been focusing on subtle changes of color that occur in the face as blood is pumped in and out. The signal is so minute that the machine learning software is unable to pick it up – at least for now.
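
Farid has not published the details of that detector, so the following is only a rough sketch of the general idea rather than his method, assuming OpenCV and NumPy, a hypothetical file name, and a face that conveniently stays inside one fixed region of the frame: average the green channel over the face in each frame, then check for a periodic component at plausible heart-rate frequencies.

```python
# Rough sketch of a pulse-signal check (not Farid's actual technique).
import cv2
import numpy as np

cap = cv2.VideoCapture("suspect_clip.mp4")       # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0          # fall back to 30 fps if the file doesn't say

greens = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    face = frame[100:300, 200:400]               # assume the face sits in this fixed region
    greens.append(face[:, :, 1].mean())          # mean green-channel intensity per frame
cap.release()

signal = np.asarray(greens) - np.mean(greens)    # remove the constant (DC) component
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

band = (freqs > 0.7) & (freqs < 3.0)             # plausible resting heart-rate band, 42-180 bpm
ratio = spectrum[band].max() / (spectrum.mean() + 1e-9)
print("pulse-band peak ratio:", ratio)           # a weak peak suggests no pulse signal in the face
```

In practice the face would have to be tracked from frame to frame and the signal filtered far more carefully; the point is only that a genuine pulse leaves a faint periodic trace which, per Farid, today's forgeries do not yet reproduce.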

As the threat of deep fakes intensifies, so do efforts to produce new detection methods. In June, researchers from the University at Albany (SUNY) published a paper outlining how fake videos could be identified by a lack of blinking in synthetic subjects. Facebook has also committed to developing machine learning models to detect deep fakes.
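
The SUNY paper itself is more sophisticated; the following is a much-simplified illustration of the underlying idea, not the researchers' method, assuming per-frame eye landmarks are already available from any off-the-shelf face-landmark detector: compute the standard eye aspect ratio (EAR) and count how often it dips, since a real subject blinks every few seconds while early synthetic faces almost never did.

```python
# Simplified blink-counting sketch (assumes eye landmarks are supplied per frame).
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, in the usual p1..p6 order."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])    # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])    # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])     # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)            # drops sharply when the eye closes

def count_blinks(ear_per_frame, threshold=0.21, min_frames=2):
    """Count dips of the EAR signal below `threshold` lasting at least `min_frames`."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks
```

Feeding count_blinks a minute of EAR values that never dip would flag the clip as suspicious; of course, as Farid notes below, once a check like this is public, forgers can simply train their models to blink.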

But Farid is wary. Relying on forensic detection alone to combat deep fakes is becoming less viable, he believes, due to the rate at which machine learning techniques can circumvent them. “It used to be that we’d have a couple of years between coming up with a detection technique and the forgers working around it. Now it only takes two to three months.”

This, he explains, is due to the flexibility of machine learning. “All the programmer has to do is update the algorithm to look for, say, changes of color in the face that correspond with the heartbeat, and then suddenly, the fakes incorporate this once imperceptible sign.” (For this reason, Farid chose not to share some of his more recent forensic breakthroughs with me. “Once I spill on the research, all it takes is one asshole to add it to their system.”)

Although Farid is locked in this technical cat-and-mouse game with deep fake creators, he is aware that the solution does not lie in new technology alone. “The problem isn’t just that deep fake technology is getting better,” he said. “It is that the social processes by which we collectively come to know things and hold them to be true or untrue are under threat.”

Indeed, as the fake video of Trump that spread through social networks in Belgium earlier this year demonstrated, deep fakes don’t need to be undetectable or even convincing to be believed and do damage. It is possible that the greatest threat posed by deep fakes lies not in the fake content itself, but in the mere possibility of their existence.

This is a phenomenon that scholar Aviv Ovadya has called “reality apathy”, whereby constant contact with misinformation compels people to stop trusting what they see and hear. In other words, the greatest threat isn’t that people will be deceived, but that they will come to regard everything as deception.

Recent polls indicate that trust in major institutions and the media is dropping. The proliferation of deep fakes, Ovadya says, is likely to exacerbate this trend.

According to Danielle Citron, we are already beginning to see the social ramifications of this epistemic decay.

“Ultimately, deep fakes are simply amplifying what I call the liar’s dividend,” she said. “When nothing is true then the dishonest person will thrive by saying what’s true is fake.”

  • This article has been amended to clarify that though sp.a initially claimed it used machine learning technology to create its fake Trump clip, it was later revealed that the video was made using After Effects, a video-editing program.

FWIW, After Effects is commercial, off-the-shelf software; I have a copy. Also, from the left sidebar:


"World's first AI news anchor unveiled in China"


So this must be what the White House has been using. On the days Beijing Biden seems coherent, they're using this app with somebody else posing as the pinhead and his face pasted on.
 

trekrok

So this must be what the White House has been using. On the days Beijing Biden seems coherent, they're using this app with somebody else posing as the pinhead and his face pasted on.
He does look like a badly faked video in a lot of his speeches. Especially the squinty-eyed teleprompter-reading versions.
 
