Explained: What are deepfakes? Warping reality with AI-powered audio and video

  • “Deepfakes” use AI to replace one person’s likeness with another’s in video or audio.
  • Deepfakes have raised concerns because they can be used to create deceptive videos and spread fake news.
  • Deepfakes can often be identified through reverse image searches or by checking who uploaded the content.

Computers have been getting steadily better at simulating reality. Artificial intelligence (AI)–generated media has been in the news lately, particularly videos that mimic real people and make it appear they are saying or doing things they never did.

A Twitch streamer was caught viewing a website that used AI to produce pornography of his friends. A group of teenagers in New York circulated a fabricated recording of their principal making racist statements and threatening students. In Venezuela, AI-generated videos are being used to spread political propaganda.

In all three cases, the AI-generated media was designed to trick you into believing that someone did something they never actually did. This type of content is known as a “deepfake.”

What is a deepfake?

Deepfakes use artificial intelligence (AI) to generate entirely new video or audio depicting something that did not actually happen.

The term “deepfake” comes from the underlying technology: deep learning algorithms, which teach themselves to solve problems using massive amounts of data and can be used to produce fake content featuring real people.

According to Cristina López, a senior analyst at Graphika, a firm that studies the flow of information across digital networks, a deepfake is footage generated by a computer that has been trained on countless existing images.

What distinguishes deepfakes from other types of altered media?

Deepfakes are not just any fake or misleading image. The AI-generated pope in a puffer jacket and the fabricated images of Donald Trump being arrested that went viral just before his indictment, for instance, are not deepfakes. When images like these are paired with false information, they are often called “shallowfakes.” What sets a deepfake apart is the human element.

Apart from curating the training data and responding “yes” or “no” to what the computer generates after the fact, the user has no control over how the computer creates a deepfake. Only at the very end of the generation process can they judge whether the result is what they wanted.

How are deepfakes made?

There are several ways to make deepfakes, but the most popular relies on deep neural networks and a face-swapping technique. You need a target video to use as the foundation of the deepfake, plus a collection of video clips of the person you want to insert into it.

The videos don’t need to be related at all; the target could be a scene from a Hollywood movie, for instance, while the clips of the person you want to insert could be unrelated YouTube videos.

The software estimates what the person looks like from various angles and in various conditions, then maps that person onto the individual in the target video by matching common features.
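For the technically curious, a heavily simplified sketch of the shared-encoder idea behind face swapping appears below, written in Python with PyTorch. One encoder learns a common “face code,” and each person gets their own decoder; the architecture, sizes, and training details are illustrative assumptions, not the implementation of any specific tool.

```python
# A minimal sketch of autoencoder-based face swapping, assuming toy
# 64x64 face crops. Not any real tool's code; sizes are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a face crop into a latent code shared by both people."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face from the latent code; one decoder per person."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's faces
decoder_b = Decoder()  # trained only on person B's faces
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters())
    + list(decoder_b.parameters()), lr=1e-4)

def train_step(faces_a, faces_b):
    """Each decoder learns to reconstruct its own person's faces."""
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()
    return loss.item()

# The swap: encode a frame of person A, then decode it with B's decoder,
# producing B's identity in A's pose and expression.
with torch.no_grad():
    frame_a = torch.rand(1, 3, 64, 64)   # stand-in for a real face crop
    swapped = decoder_b(encoder(frame_a))
```

Because both decoders read from the same latent space, a pose and expression captured from one person can be rendered with the other person’s face.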

Generative adversarial networks (GANs), another machine learning technique, are often added to the mix. Over many rounds, a GAN detects and corrects flaws in the deepfake, making the fake harder to unmask.
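The adversarial loop itself can be sketched in a few lines. In the toy example below (again PyTorch, with made-up sizes and random stand-in data), a discriminator learns to flag fakes while a generator learns to fool it; repeated over many rounds, this back-and-forth is what irons flaws out of the fake.

```python
# A minimal GAN training loop, assuming flattened 64x64 images and
# random stand-in data. Sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 512), nn.ReLU(),
                  nn.Linear(512, 3 * 64 * 64), nn.Tanh())   # noise -> fake image
D = nn.Sequential(nn.Linear(3 * 64 * 64, 512), nn.LeakyReLU(0.2),
                  nn.Linear(512, 1), nn.Sigmoid())          # image -> P(real)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, 3 * 64 * 64)   # stand-in for real face crops
    fake = G(torch.randn(32, 100))

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(32, 1))
              + bce(D(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to produce fakes the discriminator accepts.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```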

Despite the intricate process, the software is relatively user-friendly. Plenty of deepfake software is available on GitHub, an open-source development community, and apps such as the Chinese app Zao, DeepFaceLab, FakeApp, and Face Swap make creating deepfakes simple even for beginners.

How are deepfakes used?

Deepfake technology has historically been used for illicit purposes, including generating non-consensual pornography. In June 2023, the FBI issued a PSA warning the public about the risks of generative artificial intelligence (AI) and how it can be used for “Explicit Content Creation,” “Sextortion,” and “Harassment.”

In 2017, a Reddit user with the handle “deepfakes” created a porn forum featuring face-swapped actors. Since then, deepfake porn, especially revenge porn, has repeatedly made the news, severely damaging the reputations of celebrities and other prominent people. According to a Deeptrace report, 96% of deepfake videos on the internet in 2019 were pornographic.

Deepfakes have also been used for non-sexual criminal activity. In one case from 2023, deepfake technology was used to imitate the voice of a woman’s child in order to threaten and extort her.

Deepfake footage has also been put to use in politics. In 2018, for instance, a Belgian political party published a video of Donald Trump appearing to urge Belgium to withdraw from the Paris climate agreement. Trump never gave that speech; it was a deepfake. It wasn’t the first deepfake used to create misleading videos, either, and tech-savvy political experts are bracing for a future wave of fake news featuring convincingly realistic deepfakes.

However, the technology has also proven useful to journalists, human rights organizations, and media technologists. For example, the 2020 HBO documentary “Welcome to Chechnya” used deepfake technology to conceal the identities of the Russian LGBTQ refugees whose stories it told, protecting them from danger.

WITNESS, an organization focused on using media to defend human rights, has acknowledged the risks of the technology but has also expressed optimism about its potential when applied this way.

Shirin Anlen, a media technologist at WITNESS, said that part of the organization’s work is examining positive uses of the technology, from protecting people such as activists who appear on video to enabling advocacy approaches like political satire.

Anlen and WITNESS are not wholly afraid of the technology; they argue it should be viewed as a tool, an extension of our long-standing relationship with audiovisual media. We have already been manipulating audio and adjusting images in various ways, Anlen noted.

According to experts like Anlen and López, the best response to deepfakes is not alarm but for the public to become knowledgeable about the technology and what it can do.

Ways to identify deepfakes

A few markers can be used to identify deepfakes:

  • Are the details blurry or unclear? Look for problems with skin or hair, or faces that seem blurrier than the environment they appear in. The focus may look unnaturally soft.
  • Does the lighting look unnatural? Deepfake algorithms often retain the lighting of the clips used as models for the fake video, even when it is a poor match for the lighting in the target video.
  • Do the words or sounds fail to match the visuals? The audio may not match the person, especially if the video was faked but the original audio was not as carefully manipulated.
  • Does the source seem reliable? Reverse image searching is a technique journalists and researchers often use to find the original source of an image, and you can use it too (a do-it-yourself variant is sketched below). You should also check who uploaded the image, where it was uploaded, and whether posting it makes sense for that account.
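The do-it-yourself check mentioned in the last bullet can be approximated with a perceptual hash, which stays nearly identical when an image is merely re-encoded or lightly edited. The sketch below uses the open-source Pillow and imagehash Python packages; the file names and the distance threshold are placeholders.

```python
# A rough DIY complement to reverse image searching: compare a suspect
# image against a candidate original using a perceptual hash.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))  # placeholder file
suspect = imagehash.phash(Image.open("suspect.jpg"))    # placeholder file

# Hamming distance between the two 64-bit hashes: 0 means visually
# identical; small values suggest a re-encode or light edit.
distance = original - suspect
if distance <= 8:   # threshold is an illustrative assumption
    print(f"Likely derived from the original (distance {distance})")
else:
    print(f"Substantially different content (distance {distance})")
```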

Using technology to counter deepfakes

As technology advances, telling authentic content from fraudulent content will likely only become harder. That is why experts like Anlen believe the burden of spotting deepfakes in the wild should not rest on individuals.

“The responsibility should be on the developers, on the toolmakers, on the tech companies to develop invisible watermarks and signal what the source of that image is,” said Anlen. Several startups are also developing techniques for identifying deepfakes.
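To make the watermark idea concrete, here is a toy Python sketch that hides a short provenance tag in the least significant bits of an image’s pixels. Real provenance schemes are cryptographically signed and far more tamper-resistant; the tag string and file names below are hypothetical.

```python
# A toy "invisible watermark": hide a provenance tag in the least
# significant bit of each pixel channel. Only survives lossless
# formats like PNG; JPEG compression would destroy the hidden bits.
import numpy as np
from PIL import Image

def embed(img: Image.Image, tag: str) -> Image.Image:
    """Overwrite the LSB of the first len(tag)*8 channel values."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    flat = pixels.reshape(-1)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return Image.fromarray(flat.reshape(pixels.shape))

def extract(img: Image.Image, length: int) -> str:
    """Read back a hidden tag of `length` characters."""
    flat = np.array(img.convert("RGB"), dtype=np.uint8).reshape(-1)
    bits = flat[: length * 8] & 1
    return np.packbits(bits).tobytes().decode()

tag = "made-by:generator-x"                 # hypothetical provenance tag
stamped = embed(Image.open("photo.png"), tag)  # placeholder file
print(extract(stamped, len(tag)))           # -> "made-by:generator-x"
```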

Sensity, for instance, has developed a deepfake detection platform that works much like antivirus software, alerting users by email when they view content bearing the telltale fingerprints of AI-generated media. To spot fakes, Sensity uses the same deep learning techniques that are used to create them.
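As a loose illustration of that symmetry, the sketch below trains a small convolutional network to score face crops as real or synthetic, the same family of technique a generator uses. The architecture, input size, and threshold are assumptions for illustration, not a description of Sensity’s system.

```python
# A minimal sketch of frame-level deepfake detection: a tiny CNN that
# outputs the probability a 64x64 face crop is synthetic.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1), nn.Sigmoid(),  # P(crop is AI-generated)
)

opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCELoss()

def train_step(frames, labels):
    """frames: (N, 3, 64, 64) face crops; labels: (N, 1), 1.0 = fake."""
    opt.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    opt.step()
    return loss.item()

# Flag a clip if the average per-frame fake score is high.
with torch.no_grad():
    clip = torch.rand(8, 3, 64, 64)   # stand-in for sampled frames
    if detector(clip).mean().item() > 0.5:
        print("Clip carries hallmarks of AI-generated media")
```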

Operation Minerva takes a more straightforward approach to identifying deepfakes. Its algorithm compares potential deepfakes against known videos that have already been “digitally fingerprinted.” It can identify instances of revenge porn, for example, by recognizing that a deepfake is merely a modified copy of an existing video Operation Minerva has already cataloged.
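Here is a minimal sketch of that fingerprint-matching idea, assuming perceptual hashes of sampled frames stand in for a real fingerprint database. It uses the third-party opencv-python, Pillow, and imagehash packages; file names and thresholds are illustrative.

```python
# Fingerprint a known video by hashing sampled frames, then count how
# many frames of a suspect clip closely match the catalog.
import cv2
import imagehash
from PIL import Image

def frame_hashes(path, every_n=30):
    """Perceptual hash of every Nth frame of a video file."""
    hashes, cap, i = [], cv2.VideoCapture(path), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        i += 1
    cap.release()
    return hashes

catalog = frame_hashes("known_original.mp4")   # fingerprinted in advance
suspect = frame_hashes("suspect_upload.mp4")   # placeholder file names

# A suspect frame "matches" if it is close to any cataloged frame.
matches = sum(1 for s in suspect if any(s - c <= 10 for c in catalog))
if suspect and matches / len(suspect) > 0.5:   # threshold is illustrative
    print("Suspect clip appears to be an altered copy of a known video")
```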

Despite these advances, Nasir Memon, a professor of computer science and engineering at NYU, said no large-scale effort has been made to curb the spread of dangerous deepfakes, and any solution that does emerge will not be a panacea.

“I think the solution overall is not technology-based, but instead it’s education, awareness, the right business models, incentives, policies, laws,” Memon stated.

Note: Several states, including California, New York, and Virginia, have passed or attempted to pass laws prohibiting the use of deepfakes in certain situations, such as for pornography or politics.

Using deepfakes to conceal one’s identity in real time, such as during a Zoom meeting or a phone call, is becoming a bigger problem. Memon says there is a risk of people adopting false identities in a variety of contexts, including remote college exams, job interviews, and visa applications. AI-generated scammers have even approached Insider reporters posing as sources.

“The problem with detection is that the burden is on the defender,” Memon stated. “I have to examine every photograph from every angle. However, you should approach security from the other direction.” It would be ideal if technology could identify these kinds of live deepfakes.

In any case, Memon does not expect this kind of technique to fully resolve the deepfake question.

“Nothing will be able to resolve the issue fully,” he claimed. “In society, authenticity must always be put to the test. Don’t make snap judgments based just on an image right now. Examine the origin. Hold off until you have supporting documentation from trustworthy sources.”
