
Deep fakes, deep impacts: critical thinking in the AI era


Can humans learn to reliably detect AI-generated fakes? How do they impact us on a cognitive level?

OpenAI’s Sora system recently previewed a new wave of synthetic video and AI-powered media. It probably won’t be long before any form of realistic media – audio, video, or image – can be generated with prompts in mere seconds. 

As these AI systems grow ever more capable, we’ll need to hone new skills in critical thinking to separate truth from fiction.

To date, Big Tech’s efforts to slow or stop deep fakes have amounted to little more than statements of intent, not because of a lack of conviction but because AI content is so lifelike.

That makes it tough to detect at the pixel level, while other detection signals, such as metadata and watermarks, have flaws of their own.
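To illustrate one of those signals, here is a minimal Python sketch (using the Pillow library) that inspects an image’s embedded metadata for provenance clues, such as the text chunks some AI image tools write into PNG files or a “Software” EXIF tag. The file name is hypothetical, and this is a heuristic at best: metadata is trivially stripped or forged, which is exactly the flaw described above.

```python
# A minimal sketch of metadata inspection using Pillow (pip install Pillow).
# Heuristic only: metadata can be stripped or forged, so the absence of
# provenance markers proves nothing.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> None:
    img = Image.open(path)

    # PNG text chunks: some AI image tools record generation parameters
    # here, though key names vary by tool and are easy to remove.
    for key, value in img.info.items():
        if isinstance(value, str):
            print(f"info[{key!r}]: {value[:120]}")

    # EXIF tags: the 'Software' tag sometimes names the producing tool.
    for tag_id, value in img.getexif().items():
        print(f"exif[{TAGS.get(tag_id, tag_id)}]: {value}")

inspect_metadata("suspect_image.png")  # hypothetical file path
```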

Moreover, even if AI-generated content were detectable at scale, it’s challenging to separate authentic, purposeful content from content intended to spread misinformation.

Passing content to human reviewers and using community notes (information attached to content, often seen on X) offers a possible solution. However, this introduces further subjectivity and the risk of labeling content incorrectly. For example, in the Israel-Palestine conflict, disturbing images were labeled as real when they were fake.

When a real image is labeled fake, this can create a ‘liar’s dividend,’ where someone can brush off genuine evidence simply by declaring it fake.

The question is, in the absence of reliable technical methods for stopping deep fakes, what can we do about them?

And to what extent do deep fakes impact our decision-making and psychology? For example, when people are exposed to fake political images, does this have a tangible impact on their voting behavior?

Let’s take a look at a couple of studies that assess precisely that. 

Do deep fakes affect our opinions and psychological states?

One 2020 study, “Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News,” explored how deep fake videos influence public perception, particularly regarding uncertainty and trust in news shared on social media. 

The research involved a large-scale experiment with 2,005 participants from the UK, designed to measure responses to different types of deep fake videos of former US President Barack Obama.

Participants were randomly assigned to view one of three videos:

  1. A 4-second clip showing Obama making a surprising statement without any context.
  2. A 26-second clip that included some hints about the video’s artificial nature but was primarily deceptive.
  3. A full video with an “educational reveal” where the deep fake’s artificial nature was explicitly disclosed, featuring Jordan Peele explaining the technology behind the deep fake.

Key findings

The study explored three key areas:

  • Deception: The study found minimal evidence of participants believing the false statements in the deep fakes. The percentage of participants who were misled by the deep fakes was relatively low across all treatment groups.
  • Uncertainty: However, a key result was increased uncertainty among viewers, especially those who saw the shorter, deceptive clips. About 35.1% of participants who watched the 4-second clip and 36.9% who saw the 26-second clip reported feeling uncertain about the video’s authenticity. In contrast, only 27.5% of those who viewed the full educational video felt this way.
  • Trust in news: This uncertainty negatively impacted participants’ trust in news on social media. Those exposed to the deceptive deep fakes showed lower trust levels than those who viewed the educational reveal.
A large proportion of people were deceived or uncertain about the different video types. Source: Sage Journals.

This suggests that, beyond any short-term deception, exposure to deep fake imagery creates longer-term uncertainty. Over time, fake imagery may weaken faith in all information, including truthful information.

A more recent 2023 study, “Face/Off: Changing the face of movies with deepfake,” reached a similar conclusion: fake imagery has pronounced and potentially long-term impacts.

People ‘remember’ fake content after exposure

Conducted with 436 participants, the Face/Off study investigated how deep fakes might influence our recollection of films.

Participants took part in an online survey designed to examine their perceptions and memories of both real and imaginary movie remakes.

The survey’s core involved presenting participants with six movie titles, which included a mix of four actual film remakes and two fictitious ones.

The presentation of these movies was randomized to avoid any order effects and was done in two formats: half of the movies were introduced through short text descriptions, and the other half were paired with brief video clips.

The fictitious remakes were drawn from a pool of four: versions of “The Shining,” “The Matrix,” “Indiana Jones,” and “Captain Marvel,” complete with detailed descriptions that falsely claimed the involvement of high-profile actors in these non-existent remakes.

For example, participants were told about a supposed remake of “The Shining” starring Brad Pitt and Angelina Jolie, which never happened.

In contrast, the real movie remakes presented in the survey, such as “Charlie & The Chocolate Factory” and “Total Recall,” were described accurately and accompanied by genuine film clips. This mix of real and fake remakes was intended to investigate how participants discern between factual and fabricated content.

Participants were asked about their familiarity with each movie: whether they had seen the original film or the remake, or had any prior knowledge of them.

Key findings

  • False memory phenomenon: A key outcome of the study is the revelation that nearly half of the participants (49%) developed false memories of watching fictitious remakes, such as imagining Will Smith as Neo in “The Matrix.” This illustrates the enduring effect that suggestive media, whether deep fake videos or textual descriptions, can have on our memory, challenging our recollection of cultural events.
  • Recall rates: “Captain Marvel” topped the list, with 73% of participants recalling its fictitious remake, followed by “Indiana Jones” at 43%, “The Matrix” at 42%, and “The Shining” at 40%. Among those who mistakenly believed in these remakes, 41% thought the “Captain Marvel” remake was superior to the original.
  • Comparative influence of deep fakes and text: Another discovery is that deep fakes, despite their visual and auditory realism, were no more effective in altering participants’ memories than textual descriptions of the same fictitious content. This suggests that the format of the misinformation – visual or textual – doesn’t significantly alter its impact on memory distortion within the context of film.
Memory responses for each of the four fictitious movie remakes. For example, a large number of people said they ‘remembered’ a false remake of Captain Marvel. Source: PLOS One.

The false memory phenomenon at work in this study is widely researched. It shows how we effectively construct or reconstruct false memories and become certain they’re real when they’re not.

Deep fakes activate this behavior, meaning viewing certain content can change our perception, even when we consciously understand it’s inauthentic. 

In both studies, deep fakes have a tangible and potentially long-term impact. The effect might sneak up on us and accumulate over time.

We also need to remember that fake content circulates to millions of people, so small changes in perception and behavior will scale across the global population.

What do we do about deep fakes?

Going to war with deep fakes means going to war with the human brain. 

While the rise of fake news and misinformation has forced people to develop new media literacy in recent years, AI-generated synthetic media will require a further level of adjustment. We have confronted such inflection points before, with past communications revolutions from photography to CGI special effects, but AI will demand a fresh evolution of our critical senses.

We must go beyond merely believing our eyes and rely more on corroborating sources and analyzing contextual clues. 

It’s essential to interrogate the content’s incentives or biases. Does it align with known facts or contradict them? Is there corroborating evidence from other trustworthy sources?

Another key aspect is establishing legal standards for identifying faked or manipulated media and holding creators accountable.

This is in progress with the US DEFIANCE Act, UK Online Safety Act, and equivalents in China and many other countries establishing legal procedures for handling deep fakes. 

Education systems will also need to prioritize analytical skills and critical thought. 

Strategies for unveiling the truth

Let’s conclude with five strategies for identifying and interrogating potential deep fakes. 

While no single strategy is flawless, fostering critical mindsets is the best thing we can do collectively to minimize the impact of AI misinformation. 

  1. Source verification: Examining the credibility and origin of information is a fundamental step. Authentic content often originates from reputable sources with a track record of reliability.
  2. Technical analysis: Despite their sophistication, deep fakes may exhibit subtle flaws, such as irregular facial expressions or inconsistent lighting. Scrutinize content and consider whether it has been digitally altered (see the error-level analysis sketch after this list).
  3. Cross-referencing: Verifying information against multiple trusted sources can provide a broader perspective and help confirm the authenticity of content.
  4. Digital literacy: Understanding the capabilities and limitations of AI technologies is key to assessing content. Education in digital literacy across schools and the media, including the workings of AI and its ethical implications, will be crucial. 
  5. Cautious interaction: Interacting with AI-generated misinformation could amplify its effects. Be careful of liking/sharing/reposting content you’re dubious of. 
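To make the “technical analysis” step concrete, below is a minimal sketch of error-level analysis (ELA), a classic image-forensics heuristic: re-save a JPEG at a known quality and amplify the per-pixel difference, since composited or edited regions often carry a different compression history. It assumes Pillow and a hypothetical file name, and it’s a coarse screening aid at best; sophisticated deep fakes can pass it cleanly.

```python
# Minimal error-level analysis (ELA) sketch using Pillow.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Round-trip the image through an in-memory JPEG at a fixed quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")

    # Per-pixel difference, scaled so artifacts are visible. Regions that
    # stand out may have a different compression history -- suggestive of
    # editing, never conclusive.
    diff = ImageChops.difference(original, resaved)
    max_diff = max(hi for _, hi in diff.getextrema())
    scale = 255.0 / max(max_diff, 1)
    return diff.point(lambda v: min(255, int(v * scale)))

error_level_analysis("suspect_image.jpg").show()  # hypothetical file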

As deep fakes evolve, so will the techniques required to detect them and mitigate harm. 2024 will be revealing, as around half the world’s population is set to vote in major elections.

As we move forward, ethical AI practices, digital literacy, regulation, and critical engagement will be pivotal in shaping a future where technology amplifies, rather than obscures, the essence of the truth. 
