
In the 1932 sci-fi novel “Brave New World,” Aldous Huxley imagines a dystopian future where truth has lost all meaning. Advances in “feelies” – movies that precisely manipulate people’s senses and emotions – allow powerful controllers to cement social conditioning and propaganda like never before. As deepfake technology proliferates, we inch closer to Huxley’s chilling vision. With the 2024 election looming, deepfakes have emerged as a disruptive new factor endangering electoral integrity, media trust, and faith in institutions.

Deepfakes leverage artificial intelligence to digitally stitch a person’s likeness into photos or videos, making them appear to do or say things they never actually did. Although still identifiable by experts, these machine-learning forgeries are already advanced enough to dupe unsuspecting viewers. One of the first deepfakes to reach a mainstream audience appeared in 2017, grafting actor Nicolas Cage’s face onto actress Amy Adams in a scene from “Man of Steel.” Myriad fakes emerged thereafter, from Jordan Peele impersonating Obama to falsified CEO comments triggering stock dips. It was entertainment at first, but the implications are profound.

The potential for widespread misinformation via deepfakes undermines trust in democratic elections. Deepfakes could falsely depict candidates making inflammatory remarks right before people vote. Imagine a fake video of one nominee casually discussing plans to microchip citizens, released days before the 2024 election. Even if debunked, the ensuing confusion could affect results. In closely contested races, deepfakes could flip outcomes. And authoritarian regimes have outsourced fake-content farms to tarnish opponents. Nina Schick, author of “Deep Fakes and the Infocalypse,” argues that deepfakes make misinformation scalable, targeted, and financially accessible.

Prior signs indicate trouble ahead. In Gabon, a December 2018 New Year’s address by ailing President Ali Bongo was widely suspected by opponents of being a deepfake; the doubts it fueled about his condition were cited by soldiers who attempted a coup in January 2019. During the 2020 U.S. Senate races, a PAC pushed a manipulated video of Mitch McConnell’s opponent, Amy McGrath, laboring to name Kentucky’s bordering states. Though not a deepfake, it illustrated the ease of using doctored footage to undercut candidates.

Experts warn the 2024 presidential election is extremely vulnerable, given expected advances in deepfake technology and the microscopic scrutiny candidates face. Political scientist David Doer told a 2019 conference, “Whoever is elected president in 2020 will likely face a deepfake disseminated via hyper-partisan social media.” Researchers at Cyabra and Deeptrace found 2020 election discourse already flooded with rudimentary fakes, especially on Instagram and YouTube. Deepfake detectors even flagged a Joe Biden campaign ad as inauthentic, foreshadowing a world where no clip is implicitly trusted.

Beyond elections, manipulated videos corrode faith in journalism and in documentary evidence itself. Seeing is no longer believing in the deepfake era. As manipulated footage spreads, news consumers either grow skeptical of all video, shrinking the platforms for facts, or lapse into nihilism in which real and fake blend indistinguishably. Either response undercuts the truth and accountability essential to democracy. When documentary evidence becomes suspect, journalism struggles to check those in power.

In 2019, criminals used deepfake audio mimicking a CEO’s voice to authorize a fraudulent transfer, stealing $243,000 from a British energy firm. As voice synthesis improves, expect a proliferation of “vishing” scams conducted over the phone. Applied to news, such technology muddies attribution: quotes from prominent figures can simply be faked without verification, and the lines between real and invented blur. Even experts struggle to identify state-of-the-art synthetic audio and video. Meta recently demonstrated a “universal speech translator” capable of rendering previously recorded remarks into other languages almost instantaneously while retaining the speaker’s voice. Such technology could effectively be used to puppeteer public officials with synthetic media.

The potential to manipulate stock prices via falsified content also threatens economic stability. University of Michigan researchers co-published a disturbing paper, “How Disinformation Could Trigger Financial Turbulence,” exploring the risk of rogue actors using AI and social media to unleash market volatility through coordinated false information. Deepfakes take this threat to another level by forging announcements from company CEOs. Beyond financial systems, manufacturing authentic-looking video evidence to convict the innocent remains another doomsday scenario.

Even before deepfakes, doctored images and propaganda damaged public discourse and perceptions of governance – consider photoshopped Time covers framing opponents, or edited Planned Parenthood videos used to push restrictive laws. But if deepfakes go unchecked, engagement-driven algorithms can be hijacked to systematically replace truth with profitable or ideologically useful lies.

In Brazil’s 2018 election, a slanderous WhatsApp campaign falsely depicting leftist candidate Fernando Haddad’s party as distributing penis-shaped baby bottles to children was shared millions of times in days and linked to beatings of LGBT people. Deepfakes intensify this effect exponentially. University of Washington researchers have already shown that realistic AI-generated videos are perceived as significantly more credible than conventionally edited counterparts. As deepfakes improve and democratize, even grassroots individuals and movements could use them to reshape narratives; truth will become a precious and ephemeral commodity.

Any solution must be multifaceted, given that deepfakes spread rapidly via platforms that optimize for engagement over truth. Legislation criminalizing malicious deepfakes has been enacted or proposed in various countries, but legal remedies are slow and often lag the AI curve. Machine-learning detection models provide a second line of defense to filter deepfakes, though constantly updating those algorithms amounts to an arms race. Crowd-sourced verification remains an option, but it could be overwhelmed if deepfakes become ubiquitous. Ultimately, the platforms themselves must take responsibility for moderating content and limiting algorithmic amplification of dangerous manipulated media. But their financial incentives pose an obstacle.

With open democratic elections, trust in the media ecosystem, the stability of financial markets, and faith in the instruments of governance all on the line, the stakes could not be higher. As Huxley envisioned, increasingly believable fabrication risks creating a charged atmosphere in which citizens cannot discern leaders’ authentic positions or distinguish reporting from propaganda. An information dystopia threatens, in which the very notion of evidence-based truth dissolves amidst a cacophony of competing fakes. With the worldwide turn toward authoritarianism, allowing deepfakes and micro-targeted computational propaganda to proliferate unchecked is science fiction we cannot afford to see come true.

©  All Rights Reserved, Intramation Technology