Netflix documentary Dirty Pop is the latest to use AI to generate footage that didn’t previously exist. Ryan ponders a strange, hallucinatory new world of filmmaking.
If you’ve been wondering what filmmaker Bennett Miller’s been up to in the decade since he put out his last movie, Foxcatcher, then the 30th July delivered the answer: he’s been busy delving into the fast-moving and often weird world of artificial intelligence.
Miller has, he revealed, been working on a feature film on the subject, as well as an exhibition of images generated using DALL·E, and also a documentary about AI which appears to be on hold due to legal issues of some sort.
Miller is far from the only filmmaker fascinated by AI, whose usage is gradually creeping into the industry much as it is elsewhere. In fictional storytelling, we’ve seen generative AI used to whip up an opening title sequence for Marvel’s ho-hum Secret Invasion TV series and to provide background imagery for the indie horror Late Night With The Devil, both provoking minor rumbles of online controversy.
One area of the industry in which the use of AI really appears to be gathering pace, however, is documentaries. You may have seen one or two examples in recent years: one early controversy was the use of speech synthesis to re-create the voice of the late Anthony Bourdain for the 2021 documentary, Roadrunner. Rather worryingly, filmmaker Morgan Neville didn’t disclose that the voice was synthetic, or that Bourdain had never actually spoken those words aloud.
Then there was The Andy Warhol Diaries, the Netflix series from 2022 that, admittedly with the approval of his estate, used a combination of actor Bill Irwin and AI software to conjure up a digital likeness of the late pop artist.
More recently, there was the Netflix crime documentary What Jennifer Did, about a depressing domestic murder case that made headlines in 2010. It appeared to use generative AI to produce several photographs of the documentary’s subject, with tech outlet Futurism noting that those images bore tell-tale signs of being generated using a piece of software like Midjourney, from weirdly elongated teeth to other strange artefacts.
This brings us to Dirty Pop: The Boyband Scam, which may feature the most extensive use of AI in a documentary this writer has encountered so far. Again produced for Netflix, the limited series tells the story of Lou Pearlman, the pop impresario who helped launch such 1990s boy bands as Backstreet Boys and *Nsync, but who also had a major sideline in Ponzi schemes, money laundering and other dodgy business practices.
Pearlman died in prison in 2016, almost a decade after he was tracked down and apprehended by police. To get around his absence, Dirty Pop’s makers used AI to doctor existing footage and have Pearlman talking to us from beyond the grave via what looks like a grainy VHS tape. The makers at least warn viewers of what they’re about to see, with a caption at the start of each episode reading, “This is real footage of Lou Pearlman; this footage has been digitally altered to generate his voice and synchronise his lips.”
The words placed into the digital Pearlman’s mouth were taken, the caption tells us, from the late businessman/conman’s book, Bands, Brands & Billions. They form the spine of a story that incorporates interviews with real, living people in Pearlman’s orbit, including members of the above-mentioned bands and the unfortunate investors who lost their money through his schemes.
It’s an eerie thing to behold, not least because the AI Pearlman looks uncannily like the character Brian O’Blivion (played by Jack Creley) in David Cronenberg’s 1983 sci-fi thriller, Videodrome. A media theorist who only appears on television and never in person, O’Blivion actually died years before the film’s events (much like Pearlman); his appearances are drawn from an archive of pre-recorded videotapes of him conversing directly with the camera. Videodrome is itself about the impact of media and technology on our subjective view of reality; despite its CRT-era hardware, it feels more prescient with each passing year.
What’s interesting about Dirty Pop is the media response to it. Where earlier uses of AI have generated enough controversy to make headlines in major outlets like Variety (as Late Night With The Devil did, prompting its makers to release a statement on the matter), its use in the Netflix documentaries mentioned here has caused less fuss. The Independent ran a piece gathering together a handful of angry tweets about Dirty Pop, but the AI Pearlman seems to have otherwise drifted by unnoticed.
Similarly, the BBC documentary series Paranormal: The Village That Saw Aliens opens with a caption which tells viewers, “Some words of investigator Randall Jones Pugh have been voiced using AI.”
A Welsh UFO investigator who published at least one book on his theories about the subject, Jones Pugh died about 20 years ago, and so Paranormal’s filmmakers used AI to synthesise his voice, thus continuing a trend set by Roadrunner’s digital Anthony Bourdain and Dirty Pop’s re-animated Lou Pearlman. To quote Star Wars: The Rise Of Skywalker, “The dead speak.”
Ethically, the use of AI to fill gaps in documentaries is questionable at best. A documentarian’s job is, to the best of their ability, to reflect reality and tell a story truthfully. Of course, history’s littered with examples of filmmakers massaging the truth to tell a more compelling narrative. In the 1936 documentary Night Mail, a shot of Post Office workers toiling away in a sorting carriage, purportedly on a moving train, was actually filmed on a set. The filmmakers even asked the postal workers to sway back and forth to fake the train’s movement.
Nevertheless, documentaries are intended to be a record of true events, and the point of using contemporary material like news footage or photographs of subjects is to give viewers an understanding of a person or event’s historical context. Generating fake photos or footage using AI is like a drop of poison in a well: if one element has been falsified, everything else becomes suspect too.
The use of AI in documentaries is so new that only in recent months has a group of filmmakers, calling themselves the Archival Producers Alliance, come together to draw up guidelines for how the technology should be used.
“We recognise that AI is here, and it is here to stay,” said the organisation’s co-founder Jennifer Petrucelli in April (via IndieWire). “And we recognise that it brings with it potential for amazing creative opportunities. At the same time, we want to really encourage people to take a collective breath and move forward with thoughtfulness and intention as we begin to navigate this new and rapidly changing landscape.”
The organisation, in essence, recommends that documentary filmmakers use primary sources (that is, genuine, original footage, photographs and recorded interviews) wherever possible. It also warns against using generative AI for imagery beyond, say, lightly restoring a damaged photograph, for “fear of forever muddying the historical record.”
It could certainly be argued that the use of AI-generated photographs in What Jennifer Did constitutes exactly the kind of muddying the organisation talks about. The images are presented as real photographs, either depicted on someone’s phone or physically sitting on a shelf. As Futurism pointed out earlier this year, essentially faking images of a real woman who probably won’t be out of jail until 2040 is ethically dubious.
And while the makers of Dirty Pop were right to flag at the start of each episode that certain clips were made using AI, this writer would argue that such flagging should go further: sequences involving artificially generated images should be captioned as they appear on screen, much as scenes that re-create events using actors often are. Given the casual way so many of us watch television, it’s all too easy to be looking down at a phone or making a cup of tea during an opening credits sequence, and so the unreal nature of a photo on a shelf or a video clip of a dead pop mogul could easily be missed.
It goes without saying that the genie is well and truly out of the bottle when it comes to generative AI in the media. Other documentary filmmakers, such as Dawn Porter, have already voiced their concerns about its use. “We are supposed to be the truth, and it might be the truth as we see it, but we are also supposed to be transparent,” Porter told Variety last year. “I’m very nervous that people are not going to be transparent about what techniques they are using and why.”
At present, there’s no legislation governing the use of AI in documentaries, meaning it’s up to filmmakers to be ethical and trustworthy when they produce their work. And as faked clips of Vice President Kamala Harris have recently proved, it’s easier than ever to create realistic-looking, potentially reputation-damaging footage and have it shared around on social media.
With AI on the rise and few guardrails in place, we’re all beginning to find ourselves like Max Renn in Videodrome: gazing at the media landscape, unable to tell what is real and what is fake.
As Brian O’Blivion told Renn in one unforgettable scene, “Your reality is already half video hallucination. If you’re not careful, it will become total hallucination. You’ll have to learn to live in a very strange new world.”