Will AI end traditional filmmaking as we know it? We take a look at some differing opinions, as well as the 1980s popularity of the SodaStream.
Do you remember the age of Peak SodaStream? It lasted from roughly 1979 to the late 1980s, and saw the UK’s collective imagination gripped by the endless possibilities of fizzy water. With the press of a button – and a cat-frighteningly loud noise – the SodaStream device blasted carbon dioxide into a bottle of regular tap water. You could then add a dash of syrup and turn the now carbonated water into, say, a nice glass of fizzy lemonade.
Things really took off for the SodaStream in 1985, when Cadbury Schweppes bought the company and started selling bottles of branded syrup – Fanta, Tizer and the like. In theory, the SodaStream could have completely transformed the drinks industry. No more lugging two-litre bottles of Coca-Cola home. Those 330ml Irn-Bru cans could be consigned to the dustbin. All any household needed was a SodaStream, a few small bottles of syrup and a functioning tap.
Which brings us to the subject of AI. Last week (on the 8th October), entertainment industry outlet TheWrap put on a live event, TheGrill 2024, in which a variety of panellists talked about the impact of new and future tech innovations on the film and TV industry. The opinions of those panellists provide a useful barometer of where the movie business currently stands on the use of generative AI in filmmaking.
On one side, there are those like Brian Robillard, the chief operating officer of the AI firm Deep Voodoo. Robillard argues that AI is not something that will end careers and displace jobs, but rather “a tool for humans,” which can be integrated into existing workflows to save time and money. He points to an unnamed user of his company’s software who, rather than relying on time-consuming makeup effects, employed AI to essentially project digital prosthetics onto an actor’s face in real time.
“Normally, that actor would have to sit in a prosthetics chair for six hours a day, on a three-month production shoot and the whole crew would have to be sitting there,” Robillard said. “Now, with our technology, they just walk onto set as they are with just the wardrobe, and in real-time it puts the hair and prosthetics over them. They see it in the live feed. It goes into the dailies and then, ultimately, into the production.”
On the other side of the argument is Justine Bateman, a filmmaker and advocate for AI industry regulation. At the same Tuesday event, she warned that the use of AI instead of, say, a team of makeup artists, could have a potentially disastrous effect on the film business if it’s widely repeated.
“All these conversations and all these investment decisions are completely neglecting a gargantuan wildcard – human beings and their decisions,” Bateman said. “Is it going to burn down the business? If you start taking out chunks of duties, maybe the whole marketing department, maybe a camera, maybe all the actors or half the actors, or the crew doesn’t get their days to qualify for insurance because you’re only using them for three weeks instead of 12. Whatever it is, the structure will collapse.”
Sitting somewhere between the cheerleaders and those sounding a warning bell are industry figures hoping to steer a course through the middle of the impending AI boom. In 2023, Creative Artists Agency (CAA) executive Alexandra Shannon established The Vault, designed to “capture and store” the voices and likenesses of the agency’s actors. Those actors can then authorise the use of that digitised sound and imagery, much as a stock image library licenses its photographs.
According to Shannon, the venture’s “focused on enabling our talent to capture their digital likeness and their voice and be able to own that so our clients […] own their authorised, authenticated version of themselves. They’re in control of it. We’ve created permissions around who can use it and how.”
What we’ve yet to see, however, is a credible use case for generative AI in a major film or TV show. At this stage, OpenAI’s Sora, which turns text prompts into digital video clips, is little more than a tech demo.
Last week, for example, saw the emergence of an AI-generated ‘shot-for-shot remake’ of a trailer for Princess Mononoke – Studio Ghibli’s animated eco-fable. Made with several pieces of generative AI software (though not Sora, which isn’t yet available to the public), the clip was telling in ways its creator, PJ Acetturo, probably hadn’t intended. First, its shots only roughly lined up with Hayao Miyazaki’s original, hand-drawn frames; second, it highlighted just how far AI still has to go when it comes to photo-realism, lip-synching and generally avoiding the terrifying uncanny valley.
Another AI-derived clip that recently went viral pays homage to James Cameron while also showcasing the abilities of a piece of AI software called Pika. It allows users to upload an image and then, with the click of a button, generate a short clip in which that object is squashed with an industrial press, explodes, or is sliced open like a Victoria sponge.
The results are impressive enough, but again, it’s a tech demo; to date, none of the clips this writer’s seen would pass muster if placed in a film or TV show. A clip of a collapsing diner, for example, looks impressive given that it’s been generated from a single still image, but there are all kinds of distortions and weird artefacts that would look distracting if it were placed in, say, a live-action thriller about an earthquake.
AI’s supporters argue that these are teething issues, and that photo-realistic AI is only a matter of years or even months away. For now, though, the technology simply doesn’t appear to be there yet.
Even leaving aside the environmental and moral implications of AI, there’s the question of cost. Assuming all the current issues with generative AI are resolved – such as maintaining consistency between shots – then at some point, the companies that make this software are going to want to be paid for it. Research into AI costs billions; it was estimated last year that ChatGPT costs OpenAI around $700,000 per day to run. A report in July suggested that OpenAI could lose $5bn this year.
At some point, these pieces of software – which require huge amounts of energy to run – are going to need to turn a profit. Investors are currently pumping billions into the AI sector because they’re betting it’ll one day take off and earn them trillions in return. If that doesn’t happen, then investors will start placing their money elsewhere.
None of this is to say that AI is going to magically vanish, however. Even if companies like OpenAI start charging enough for their services to actually turn a profit, as traditional companies tend to do, it’s likely Hollywood studios will happily swallow the cost if it saves them money elsewhere. Generative AI may also develop to the point where it can be used to create aerial shots or backgrounds that are genuinely indistinguishable from the real thing.
If that does indeed happen, the fallout in terms of job losses will almost certainly be terrible, but it’s at least arguable that traditional filmmaking will survive in some form. Back at TheGrill conference, Justine Bateman was asked about the divergence we’re seeing in Hollywood at present: between those, like Bateman, who warn about AI’s threat to livelihoods and the craft of storytelling, and those who want to use it “not to make films better, but to right profit margins.”
“It’s like we’re all on a railroad track and now the railroad track is split in two,” Bateman said, before sounding a note of optimism about what filmmaking might look like in the wake of AI.
“The art of filmmaking is going to continue and I think we’ll see after this [AI] inferno a new genre in the arts,” Bateman said. “We haven’t seen a new genre in the arts since the 90s of any real significance. There’s been some exceptional work of the last 15 years, but for the most part, the focus has just been on generating volume content – which is not filmmaking.”
To return to that somewhat tortured SodaStream analogy from earlier: in the mid-90s, sales of the fizzy water maker collapsed. Supermarkets had begun selling ultra-cheap bottles of own-brand Coca-Cola and the like, and all of a sudden, having a noisy device in your house – one that required regular gas refills from your local chemist to keep it running – didn’t seem quite so exciting.
SodaStreams still exist (you can buy one now, and they’re surprisingly expensive). But those two-litre bottles of pop are still around, too, just as they were about 40 years ago.
For all the predictions of doom, there’s the potential that we’ll see a similar outcome in filmmaking: the frenzy of peak AI, then the aftermath, where generative software and the old ways of shooting and cutting movies exist side-by-side. We can only hope.