How Filmmakers Can Survive AI

No Film School has published How Indie Filmmakers Can Survive AI: 9 Insights from the Pros.

The insights come from a SXSW 2026 panel titled Creativity, Commerce & Chaos: Tech & Indie Filmmaking, which featured Lauren Oliver (co-founder of IncanterAI), Shaked Berenson (founder/CEO of Studio Dome), and Gregory Jensen (Accenture’s Media lead), and was hosted by GG Hawkins.

The highlight? They view AI as a:

“disruptive toolkit designed to level the playing field for creators who don’t have Hollywood-sized bank accounts.”

The nine strategies, from the article, are:

  1. Training your “Digital Eye”: Cinematography Knowledge Still Matters
    Because AI doesn’t know the emotional difference between a high-angle and a low-angle shot, your filmmaking knowledge is your greatest asset.
  2. The “Micro-Problem” Strategy: Build Your Own Pipeline
    Treat AI as a series of specialized assistants rather than a single “creative partner,” and use it to solve niche problems such as audio issues, localization, and generative fill for VFX.
  3. AI-Native vs. AI-Assisted: The Gap in Decision-Making
    The panel highlighted the growing divide between “AI Native” creators (who often lack traditional film training) and “Filmmakers using AI.” Filmmakers don’t just prompt; they curate.
  4. Personalization vs. The Shared Experience
    One panelist floated “Dynamic Content”: the idea that a screen could generate a version of a film tailored to the viewer’s preferences (e.g., skipping gore for a sensitive viewer or emphasizing a certain subplot).
  5. The “Dishes” Principle: Using Tech to Buy Back Time
    The ultimate goal for the indie filmmaker isn’t to let AI do the “fun” part (the writing and directing). It’s to let it do the “chores” like organizing a scene list, budgeting with predictive models, or building a character-tracking app.
  6. The “15-Minute Wall”: Understanding AI’s Memory Problem
    One of the most practical “craft” takeaways for filmmakers is the current technical limitation — AI models have a “short-term memory” problem. So don’t try to “generate” your whole movie yet. Use AI for shots, textures, or cleanup, but rely on your principal photography to maintain the “scaffolding” and continuity of your characters.
  7. The “Signal Through the Noise”: Curation as a Creative Act
    With the barrier to entry dropping, the sheer volume of “content” is exploding and the role of the filmmaker shifts toward being a human gatekeeper. Your job is now curating.
  8. Directing for “Dynamic Viewing”
    Personalized Exhibition: AI could allow for versions of a film that adapt to the viewer, so when shooting, consider capturing “excess” coverage.
  9. AI as Your “Marketing & Advice” Department
    AI can act as your business consultant or Creative Producer. Feed it your script and ask: “What is the most marketable 30-second hook for a Gen Z audience?” or “What film festivals have a history of programming films with these specific themes?”

The article concludes with a warning:

“The intersection of AI and indie film isn’t about the technology—it’s about curiosity over fear. If we leave the tools to Silicon Valley, they will build a “push-button” industry. If we engage with it as filmmakers, we can build a more accessible, disruptive, and human-led future.”

My take: art is always an abstraction of reality. Cinema, and by extension TV and screen-based media, use a vocabulary and grammar that are barely 130 years old. I predict most viewers will accept the new tools in time, just as they did animation and computer-generated imagery.

The state of GAIV in March 2026

Tim Simmons of Theoretically Media has just released “Dragon Blue”, an astounding short film that showcases the state of Generative AI Video (GAIV) in March 2026:

He generously shares his workflow online and in his newsletter.

As in most generative video pipelines, Tim generated single frames and then animated them, using Google’s Nano Banana Pro and ByteDance’s Seedance 2.0 via Dreamina.

He used Claude Cowork as his “Production Office” (subscribing to the Pro plan). The key is to grant it access to a single folder on your computer and give it instructions in a markdown (.md) file.

He used Luma’s Agent Canvas as his “Studio”. Watch the masterclass to see his process.

And sign up for his newsletter to download the SKILL.md instructions file.

My take: wow! Such a great short film! Such a generous man! Way to go, Tim!

Netflix buys AI post-production company from Ben Affleck

Netflix has bought InterPositive, Ben Affleck‘s stealth AI post-production company.

From the media release:

“InterPositive’s mission — to use emerging technology in ways that protect and expand creative choice — is deeply aligned with Netflix’s long-standing belief that innovation should serve storytellers and the creative process.”

According to Variety:

“The InterPositive system builds an AI model based on an existing production’s dailies, then lets a filmmaker introduce that model into the postproduction process to provide the ability to do things like mix and color, relight shots, and add visual effects.”

Note that InterPositive holds a patent on technology Affleck invented, titled “Method, system, and computer-readable medium for training a captioner model to generate captions for video content by analyzing and predicting cinematic elements”. It describes systems for enhancing video content analysis and generation through cinematic element recognition and metadata utilization.

The price has not been disclosed.

My take: is this the beginning of Netflix turning into a “dream factory”? Imagine sitting down on the couch and prompting the movie you’d like to see. Or a spin-off with some of your favourite characters. Or — and I want this so much — “Yeah, this movie, but make it 90 minutes instead of two hours and forty-five minutes.”

All-in-one platform Higgsfield

As you begin to explore generative video, you’ll probably quickly yearn for an all-in-one platform that gives you access to most (if not all) of the best tools.

Enter Higgsfield. This video is a great overview and gives you a cheat sheet for both images and video as of February 2026.

There are a few features that make Higgsfield stand out:

  • Cinema Studio
  • Apps

My take: as still and motion imaging become easier than ever, what remains most important is a great story.

Still on the fence about AI-generated video? Watch this!

Tim Simmons of Theoretically Media gives us The Ultimate AI Video Starter Guide for 2026!

He starts off with a short history of AI-generated images and video. He then moves on to cover the main ways to create video today:

  1. Text to Video
  2. Image to Video
  3. Video to Video
  4. Ingredients to Video

He then reviews some image generators that you can use to create first frames and other ingredients:

  • Nano Banana Pro
  • Midjourney
  • Flux (Black Forest Labs)
  • SeeDream (ByteDance)

He then reviews some video generators:

  • Google Veo 3.1
  • Kling 2.6
  • OpenAI Sora 2
  • Runway Gen 4.5
  • Luma Labs Ray 3
  • SeeDance (ByteDance)

He even mentions three platforms that bring all the tools together under one roof:

  • Freepik
  • Higgsfield
  • Flora

This is the best 18 minutes you’ll spend on YouTube today!

My take: there is skill involved in each component of this creative workflow. I think we are past the point of ignoring these tools.

Writers: give and get feedback for free

StoryPeer.com is a new platform developed by Gabriel Dimilo that helps writers get free feedback on their work by anonymously giving feedback on other writers’ work.

Nathan Graham Davis summarized the new site succinctly in this video.

The platform uses tokens to facilitate reviews: you offer tokens to other writers in exchange for notes on your work, and you earn tokens by reviewing theirs.

Everyone starts off with seven tokens.

Readers will pick your project based on its title, logline, genre, and length, and on how many tokens you offer. A rule of thumb I suggest is to offer one token for every 30 pages.
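The token loop described above can be sketched in a few lines. This is purely a toy model: the `Writer` class, the `suggested_offer` helper, and the ceiling rounding are my own illustrative assumptions, not StoryPeer’s actual mechanics; only the seven starting tokens and the one-token-per-30-pages rule of thumb come from the text.

```python
import math

STARTING_TOKENS = 7  # every new member starts with seven tokens


def suggested_offer(pages: int) -> int:
    """My rule of thumb: one token per 30 pages, rounded up, minimum one."""
    return max(1, math.ceil(pages / 30))


class Writer:
    """Toy model of the give-and-get token loop (illustrative only)."""

    def __init__(self) -> None:
        self.tokens = STARTING_TOKENS

    def request_review(self, pages: int) -> int:
        """Spend tokens to get notes on your own script."""
        offer = suggested_offer(pages)
        if offer > self.tokens:
            raise ValueError("not enough tokens; review someone else's work first")
        self.tokens -= offer
        return offer

    def give_review(self, tokens_offered: int) -> None:
        """Earn the tokens another writer offered by reviewing their work."""
        self.tokens += tokens_offered


# Example: a 90-page feature costs 3 tokens under the rule of thumb,
# and reviewing a roughly 60-page script earns 2 tokens back.
w = Writer()
cost = w.request_review(90)  # cost == 3, leaving w.tokens == 4
w.give_review(2)             # w.tokens == 6
```

The point of the sketch is simply that the economy is self-balancing: you can only keep requesting notes if you keep giving them.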

My take: beware, this can become addictive! I’ve already reviewed two features and received notes on one of my shorts.

How to use AI in your workflow as of November 2025

Seattle’s Yutao Han, aka Tao Prompts, has just released a great summary of the main ways to use AI tools to generate video clips.

They are:

  1. Text to Video (Google Veo 3.1 can even voice dialogue.)
  2. Image to Video (Probably the best way to ensure consistent characters.)
  3. Video to Video (More work, but worth it!)
  4. Lip Sync (The weakest link IMHO.)
  5. Ingredients to Video (This hints at the future.)
  6. Chat Edit (Sort of combines Video to Video and Text Prompting.)

Tao’s insights are very educational!

My take: really nice to have this summary of the various approaches. Note that you will most definitely use some combination of them; don’t just fixate on one tool.

AI Avatar Rankings as of mid-2025

Dan Taylor-Watt asks Which avatar generator can create the most convincing Dan Taylor-Watt?


He tests seven AI platforms on four key criteria: visual likeness, audio likeness, movement and lip syncing.

The contenders are:

  1. HeyGen
  2. Synthesia
  3. AI Studios
  4. Mirage Studio
  5. Argil
  6. D-ID
  7. Colossyan

His top three conclusions are:

“1. Generating convincing avatar clones of real people is difficult and the human eye and ear are unforgiving of anything that’s slightly off.

2. HeyGen is still #1, although its voice likenesses remain imperfect and not a huge step on from 2 years ago.

3. Generating convincing voice likenesses appears to be more challenging than video likenesses, with ElevenLabs’ lead in this domain very apparent.”

He concludes with a great chart that summarizes cost, aspect ratio, consent, and rating, among other factors.

My take: great summary! I wonder how long the conclusion will remain true, with Sora 2’s Cameo feature coming soon.

Hollywood vs. Tilly Norwood

Lily Ford of The Hollywood Reporter reports that the Creator of AI Actress Tilly Norwood Responds to Backlash: “She Is Not a Replacement for a Human Being”.

A new AI-generated actress named Tilly Norwood has caused a stir in Hollywood, with her creator, Eline Van der Velden of the company Particle6, claiming talent agencies are interested in signing her.

The news has sparked a fiery backlash from human actors, who see the creation as a threat to their livelihoods and the integrity of their craft.

In a response on Instagram, Van der Velden defended Tilly as a work of art and a new creative tool, not a replacement for human performers.

“To those who have expressed anger over the creation of my AI character, Tilly Norwood, she is not a replacement for a human being, but a creative work—a piece of art.”

Van der Velden argued that AI characters should be judged as their own genre, much like animation, puppetry or CGI, and could coexist with traditional acting.

The Screen Actors Guild (SAG-AFTRA), however, disagrees, stating, “To be clear, ‘Tilly Norwood’ is not an actor, it’s a character generated by a computer program.”

My take: Still less robotic than some of the Transformers cast.


Use Moonvalley Marey on fal.ai

If you’ve been dying to try out Moonvalley, fal.ai has just made it happen.

It offers Image2Video, Text2Video, Motion Transfer, and Pose Transfer, all at 1080p.

Not cheap, but cheaper than some other ways of getting that shot.

My take: CGI and VFX are being automated, like weaving, typesetting, and other trades lost to machines.