The state of GAIV in March 2026

Tim Simmons of Theoretically Media has just released “Dragon Blue”, an astounding short film that showcases the state of Generative AI Video (GAIV) in March 2026.

He generously shares his workflow online and in his newsletter.

Like most generative video pipelines, Tim’s process involved generating single frames and then animating them, using Google Nano Banana Pro and ByteDance Seedance 2.0 via Dreamina.

He used Claude Cowork as his “Production Office” (subscribing to the Pro plan). The key here is to grant it access to a single folder on your computer and give it instructions in a Markdown (.md) file.

He used Luma’s Agent Canvas as his “Studio”. Watch the masterclass to see his process.

And sign up for his newsletter to download the SKILL.md instructions file.

My take: wow! Such a great short film! Such a generous man! Way to go, Tim!

Netflix buys AI post-production company from Ben Affleck

Netflix has bought InterPositive, Ben Affleck’s stealth AI post-production company.

From the media release:

“InterPositive’s mission — to use emerging technology in ways that protect and expand creative choice — is deeply aligned with Netflix’s long-standing belief that innovation should serve storytellers and the creative process.”

According to Variety:

“The InterPositive system builds an AI model based on an existing production’s dailies, then lets a filmmaker introduce that model into the postproduction process to provide the ability to do things like mix and color, relight shots, and add visual effects.”

Note that InterPositive holds a patent on technology Affleck invented, titled “Method, system, and computer-readable medium for training a captioner model to generate captions for video content by analyzing and predicting cinematic elements”. It describes systems for enhancing video content analysis and generation through cinematic element recognition and metadata utilization.

The price has not been disclosed.

My take: is this the beginning of Netflix turning into a “dream factory”? Imagine sitting down on the couch and prompting the movie you’d like to see. Or a spin-off with some of your favourite characters. Or — and I want this so much — “Yeah, this movie, but make it 90 minutes instead of two hours and forty-five minutes.”

All-in-one platform Higgsfield

As you begin to explore generative video, you’ll probably quickly yearn for an all-in-one platform that gives you access to most (if not all) of the best tools.

Enter Higgsfield. This video is a great overview and gives you a cheat sheet for both images and video as of February 2026.

There are a few features that make Higgsfield stand out:

  • Cinema Studio
  • Apps

My take: as still and motion imaging become easier than ever, what matters most is still a great story.

Still on the fence about AI-generated video? Watch this!

Tim Simmons of Theoretically Media gives us The Ultimate AI Video Starter Guide for 2026!

He starts off with a short history of AI-generated images and video. He then moves on to cover the main ways to create video today:

  1. Text to Video
  2. Image to Video
  3. Video to Video
  4. Ingredients to Video

He then reviews some image generators that you can use to create first frames and other ingredients:

  • Nano Banana Pro
  • Midjourney
  • Flux (Black Forest Labs)
  • Seedream (ByteDance)

He then reviews some video generators:

  • Google Veo 3.1
  • Kling 2.6
  • OpenAI Sora 2
  • Runway Gen 4.5
  • Luma Labs Ray 3
  • Seedance (ByteDance)

He even mentions three platforms that bring all the tools together under one roof:

  • Freepik
  • Higgsfield
  • Flora

This is the best 18 minutes you’ll spend on YouTube today!

My take: there is skill involved in each component of this creative workflow. I think we are past the point of ignoring these tools.

Writers: give and get feedback for free

StoryPeer.com is a new platform developed by Gabriel Dimilo that helps writers get free feedback on their work by anonymously reviewing other writers’ work.

Nathan Graham Davis summarized the new site succinctly in this video.

The platform uses tokens to facilitate reviews: you offer tokens to other writers for notes on your work, and you earn tokens by reviewing theirs.

Everyone starts off with seven tokens.

Readers will pick your project based on its title, logline, genre, length, and how many tokens you offer. A rule of thumb I suggest: offer one token for every 30 pages.
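
That rule of thumb is easy to sketch in code. This is my own illustration of the "one token per 30 pages" heuristic (the function name, rounding choice, and minimum of one token are my assumptions, not part of StoryPeer):

```python
import math

def tokens_to_offer(page_count: int, pages_per_token: int = 30) -> int:
    """Suggest how many tokens to offer for a script review.

    Illustrates the rule of thumb above: one token per 30 pages,
    rounded up so even a short script offers at least one token.
    """
    if page_count <= 0:
        raise ValueError("page_count must be positive")
    return math.ceil(page_count / pages_per_token)

# A 15-page short suggests one token; a 110-page feature suggests four.
print(tokens_to_offer(15))   # → 1
print(tokens_to_offer(110))  # → 4
```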

My take: beware, this can become addictive! I’ve already reviewed two features and received notes on one of my shorts.

How to use AI in your workflow as of November 2025

Seattle’s Yutao Han, aka Tao Prompts, has just released a great summary of the main ways to use AI tools to generate video clips.

They are:

  1. Text to Video (Google Veo 3.1 can even voice dialogue.)
  2. Image to Video (Probably the best way to ensure consistent characters.)
  3. Video to Video (More work, but worth it!)
  4. Lip Sync (The weakest link IMHO.)
  5. Ingredients to Video (This hints at the future.)
  6. Chat Edit (Sort of combines Video to Video and Text Prompting.)

Tao’s insights are very educational!

My take: really nice to have this summary of the various approaches. Note that you will most definitely use some combination of these approaches; don’t fixate on just one tool.

AI Avatar Rankings as of mid-2025

Dan Taylor-Watt asks: which avatar generator can create the most convincing Dan Taylor-Watt?

He tests seven AI platforms on four key criteria: visual likeness, audio likeness, movement and lip syncing.

The contenders are:

  1. HeyGen
  2. Synthesia
  3. AI Studios
  4. Mirage Studio
  5. Argil
  6. D-ID
  7. Colossyan

His top three conclusions are:

“1. Generating convincing avatar clones of real people is difficult and the human eye and ear are unforgiving of anything that’s slightly off.

2. HeyGen is still #1, although its voice likenesses remain imperfect and not a huge step on from 2 years ago.

3. Generating convincing voice likenesses appears to be more challenging than video likenesses, with ElevenLabs’ lead in this domain very apparent.”

He concludes with a great chart that summarizes cost, aspect ratio, consent, and rating, among other factors.

My take: great summary! I wonder how long the conclusion will remain true, with Sora 2’s Cameo feature coming soon.

Hollywood vs. Tilly Norwood

Lily Ford of The Hollywood Reporter reports that the creator of AI actress Tilly Norwood has responded to the backlash: “She Is Not a Replacement for a Human Being”.

A new AI-generated actress named Tilly Norwood has caused a stir in Hollywood, with her creator, Eline Van der Velden of the company Particle6, claiming talent agencies are interested in signing her.

The news has sparked a fiery backlash from human actors, who see the creation as a threat to their livelihoods and the integrity of their craft.

In a response on Instagram, Van der Velden defended Tilly as a work of art and a new creative tool, not a replacement for human performers.

“To those who have expressed anger over the creation of my AI character, Tilly Norwood, she is not a replacement for a human being, but a creative work—a piece of art.”

Van der Velden argued that AI characters should be judged as their own genre, much like animation, puppetry or CGI, and could coexist with traditional acting.

The Screen Actors Guild (SAG-AFTRA), however, disagrees, stating, “To be clear, ‘Tilly Norwood’ is not an actor, it’s a character generated by a computer program.”

My take: Still less robotic than some of the Transformers cast.

Use Moonvalley Marey on fal.ai

If you’ve been dying to try out Moonvalley, fal.ai has just made it happen.

See Image2Video, Text2Video, Motion Transfer, and Pose Transfer, all at 1080p.

Not cheap, but cheaper than some other ways of getting that shot.

My take: CGI and VFX are being automated, just as weaving, typesetting, and other lost trades were before them.

Easy motion transfer in Runway Act-Two

CyberJungle, the YouTube channel of Hamburg-based senior IT product manager Cihan Unur, has raved about Runway’s excellent motion-capture feature, Act-Two.

He says:

“This new feature can turn you into literally any character you can imagine. We are talking about advanced motion capture now, with full body, face, and hands tracking. Act-Two is an AI–human hybrid performance acting tool. It’s full scene transformation, powered by your acting and a single image. No green screen, no fancy setup, just a video of you and the one image or video clip to guide the look. It transforms you into any character, even nonhuman.”

Act-Two requires a Standard plan or higher ($15/month).

Official Help webpage.

My take: Wow! What I find particularly impressive is the lip-sync, which is the best I’ve seen and, for the most part, totally believable.