Writers: give and get feedback for free

StoryPeer.com is a new platform developed by Gabriel Dimilo that lets writers get free, anonymous feedback on their work in exchange for giving feedback on other writers’ work.

Nathan Graham Davis summarized the new site succinctly in this video.

The platform runs on tokens: you offer tokens to other writers in exchange for notes on your work, and you earn tokens by reviewing theirs.

Everyone starts off with seven tokens.

Readers pick your project based on its title, logline, genre, length, and how many tokens you offer. A rule of thumb I suggest is one token for every 30 pages, so three tokens for a 90-page feature.

My take: beware, this can become addictive! I’ve already reviewed two features and received notes on one of my shorts.

How to use AI in your workflow as of November 2025

Seattle’s Yutao Han, aka Tao Prompts, has just released a great summary of the main ways to use AI tools to generate video clips.

They are:

  1. Text to Video (Google Veo 3.1 can even voice dialogue.)
  2. Image to Video (Probably the best way to ensure consistent characters.)
  3. Video to Video (More work, but worth it!)
  4. Lip Sync (The weakest link IMHO.)
  5. Ingredients to Video (This hints at the future.)
  6. Chat Edit (Sort of combines Video to Video and Text Prompting.)

Tao’s insights are very educational!

My take: really nice to have this summary of the various approaches. Note that you will most definitely use some combination of them; don’t just fixate on one tool.

AI Avatar Rankings as of mid-2025

Dan Taylor-Watt asks: “Which avatar generator can create the most convincing Dan Taylor-Watt?”


He tests seven AI platforms on four key criteria: visual likeness, audio likeness, movement, and lip-syncing.

The contenders are:

  1. HeyGen
  2. Synthesia
  3. AI Studios
  4. Mirage Studio
  5. Argil
  6. D-ID
  7. Colossyan

His top three conclusions are:

“1. Generating convincing avatar clones of real people is difficult and the human eye and ear are unforgiving of anything that’s slightly off.

2. HeyGen is still #1, although its voice likenesses remain imperfect and not a huge step on from 2 years ago.

3. Generating convincing voice likenesses appears to be more challenging than video likenesses, with ElevenLabs’ lead in this domain very apparent.”

He concludes with a great chart that summarizes cost, aspect ratio, consent, and rating, among other factors.

My take: great summary! I wonder how long the conclusion will remain true, with Sora 2’s Cameo feature coming soon.

Hollywood vs. Tilly Norwood

Lily Ford of The Hollywood Reporter reports that the creator of AI actress Tilly Norwood has responded to the backlash: “She Is Not a Replacement for a Human Being.”

A new AI-generated actress named Tilly Norwood has caused a stir in Hollywood, with her creator, Eline Van der Velden of the company Particle6, claiming talent agencies are interested in signing her.

The news has sparked a fiery backlash from human actors, who see the creation as a threat to their livelihoods and the integrity of their craft.

In a response on Instagram, Van der Velden defended Tilly as a work of art and a new creative tool, not a replacement for human performers.

“To those who have expressed anger over the creation of my AI character, Tilly Norwood, she is not a replacement for a human being, but a creative work—a piece of art.”

Van der Velden argued that AI characters should be judged as their own genre, much like animation, puppetry or CGI, and could coexist with traditional acting.

The Screen Actors Guild (SAG-AFTRA), however, disagrees, stating, “To be clear, ‘Tilly Norwood’ is not an actor, it’s a character generated by a computer program.”

My take: Still less robotic than some of the Transformers cast.


Use Moonvalley Marey on fal.ai

If you’ve been dying to try out Moonvalley, fal.ai has just made it happen.

See Image2Video, Text2Video, Motion Transfer, and Pose Transfer, all at 1080p.

Not cheap, but cheaper than some other ways of getting that shot.
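
If you want to drive it from code rather than the web UI, fal.ai has a small Python client. Here’s a minimal sketch; the endpoint ID and argument names are my assumptions about how Marey is exposed, so check the model page on fal.ai for the real values.

```python
# pip install fal-client, then set FAL_KEY in your environment.
import fal_client

# NOTE: the endpoint ID and argument names below are assumptions --
# look up the actual Marey endpoint and parameters on fal.ai.
result = fal_client.subscribe(
    "moonvalley/marey/t2v",  # hypothetical endpoint ID for Marey text-to-video
    arguments={
        "prompt": "Low-angle cinematic shot of a chimpanzee at a typewriter, 35mm",
    },
    with_logs=True,  # stream queue/progress logs while the job runs
)
# fal.ai endpoints typically return a hosted file URL for the output.
print(result["video"]["url"])
```

The same pattern should cover the Image2Video and transfer modes, swapping in the appropriate endpoint and adding an image URL argument.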

My take: CGI and VFX are being automated, like weaving and typesetting and other lost jobs.

Easy motion transfer in Runway Act-Two

CyberJungle, the YouTube channel of Hamburg-based Senior IT Product Manager Cihan Unur, has raved about Runway’s excellent motion-capture feature, Act-Two.

He says:

“This new feature can turn you into literally any character you can imagine. We are talking about advanced motion capture now, with full-body, face, and hand tracking. Act-Two is an AI-human hybrid performance acting tool. It’s full scene transformation powered by your acting and a single image. No green screen, no fancy setup, just a video of you and one image or video clip to guide the look. It transforms you into any character, even nonhuman.”

Act-Two requires a Standard plan or higher ($15/month).

Official Help webpage.

My take: Wow! What I find particularly impressive is the lip-sync: it’s the best I’ve seen and, for the most part, totally believable.

Moonvalley wants to be THE tool for filmmakers

Moonvalley released Marey 1.5 to the public last week.

It promises Text to Video and Image to Video. What’s much more interesting is what else it can do — such as:

  1. Motion Transfer
  2. Pose Transfer
  3. Facial Reference
  4. Camera Motion from Image
  5. Camera Motion from Video
  6. Trajectory Control
  7. Keyframing
  8. Shot Extension

An example:

“Prompt: Cinematic shot of a chimpanzee sitting in contemplative stillness, its fingers types on a retro typewriter. Soft, diffused lighting highlights the rich textures of its fur and the intricate details of its face. Shadows fall dramatically across the dimly lit room, creating a cinematic and moody atmosphere. Captured with a shallow depth of field using warm, sharp 35mm film aesthetics. Moody low angle looking up at a close hairy chimpanzee hands raised looking at a typewriter, out of focus lunar landscape in the background dark space. Bark sky, dark void, black void, minimalist masterful, shot on 35mm, low angle, close up, black background, stark black backdrop, darkness of space on the moon valley, hyper realistic, details, cinema, rocky cracks craters dusty surface of the moon, atmospheric hazy atmosphere, out of focus lunar surface, haze, space.”

Pricing is not cheap at $14.99 for 10 videos, about $1.50 each. (Curiously, the middle tier is the best value at $1.40 per video.)

Tim at Theoretically Media takes this further by combining output from SayMotion with Moonvalley’s Motion Transfer:

My take: Finally, a company working from inside Hollywood and not just another one approaching AI-generated video as a technical challenge. Moonvalley seems to be our best hope yet for valuable tools that filmmakers might use to improve their projects.

Tips for using Google’s Flow

It’s debatable whether Google’s Veo is the best AI video generator or has been eclipsed by something newer, but what’s not up for debate is its cost – it’s by far the most expensive option. Therefore, use these tips to minimize unusable generations.

For consistent characters in Flow, use “ingredients”. According to Google:

“An ingredient is a consistent visual element — a character, an object or a stylistic reference — that you can create from a text-to-image prompt with the help of Imagen or by uploading an image. You can add up to three ingredients per prompt by selecting “Ingredients to Video” and then generating or uploading the desired images.”

You should be able to add your two main characters this way, and keep them consistent with Ingredients to Video.

Another way to generate a new clip with the same character is to Jump To it. According to Google:

“Transition a character or object to a completely new setting while preserving their appearance from the previous shot. It’s like teleporting your subject, saving you from recreating them for a new scene.”

In general, you’re going to want to be very specific when prompting Veo for video (a small prompt-builder sketch follows Google’s list below). From Google:

“Consider these elements when crafting your prompt:

  • Subject and action: Clearly identify your characters or objects and describe their movements.
  • Composition and camera motion: Frame your shot with terms like “wide shot” or “close-up,” and direct the camera with instructions like “tracking shot” or “aerial view.”
  • Location and lighting: Don’t just name a place; paint a picture. The lighting and environment set the entire mood. Instead of “a room,” try describing “a dusty attic filled with forgotten treasures, a single beam of afternoon light cutting through a grimy window.”
  • Alternative styles: Flow is not limited to realistic visual styles. You can explore a wide array of animation styles to match your story’s tone. Experiment with prompts that specify aesthetics like “stop motion,” “knitted animation” or “clay animation.”
  • Audio and dialogue: While still an experimental feature, you can generate audio with your video by selecting Veo 3 in the model picker. You can then prompt the model to create ambient noise, specific sound effects, or even generate dialogue by including it in your prompt, optionally specifying details like tone, emotion, or accents. Note that speech is less likely to be generated if the requested dialogue doesn’t fit in the 8-second clip, or if it involves minors.”
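
To make that checklist concrete, here’s a tiny, illustrative prompt-builder in Python: one field per element from Google’s list, so you can’t forget one before spending credits on a generation. The field names and example are mine, not Google’s.

```python
from dataclasses import dataclass

@dataclass
class VeoPrompt:
    """One field per element in Google's prompting checklist."""
    subject_action: str       # who/what is on screen and what they do
    composition_camera: str   # framing ("wide shot") and camera motion ("tracking shot")
    location_lighting: str    # paint the place and the light, not just a name
    style: str                # e.g. "shot on 35mm", "stop motion", "clay animation"
    audio_dialogue: str = ""  # optional: ambience, SFX, or spoken lines (Veo 3)

    def render(self) -> str:
        parts = [self.subject_action, self.composition_camera,
                 self.location_lighting, self.style]
        if self.audio_dialogue:
            parts.append(self.audio_dialogue)
        return " ".join(parts)

prompt = VeoPrompt(
    subject_action="An old prospector pans for gold in a mountain stream.",
    composition_camera="Close-up, slow tracking shot circling him.",
    location_lighting="Golden-hour light through pines, mist rising off the water.",
    style="Warm, sharp 35mm film aesthetic with shallow depth of field.",
    audio_dialogue='River ambience; he mutters softly, "There you are."',
)
print(prompt.render())
```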

You can use Gemini to refine prompts, expand on an idea or be a brainstorming companion. Here’s a Gemini prompt to get you started:

“You are the world’s most intuitive visual communicator and expert prompt engineer. You possess a deep understanding of cinematic language, narrative structure, emotional resonance, the critical concept of filmic coverage and the specific capabilities of Google’s Veo AI model. Your mission is to transform my conceptual ideas into meticulously crafted, narrative-style text-to-video prompts that are visually breathtaking and technically precise for Veo.”

If you’re using Gemini to help generate multiple clips that have scene consistency, you’ll need to explicitly tell Gemini to repeat all essential details from prior prompts.
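
If you’d rather script that than paste into the Gemini app, here’s a hedged sketch using Google’s google-genai Python SDK, wiring the system prompt above into the request and feeding prior prompts back in for consistency. The model name is just a placeholder for whichever Gemini model you have access to.

```python
# pip install google-genai, then set GOOGLE_API_KEY in your environment.
from google import genai
from google.genai import types

SYSTEM = (
    "You are the world's most intuitive visual communicator and expert "
    "prompt engineer..."  # the full system prompt quoted above
)

client = genai.Client()  # picks up the API key from the environment

def refine_for_veo(idea: str, prior_prompts: list[str]) -> str:
    """Turn a rough shot idea into a Veo prompt, restating earlier details."""
    # Gemini won't remember earlier shots on its own, so pass them back in.
    context = "\n\n".join(prior_prompts) or "(none yet)"
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # placeholder; use any Gemini model you have
        config=types.GenerateContentConfig(system_instruction=SYSTEM),
        contents=f"Prior shot prompts:\n{context}\n\nNew shot idea: {idea}",
    )
    return response.text

print(refine_for_veo("the prospector finds a nugget", prior_prompts=[]))
```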

My take: cheeky, prompting us to use Gemini to create prompts for Veo. Bit of a house of mirrors, no?

Kalshi TV ad made with Veo 3

As seen during the recent NBA Finals:

This 30-second television ad was made by Pj Accetturo, a filmmaker based in Tampa, Florida. Here’s his full process:

“My Veo 3 viral video process is very simple.

I’ve generated 30M+ views in 3 weeks using this exact workflow:

  1. Write a rough script
  2. Use Gemini to turn it into a shot list + prompts
  3. Paste into Veo 3 (Google Flow)
  4. Edit in CapCut/FCPX/Premiere, etc.

Concept

Kalshi is a prediction market where you can trade on anything. (US legal betting)

I pitched them on a GTA VI style concept because I think that unhinged street interviews are Veo 3’s bread and butter right now.

I guarantee you that everyone will copy this soon, so might as well make it easy and give you the entire process.

Script

Their team gave me a bunch of bullet points of betting markets they wanted to cover (NBA, Eggs, Hurricanes, Aliens, etc.).

I then rewatched the GTA VI trailer and got inspired by a couple locations, characters, etc.

Growing up in Florida…this wasn’t a hard script to write, lol.

Prompting

I then ask Gemini/ChatGPT to take the script and convert every shot into a detailed Veo 3 prompt. I always tell it to return 5 prompts at a time—any more than that and the quality starts to slip.

Each prompt should fully describe the scene as if Veo 3 has no context of the shot before or after it. Re-describe the setting, the character, and the tone every time to maintain consistency.

Prompt example:

A handheld medium-wide shot, filmed like raw street footage on a crowded Miami strip at night. An old white man in his late 60s struts confidently down the sidewalk, surrounded by tourists and clubgoers. He’s grinning from ear to ear, his belly proudly sticking out from a cropped pink T-shirt. He wears extremely short neon green shorts, white tube socks, beat-up sneakers, and a massive foam cowboy hat with sequins on it. His leathery tan skin glows under the neon lights.

In one hand, he clutches a tiny, trembling chihuahua to his chest like a prized accessory.

As he walks, he turns slightly toward the camera, still mid-strut, and shouts with full confidence and joy:

“Indiana got that dog in ’em!”

Trailing just behind him are two elderly women in full 1980s gear—both wearing bedazzled workout leotards, chunky sneakers, and giant plastic sunglasses. Their hair is still in curlers under clear plastic shower caps. One sips from a giant novelty margarita glass, the other waves at passing cars.

Around them, the strip is buzzing—people filming with phones, scooters zipping by, music thumping from nearby balconies. Neon signs flicker above, casting electric color across the scene. The crowd parts around the trio, half amazed, half confused.

Process

Instead of giving it 10 shots and telling ChatGPT to turn them all into prompts, I find it works best when it gives you back only 3 prompts at a time.

This keeps the accuracy high.

Open up three separate windows in Veo 3 and put each prompt in there.

Run all three at the same time.

3-4 min later, you’ll get back your results. You’ll likely need to change things.

Take the first prompt back into ChatGPT and dictate what you want changed.

Then it will give you a new adjusted prompt.

Let that run while you then adjust prompt 2. Then prompt 3. Usually, by the time you’re done with prompt 3, prompt 1 has its second iteration generated.

Rinse and repeat for your whole shot list.

Tips

I don’t know how to fix the random subtitles. I’ve tried it with and without quotes and saying (no subtitles) and it still happens. If anyone has a tip, let me know and I’ll add it to this post.

Don’t let ChatGPT describe music being played in the background or it’ll be mixed super loud.

If you want certain accents, repeat “British accent” or “country accent,” etc., a couple of times. I’ve found that it will do a decent job matching the voice to the face/race/age, but it helps to prompt for it.

Edit

Editing Veo 3 videos is easy.

Simply merge the clips in CapCut, FCPX, or Premiere, and add music (if necessary).

I’d love to know if anyone has found good upscale settings for Veo 3 in 720p. My tests in Topaz made the faces more garbled, so I try to cover it with a bit of film grain.

I like to add some compression/bass to the Veo 3 audio because I find it to be “thin”.

Cost and Time

This took around 300–400 generations to get 15 usable clips. One person, two days.

That’s a 95% cost reduction compared to traditional advertising.

The Future of Ads

But just because this was cheap doesn’t mean anyone can do it this quickly or effectively. You still need experience to make it look like a real commercial.

I’ve been a director for 15+ years, and just because something can be done quickly doesn’t mean it’ll come out great. But it can if you have the right team.

The future is small teams making viral, brand-adjacent content weekly, getting 80 to 90 percent of the results for way less.

What’s the Moat for Filmmakers?

It’s attention.

Right now the most valuable skill in entertainment and advertising is comedy writing.

If you can make people laugh, they’ll watch the full ad, engage with it, and some of them will become customers.”
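
Pj’s batching tip, asking for only a few prompts per request and re-describing the setting and characters every time, is easy to wire into a loop. Here’s a rough sketch of that step using the google-genai SDK again; the instruction wording and model name are mine, not his.

```python
# pip install google-genai, then set GOOGLE_API_KEY in your environment.
from google import genai

client = genai.Client()
BATCH_SIZE = 3  # small batches keep prompt quality high, per Pj's tip

def shots_to_prompts(script: str, shots: list[str]) -> list[str]:
    """Convert a shot list into standalone Veo 3 prompts, a few shots at a time."""
    batches = []
    for i in range(0, len(shots), BATCH_SIZE):
        batch = shots[i : i + BATCH_SIZE]
        response = client.models.generate_content(
            model="gemini-2.0-flash",  # placeholder for any capable Gemini model
            contents=(
                "Convert each shot below into a detailed Veo 3 prompt. Veo has "
                "no memory of other shots, so fully re-describe the setting, "
                "characters, and tone in every single prompt.\n\n"
                f"Script for context:\n{script}\n\nShots:\n" + "\n".join(batch)
            ),
        )
        batches.append(response.text)  # one block of prompts per batch
    return batches
```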

The BTS:

My take: high energy, for sure! That’s one detailed prompt for a three-second clip.

See Veo Prompt Examples

Google Veo is arguably the best (but most expensive) AI video generator today. And Google Flow is arguably the best AI filmmaking tool built with and for creatives. Want to peek under the hood and see the prompts behind the magic? See Flow TV.

My favourites are:

NOTE: Click into a channel and select the Lightbox view. Turn on Show Prompt. Notice how detailed they can be.

My take: I think we’re beyond the “remember, it’s only going to get better” stage.