About Michael Korican

A long-time media artist, Michael has been making films since 1978. He graduated from York University film school with Special Honours, winning the Famous Players Scholarship in his final year. The Rolling Stone Book of Rock Video called his first feature 'Recorded: Live!' "the first film about rock video". Michael served on the board of L.I.F.T. when he lived in Toronto during the eighties and managed the Bloor Cinema for Tom and Jerry. He has been prolific over the past eight years in Victoria, having made over thirty-five shorts, won numerous awards, produced two works for BravoFACT! and received development funding for 'Begbie’s Ghost' through the CIFVF and BC Film.

Be the hero of your own story.

Jason Hellerman reminds us on No Film School that No One Is Coming To Save Your Filmmaking Career.

He confesses:

“The hardest Hollywood truth I have had to come to terms with is that no one is going to come and save me.”

“I’m going to have to save myself.”

“I talk to so many young writers and directors who think someone needs to pluck them from obscurity so they can begin their careers. That’s just not true. Nothing is holding you back except your willingness to work hard and to create new things to show people. Start with small budgets, write things you can shoot. Then build from there. Get really good, get undeniable. Sure, to have a career in Hollywood a little luck is involved, too. But I am a firm believer in making your own luck.”

  • Keep learning.
  • Keep creating new things.
  • Keep putting yourself out there.
  • Be your own advocate and your own voice.
  • Stay relevant.

He concludes with “It is on you. You’re the hero of your story.”

My take: I really needed to be reminded of this, this week. Thanks, Jason.

FaceFusion 3: the best free face swapper

Tim of Theoretically Media has a great review of FaceFusion 3.0.0 on YouTube:

In it he discusses:

  1. How to install FaceFusion 3 using Pinokio
  2. How to face swap for video
  3. The limitations of FaceFusion
  4. Face swapping with AI-generated characters
  5. Lipsync
  6. Expression controls
  7. Aging controls

A huge bonus to this pipeline is the face_editor module. See 14:02 for tools that alter many elements of a face, such as smiles, frowns, eye lines and even age.
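For the command-line inclined, the same swap can also run headlessly outside of Pinokio. Treat the sketch below as a rough outline only: the subcommand and flag names (headless-run, --source-paths, --target-path, --output-path, --processors) are my assumptions modelled on earlier FaceFusion releases, so check python facefusion.py --help against your own install before trusting them.

```python
# Hypothetical sketch of a headless FaceFusion face swap driven from Python.
# Subcommand and flag names are assumptions modelled on earlier FaceFusion
# releases; verify them with `python facefusion.py --help` on your install.
import subprocess

def face_swap(source_image: str, target_video: str, output_video: str) -> None:
    """Swap the face from source_image onto every frame of target_video."""
    subprocess.run(
        [
            "python", "facefusion.py", "headless-run",
            "--source-paths", source_image,   # the face you want to transplant
            "--target-path", target_video,    # the footage to modify
            "--output-path", output_video,    # where the result is written
            "--processors", "face_swapper",   # add "face_editor" to tweak expressions
        ],
        check=True,
    )

face_swap("actor.jpg", "performance.mp4", "swapped.mp4")
```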

My take: we are way beyond deep fakes now. The ability to change expression is extremely powerful! Every performance can be altered.

Kling is redefining CGI, with Grading up next

Tim Simmons from Theoretically Media just released a look at Kling AI’s new 1.5 model:

In it he relates what’s new:

1080p Professional Mode: Kling 1.5 now generates videos at 1080p resolution when using Professional Mode. While it costs more credits, the output quality is significantly better and sets a new standard for AI video generation.

Motion Brush: Kling has introduced Motion Brush, a long-awaited tool in the AI video generation space. Currently, it’s only supported in the 1.0 model but will be available in 1.5 soon. Stay tuned!

End Frames: End frames have been introduced in the 1.0 model and are coming soon to the 1.5 model, allowing for smoother transitions and more control over your videos.

Using Negative Prompts: Improve your outputs by adding negative prompts to filter out undesired elements. Copy and paste the following negative prompts into your settings:

ARTIFACTS, SLOW, UGLY, BLURRY, DEFORMED, MULTIPLE LIMBS, CARTOON, ANIME, PIXELATED, STATIC, FOG, FLAT, UNCLEAR, DISTORTED, ERROR, STILL, LOW RESOLUTION, OVERSATURATED, GRAIN, BLUR, MORPHING, WARPING
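If you get tired of pasting that wall of text, it is easy to keep the list in a tiny script and generate the string on demand. This is just a convenience sketch of my own; Kling only cares about the plain text you paste into its negative prompt field.

```python
# Convenience sketch: keep the negative prompts in one list and emit the
# comma-separated string that gets pasted into Kling's negative prompt field.
NEGATIVE_PROMPTS = [
    "ARTIFACTS", "SLOW", "UGLY", "BLURRY", "DEFORMED", "MULTIPLE LIMBS",
    "CARTOON", "ANIME", "PIXELATED", "STATIC", "FOG", "FLAT", "UNCLEAR",
    "DISTORTED", "ERROR", "STILL", "LOW RESOLUTION", "OVERSATURATED",
    "GRAIN", "BLUR", "MORPHING", "WARPING",
]

def negative_prompt_string(extra=None):
    """Return the full negative prompt, optionally with shot-specific additions."""
    return ", ".join(NEGATIVE_PROMPTS + (extra or []))

print(negative_prompt_string(["TEXT", "WATERMARK"]))
```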

Of particular note is the emotion it’s able to generate.

Plus, Tim signals that Kling is about to add a full-featured Video Editor. Stay tuned indeed!

My take: of course, some will lament these advances. Yes, tasks that workers once spent their lives performing are now accomplished immediately. Looking at you, Medieval scribe, hot metal typesetter, telephone exchange operator. More job transformation is sure to come. We are well into the Digital Age and its promise is bearing increasingly wondrous fruit.

Flux.1 prompting and guidance guides

CyberJungle, the YouTube channel of Hamburg-based Senior IT Product Manager Cihan Unur, recently posted a great video on consistent generated characters.

There are lots of great insights in this 20-minute video. Two outstanding takeaways:

First: a prompting guide for Flux.1. At 15:28 he reveals three prompting styles: list, natural language and hybrid.

Second: a guidance guide for Flux.1. At 17:18 he shows Photorealistic and Cinematic images with a wide scope of guidance values. He posits:

“The essence of guidance setting is a compromise or a balance between photo realism and prompt understanding.”

See 18:36 for the Photorealistic results. He prefers a level of two.

See 19:54 for the Cinematic guidance level he prefers: again two.
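If you generate Flux.1 images through an API rather than a web UI, guidance is just another input you can sweep. Here is a minimal sketch using the Replicate Python client; the model slug and input names ("guidance", "aspect_ratio") are my assumptions from Replicate's flux-dev listing, so confirm them against the current schema before running.

```python
# Minimal sketch: render the same Flux.1 [dev] prompt at several guidance
# values via the Replicate Python client (pip install replicate, and set
# REPLICATE_API_TOKEN). Model slug and input names are assumptions taken
# from Replicate's flux-dev listing; confirm against the current schema.
import replicate

PROMPT = "cinematic photo of a rain-soaked Victoria street at dusk, 35mm film look"

for guidance in (2, 3.5, 5):
    output = replicate.run(
        "black-forest-labs/flux-dev",
        input={
            "prompt": PROMPT,
            "guidance": guidance,      # Cihan's sweet spot for realism is 2
            "aspect_ratio": "16:9",
        },
    )
    print(guidance, output)  # typically a list of image URLs
```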

My take: to me, generated images too often look over-the-top and so idealized that they’re unrealistic. The key seems to be dialing the guidance down to two. Who knew? Now, you do.

You can now star in generated video

Last week we explored the latest Generated Video (GV) pipeline. This week Seattle’s Yutao Han, aka Tao Prompts, goes further and illustrates How to Create Ai Videos of Yourself!

The goal here is to consistently end up with the same real person in multiple generated video clips.

“In this tutorial we’ll learn how to use the Flux image generator to train a custom AI model specifically for your own face and generate AI photos of yourself. Then we’ll animate those photos with the Kling AI video generator, which in my opinion generates the best AI videos right now.”

In a nutshell, the process is:

  1. Create an archive of at least ten photos of your star
  2. Upload this to the Ostris flux-dev-lora-trainer model on Replicate
  3. Train the LORA custom image model and use it to generate key frames (see the sketch after this list)
  4. Optionally, upscale these images on Magnific
  5. Generate six-second clips in Kling AI with these images
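Steps 2 and 3 can be scripted with the Replicate Python client rather than clicked through in the browser. Treat the sketch below as a rough outline under my own assumptions: the trainer's input names (input_images, trigger_word, steps) are taken from the ostris/flux-dev-lora-trainer listing, the version hashes are placeholders you would fill in yourself, and the exact call signature can shift between client releases.

```python
# Rough sketch of steps 2-3: train a Flux LoRA on your photo archive, then
# generate key frames with the trained model. Input names and the version
# placeholders are assumptions from the ostris/flux-dev-lora-trainer listing.
import replicate

# Step 2: point the trainer at the zip of ~10+ photos and start training.
training = replicate.trainings.create(
    version="ostris/flux-dev-lora-trainer:<current-version-hash>",  # fill in the real hash
    input={
        "input_images": "https://example.com/star_photos.zip",  # your photo archive
        "trigger_word": "MYSTAR",  # the token you'll use in prompts
        "steps": 1000,
    },
    destination="your-username/my-star-lora",  # model that will hold the LoRA
)
print(training.status)

# Step 3 (once training finishes): generate key frames with the custom model.
frames = replicate.run(
    "your-username/my-star-lora:<trained-version-hash>",
    input={"prompt": "MYSTAR walking through a neon-lit alley at night, 35mm"},
)
print(frames)  # image URL(s) to upscale (step 4) and feed into Kling (step 5)
```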

My take: it seems week by week we’re getting closer to truly usable generated video that rivals (or even surpasses) Hollywood’s CGI/VFX. Imagine being able to train more than one LORA model into Flux for Kling. I have it on good authority that that is just around the corner.

New Generated Video pipeline?

A couple of very recent videos point to a potential new Generated Video, or GV, pipeline.

The first is “Create Cinematic Ai Videos with Kling Ai! – Ultra Realistic Results” by Seattle’s Yutao Han, aka Tao Prompts.

The second is “How-To Create Uncensored Images Of Anyone (Free)” by Lisbon’s Igor Pogany, aka The AI Advantage.

Imagine combining both into a new GV pipeline:

  1. Train custom character models
  2. Create key frames utilizing these custom models
  3. Animate clips with these key frames
  4. Upscale these clips
  5. Edit together.

My take: a lot of people will immediately claim this is heresy, and threatens the very foundations of cinema as we’ve come to know it over the last one hundred years. And they would be right. And yet, time marches on. I believe some variation of this is the future of ultra-low budget production. Very soon the quality will surpass the shoddy CGI that many multi-million dollar Hollywood productions have been foisting on us lately.

Compare Image Generators at a glance

Matt Wolfe has just released on YouTube a wonderful comparison of top image generators tackling four different types of pictures.

The four image categories are:

  • Human Realism
  • Landscapes
  • Scenery incorporating Text
  • Surrealistic Images

The platforms are:

  1. Ideogram 2.0
  2. MidJourney 6.1
  3. Mystic
  4. Phoenix
  5. Flux.1 (Grok)
  6. Dall-e 3
  7. SD3
  8. Firefly 3
  9. Meta Emu
  10. Imagen 3
  11. Playground v3

Check out the Figma board to see all eleven contenders at once.

My take: as a visual learner, I really appreciate this side-by-side comparison. Thank you, Matt!

August 2024 AI Video Pipeline

Love it or hate it, as of August 2024, AI Video still has a long way to go.

In this video, AI Samson lays out the current AI Video Pipeline. Although there are a few fledgling story-building tools in development, full-featured “story mode” is not yet available in AI video generators. The current pipeline is:

  1. Create the first and last frames of your clips
  2. Animate the clips between these frames
  3. Create audio and lip-sync the clips
  4. Upscale the clips
  5. Create music and SFX
  6. Edit everything together offline.

It seems new platforms emerge weekly, but AI Samson makes these recommendations:

00:23 AI Art Image Generators
09:19 AI Video Generators
16:28 Voice Generators
18:02 Music Generators
20:44 Lip-Syncing
21:52 Upscaling

Keep an eye open for LTX Studio, though.

My take: You know, the current pipeline makes me think of an animation pipeline. It’s eerily similar to the Machinima pipeline I used to create films in the sandbox mode of the video game The Movies over ten years ago:

September 8 deadline for CIFF Pitch Sessions

Folks who follow this blog know that I love Telefilm’s Talent to Watch competition. It remains your best chance at funding your first feature film in Canada.

Normally this is a two-stage process (although two years ago they began allowing direct submissions from underrepresented folks). Each of the approximately 70 industry partners gets to forward one (and sometimes two or three) projects to Telefilm, and then the Talent to Watch jury selects eighteen or so for funding.

The prize? $250,000. One quarter of a million dollars.

Don’t belong to one of the Industry Partners? No problem!

The Chilliwack Independent Film Festival has got you covered. Launched last year, Pitch Sessions lets you throw your first feature project into the ring; five are selected to then pitch in person at the festival and the winner becomes CIFF’s nominee to Telefilm’s next Talent to Watch competition.

Oh yah, the top five also get free passes and a hotel room for the festival.

The deadline to apply to Pitch Sessions at the 2024 Chilliwack Independent Film Festival is September 8.

My take: If you’ve got a spare $100 and you want to hone your pitch in public, this is a great opportunity. Note that each industry partner sets their own rules, but this is the only one I know of that incorporates a live pitch. Just be aware that Telefilm typically doesn’t open the Talent to Watch competition until mid-April.

Deadline approaches for women to apply to BANFF Spark

The application deadline for this year’s BANFF Spark Accelerator for Women in the Business of Media: Producers Edition is rapidly approaching: August 12, 2024.

BANFF Spark provides market access, training, and networking opportunities to help build more Canadian women-owned media businesses.

“Since the program began in 2019, BANFF Spark has already provided opportunities for more than 200 women entrepreneurs. The program is open to all candidates and is designed to empower women of colour, Indigenous women, women with disabilities, 2SLGBTQI+ women, and non-binary individuals.”

All selected participants will receive:

  1. Online workshops (that address the core components of business development).
  2. Networking opportunities with top industry professionals.
  3. A full-access pass to the 2025 Banff World Media Festival (June 8-11, 2025) and its complement of top industry sessions and international marketplace.
  4. A $1500 CAD travel stipend to attend the 2025 Banff World Media Festival (on the condition of in-person Festival attendance).

Apply here.

My take: I love that this is focussed on people and not squarely on projects. I don’t have to mansplain this, just apply!