Telefilm 2023-2024 Annual Report: Talent Fund analysis

Telefilm Canada has released its 2023-2024 Annual Report. Over the next weeks, I’ll dig deeper. This week: the Talent Fund.

Telefilm links the Talent Fund with Talent to Watch, its first-feature program. They report:

  • “Through the generosity of its donors and partners from across Canada, the Talent Fund raised $270,600.”
  • The Talent Fund contributed $300,000 of the $3,600,000 invested in 18 first features in 2023.
  • “In total, the Fund financed 7% of the Talent to Watch program in 2023-2024.”
  • “The Talent Fund has a balance of close to $50,000.”

You can read the full annual report here.

My take: as a Certified Independent Production Fund, the majority of the Talent Fund’s past funding came from “CRTC tangible benefits” paid by major Canadian media companies and other media funds; private donations remain minimal, enough to fund perhaps two documentaries or one narrative feature a year. The allocation of part of the new online streaming tax to the Talent Fund will be critical to its continued existence. See The Path Forward.

Drama is not selling

Naman Ramachandran reports in Variety that Sales Agents Shift Away From Drama Films Amid Market Challenges: ‘It’s Led Us to Diversify Our Slate’.

“The global appetite for drama films has significantly diminished, according to a panel of international sales agents at the BFI London Film Festival.”

He quotes Sophie Green, head of acquisitions and development at Bankside Films, as saying: “The big sort of takeaway at the moment from the market is anything but drama. That really is kind of like double underlined everywhere that we go.”

Drama has become increasingly difficult to finance and sell, leading to a shift toward genre films and comedies.

My take: the pendulum swings this way and then that way. Dramas will be back, but perhaps they demand too much empathy from audiences right now.

The numbers behind Telefilm’s Talent to Watch 2024-25 projects

Telefilm Canada has selected 17 Talent to Watch projects, from 150 submissions, to share $3.45 million.

It’s quite revealing to look at the numbers in detail.

Let’s start with Genre.

Documentary 8
Drama 4
Science Fiction/Fantasy 3
Drama-Comedy 1
Thriller 1

Province?

Quebec 7
Ontario 7
British Columbia 1
Alberta 1
Manitoba 1

Let’s look at Language next.

English 9
French 3
English/Sudanese Arabic 1
Portuguese (Azorean dialect)/French/English 1
French/English/Algonquin 1
French/English 1
French/Creole 1

And let’s finish up with Stream.

Filmmaker Apply-Direct 11
Industry Partner 3
Festival 2
Indigenous 1

In addition, if gender is inferred from names (excluding Executive Producers):

Female approx. 25
Male approx. 15

Some observations:

  • The number of submissions rose almost 20% from last year.
  • Non-fiction continues to be almost as successful as Fiction.
  • Almost all of the successful projects are from Quebec and Ontario.
  • Almost one third of the successful projects include world languages in addition to English and/or French.
  • The vast majority of successful projects continue to be Filmmaker Apply-Direct.
  • Less than 20% of the successful projects are from Industry Partners.
  • Women far outnumber men and other expressions of gender.
  • No projects were selected from Atlantic Canada.

The cynical might posit that Telefilm’s Talent to Watch program continues to compensate for the broader industry.

My take: this is the third year that filmmakers could apply directly, and Telefilm has rewarded them well! So if you can apply directly, bypass your local industry partner; your odds are approximately one in nine.
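For the curious, the category counts and that one-in-nine figure can be sanity-checked with a quick tally. The counts are taken from this post; the average-award calculation is my own arithmetic:

```python
# Sanity-check the Talent to Watch 2024-25 numbers quoted above.
genre = {"Documentary": 8, "Drama": 4, "Science Fiction/Fantasy": 3,
         "Drama-Comedy": 1, "Thriller": 1}
province = {"Quebec": 7, "Ontario": 7, "British Columbia": 1,
            "Alberta": 1, "Manitoba": 1}
stream = {"Filmmaker Apply-Direct": 11, "Industry Partner": 3,
          "Festival": 2, "Indigenous": 1}

# Every breakdown should account for all 17 selected projects.
selected = sum(genre.values())
assert selected == sum(province.values()) == sum(stream.values()) == 17

submissions = 150
odds = submissions / selected          # ~8.8, i.e. roughly one in nine
avg_award = 3_450_000 / selected       # average Telefilm investment per project
quebec_ontario = (province["Quebec"] + province["Ontario"]) / selected

print(f"odds: 1 in {odds:.1f}")
print(f"average award: ${avg_award:,.0f}")
print(f"QC+ON share: {quebec_ontario:.0%}")
```

Which confirms the odds of roughly one in nine, an average award of about $203,000 per project, and Quebec plus Ontario taking over 80% of the selections.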

FaceFusion 3: the best free face swapper

Tim of Theoretically Media has a great review of FaceFusion 3.0.0 on YouTube:

In it he discusses:

  1. How to install FaceFusion 3 using Pinokio
  2. How to face swap for video
  3. The limitations of FaceFusion
  4. Face swapping with AI-generated characters
  5. Lipsync
  6. Expression controls
  7. Aging controls

A huge bonus to this pipeline is face_editor. See 14:02 for tools that alter many elements of a face, such as smiles, frowns, eye lines and even age.

My take: we are way beyond deep fakes now. The ability to change expression is extremely powerful! Every performance can be altered.

Kling is redefining CGI, with Grading up next

Tim Simmons from Theoretically Media just released a new look at Kling AI’s new 1.5 model:

In it he relates what’s new:

1080p Professional Mode: Kling 1.5 now generates videos at 1080p resolution when using Professional Mode. While it costs more credits, the output quality is significantly better and sets a new standard for AI video generation.

Motion Brush: Kling has introduced Motion Brush, a long-awaited tool in the AI video generation space. Currently, it’s only supported in the 1.0 model but will be available in 1.5 soon. Stay tuned!

End Frames: End frames have been introduced in the 1.0 model and are coming soon to the 1.5 model, allowing for smoother transitions and more control over your videos.

Using Negative Prompts: Improve your outputs by adding negative prompts to filter out undesired elements. Copy and paste the following negative prompts into your settings:

ARTIFACTS, SLOW, UGLY, BLURRY, DEFORMED, MULTIPLE LIMBS, CARTOON, ANIME, PIXELATED, STATIC, FOG, FLAT, UNCLEAR, DISTORTED, ERROR, STILL, LOW RESOLUTION, OVERSATURATED, GRAIN, BLUR, MORPHING, WARPING

Of particular note is the emotion it’s able to generate.

Plus, Tim signals that Kling is about to add a full-featured Video Editor. Stay tuned indeed!

My take: of course, some will lament these advances. Yes, tasks that workers once spent their lives performing are now accomplished immediately. Looking at you, Medieval scribe, hot metal typesetter, telephone exchange operator. More job transformation is sure to come. We are well into the Digital Age and its promise is bearing increasingly wondrous fruit.

You can now star in generated video

Last week we explored the latest Generated Video (GV) pipeline. This week Seattle’s Yutao Han, aka Tao Prompts, goes further and illustrates How to Create Ai Videos of Yourself!

The goal here is to consistently end up with the same real person in multiple generated video clips.

“In this tutorial we’ll learn how to use the Flux image generator to train a custom AI model specifically for your own face and generate AI photos of yourself. Then we’ll animate those photos with the Kling AI video generator, which in my opinion generates the best AI videos right now.”

In a nutshell, the process is:

  1. Create an archive of at least ten photos of your star
  2. Upload this to the Ostris flux-dev-lora-trainer model on Replicate
  3. Train the LoRA custom image model and use it to generate key frames
  4. Upscale these images on Magnific, optionally
  5. Generate six-second clips in Kling AI with these images
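Steps 2 and 3 could look something like the sketch below, using the Replicate Python client. The input keys (`input_images`, `trigger_word`, `steps`), the version placeholder, and the `TOK_MYSTAR` trigger word are assumptions based on the trainer’s model page at the time of writing; verify them against the current listing before running anything.

```python
# Sketch of steps 2-3: training a custom face LoRA on Replicate.
# The payload keys below are assumptions; check the
# ostris/flux-dev-lora-trainer model page for the current schema.

def build_training_input(zip_url: str, trigger_word: str, steps: int = 1000) -> dict:
    """Assemble the input payload for the Flux LoRA trainer."""
    return {
        "input_images": zip_url,        # zip archive of 10+ photos of your star
        "trigger_word": trigger_word,   # unique token that summons the face in prompts
        "steps": steps,                 # more steps = closer likeness, longer training
    }

# To launch the training job (network call; requires REPLICATE_API_TOKEN):
#
#   import replicate
#   training = replicate.trainings.create(
#       model="ostris/flux-dev-lora-trainer",
#       version="<version id from the model page>",
#       input=build_training_input("https://example.com/star-photos.zip",
#                                  "TOK_MYSTAR"),
#       destination="your-username/your-star-lora",
#   )
#   print(training.status)
```

Once training finishes, prompting the resulting model with the trigger word produces the key frames for step 3.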

My take: it seems week by week we’re getting closer to truly usable generated video that rivals (or even surpasses) Hollywood’s CGI/VFX. Imagine being able to train more than one LoRA model into Flux for Kling. I have it on good authority that that is just around the corner.

New Generated Video pipeline?

A couple of very recent videos point to a potential new Generated Video, or GV, pipeline.

The first is “Create Cinematic Ai Videos with Kling Ai! – Ultra Realistic Results” by Seattle’s Yutao Han, aka Tao Prompts.

The second is “How-To Create Uncensored Images Of Anyone (Free)” by Lisbon’s Igor Pogany, aka The AI Advantage.

Imagine combining both into a new GV pipeline:

  1. Train custom character models
  2. Create key frames utilizing these custom models
  3. Animate clips with these key frames
  4. Upscale these clips
  5. Edit the clips together.

My take: a lot of people will immediately claim this is heresy and threatens the very foundations of cinema as we’ve come to know it over the last hundred years. And they would be right. And yet, time marches on. I believe some variation of this is the future of ultra-low budget production. Very soon the quality will surpass the shoddy CGI that many multi-million dollar Hollywood productions have been foisting on us lately.

August 2024 AI Video Pipeline

Love it or hate it, as of August 2024, AI Video still has a long way to go.

In this video, AI Samson lays out the current AI Video Pipeline. Although there are a few fledgling story-building tools in development, full-featured “story mode” is not yet available in AI video generators. The current pipeline is:

  1. Create the first and last frames of your clips
  2. Animate the clips between these frames
  3. Create audio and lip-sync the clips
  4. Upscale the clips
  5. Create music and SFX
  6. Edit everything together offline.
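The final offline edit (step 6) can be as simple as concatenating the upscaled clips with ffmpeg’s concat demuxer. A minimal sketch, with hypothetical clip filenames:

```python
# Sketch of the final step: stitching generated clips offline with ffmpeg.
# Writes the file-list format that ffmpeg's concat demuxer expects.
from pathlib import Path

def write_concat_list(clips, list_path="clips.txt"):
    """Write an ffmpeg concat-demuxer file list and return its lines."""
    lines = [f"file '{Path(c).as_posix()}'" for c in clips]
    Path(list_path).write_text("\n".join(lines) + "\n")
    return lines

write_concat_list(["scene01.mp4", "scene02.mp4", "scene03.mp4"])
# Then, on the command line (re-encode instead of "-c copy" if the clips
# differ in codec or resolution):
#   ffmpeg -f concat -safe 0 -i clips.txt -c copy final_cut.mp4
```

For anything beyond a straight cut, an NLE is still the tool, but for assembling a string of generated clips this gets you a watchable rough cut in seconds.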

It seems new platforms emerge weekly, but AI Samson makes these recommendations:

00:23 AI Art Image Generators
09:19 AI Video Generators
16:28 Voice Generators
18:02 Music Generators
20:44 Lip-Syncing
21:52 Upscaling

Keep an eye open for LTX Studio though.

My take: You know, the current pipeline makes me think of an animation pipeline. It’s eerily similar to the Machinima pipeline I used to create films in the sandbox mode of the video game The Movies over ten years ago.

Paris 2024 Opening Ceremony was spectacular!

The Opening Ceremonies of the Paris 2024 Olympic Games were four hours long and I enjoyed every minute!

Lady Gaga performed!

And Céline Dion closed the show from the Eiffel Tower.

From the Olympics:

“For the first time in the history of the Olympic Summer Games, the Opening Ceremony will not take place in a stadium. The parade of athletes will be held on the Seine with boats for each national delegation. Winding their way from east to west, the 10,500 athletes will cross through the centre of Paris. The parade will come to the end of its 6-kilometre route in front of the Trocadéro, where the remaining elements of Olympic protocol and final shows will take place. Eighty giant screens and strategically placed speakers will allow everyone to enjoy the magical atmosphere of this show reverberating throughout the French capital.”

My take: It was spectacular! As it should be, at a reported cost of 120 million euros.

July 2024 Tier List for AI Video

Igor Pogany of The AI Advantage recently released a YouTube video that succinctly summarizes the current state of AI Video.

His favourites (dated mid-July 2024) are:

Runway GEN-3 Alpha and Luma Dream Machine for their clip outputs, but watch out for LTX Studio because of their overall project approach.

See the full tier list at 12:48 for the tl;dr.

My take: this is a super-valuable video that can get you up-to-date in under 14 minutes. Well worth your time.