He suggests using fal.ai to train a custom LoRA (fal.ai/models/fal-ai/flux-lora-fast-training) with at least 10 images of the subject. Then use this model to generate images (fal.ai/models/fal-ai/flux-lora) and increase their resolution with an up-res tool. Finally, move on to animating them.
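For the curious, the two fal.ai calls above can be sketched in Python. The endpoint IDs come from the links above; the argument field names (images_data_url, trigger_word, loras) are my assumptions about the request shape, so check the model pages before relying on them:

```python
# A minimal sketch of the two-step fal.ai pipeline: train a LoRA on your
# subject, then generate with it. Field names below are assumptions, not
# the confirmed API schema.

def build_training_request(image_urls, trigger_word="SUBJ"):
    """Arguments for fal-ai/flux-lora-fast-training (hypothetical fields)."""
    if len(image_urls) < 10:
        raise ValueError("Use at least 10 images of the subject.")
    return {
        "images_data_url": image_urls,  # assumed field name
        "trigger_word": trigger_word,
    }

def build_generation_request(lora_url, prompt):
    """Arguments for fal-ai/flux-lora (hypothetical fields)."""
    return {
        "prompt": prompt,
        "loras": [{"path": lora_url, "scale": 1.0}],  # assumed field name
    }

# The actual calls would go through the official client, e.g.:
#   import fal_client
#   result = fal_client.subscribe("fal-ai/flux-lora-fast-training",
#                                 arguments=build_training_request(urls))
```

The trigger word is whatever token you want to use in prompts to invoke your subject once the LoRA is trained.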
He details how to train a LoRA on Kling using at least eleven videos of your character. Admittedly, this pipeline is a little more involved. He also suggests Freepik as another option.
My take: basically, if you can imagine it, you can now create it.
Stephen Follows has just released The Horror Movie Report, the most comprehensive case study of the horror genre ever, with data from over 27,000 films.
The report is offered in English and Spanish and comes in two editions:
Film Fan Edition is aimed at general audiences. (£24.99)
Film Professional Edition is designed for those in the film industry, includes extra insights on profitability and budgets, and comes with all the data as spreadsheets. (£79.99)
At over 400 pages, the report contains chapters on:
Horror Audiences
Subgenres
Script Origins
Cast
Crew
Budgets
Financials
Box Office
Profitability
Other Income
Film Festivals
Post Production
Posters and Marketing
Objectionable Content
Cultural Impact
Stephen is a leading film industry analyst known for his extensive research on film statistics; I’ve quoted his posts many times.
My take: Peter, this would make a great holiday gift for someone who aspires to produce a profitable film, no matter what the genre. The Professional Edition even comes with downloadable Excel files. Excel files!
The problem with a lot of image generators is that they love selfies: front-facing portraits. But what if you want a profile? Ben has a two-step work-around:
“Generate a close-up photo of your subject’s ear and then use the editor to zoom out and create the rest of the image.”
He explains:
“The reason this works is because what Midjourney needed was a pattern interrupt. Take advantage of its usual way to generate images by finding the usual way to generate an image with a more unusual focus. It’s better to choose a focus that is already often viewed from the angle we want.
focus on a ponytail if we want to see the back of someone’s head
use a receding hairline to see someone from straight above
focus on the back pocket of a pair of jeans if you want the…
I wouldn’t recommend looking up someone’s nostril (I mean it’s an angle that works but I just wouldn’t recommend it.)
The point is we can generate any of these things using extremely simple prompts and get very unusual angles to be seeing a person from. And then starting from there once we have the angle well defined we can simply zoom out and make our chosen feature less prominent by changing our prompt to something else and so in the new image the angle we wanted is extremely well defined not by tons of keywords but by the part of the image we already generated.”
This works for expressions as well. He explains:
“If we start with a photo of just a smile or just closed eyes or just a mischievous smirk, Midjourney will spend all of its effort to create a high quality closeup version of the exact expression we wanted that now, in just one more generation, we can apply to our character by simply zooming out.”
“The hardest Hollywood truth I have had to come to terms with is that no one is going to come and save me.”
“I’m going to have to save myself.”
“I talk to so many young writers and directors who think someone needs to pluck them from obscurity so they can begin their careers. That’s just not true. Nothing is holding you back except your willingness to work hard and to create new things to show people. Start with small budgets, write things you can shoot. Then build from there. Get really good, get undeniable. Sure, to have a career in Hollywood a little luck is involved, too. But I am a firm believer in making your own luck.”
Keep learning.
Keep creating new things.
Keep putting yourself out there.
Be your own advocate and your own voice.
Stay relevant.
He concludes with, “It is on you. You’re the hero of your story.”
My take: I really needed to be reminded of this, this week. Thanks, Jason.
CyberJungle, the YouTube channel of Hamburg-based Senior IT Product Manager Cihan Unur, recently posted a great video on consistent generated characters.
There are lots of great insights in this 20-minute video. Two outstanding takeaways:
First: a prompting guide for Flux.1. At 15:28 he reveals three prompting styles: list, natural language and hybrid.
Second: a guidance guide for Flux.1. At 17:18 he shows Photorealistic and Cinematic images with a wide scope of guidance values. He posits:
“The essence of guidance setting is a compromise or a balance between photo realism and prompt understanding.”
My take: to me, too often generated images look over-the-top and so ideal, they’re unrealistic. The key seems to be dialing the guidance down to two. Who knew? Now, you do.
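If you run FLUX.1 yourself via Hugging Face diffusers, dialing the guidance down looks something like this. The commented-out pipeline call reflects diffusers' documented FLUX support; the preset values are just my reading of the video's takeaway, not official numbers:

```python
# Hedged sketch: lower guidance for photorealism, higher for prompt
# adherence. These presets are my own interpretation of the video.

GUIDANCE_PRESETS = {
    "photorealistic": 2.0,  # low guidance = more natural, less "ideal"
    "cinematic": 3.5,       # stronger prompt adherence, more stylized
}

def guidance_for(style: str) -> float:
    """Return a starting guidance_scale for a given look."""
    return GUIDANCE_PRESETS.get(style, 3.5)

# Actual generation (requires a GPU and the model weights):
#   import torch
#   from diffusers import FluxPipeline
#   pipe = FluxPipeline.from_pretrained(
#       "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
#   image = pipe("portrait, soft window light",
#                guidance_scale=guidance_for("photorealistic")).images[0]
```

Treat the presets as starting points and sweep a few values either side, as Cihan does in the video.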
My take: a lot of people will immediately claim this is heresy, and threatens the very foundations of cinema as we’ve come to know it over the last one hundred years. And they would be right. And yet, time marches on. I believe some variation of this is the future of ultra-low budget production. Very soon the quality will surpass the shoddy CGI that many multi-million dollar Hollywood productions have been foisting on us lately.
Love it or hate it, as of August 2024, AI Video still has a long way to go.
In this video, AI Samson lays out the current AI Video Pipeline. Although there are a few fledgling story-building tools in development, full-featured “story mode” is not yet available in AI video generators. The current pipeline is:
Create the first and last frames of your clips
Animate the clips between these frames
Create audio and lip-sync the clips
Upscale the clips
Create music and SFX
Edit everything together offline.
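The six steps above can be sketched as an ordered pipeline. The stage names come straight from the list; the function body is a placeholder for whichever tool you pick at each stage:

```python
# The current AI video pipeline as an ordered list of stages.
# A real implementation would call out to a different tool per stage.

PIPELINE = [
    "keyframes",      # create first and last frames of each clip
    "animate",        # animate between the keyframes
    "audio_lipsync",  # generate audio and lip-sync the clips
    "upscale",        # up-res the clips
    "music_sfx",      # create music and sound effects
    "edit",           # assemble everything together offline
]

def run_pipeline(clip_name, stages=PIPELINE):
    """Thread a clip through each stage in order, tracking progress."""
    applied = []
    for stage in stages:
        # placeholder: invoke the chosen tool for this stage here
        applied.append(stage)
    return {"clip": clip_name, "stages_applied": applied}
```

Until a true "story mode" arrives, you are effectively the integration layer between these stages.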
It seems new platforms emerge weekly, but AI Samson makes these recommendations:
My take: You know, the current pipeline makes me think of an animation pipeline. It’s eerily similar to the Machinima pipeline I used to create films in the sandbox mode of the video game The Movies over ten years ago:
Folks who follow this blog know that I love Telefilm's Talent to Watch competition. It remains your best chance at funding your first feature film in Canada.
Until they began allowing direct submissions from underrepresented folks two years ago, this was strictly a two-stage process. Each of approximately 70 industry partners gets to forward one (and sometimes two or three) projects to Telefilm, and then the Talent to Watch jury selects eighteen or so for funding.
The prize? $250,000. One quarter of a million dollars.
Don’t belong to one of the Industry Partners? No problem!
The Chilliwack Independent Film Festival has got you covered. Launched last year, Pitch Sessions lets you throw your first feature project into the ring; five are selected to then pitch in person at the festival and the winner becomes CIFF’s nominee to Telefilm’s next Talent to Watch competition.
Oh yah, the top five also get free passes and a hotel room for the festival.
My take: If you’ve got a spare $100 and you want to hone your pitch in public, this is a great opportunity. Note that each industry partner sets their own rules but this is the only one I know of that incorporates a live pitch. Just be aware that Telefilm typically doesn’t open the Talent to Watch competition until mid-April.