1080p Professional Mode: Kling 1.5 now generates videos at 1080p resolution when using Professional Mode. While it costs more credits, the output quality is significantly better and sets a new standard for AI video generation.
Motion Brush: Kling has introduced Motion Brush, a long-awaited tool in the AI video generation space. Currently, it’s only supported in the 1.0 model but will be available in 1.5 soon. Stay tuned!
End Frames: End frames have been introduced in the 1.0 model and are coming soon to the 1.5 model, allowing for smoother transitions and more control over your videos.
Using Negative Prompts: Improve your outputs by adding negative prompts to filter out undesired elements. Copy and paste the following negative prompts into your settings:
Of particular note is the emotion it’s able to generate.
Plus, Tim signals that Kling is about to add a full-featured Video Editor. Stay tuned indeed!
My take: of course, some will lament these advances. Yes, tasks that workers once spent their lives performing are now accomplished immediately. Looking at you, medieval scribe, hot-metal typesetter, telephone exchange operator. More job transformation is sure to come. We are well into the Digital Age and its promise is bearing increasingly wondrous fruit.
CyberJungle, the YouTube channel of Hamburg-based Senior IT Product Manager Cihan Unur, recently posted a great video on consistent generated characters.
There are lots of great insights in this 20-minute video. Two outstanding takeaways:
First: a prompting guide for Flux.1. At 15:28 he reveals three prompting styles: list, natural language and hybrid.
Second: a guidance guide for Flux.1. At 17:18 he shows Photorealistic and Cinematic images with a wide scope of guidance values. He posits:
“The essence of guidance setting is a compromise or a balance between photo realism and prompt understanding.”
My take: to me, too often generated images look over-the-top and so ideal, they’re unrealistic. The key seems to be dialing the guidance down to two. Who knew? Now, you do.
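To make the guidance tip concrete, here is a minimal sketch of requesting a Flux.1 [dev] image with guidance dialed down to 2. It assumes the Replicate-hosted "black-forest-labs/flux-dev" listing and its `guidance` input field; verify the current model slug and schema before relying on them, and note the prompt text is my own example.

```python
# Sketch: low guidance (around 2) trades prompt adherence for a more
# photorealistic look, per the video's comparison. Model slug and input
# names assume Replicate's "black-forest-labs/flux-dev" listing.

def build_flux_input(prompt, guidance=2.0):
    """Assemble the generation payload with an explicit guidance value."""
    return {"prompt": prompt, "guidance": guidance}

payload = build_flux_input(
    "candid street portrait, overcast light, 35mm film look", guidance=2.0
)

# Uncomment to actually generate (requires REPLICATE_API_TOKEN):
# import replicate
# image_url = replicate.run("black-forest-labs/flux-dev", input=payload)
```

Keeping the payload construction separate from the API call makes it easy to sweep guidance values (say, 2 through 5) and compare outputs side by side.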
The goal here is to consistently end up with the same real person in multiple generated video clips.
“In this tutorial we’ll learn how to use the Flux image generator to train a custom AI model specifically for your own face and generate AI photos of yourself. Then we’ll animate those photos with the Kling AI video generator, which in my opinion generates the best AI videos right now.”
In a nutshell, the process is:
Create an archive of at least ten photos of your star
Upload this to the Ostris flux-dev-lora-trainer model on Replicate
Train the LoRA custom image model and use it to generate key frames
Optionally, upscale these images on Magnific
Generate six-second clips in Kling AI with these images
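The training step above can be sketched with the Replicate Python client. The trainer slug comes from the video; the version hash, destination name, and input field names are placeholders of mine, so check the model page for the current training schema before running.

```python
# Sketch of submitting the photo archive to the flux-dev LoRA trainer.
# Version hash, destination, and field names are placeholders.

def build_training_request(zip_url, trigger_word="MYFACE"):
    """Bundle the archive of ~10 photos (hosted as a zip) into a
    training request for the ostris/flux-dev-lora-trainer model."""
    return {
        "version": "ostris/flux-dev-lora-trainer:<version-hash>",  # placeholder
        "destination": "your-username/my-face-lora",               # placeholder
        "input": {
            "input_images": zip_url,       # zip of your photo archive
            "trigger_word": trigger_word,  # token you use in later prompts
        },
    }

req = build_training_request("https://example.com/photos.zip")

# Uncomment to submit (requires REPLICATE_API_TOKEN):
# import replicate
# training = replicate.trainings.create(**req)
```

Once training finishes, prompts that include the trigger word should render your subject, and those stills become the key frames you feed to Kling.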
My take: it seems week by week we’re getting closer to truly usable generated video that rivals (or even surpasses) Hollywood’s CGI/VFX. Imagine being able to train more than one LoRA model into Flux for Kling. I have it on good authority that that is just around the corner.
My take: a lot of people will immediately claim this is heresy, and threatens the very foundations of cinema as we’ve come to know it over the last one hundred years. And they would be right. And yet, time marches on. I believe some variation of this is the future of ultra-low budget production. Very soon the quality will surpass the shoddy CGI that many multi-million dollar Hollywood productions have been foisting on us lately.
Love it or hate it, as of August 2024, AI Video still has a long way to go.
In this video, AI Samson lays out the current AI Video Pipeline. Although there are a few fledgling story-building tools in development, full-featured “story mode” is not yet available in AI video generators. The current pipeline is:
Create the first and last frames of your clips
Animate the clips between these frames
Create audio and lip-sync the clips
Upscale the clips
Create music and SFX
Edit everything together offline.
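The six-step pipeline above can be sketched as a simple ordered checklist for tracking each clip through the stages. The stage names mirror the list; the data structure itself is my own illustration, not AI Samson's.

```python
# Track a clip's progress through the AI video pipeline.
from dataclasses import dataclass, field

PIPELINE = [
    "create first and last frames",
    "animate between frames",
    "generate audio and lip-sync",
    "upscale clips",
    "create music and SFX",
    "edit offline",
]

@dataclass
class Clip:
    name: str
    done: set = field(default_factory=set)

    def complete(self, stage):
        assert stage in PIPELINE, f"unknown stage: {stage}"
        self.done.add(stage)

    def next_stage(self):
        # first pipeline stage not yet finished, or None when complete
        return next((s for s in PIPELINE if s not in self.done), None)

clip = Clip("opening shot")
clip.complete("create first and last frames")
print(clip.next_stage())  # -> animate between frames
```

With a dozen clips in flight across four or five different tools, even a checklist this simple beats trying to remember which shot still needs lip-sync.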
It seems new platforms emerge weekly but AI Samson makes these recommendations:
My take: You know, the current pipeline makes me think of an animation pipeline. It’s eerily similar to the Machinima pipeline I used to create films in the sandbox mode of the video game The Movies over ten years ago:
You’ve seen the Sora samples. The Dream Machine videos. How does LTX Studio, touted as “the future of storytelling, transforming imagination into reality,” stand up?
“There are [a] whole bunch of things it does not do, but I love where it’s going and where I hope it’s going to go…. It’s brilliant for keeping track of all of the shots that you really do need to keep track of. It’s brilliant for scene wide settings and project wide settings, something I’ve been craving, and it’s really, really good at that. It’s great for casting. It’s brilliant for allowing you to then kind of just drop those characters in. I love the generative tools that will allow you to erase bits that you don’t need in your starting shot and to add other bits that you need that will help you tidy up the shot…. My two big gripes and I don’t think these are bugs that they’re going to fix, this is just fundamental features that it needs to be in there. One of them is every shot is slow motion…. Secondly, breaking the fourth wall. It drives me out of my mind!”
Here’s a peek at actually using LTX Studio by Riley Brown:
My take: In addition to Haydn’s slo-mo and fourth wall gripes, I would add these requirements as well: movement and expression control including blinking and lip-sync. Mid-2024, one has to use each of the many AI tools for what it does best and then bring all the bits together in post. As an early proponent of Machinima (using video games to make movies), I’m watching this space with interest. My conclusion: advances are being made but we’re nowhere near lucid dreaming.
Record in ProRes Log on an iPhone 15 Pro at 4K 30. He says, “4K 24 especially at ProRes Log just looks kind of choppy.”
In Settings, change to “Most Compatible” from “High Efficiency” and lock the white balance.
Turn on Exposure Adjustment and set it to 0.7. He says, “If your highlights are blown out it’s going to be a lot harder to actually bring that detail back once you go into color grading.”
Use the main 1X lens under adequate lighting, avoiding top-down noon sunlight. Try angling the light on the opposite side of the subject. He says, “The lighting is probably the most important element actually in making those cameras look good.”
Use the Grid to help create interesting compositions and make sure your camera movement is motivated.
In the edit, convert the Log footage with a Color Space Transform into Rec 709 and color grade as usual.
Use Halation to lend the footage the characteristic film highlights glow and use a plugin called RSMB to add motion blur.
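If you'd rather batch-convert outside your grading app, a LUT-based approximation of the Color Space Transform step can be scripted with ffmpeg's `lut3d` filter. This assumes ffmpeg is installed and you have a .cube LUT for Apple Log (Apple publishes one); the filenames here are examples of mine.

```python
# Build an ffmpeg command that applies a Log -> Rec.709 LUT and
# re-encodes to ProRes as an edit-friendly intermediate.
import subprocess

def lut_convert_cmd(src, lut, dst):
    """Assemble the ffmpeg invocation for one clip."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"lut3d={lut}",   # apply the Log -> Rec.709 LUT
        "-c:v", "prores_ks",     # keep a gradable ProRes intermediate
        dst,
    ]

cmd = lut_convert_cmd("clip_log.mov", "AppleLogToRec709.cube", "clip_rec709.mov")
# subprocess.run(cmd, check=True)  # uncomment to convert
```

Note this is a one-size-fits-all conversion; a Color Space Transform node inside your grading app remains the more flexible route when you want to grade under the transform.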