Bourdain speaks from the beyond in new doc

Roadrunner: A Film About Anthony Bourdain, directed and produced by Morgan Neville, was released in the United States on July 16, 2021, by Focus Features. Celebrity chef and TV presenter Anthony Bourdain died by suicide on June 8, 2018, in France while on location, and this film explores his complex psyche.

But a controversy has erupted over the director’s inclusion of an AI-generated voiceover.

Helen Rosner reviewed the film in The New Yorker and noticed something strange:

“There is a moment at the end of the film’s second act when the artist David Choe, a friend of Bourdain’s, is reading aloud an e-mail Bourdain had sent him: “Dude, this is a crazy thing to ask, but I’m curious,” Choe begins reading, and then the voice fades into Bourdain’s own: “…and my life is sort of shit now. You are successful, and I am successful, and I’m wondering: Are you happy?” I asked (director) Neville how on earth he’d found an audio recording of Bourdain reading his own e-mail. Throughout the film, Neville and his team used stitched-together clips of Bourdain’s narration pulled from TV, radio, podcasts, and audiobooks. “But there were three quotes there I wanted his voice for that there were no recordings of,” Neville explained. So he got in touch with a software company, gave it about a dozen hours of recordings, and, he said, “I created an A.I. model of his voice.” In a world of computer simulations and deepfakes, a dead man’s voice speaking his own words of despair is hardly the most dystopian application of the technology. But the seamlessness of the effect is eerie. “If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Neville said. “We can have a documentary-ethics panel about it later.””

Well, the panel has been convened.

In a follow-up article, Rosner writes: “Neville used the A.I.-generated audio only to narrate text that Bourdain himself had written” and reveals the director’s “initial pitch of having Tony narrate the film posthumously à la Sunset Boulevard — one of Tony’s favorite films.”

People seem offended that the director has literally put words into Bourdain’s mouth, albeit Bourdain’s own words. Personally, I don’t have an issue with this, but I think there should have been a disclaimer off the top stating, “Artificial intelligence was used to generate 45 seconds of Mr. Bourdain’s voiceover in this film.”

My take: what I want to know is, how can I license the Tony Bourdain AI to narrate my movie?

Portal installation links two city centres

Futuristic-looking circular video portals have appeared in Vilnius, Lithuania, and Lublin, Poland, letting citizens of the two cities see each other in real time.

The two portals connect Vilnius’s Train Station with Lublin’s Central Square, some 600 km apart.

Benediktas Gylys, initiator of PORTAL, says:

“Humanity is facing many potentially deadly challenges; be it social polarisation, climate change or economic issues. However, if we look closely, it’s not a lack of brilliant scientists, activists, leaders, knowledge or technology causing these challenges. It’s tribalism, a lack of empathy and a narrow perception of the world, which is often limited to our national borders. That’s why we’ve decided to bring the PORTAL idea to life – it’s a bridge that unifies and an invitation to rise above prejudices and disagreements that belong to the past. It’s an invitation to rise above the us and them illusion.”

PORTAL is a collaboration of the Benediktas Gylys Foundation, the City of Vilnius, the City of Lublin and the Crossroads Centre for Intercultural Creative Initiatives.

More portals are planned to link Vilnius with London, England, and Reykjavik, Iceland.

See the official website.

My take: back in the early Nineties (before the Internet caught the public eye) I conceived of a similar network of interconnected public spaces, called Central Square. My vision was similar to Citytv’s Speakers’ Corner but was to be located in large outdoor public spaces and used to broadcast citizen reports, rants and demonstrations. It would have included sound, which PORTAL seems to have overlooked. The feeds were to appear on television sets on some of the high-numbered channels. Of course, once increased bandwidth could support Internet video, webcams took off instead. See EarthCam.com for a list.

Pushing drone footage to the next level

Drone footage. You’ve seen lots of dreamy sequences from high in the sky. But on March 8, 2021, a small Minneapolis company released a 90-second video with footage the likes of which you’ve never seen before. Here’s the local KARE-TV coverage:

Trevor Mogg of Digital Trends adds:

“Captured by filmmaker and expert drone pilot Jay Christensen of Minnesota-based Rally Studios, the astonishing 90-second sequence, called Right Up Our Alley, comprises a single shot that glides through Bryant Lake Bowl and Theater in Minneapolis. The film, which has so far been viewed more than five million times on Twitter alone, was shot using a first-person-view (FPV) Cinewhoop quadcopter, a small, zippy drone that’s used, as the name suggests, to capture cinematic footage.”

Here’s their corporate website and the original tweet.

Oscar Liang has a great tutorial on Cinewhoops.

Johnny FPV offers a great first-person-view overview.

My take: ever had dreams of flying? This might be even better.

How NFTs will unleash the power of the Blockchain

NFT. WTF?

Let’s break this down to the individual letters.

F = Fungible. “Fungible” assets are exchangeable for similar items. We can swap the dollars in each other’s pockets or change a $10 bill into two $5 bills without breaking a sweat.

T = Token. Specifically, a cryptographic token validated by the blockchain decentralized database.

N = Non. Duh.

So an NFT is a Non-Fungible Token: in other words, a unique asset that is validated by the blockchain. This solves the real-world problem of vouching for the provenance of that Van Gogh in your attic; in the digital world, the blockchain records changes in an asset’s price, ownership and so on in a distributed ledger designed to be tamper-proof. (Just don’t lose your crypto wallet.)
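
To make the fungible/non-fungible distinction concrete, here’s a toy Python sketch; it’s my own illustration of the bookkeeping, not how any real blockchain is implemented:

```python
from dataclasses import dataclass, field

@dataclass
class NFT:
    token_id: str        # unique: no two tokens share an ID
    asset_uri: str       # what the token points to
    provenance: list = field(default_factory=list)  # ownership history

    def transfer(self, new_owner: str, price: float):
        # Every sale is appended to the token's history; on a real
        # blockchain this record would live in the distributed ledger.
        self.provenance.append((new_owner, price))

# Two $5 bills are fungible: swapping them changes nothing.
five_a, five_b = 5, 5
assert five_a == five_b

# Two NFTs are not interchangeable, even if they point at identical files.
original = NFT("token-001", "ipfs://starry-night.png")
knockoff = NFT("token-002", "ipfs://starry-night.png")
assert original.token_id != knockoff.token_id

original.transfer("alice", 2.5)  # provenance now records who owns it, at what price
```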

Early 2021 has seen an explosion in marketplaces for the creation and trading of NFTs. Like most asset bubbles, it’s all tulips until you need to sell and buyers are suddenly scarce.

But I believe NFTs hold the key to unleashing the power of the blockchain for film distribution.

Cathy Hackl of Forbes writes about the future of NFTs:

“Non-fungible tokens are blockchain assets that are designed to not be equal. A movie ticket is an example of a non-fungible token. A movie ticket isn’t a ticket to any movie, anytime. It is for a very specific movie and a very specific time. Ownership NFTs provide blockchain security and convenience, but for a specific asset with a specific value.”

What if there were an NFT marketplace dedicated to streaming films? Filmmakers would mint a series of NFTs, and each viewer would redeem one NFT to stream the movie. This would allow for frictionless media dissemination and direct economic compensation for filmmakers.
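
Here’s a rough Python sketch of how that mint-and-redeem flow could work. To be clear, the marketplace, the names and the one-token-one-stream rule are my assumptions for illustration, not any existing platform’s API:

```python
import uuid

class FilmDrop:
    """Hypothetical marketplace listing: a filmmaker mints a fixed
    run of unique view tokens for one film."""

    def __init__(self, title: str, run_size: int, price: float):
        self.title = title
        self.price = price
        # Each token is unique (non-fungible) and starts unowned.
        self.tokens = {uuid.uuid4().hex: {"owner": None, "redeemed": False}
                       for _ in range(run_size)}

    def buy(self, viewer: str) -> str:
        """Sell the next unowned token; proceeds go straight to the filmmaker."""
        for token_id, state in self.tokens.items():
            if state["owner"] is None:
                state["owner"] = viewer
                return token_id
        raise RuntimeError("sold out")

    def redeem(self, token_id: str, viewer: str) -> str:
        """One token, one stream: redemption is checked against ownership."""
        state = self.tokens[token_id]
        if state["owner"] != viewer or state["redeemed"]:
            raise PermissionError("token invalid for this viewer")
        state["redeemed"] = True
        return f"streaming '{self.title}' for {viewer}"

drop = FilmDrop("My Indie Feature", run_size=1000, price=4.99)
ticket = drop.buy("alice")
print(drop.redeem(ticket, "alice"))
```

On a real blockchain the ownership and redemption records would live in the distributed ledger rather than in a Python dictionary, but the economics are the same: every ticket is accounted for, and the filmmaker is paid directly.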

Here’s a tutorial on turning art into NFTs.

My take: while I think NFTs hold promise for film distribution, the key will be lowering the gas price: the fee paid to mint an NFT in the first place.

Digital Humans coming soon!

Epic Games and Unreal Engine have announced MetaHuman Creator, coming later in 2021.

“MetaHuman Creator is a cloud-streamed app designed to take real-time digital human creation from weeks or months to less than an hour, without compromising on quality. It works by drawing from an ever-growing library of variants of human appearance and motion, and enabling you to create convincing new characters through intuitive workflows that let you sculpt and craft the result you want. As you make adjustments, MetaHuman Creator blends between actual examples in the library in a plausible, data-constrained way. You can choose a starting point by selecting a number of preset faces to contribute to your human from the diverse range in the database.”

Right now, you can start with 18 different bodies and 30 hair styles.

“When you’re happy with your human, you can download the asset via Quixel Bridge, fully rigged and ready for animation and motion capture in Unreal Engine, and complete with LODs. You’ll also get the source data in the form of a Maya file, including meshes, skeleton, facial rig, animation controls, and materials.”

Got that? See documentation.
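
That “blends between actual examples in the library in a plausible, data-constrained way” line is the interesting part. Epic hasn’t published its algorithm, but the classic version of the idea is a convex combination of scanned exemplars, which keeps the result inside the space of real faces. Here’s a minimal sketch of that idea, with made-up toy data:

```python
import numpy as np

# Hypothetical exemplar library: each row is one scanned face,
# flattened to a vector of vertex positions (toy random data here).
rng = np.random.default_rng(0)
library = rng.random((30, 3000))   # 30 presets, 1000 vertices * xyz

def blend_faces(weights: np.ndarray) -> np.ndarray:
    """Blend exemplars with non-negative weights that sum to 1.

    Keeping the weights on the simplex means the output is a convex
    combination of real scans: one simple way to stay "plausible and
    data-constrained" instead of drifting into impossible faces.
    """
    w = np.clip(weights, 0.0, None)
    return (w / w.sum()) @ library

# Mostly preset 0, with a little of presets 3 and 7 mixed in.
weights = np.zeros(30)
weights[[0, 3, 7]] = [0.7, 0.2, 0.1]
new_face = blend_faces(weights)   # a new, plausible face vector
```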

The takeaway is that your digital humans can live in your Unreal Engine environment. Is this the future of movies?

My take: this reminds me of my experiments in machinima ten years ago. I used a video game called The Movies, which had a character generator (it would sync mouth movements with pre-recorded audio), environments and scenes I could use to record shots and then assemble into movies. See Cowboys and Aliens (The Harper Version) for one example. You know, in these COVID times, I wonder if Unreal Engine’s ability to mash together video games and VFX will become a safer way to create entertainment, one that doesn’t require scores of people to film together in the same studio at the same time.

Shoot your next film in Virtual Unreality

Oakley Anderson-Moore reports for No Film School on How One Studio Is Thriving During COVID (and Why It’s a Big Deal for Indies).

(The studio tour proper starts just before the 14-minute mark in this promotional video.)

“During the pandemic, one studio stayed open when most others closed. How? L.A. Castle Studios has developed ‘a better way to shoot.’ And owner Tim Pipher believes it’s the way of the future — perhaps no more so than for independent film. ‘I guess some of it comes down to luck,’ explained Pipher to No Film School. His studio has been slammed with work in the midst of the shutdowns. ‘COVID or no COVID, we think we’ve got a better way to shoot.'”

What sets this green-screen studio apart from others is the ability to shoot with a live-composited set.

Simply put, you and your actor can now create inside virtual reality.

How is this possible? It’s achieved by marrying moviemaking and video game 3D environments. The core software is Epic Games’ Unreal Engine.

See the Unreal Engine website and its Marketplace.

Check out L.A. Castle Studios.

My take: I love this technology! Basically, it’s Star Trek’s Holodeck with green walls instead of black. Keep in mind that, as a filmmaker, you still have to address every component other than location: casting, costumes, makeup, props, blocking, lighting, shot selection and performance, for instance. Do I know any Unreal Engine gurus?

Intel Labs creates photorealistic 3D VR from photos

Jacob Fox on PCGamesN suggests that new tech from Intel Labs could revolutionise VR gaming.

He describes:

“A new technique called Free View Synthesis. It allows you to take some source images from an environment (from a video recorded while walking through a forest, for example), and then reconstruct and render the environment depicted in these images in full ‘photorealistic’ 3D. You can then have a ‘target view’ (i.e. a virtual camera, or perspective like that of the player in a video game) travel through this environment freely, yielding new photorealistic views.”

David Heaney on Upload VR clarifies: “Researchers at Intel Labs have developed a system capable of digitally recreating a scene from a series of photos taken in it.

“Unlike with previous attempts, Intel’s method produces a sharp output. Even small details in the scene are legible, and there’s very little of the blur normally seen when too much of the output is crudely ‘hallucinated’ by a neural network.”
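
Intel’s full pipeline builds proxy geometry from the source images and uses a neural network to blend and refine the reprojected views, which is well beyond a blog post. But the geometric core (unproject each pixel to 3D using depth, move it into the target camera, re-project it) can be sketched in a few lines of NumPy. This toy forward-warp is my own illustration, not the paper’s code:

```python
import numpy as np

def synthesize_view(src_img, src_depth, K, src_to_tgt, out_shape):
    """Forward-warp one source image into a target camera using depth.

    src_img:    (H, W, 3) color image
    src_depth:  (H, W) depth along the source camera's z-axis
    K:          (3, 3) shared pinhole intrinsics
    src_to_tgt: (4, 4) rigid transform, source frame to target frame
    out_shape:  (H_out, W_out) tuple for the target image
    """
    H, W = src_depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)

    # Unproject each pixel to a 3D point in the source camera frame.
    pts = (np.linalg.inv(K) @ pix) * src_depth.reshape(1, -1)
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])

    # Move the points into the target frame and project through K.
    proj = K @ (src_to_tgt @ pts_h)[:3]
    z = proj[2]
    front = z > 1e-6
    uv = np.round(proj[:2, front] / z[front]).astype(int)

    # Z-buffered splat: the nearest surface wins at each target pixel.
    out = np.zeros(out_shape + (3,))
    zbuf = np.full(out_shape, np.inf)
    colors = src_img.reshape(-1, 3)[front]
    for (x, y), d, c in zip(uv.T, z[front], colors):
        if 0 <= x < out_shape[1] and 0 <= y < out_shape[0] and d < zbuf[y, x]:
            zbuf[y, x] = d
            out[y, x] = c
    return out  # holes stay black; this is where the paper's network steps in
```

Warping several source views this way and then fusing them cleanly is exactly the part Intel hands to the neural network.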

Read the full paper.

My take: this is fascinating! This could yield the visual version of 3D Audio.

Disney scientists perfect deep fakes

“We propose an algorithm for fully automatic neural face swapping in images and videos.”

So begins a startling revelation by Disney researchers Jacek Naruniec, Leonhard Helminger, Christopher Schroers, and Romann M. Weber in a paper delivered virtually at the 31st Eurographics Symposium on Rendering in London recently.

Here’s the abstract:

“In this paper, we propose an algorithm for fully automatic neural face swapping in images and videos. To the best of our knowledge, this is the first method capable of rendering photo-realistic and temporally coherent results at megapixel resolution. To this end, we introduce a progressively trained multi-way comb network and a light- and contrast-preserving blending method. We also show that while progressive training enables generation of high-resolution images, extending the architecture and training data beyond two people allows us to achieve higher fidelity in generated expressions. When compositing the generated expression onto the target face, we show how to adapt the blending strategy to preserve contrast and low-frequency lighting. Finally, we incorporate a refinement strategy into the face landmark stabilization algorithm to achieve temporal stability, which is crucial for working with high-resolution videos. We conduct an extensive ablation study to show the influence of our design choices on the quality of the swap and compare our work with popular state-of-the-art methods.”

Got that?

My advice: just watch the video and be prepared to be wowed.

My take: Deep fakes were concerning enough. However, this technology actually has production value. I envision a (very near) future in which “substitute actors” (sub-actors?) give the performances on set, and this Disney technology then replaces their faces with those of the “stars” they represent. In fact, if I were an agent, I’d be looking for those sub-actors now so I could package the pair. A star who didn’t want to mingle with potential COVID-19 carriers could send doubles to any number of projects at the same time. All that would be left is a high-resolution 3D scan and some ADR work. Of course, Jimmy Fallon already perfected this technique five years ago:

TikTok emerges as worthy Vine replacement

Joshua Eferighe posits on OZY that The Next Big Indie Filmmaker Might Be a TikToker.

Joshua’s key points:

  • “The social media platform is shaping the future of filmmaking.
  • Novice filmmakers are using the platform’s sophisticated editing tools to learn the trade and test their work.
  • Unlike Instagram, TikTok’s algorithm allows users without many followers to go viral, adding to its popularity.”

What is TikTok? The Chinese app claims to be “the leading destination for short-form mobile video. Our mission is to inspire creativity and bring joy.”

Why is TikTok valuable to filmmakers? Consider the hashtag #cinematics, with 3.7 billion views.

See these risks and this safety guide.

My take: Shorter is better! Remember Vine?

Some Smart TVs to become obsolete

Catie Keck reports in Gizmodo: Here’s Why Netflix Is Leaving Some Roku and Samsung Devices.

She says,

“Select Roku devices, as well as older Samsung or Vizio TVs, will soon lose support for Netflix beginning in December…. With respect to Roku devices in particular, the issue boils down to older devices running Windows Media DRM. Since 2010, Netflix has been using Microsoft PlayReady. Starting December 2, older devices that aren’t able to upgrade to PlayReady won’t be able to use the service.”

Netflix says,

“If you see an error that says: ‘Due to technical limitations, Netflix will no longer be available on this device after December 1st, 2019. Please visit netflix.com/compatibledevices for a list of available devices.’ It means that, due to technical limitations, your device will no longer be able to stream Netflix after the specified date. To continue streaming, you’ll need to switch to a compatible device prior to that date.”

Antonio Villas-Boas writes on Business Insider:

“This has surfaced one key weakness in Smart TVs — while the picture might still be good, the built-in computers that make these TVs ‘smart’ will become old and outdated, just like a regular computer or smartphone. That was never an issue on ‘dumb’ TVs that are purely screens without built-in computers to run apps and stream content over the internet.”

He concludes, “You should buy a streaming device like a Roku, Chromecast, Amazon Fire TV, or Apple TV instead of relying on your Smart TV’s smarts.”

My take: does this happen to cars as well?