New lightfield lens records depth info

John Aldred reports on DIYPhotography that the K|Lens One lens is about to be released on Kickstarter.

He says:

“The K|Lens One lens, teased earlier this year by German company K|Lens, is finally about to be released on Kickstarter. They say that this is the world’s first light field lens that can be used with regular DSLR and mirrorless cameras — and it works for both stills and video. Designed for full-frame cameras, the lens is a “ground-breaking mix of state-of-the-art lens and software technology” which K|Lens says will open up new worlds of creativity to users.”

The lens shoots nine images at once, with each taking up 1/9th the area of the sensor in a 3×3 grid. Custom software then manipulates those images into the desired result.
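
To picture what that grid looks like in software, here is a minimal Python sketch that slices a single sensor frame into its nine sub-views. It assumes the frame is already loaded as a NumPy array; K|Lens has not published its actual processing code, so this is purely illustrative.

```python
import numpy as np

def split_lightfield_frame(frame: np.ndarray, rows: int = 3, cols: int = 3):
    """Split one sensor frame into its 3x3 grid of sub-views.

    Each sub-view sees the scene from a slightly different angle,
    which is what makes depth estimation possible afterwards.
    """
    h, w = frame.shape[:2]
    sub_h, sub_w = h // rows, w // cols
    views = []
    for r in range(rows):
        for c in range(cols):
            views.append(frame[r * sub_h:(r + 1) * sub_h,
                               c * sub_w:(c + 1) * sub_w])
    return views  # nine images, each covering 1/9th of the sensor area
```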

Because this lens turns any camera into a 3D camera, it might have applications for specific tasks like Visual Effects, where depth information is vital for compositing.

Aldred adds:

“Interestingly, while all of the software was developed in-house, the lens itself, they say, was developed in cooperation with Carl Zeiss Jena GmbH, who they say will also be doing all of the manufacturing. So, while K|Lens might be a company that few have heard of, it will essentially be a Zeiss lens. And not just their name stamped on somebody else’s product as Huawei did with Leica, as they’re actually making the thing.”

See the company website.

My take: I’ve blogged about the light field a few times in the last decade and I really like the promise. Could it be the end of out-of-focus shots forever? All we need is a similar “sound field” that would allow us to capture every sound source at once and later go into the soundscape to re-record those sources much closer. Right? (Hmm. Is this that?)

Colour Display AR Smart Glasses

Deirdre O Donnel reveals on Notebookcheck some of the most advanced Smart Glasses yet.

She writes:

“Thunderbird is an augmented reality (AR) -focused start-up supported by the display-centric OEM TCL. Now, the two brands have unveiled something apparently three years in the making: the new Smart Glasses Pioneer Version, with a groundbreaking color micro-LED display geared toward an optimal AR experience. This pair of spectacles is, as the name suggests, the kind of ‘true’ smart glasses that integrate a working, partially transparent display capable of overlaying a mixed-reality display over the wearer’s real-world surroundings. Thunderbird and TCL make the new device sound like a blend of features from the Facebook Ray-Bans and Xiaomi’s own concept Smart Glasses. They do integrate a camera — obtrusively found on the nose-piece — and touch controls on the outside of the ear-hooks to interact with the glasses and the content, phone-like apps, smart-home and -car controls they are rated to sync with.”

My take: These are much better than Google Glass and Snap Spectacles. Still too nerdy for me, though they might appeal to someone who already wears a Smart Watch. BONUS: here’s the excellent music from the Thunderbird video: Black Math’s Point Blank (Alternate).

Google AI can now enhance low res pix

Remember those laughable TV episodes in which someone asks, “Can you enhance that?”

Well, laugh no more. Google AI has mastered “high fidelity image generation.”

You can just about hear it: “HAL, unlock the enhancing algorithm.”

Google explains their new method:

“Diffusion models work by corrupting the training data by progressively adding Gaussian noise, slowly wiping out details in the data until it becomes pure noise, and then training a neural network to reverse this corruption process. Running this reversed corruption process synthesizes data from pure noise by gradually denoising it until a clean sample is produced.”

Add noise to the picture, and then denoise it?
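
Pretty much. As a rough illustration of the “corruption” half of the process Google describes, here is a toy Python sketch that progressively adds Gaussian noise to an image. The constant noise schedule and step count are invented for clarity and are not the settings used in Google’s papers; the real system also trains a neural network to reverse each step, which this sketch leaves out.

```python
import numpy as np

def forward_diffusion(image: np.ndarray, steps: int = 1000, beta: float = 0.005):
    """Progressively corrupt an image with Gaussian noise (the forward process).

    A diffusion model trains a neural network to undo each of these small steps;
    running that learned reversal from pure noise is what generates new images.
    Toy version: constant noise schedule, pixel values assumed in [0, 255].
    """
    x = image.astype(np.float64) / 127.5 - 1.0   # scale to roughly [-1, 1]
    trajectory = [x]
    for _ in range(steps):
        noise = np.random.normal(size=x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise  # one small corruption step
        trajectory.append(x)
    return trajectory  # the final entries are close to pure Gaussian noise
```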

Here is the Super-Resolution via Repeated Refinement paper.

And the Cascaded Diffusion Models for High Fidelity Image Generation paper.

My take: It was Arthur C. Clarke who said, “Any sufficiently advanced technology is indistinguishable from magic.” Google has just given us more magic. And we so smugly said those enhancing programs can’t add resolution back into a pixelated picture. Looks like we were wrong, yet again.

Bourdain speaks from the beyond in new doc

Roadrunner: A Film About Anthony Bourdain, directed and produced by Morgan Neville, was released in the United States on July 16, 2021 by Focus Features. Celebrity chef and TV presenter Anthony Bourdain died by suicide on June 8, 2018, in France while on location, and this film explores his complex psyche.

But a controversy has erupted over the director’s inclusion of an AI-generated voiceover.

Helen Rosner reviewed the film in The New Yorker and noticed something strange:

“There is a moment at the end of the film’s second act when the artist David Choe, a friend of Bourdain’s, is reading aloud an e-mail Bourdain had sent him: “Dude, this is a crazy thing to ask, but I’m curious” Choe begins reading, and then the voice fades into Bourdain’s own: “…and my life is sort of shit now. You are successful, and I am successful, and I’m wondering: Are you happy?” I asked (director) Neville how on earth he’d found an audio recording of Bourdain reading his own e-mail. Throughout the film, Neville and his team used stitched-together clips of Bourdain’s narration pulled from TV, radio, podcasts, and audiobooks. “But there were three quotes there I wanted his voice for that there were no recordings of,” Neville explained. So he got in touch with a software company, gave it about a dozen hours of recordings, and, he said, “I created an A.I. model of his voice.” In a world of computer simulations and deepfakes, a dead man’s voice speaking his own words of despair is hardly the most dystopian application of the technology. But the seamlessness of the effect is eerie. “If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Neville said. “We can have a documentary-ethics panel about it later.””

Well, the panel has been convened.

In a follow-up article, Rosner writes: “Neville used the A.I.-generated audio only to narrate text that Bourdain himself had written” and reveals the director’s “initial pitch of having Tony narrate the film posthumously à la Sunset Boulevard — one of Tony’s favorite films.”

People seem offended that the director has literally put words into Bourdain’s mouth, albeit his own words. Personally, I don’t have an issue with this but think there should have been a disclaimer off the top revealing, “Artificial Intelligence was used to generate 45 seconds of Mr. Bourdain’s voiceover in this film.”

My take: what I want to know is, how can I license the Tony Bourdain AI to narrate my movie?

Portal installation links two city centres

Futuristic-looking round visual portals have appeared in Vilnius, Lithuania, and Lublin, Poland, allowing citizens to see each other in real time.

The two portals connect Vilnius’s Train Station with Lublin’s Central Square, about 600 km away.

Benediktas Gylys, initiator of PORTAL, says:

“Humanity is facing many potentially deadly challenges; be it social polarisation, climate change or economic issues. However, if we look closely, it’s not a lack of brilliant scientists, activists, leaders, knowledge or technology causing these challenges. It’s tribalism, a lack of empathy and a narrow perception of the world, which is often limited to our national borders. That’s why we’ve decided to bring the PORTAL idea to life – it’s a bridge that unifies and an invitation to rise above prejudices and disagreements that belong to the past. It’s an invitation to rise above the us and them illusion.”

PORTAL is a collaboration of the Benediktas Gylys Foundation, the City of Vilnius, the City of Lublin and the Crossroads Centre for Intercultural Creative Initiatives.

More portals are planned linking Vilnius with London, England, and with Reykjavik, Iceland.

See the official website.

My take: back in the early Nineties (before the Internet caught the public eye) I conceived of a similar network of interconnected public spaces, called Central Square. My vision was similar to Citytv’s Speakers’ Corner but was to be located in large public outdoor spaces and used to broadcast citizen reports, rants or demonstrations. It would have included sound, which PORTAL seems to have overlooked. I think it was to have appeared on television sets on some of the high-numbered channels. Of course, once increased bandwidth could support Internet video, web cams took off instead. See EarthCam.com for a list.

Pushing drone footage to the next level

Drone footage. You’ve seen lots of dreamy sequences from high in the sky. But on March 8, 2021, a small Minneapolis company released a 90-second video with footage the likes of which you’ve never seen before. See the local KARE-TV coverage.

Trevor Mogg of Digital Trends adds:

“Captured by filmmaker and expert drone pilot Jay Christensen of Minnesota-based Rally Studios, the astonishing 90-second sequence, called Right Up Our Alley, comprises a single shot that glides through Bryant Lake Bowl and Theater in Minneapolis. The film, which has so far been viewed more than five million times on Twitter alone, was shot using a first-person-view (FPV) Cinewhoop quadcopter, a small, zippy drone that’s used, as the name suggests, to capture cinematic footage.”

Here’s their corporate website and the original tweet.

Oscar Liang has a great tutorial on Cinewhoops.

Johnny FPV has a great first person view overview.

My take: ever had dreams of flying? This might be even better.

How NFTs will unleash the power of the Blockchain

NFT. WTF?

Let’s break this down to the individual letters.

F = Fungible. “Fungible” assets are exchangeable for similar items. We can swap the dollars in each other’s pockets or change a $10 bill into two $5 bills without breaking a sweat.

T = Token. Specifically, a cryptographic token validated on a blockchain, a decentralized database.

N = Non. Duh.

So an NFT is a Non-Fungible Token, or in other words, a unique asset that is validated by the blockchain. This solves the real-world problem of vouching for the provenance of that Van Gogh in your attic; in the digital world, the blockchain records changes in an asset’s price, ownership and so on in a distributed ledger that is extremely hard to tamper with. (Just don’t lose your crypto-wallet.)
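
To make “non-fungible” concrete, here is a toy Python sketch of what a ledger entry for one unique asset might track: a one-of-a-kind token ID, a pointer to the asset, and its full ownership history. Real NFTs follow standards such as ERC-721; the field names here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class NFTRecord:
    """A toy ledger entry for one unique (non-fungible) asset."""
    token_id: str      # unique: no two tokens are interchangeable
    asset_uri: str     # points at the artwork, film, etc.
    owner: str
    provenance: list = field(default_factory=list)  # (seller, price) history

    def transfer(self, new_owner: str, price: float) -> None:
        """Record a sale; the full chain of ownership stays with the token."""
        self.provenance.append((self.owner, price))
        self.owner = new_owner
```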

Early 2021 has seen an explosion in marketplaces for the creation and trading of NFTs. Like most asset bubbles, it’s all tulips until you need to sell and buyers are suddenly scarce.

But I believe NFTs hold the key to unleashing the power of the blockchain for film distribution.

Cathy Hackl of Forbes writes about the future of NFTs:

“Non-fungible tokens are blockchain assets that are designed to not be equal. A movie ticket is an example of a non-fungible token. A movie ticket isn’t a ticket to any movie, anytime. It is for a very specific movie and a very specific time. Ownership NFTs provide blockchain security and convenience, but for a specific asset with a specific value.”

What if there were an NFT marketplace dedicated to streaming films? Filmmakers would mint a series of NFTs and each viewer would redeem one NFT to stream the movie. This would allow for frictionless media dissemination and direct economic compensation to filmmakers.
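
Continuing the toy NFTRecord sketch above, the mint-and-redeem flow might look something like this; the marketplace, pricing and streaming hookup are entirely hypothetical.

```python
def mint_screening_tokens(film_uri: str, filmmaker: str, count: int) -> list:
    """Mint a limited series of tokens, each good for one stream of the film."""
    return [NFTRecord(token_id=f"{film_uri}#{i}", asset_uri=film_uri, owner=filmmaker)
            for i in range(count)]

def redeem_for_stream(token: NFTRecord, viewer: str, price: float) -> str:
    """The viewer buys a token from its current owner and unlocks the stream."""
    token.transfer(new_owner=viewer, price=price)  # sale is recorded on the token
    return f"stream-access-granted:{token.asset_uri}"
```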

Here’s a tutorial on turning art into NFTs.

My take: while I think NFTs hold promise in film distribution, the key will be lowering the gas price, the fee paid when minting an NFT in the first place.

Digital Humans coming soon!

Epic Games and Unreal Engine have announced MetaHuman Creator, coming later in 2021. They explain:

“MetaHuman Creator is a cloud-streamed app designed to take real-time digital human creation from weeks or months to less than an hour, without compromising on quality. It works by drawing from an ever-growing library of variants of human appearance and motion, and enabling you to create convincing new characters through intuitive workflows that let you sculpt and craft the result you want. As you make adjustments, MetaHuman Creator blends between actual examples in the library in a plausible, data-constrained way. You can choose a starting point by selecting a number of preset faces to contribute to your human from the diverse range in the database.”

Right now, you can start with 18 different bodies and 30 hair styles.

“When you’re happy with your human, you can download the asset via Quixel Bridge, fully rigged and ready for animation and motion capture in Unreal Engine, and complete with LODs. You’ll also get the source data in the form of a Maya file, including meshes, skeleton, facial rig, animation controls, and materials.”

Got that? See the documentation.

The takeaway is that your digital humans can live in your Unreal Engine environment. Is this the future of movies?

My take: This reminds me of my experiments in machinima ten years ago. I used a video game called The Movies that had a character generator (which would sync mouth movements with pre-recorded audio), environments and scenes to record shots I would then assemble into movies. See Cowboys and Aliens (The Harper Version) for one example. You know, in these COVID times, I wonder if Unreal Engine’s ability to mash together video games and VFX will become a safer way to create entertainment that does not require scores of people to film together in the same studio at the same time.

Shoot your next film in Virtual Unreality

Oakley Anderson-Moore reports for No Film School on How One Studio Is Thriving During COVID (and Why It’s a Big Deal for Indies).

(The studio tour proper starts just before 14 minutes in this promotional video.)

“During the pandemic, one studio stayed open when most others closed. How? L.A. Castle Studios has developed ‘a better way to shoot.’ And owner Tim Pipher believes it’s the way of the future — perhaps no more so than for independent film. ‘I guess some of it comes down to luck,’ explained Pipher to No Film School. His studio has been slammed with work in the midst of the shutdowns. ‘COVID or no COVID, we think we’ve got a better way to shoot.'”

What sets this green-screen studio apart from others is the ability to shoot with a live-composited set.

Simply put, you and your actor can now create inside virtual reality.

How is this possible? It’s achieved by marrying movie making and video game 3D environments. The core software is Epic Games’ Unreal Engine.

See the Unreal Engine website and its Marketplace.

Check out L.A. Castle Studios.

My take: I love this technology! Basically, it’s Star Trek’s Holodeck with green instead of black walls. Keep in mind, as a filmmaker, you still have to address every component other than location: for instance casting, costumes, makeup, props, blocking, lighting, shot selection and performance. Do I know any Unreal Engine gurus?

Intel Labs creates photorealistic 3D VR from photos

Jacob Fox on PCGamesN suggests that new tech from Intel Labs could revolutionise VR gaming.

He describes it as:

“A new technique called Free View Synthesis. It allows you to take some source images from an environment (from a video recorded while walking through a forest, for example), and then reconstruct and render the environment depicted in these images in full ‘photorealistic’ 3D. You can then have a ‘target view’ (i.e. a virtual camera, or perspective like that of the player in a video game) travel through this environment freely, yielding new photorealistic views.”

David Heaney on Upload VR clarifies: “Researchers at Intel Labs have developed a system capable of digitally recreating a scene from a series of photos taken in it.

“Unlike with previous attempts, Intel’s method produces a sharp output. Even small details in the scene are legible, and there’s very little of the blur normally seen when too much of the output is crudely ‘hallucinated’ by a neural network.”
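
As a drastically simplified illustration of image-based view synthesis, here is a toy Python function that blends the source views whose cameras sit closest to a requested viewpoint. Intel’s actual method also reconstructs scene geometry and refines the result with a neural network, which is why its output stays sharp; none of that is attempted here.

```python
import numpy as np

def blend_nearest_views(target_pos: np.ndarray, source_positions: list,
                        source_images: list, k: int = 2) -> np.ndarray:
    """Crude image-based rendering: blend the k source views whose cameras
    are closest to the requested viewpoint, weighted by proximity.

    This toy simply cross-fades whole images of identical size; it does not
    reconstruct geometry or run any learned refinement.
    """
    dists = np.array([np.linalg.norm(target_pos - p) for p in source_positions])
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)
    weights /= weights.sum()
    return sum(w * source_images[i].astype(np.float64)
               for w, i in zip(weights, nearest))
```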

Read the full paper.

My take: this is fascinating! This could yield the visual version of 3D Audio.