Generative AI in VFX: Is it the future?

[Image: Suki looking forward cinematically with a futuristic station behind]

Fuelled by rapid improvements in technology, the global generative AI market was worth $1.2bn in 2022 and is expected to grow to a staggering $20.9bn by 2032. Something of a buzzword in 2023, generative AI refers to artificial intelligence that uses complex algorithms and neural networks to generate new content that’s similar in style and composition to the data it’s been trained on. Tools such as DALL-E and Midjourney have developed exponentially in recent months, making a massive impact on the creative industry and fuelling discussions over generative AI in VFX.

At Lux Aeterna, it’s vital that we investigate every cutting-edge technology that comes along, which is why we started experimenting with these tools in 2022. We’re bringing the power of generative AI to 3D VFX tools, creating a malleable exchange between the two worlds that can be applied to a wide range of use cases. It gives us new ways to generate and improve assets, and to create unique visuals.

We are currently in an exploratory phase, carrying out a lot of R&D to find out how generative AI might be useful in future. For example, we’ve used these applications to help us develop our vision for experimental sci-fi film Reno, which we’re working on as part of the MyWorld project.

Limitations of generative AI

While AI-based image generation tools may be capable of producing fantastic images, getting them to produce results that work for your needs can be a challenge, particularly in a high-level, professional context. From a user perspective, an understanding of how these AI tools interpret prompts is essential, as is a wide vocabulary of terms. Not only do you need to be able to describe the scene, the subjects, the genre, the style, and the medium of a given idea, but you also need to use words that the tool will understand and interpret in the way you intended. It can take hours or days to reach the best results.
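One practical way to manage that vocabulary is to treat a prompt as a set of named components rather than a single block of text. The sketch below is a purely illustrative Python helper, not a tool from our pipeline; the field names and example values are assumptions for demonstration only.

```python
# Illustrative only: a hypothetical helper for keeping prompt vocabulary organised.
def build_prompt(scene: str, subjects: list[str], genre: str, style: str, medium: str) -> str:
    """Join the key descriptors into a single comma-separated prompt string."""
    return ", ".join([scene, *subjects, genre, style, medium])

prompt = build_prompt(
    scene="a derelict orbital station drifting above a red planet",
    subjects=["a lone astronaut in a weathered suit"],
    genre="hard sci-fi",
    style="cinematic lighting, volumetric haze, high detail",
    medium="digital matte painting",
)
print(prompt)
```

Keeping the components separate makes it easier to swap individual terms and compare the results, which is where most of the iteration time tends to go.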

One trend we’ve observed is that as these generative AI projects develop, many have sought to improve their systems by giving users more control over how material is generated. We see this as a recognition that the best results come from greater human involvement in the process, particularly from artists and other creatives who understand how and where to utilise this technology.

What’s next?

The progress being made in the AI space is like nothing we’ve ever seen before. The tools and applications are evolving every day. Even if they stayed where they are right now, we would still see a massive impact. These tools are part of a rapidly developing area of technology that expands beyond images into text, code, 3D models, video, audio and more. And they don’t exist in isolation. They’re constantly intersecting with each other to extend their individual capabilities, so there’s a lot of interoperability.

We see a lot of potential in generative AI technologies to enhance and extend VFX and CGI capabilities, including everything from upscaling, colourisation, restoration, and texturing, to inpainting, outpainting, animation, and procedural generation.
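As a concrete illustration of one of those capabilities, inpainting can already be prototyped with open-source tooling. The sketch below uses the Hugging Face diffusers library; the checkpoint, prompt, and file paths are illustrative assumptions rather than a description of our pipeline.

```python
# A minimal inpainting sketch using the open-source diffusers library.
# The checkpoint, prompt, and file paths are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # publicly available inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

plate = Image.open("plate.png").convert("RGB")  # the frame to repair
mask = Image.open("mask.png").convert("RGB")    # white pixels mark the region to regenerate

result = pipe(
    prompt="futuristic space station interior, cinematic lighting",
    image=plate,
    mask_image=mask,
).images[0]
result.save("plate_inpainted.png")
```

The mask and prompt control where and how new pixels are generated, a pattern that maps naturally onto tasks such as outpainting and plate clean-up as well.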

“We’re right at the start of an age of generative AI systems and synthetic media,” says James Pollock, Creative Technologist at Lux Aeterna. “There’s going to be a big upheaval in the software we use to create assets and pull together shots as these developments are integrated and entirely new approaches arise.”

“Learning new tools and techniques comes with the territory, and everyone here at Lux Aeterna is really excited to see what happens next.”

Find out more about how Lux Aeterna is experimenting with AI tools as part of the MyWorld project.
