“I don’t believe humanity needs AI-generated actors,” says Attila Tamás Áfra, recipient of next week’s Technical Oscar, about generative AI


Tonight, Hungarian IT expert Attila Tamás Áfra will receive a Technical Oscar at one of the highlight events closing Hollywood’s awards season. As per tradition, the award is granted annually by the Academy of Motion Picture Arts and Sciences for outstanding achievements in film technology. This is the second such honor for the Transylvanian-born Hungarian expert—his work was previously recognized in 2021 as part of a development team. The prestigious awards will be handed out tonight in Los Angeles at the Academy Museum of Motion Pictures. Attila Tamás Áfra spoke with our Los Angeles-based reporter, Virág Vida, about ray tracing, artificial intelligence, and how a tech expert watches movies.

When did your interest in programming and digital visual effects begin?

Actually, I was programming even before high school. Later, during high school, I became really interested in how visual effects are generated—how programs can be used to create lifelike or aesthetically striking images and videos. I continued working on this subject at university and even wrote my thesis on it. During my undergraduate studies, I focused on image and video processing rather than image creation, which only became my main field later on. Back then, the goal was simply to display images and videos; later it became about producing them.

Your career has progressed steadily, and in 2021, you received the prestigious Technical Oscar as part of the Intel team. Now in 2025, your name leads the list of awardees. This time it’s for a library you developed yourself.

That’s right, this is my own development, but the award is shared with Timo Aila, a researcher from NVIDIA. Although we’ve never worked together, I was inspired by his work. He published a paper that became a milestone in our field—it’s the foundational technology on which I built my project. Officially, it’s an open-source library that uses AI to enhance image quality. So, we’re receiving the award jointly, although we didn’t collaborate directly—unlike the 2021 award, which I shared with my colleagues at Intel.

As smartphone users, we’ve become used to our devices automatically improving image quality or allowing us to edit photos. Is your software similar in practice? How would you explain to a layperson what exactly you’re being awarded for?

The “library” is essentially a toolkit developers can integrate into larger systems. And yes, the analogy with smartphone cameras is fitting. My program filters out the grainy noise we often see in photos taken in low light. But in this case, we’re not dealing with real photos—we’re processing virtual images. Noise much like what we see in real photos also appears in digitally rendered images; it arises for different reasons, but the visual result is similar and can be treated in a similar way. So you can think of it as a virtual camera’s image-enhancement method. It saves a lot of time without sacrificing image quality. That’s basically what it’s all about.
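The interview does not name the library; the sketch below assumes it is Intel’s open-source Open Image Denoise, the AI denoiser Áfra leads, and uses that library’s public C++ API to show what “a toolkit developers can integrate into larger systems” looks like in practice: the renderer hands over its noisy frame, optionally with auxiliary albedo and normal images, and gets a cleaned-up frame back.

```cpp
// Minimal sketch: handing a noisy ray-traced frame to an AI denoising filter.
// Assumption: the library in question is Intel Open Image Denoise (not named in the interview).
#include <OpenImageDenoise/oidn.hpp>
#include <iostream>
#include <vector>

int main()
{
    const int width = 1920, height = 1080;

    // Images produced by the renderer: a noisy beauty image plus optional auxiliary data.
    std::vector<float> color(width * height * 3);   // noisy RGB radiance
    std::vector<float> albedo(width * height * 3);  // surface colors (optional, helps quality)
    std::vector<float> normal(width * height * 3);  // surface normals (optional, helps quality)
    std::vector<float> output(width * height * 3);  // denoised result

    oidn::DeviceRef device = oidn::newDevice();      // create a denoising device
    device.commit();

    oidn::FilterRef filter = device.newFilter("RT"); // generic ray tracing denoising filter
    filter.setImage("color",  color.data(),  oidn::Format::Float3, width, height);
    filter.setImage("albedo", albedo.data(), oidn::Format::Float3, width, height);
    filter.setImage("normal", normal.data(), oidn::Format::Float3, width, height);
    filter.setImage("output", output.data(), oidn::Format::Float3, width, height);
    filter.set("hdr", true);                         // the rendered image is high dynamic range
    filter.commit();

    filter.execute();                                // replace the noise with a clean estimate

    const char* errorMessage;
    if (device.getError(errorMessage) != oidn::Error::None)
        std::cout << "Error: " << errorMessage << std::endl;
    return 0;
}
```

The key point from the interview is visible in the structure: the filter does not invent new content, it only estimates what the already-rendered image would look like with far more collected light.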

One keyword in the project description is “ray tracing.” What does that mean in the context of your work?

These visual effects—like those seen in animated movies—are now created by simulating light. Ray tracing means following the path of light rays; that was the focus of the technology that earned me my first award. To produce bright, clear images—just like in real life—the camera’s lens needs to gather a lot of light. The simulation works the same way: we simulate and collect a lot of light, which means a lot of computation, and that’s time-consuming. The more powerful the computer, the more light rays it can simulate, but it’s expensive and time-intensive. That’s where this new technology comes in: it removes the disruptive image noise and produces a nearly ideal image, as if we had let a real camera expose the shot for a very long time.
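To make the cost concrete: a ray tracer estimates each pixel’s brightness by averaging many randomly sampled light paths, and the graininess shrinks only with the square root of the sample count. The toy sketch below, with a hypothetical tracePath function standing in for a real renderer, illustrates why “collecting more light” gets expensive quickly and why cleaning up a noisy estimate afterwards is attractive.

```cpp
// Toy illustration of Monte Carlo noise in ray tracing: a pixel's value is the
// average of many random light-path samples. Cost grows linearly with the sample
// count, but the noise falls only as ~1/sqrt(samples).
#include <cstdio>
#include <initializer_list>
#include <random>

// Hypothetical stand-in for a real path tracer: returns the radiance carried by
// one random light path through a pixel (here just noise around a "true" value).
double tracePath(std::mt19937& rng)
{
    std::normal_distribution<double> lightPath(0.5, 0.2); // true brightness is 0.5
    return lightPath(rng);
}

int main()
{
    std::mt19937 rng(42);
    for (int samples : {4, 64, 1024})
    {
        double sum = 0.0;
        for (int i = 0; i < samples; ++i)
            sum += tracePath(rng);
        // More samples -> the estimate converges toward the true brightness,
        // which is exactly the expensive "collecting a lot of light" step
        // that a denoiser tries to shortcut.
        std::printf("%5d samples -> pixel value %.3f\n", samples, sum / samples);
    }
    return 0;
}
```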

AI is a hot topic in Hollywood, raising ethical and moral questions about how far its use should go in film and TV. What’s your professional opinion? Could technology completely take over filmmaking and cause the death of cinema as an art form? Here in Hollywood, people fear that once AI can generate perfect moving images, we won’t need actors, sets—anything at all.

That’s a great question and a very complex issue, which is hard to answer simply. In public discourse, AI seems like a single concept, but it’s actually an umbrella term for many similar yet distinct technologies. One type—what the art world fears—appears to create something new out of nothing: a new image, person, character, or location. In the film world, the worry is that AI-generated characters might replace actors, because it’s cheaper, faster, more reliable. I think that’s a very dark vision of the future, and personally, I don’t want to see that world. I don’t believe humanity needs or wants that. That’s not where AI’s true value lies. I find more value in the type of AI that helps or corrects rather than replaces. The technology I was awarded for does just that—it doesn’t create anything new, it enhances existing images. We could do the same without AI, just slower and not quite as well. So, AI essentially has two faces: the generative kind that seemingly creates something new, and the kind that’s simply a smarter algorithm—doing what we already could do, but better and faster. I don’t think that threatens the artistic value of any product.

Roughly what proportion of today’s films involve AI? As viewers, should we assume that pretty much every film undergoes some kind of visual correction?

AI might not be in every film, but I’d say some kind of visual effect is inevitably present in every production—even in ones set in seemingly ordinary environments with no sci-fi story or fantasy elements—simply for reasons of budget or convenience. For instance, you don’t need to shoot in Tokyo—you can film in Budapest in front of a green screen and replace the background later. That’s much easier and cheaper.

I understand you’re a big film buff in your personal life as well. When you sit down to watch a movie or series, how do you manage to enjoy it without analyzing it professionally?

That’s a fun question, because I enjoy both types of films—those with jaw-dropping visual effects and those with none at all, or at least none that are noticeable. Of course, it’s impossible for me not to see the technical side in high-tech productions. (Laughs) Sometimes I’m amazed at the incredible VFX, and other times I think, “hmm, that could’ve been polished a bit more.” I think there are two kinds of movies: those where the technology is front and center, like Avatar, which was a milestone and a huge inspiration for me to pursue this career. Then there are those where the technology is hidden, simply supporting the visual world. I love those too, and in some ways they’re easier for me to watch, because the tech doesn’t distract me. (Laughs)

Will you attend this year’s award ceremony in person?

That was the plan… The last time I won, it was during COVID, so the ceremony was held virtually and we received the award by mail. The Technical Oscars are a bit different from the traditional Oscars—they’re more scientific in nature and come in three forms. In 2021, we received the Technical Achievement Award, which comes with a certificate. There’s also a version with a golden plaque shaped like the Oscar statuette. In some cases, an actual Oscar statue is awarded—but only one per year, to the single most outstanding project. I had hoped to attend this year’s ceremony in Los Angeles, which was postponed from January to late April because of the wildfires, but unfortunately I can’t.

– Virág Vida –
